Under the GDPR, biometric data processed for the purpose of uniquely identifying a natural person is classified as a special category of personal data (Article 9(1)) and is subject to enhanced protection. Processing such data is prohibited unless one of the specific exemptions in Article 9(2) applies.
Those exemptions include, among others, the explicit consent of the data subject and processing that is necessary for reasons of substantial public interest.
The GDPR also applies extraterritorially: it covers not only organizations established in the EU but also organizations elsewhere that process the personal data of individuals in the EU (Article 3).
As such, organizations dealing with biometric identifiers must carefully define their legal grounds for processing and ensure compliance with GDPR requirements. This is crucial for both EU-based and non-EU entities handling biometric data.
What is Biometric Data?
Biometric data is personal data resulting from specific technical processing of an individual’s physical, physiological, or behavioral characteristics, which allows or confirms that individual’s unique identification (Article 4(14)). It can be used to identify an individual (who is this person?) or to verify a claimed identity (is this person who they claim to be?).
Some common examples of special category biometric data include:
- Fingerprint: Distinct ridge patterns found on fingertips
- Facial Recognition: Analyzing facial features such as eye distance, nose shape, and jawline
- Iris Recognition: Evaluating patterns in the iris for unique identification
- Voiceprint: Assessing vocal characteristics like pitch and tone
- Retina Recognition: Analyzing the eye’s vascular patterns
- Hand Geometry: Measuring dimensions and shape of the hand
- DNA: Genetic information analysis (strictly, the GDPR classifies this separately as genetic data under Article 4(13), though it receives the same Article 9 protection)
To comply with GDPR, organizations must determine if their use of biometric data fits within one of the legal grounds specified in the regulation. These grounds may include obtaining explicit consent from individuals or processing for purposes that fulfill public interests, such as law enforcement or national security.
The GDPR’s Stringent Conditions For Processing Biometric Data
1. Explicit Consent
The data subject must provide explicit, informed consent for the processing of their biometric data. This means they must fully understand what data is being collected, why, and how it will be used. Article 9(2)(a)
2. Legal Claims or Public Interest
Biometric data may be processed where necessary for the establishment, exercise, or defense of legal claims (Article 9(2)(f)), or for reasons of substantial public interest on the basis of Union or Member State law, such as law enforcement or national security (Article 9(2)(g)).
3. Vital Interests
Processing can occur if it is necessary to protect someone’s vital interests, such as in emergency situations where biometric data is required to protect life. Article 9(2)(c)
4. Employment Law
Biometric data may be processed in the context of employment law, provided the processing is necessary for fulfilling obligations or exercising specific rights in the field of employment. Article 9(2)(b)
5. Substantial Public Interest
Processing is permitted when necessary for reasons of substantial public interest, which should be specified by law (e.g., health and safety concerns or public health management). Article 9(2)(g)
6. Health and Social Care
Biometric data may be processed when necessary for healthcare or social protection purposes. This includes medical research, health diagnosis, treatment, or the provision of healthcare services, which may reveal information about a person’s physical or mental health status. Article 9(2)(h)
7. Data Minimization and Purpose Limitation
Only the minimum necessary biometric data should be collected, and it should only be processed for specified, legitimate purposes that are clearly defined in advance. The data should not be used for any purposes other than those for which it was initially collected. Article 5(1)(c) & Article 5(1)(b)
8. Data Security and Protection
Biometric data must be processed securely, ensuring measures like encryption, access controls, and pseudonymization are in place to protect the data from unauthorized access or breaches. Article 32
9. Transparency and Accountability
Organizations must be transparent with data subjects, providing clear information about how their biometric data will be used. This includes maintaining a record of processing activities and ensuring the rights of the data subjects are respected. Articles 12, 13, and 14
10. Compliance with Data Protection Impact Assessment (DPIA)
For high-risk processing (such as biometric data collection), organizations must conduct a Data Protection Impact Assessment (DPIA) to evaluate the risks and implement measures to mitigate those risks. Article 35
11. Right to Object
Data subjects have the right to object to the processing of their biometric data, especially when the processing is based on legitimate interests or public interest. Article 21
The Risks of Processing Biometric Data Without GDPR Compliance
Failing to adhere to GDPR when handling biometric data can result in severe penalties and reputational damage. Recent cases show the high stakes involved:
Mercadona:
The Spanish supermarket chain was fined €2.52 million for using facial recognition technology in its stores without proper consent, violating GDPR principles such as necessity and transparency.
Swedish School Incident:
A Swedish school was fined for using facial recognition to track student attendance, as the processing did not meet the legal criteria under the GDPR. Consent was deemed an invalid legal basis because of the imbalance of power between the data subjects and the school.
Clearview AI in France:
The French DPA imposed a €20 million fine on Clearview AI for collecting biometric data from over 20 billion online photos without consent. The company was ordered to cease its data collection and delete the collected information.
These examples emphasize the importance of transparency, necessity, and lawful consent when processing biometric data, underlining the necessity for organizations to carefully assess their practices.
How Biometric Data Can Be Safely Stored and Processed in Line with GDPR
Organizations processing biometric data must implement strict security measures to ensure compliance with GDPR.
One of the primary strategies for securing biometric data is encryption. Encryption should be applied to protect biometric data both during storage and transmission, ensuring that even if data is accessed by unauthorized parties, it remains unreadable.
For example, AES (Advanced Encryption Standard) can be used to encrypt data at rest, while TLS (Transport Layer Security) is ideal for encrypting data in transit. It’s also important that organizations securely manage encryption keys, allowing only authorized individuals to access them.
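As an illustrative sketch of the data-in-transit side, Python’s standard-library ssl module can build a client context with certificate validation and a modern protocol floor (the TLS 1.2 minimum here is our assumption; the GDPR itself does not mandate a particular cipher or version):

```python
import ssl

# Client-side TLS context for protecting biometric data in transit.
# create_default_context() enables certificate verification and
# hostname checking; we additionally refuse anything below TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# These are already the defaults, shown here for clarity:
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED
```

A socket wrapped with this context (via `context.wrap_socket(...)`) will refuse unverified or downgraded connections. Key management for data at rest (e.g., AES keys held in a hardware security module or key vault) is a separate concern and should never live alongside the encrypted data.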
Access controls are another critical element in protecting biometric data. Access to sensitive biometric data should be strictly controlled and limited to authorized personnel only. This can be achieved by using role-based access controls (RBAC), where access is granted based on the user’s role and the need to know.
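A minimal sketch of such a role-based check in Python (the roles and permission names here are hypothetical, not taken from any specific framework):

```python
from enum import Enum, auto

class Role(Enum):
    SECURITY_OFFICER = auto()   # operates the biometric system
    HR_CLERK = auto()           # no need-to-know for raw biometrics
    AUDITOR = auto()            # reviews logs, not biometric data itself

# Each action maps to the set of roles with a documented need-to-know.
PERMISSIONS = {
    "biometric:read":   {Role.SECURITY_OFFICER},
    "biometric:enroll": {Role.SECURITY_OFFICER},
    "audit_log:read":   {Role.AUDITOR, Role.SECURITY_OFFICER},
}

def is_allowed(role: Role, action: str) -> bool:
    """Deny by default: unknown actions grant access to nobody."""
    return role in PERMISSIONS.get(action, set())
```

In production this mapping would live in an IAM system rather than in code, and every allow/deny decision on biometric data should itself be logged so later audits can verify who accessed what.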
Additionally, multi-factor authentication (MFA) should be employed to secure access further. This requires users to provide multiple forms of identification, such as a password combined with a smart token or authentication via a smartphone.
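One common second factor is a time-based one-time password (TOTP), as generated by authenticator apps. Below is a stdlib-only sketch of the underlying HOTP/TOTP algorithms (RFC 4226 / RFC 6238); a real deployment would use a vetted library plus server-side rate limiting:

```python
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226, SHA-1)."""
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, period: int = 30) -> str:
    """Time-based variant (RFC 6238): counter = current 30-second window."""
    return hotp(key, int(time.time()) // period)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
```

Because the code changes every 30 seconds and is derived from a shared secret, a stolen password alone is not enough to reach the biometric store.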
Businesses should consider anonymizing biometric data whenever possible to further reduce the risks associated with data breaches. True anonymization irreversibly removes the link between the data and the individual, so the data can no longer be attributed to a specific person by any reasonably likely means; fully anonymized data falls outside the GDPR’s scope.
For instance, organizations can store keyed hashes or protected templates of biometric identifiers rather than the raw facial or fingerprint data, so that accessed data cannot simply be reversed to identify the individual. (Note that plain cryptographic hashes only support exact-match comparisons; fuzzy biometric matching requires dedicated template-protection schemes.) Pseudonymization is a related technique in which biometric data is stored separately from personally identifiable information (PII) and can only be re-linked using additional information kept apart from it, reducing the privacy risks (Article 4(5)).
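A keyed hash (HMAC) is one way to derive such non-reversible references for exact-match identifiers. A sketch using only the standard library follows; the key handling is deliberately simplified, and in practice the key would be loaded from a key-management service and stored apart from the pseudonymized records:

```python
import hashlib
import hmac
import secrets

# Pseudonymization key, kept separately from the data it protects.
# Hypothetical setup: in production, load this from a key vault or
# HSM rather than generating it in process.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    """Map a stable identifier to a pseudonym; re-identification
    requires the key, which is stored elsewhere."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()
```

The same input always yields the same pseudonym under a given key, so records can still be linked internally, while an attacker who obtains only the pseudonymized table cannot recover the original identifiers without the key.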
GDPR’s data minimization principle is another critical consideration. This principle requires businesses to collect and store only the minimum amount of biometric data necessary for the specific purpose at hand.
Additionally, businesses should regularly evaluate whether biometric data is the only option for achieving a desired outcome or if there are less intrusive alternatives. For example, using ID cards, PIN codes, or RFID tags for authentication may be sufficient in some situations and pose fewer privacy risks than biometric data.
Organizations should also conduct regular security audits and monitoring of their data processing activities. These audits help identify vulnerabilities, ensure that access to biometric data is being properly managed, and confirm that the data is being processed according to GDPR guidelines.
By implementing these security measures (encryption, access controls, anonymization and pseudonymization, data minimization, and regular audits), organizations can significantly reduce the risks associated with processing biometric data. These practices not only ensure compliance with GDPR but also help to build trust with data subjects by demonstrating a strong commitment to safeguarding their personal information.
GDPR’s Impact on the Use of Biometric Data in AI and Surveillance Technologies
As biometric data is increasingly utilized in AI and surveillance technologies, the GDPR plays a crucial role in regulating its use. The EU’s Artificial Intelligence (AI) regulations aim to ensure that AI systems involving biometric recognition adhere to GDPR principles.
The EU’s AI Act classifies AI systems based on their potential risks, with higher-risk systems subject to stricter compliance measures. AI systems for real-time biometric identification, such as facial recognition, face heightened scrutiny, especially regarding transparency and the lawful basis for processing.
GDPR and AI: The Future of Biometric Data Regulation
As part of its digital strategy, the EU seeks to regulate artificial intelligence (AI) to foster responsible development and deployment of this technology. The regulation of AI systems is evolving, with different risk categories determining the level of regulatory scrutiny.
AI systems, particularly those using biometric data for identification or categorization, are classified as high-risk and will be subject to stringent assessments before being placed on the market. The EU’s AI Act has introduced new obligations for providers and users of AI systems that process biometric data, focusing on safety, transparency, and non-discrimination.
AI Act: Different Rules for Different Risk Levels
The new AI regulations introduced by the EU establish obligations for providers and users based on the level of risk associated with artificial intelligence systems. AI systems are assessed for risk, with minimal-risk systems requiring less regulation, while high-risk systems face stricter scrutiny and compliance measures.
Unacceptable Risk
AI systems considered to pose an unacceptable risk to individuals or society will be banned.
These systems include:
- Cognitive Behavioral Manipulation: AI systems that manipulate vulnerable groups or individuals, such as voice-activated toys that encourage harmful behaviors in children.
- Social Scoring: AI that classifies people based on personal characteristics, behavior, or socio-economic status.
- Real-Time Remote Biometric Identification: AI systems such as live facial recognition used for surveillance in publicly accessible spaces, which are prohibited for law enforcement purposes except in narrowly defined and authorized cases.
High Risk
AI systems that pose high risks to safety or fundamental rights are categorized into two groups:
- AI in Products Regulated by EU Safety Legislation: This includes areas like aviation, medical devices, cars, and lifts, which are already subject to existing safety standards.
- Specific AI Applications: These systems must be registered in an EU database and include:
- Biometric identification and categorization of individuals
- Critical infrastructure management
- Employment and worker management
- Access to essential services and benefits
- Law enforcement, migration, asylum, and border control
- Legal assistance and interpretation
High-risk AI systems must undergo thorough assessments before entering the market and will be monitored throughout their lifecycle to ensure compliance.
Generative AI: Compliance with Transparency Standards
Generative AI technologies, like ChatGPT, are subject to transparency requirements under the EU’s AI regulations. These include:
- Disclosure: The system must clearly indicate when content is AI-generated.
- Content Moderation: AI models should be designed to avoid generating illegal or harmful content.
- Copyright Transparency: Providers must disclose summaries of copyrighted data used for AI training.
Limited Risk AI
For AI systems that fall under the category of limited risk, there are minimal transparency requirements. These systems, which include AI technologies that manipulate audio, video, or image content (e.g., deepfakes), must ensure users are aware they are interacting with AI. Users should be given the choice to continue using the system after being informed of its nature.
Conclusion
In conclusion, the processing of biometric data under GDPR requires businesses to adhere to strict legal, technical, and organizational standards to ensure compliance with data protection law.
With the increasing integration of biometric data into AI and surveillance technologies, it’s more important than ever for businesses to have the right tools to manage and protect this sensitive information. By adopting best practices like encryption, anonymization, and data minimization, businesses can reduce the risks associated with biometric data processing.
To ensure seamless compliance and safeguard your organization’s data, consider using GDPR Register’s Compliance Software. It offers a comprehensive suite of tools designed to help you manage compliance, track your processing activities, and maintain data security, all while adhering to the latest GDPR and AI regulations.