
AI Act and The Scope of GDPR

The General Data Protection Regulation (GDPR) is a cornerstone of privacy law within the European Union (EU), setting strict rules for the handling of personal data. Notably, its reach extends beyond the EU’s borders, imposing obligations on companies worldwide that process the personal data of individuals within the EU. This extraterritorial scope ensures that entities outside the EU cannot circumvent GDPR requirements simply by being geographically distant. A similar global influence is expected from the forthcoming EU AI Act, which will regulate artificial intelligence (AI) systems.

While the GDPR and the EU AI Act target different types of entities—data controllers and processors under the GDPR, and AI providers and users under the AI Act—both regulations demand careful attention from organisations. Companies must map out their obligations under each regime to determine which aspects of their operations fall under the GDPR, the AI Act, or both. This mapping is critical because the two frameworks, while distinct, intersect in key areas: both address bias and discrimination, require comprehensive risk assessments, and regulate solely automated decision-making. Organisations must therefore be diligent in understanding how these overlaps affect their compliance strategies, ensuring they meet the requirements of both regulations without conflict or oversight.

As companies navigate the complexities of the GDPR and the EU AI Act, they must be aware of the overlaps between the two regulations and take a proactive approach to compliance. The GDPR’s focus on protecting personal data and the EU AI Act’s emphasis on the responsible use of artificial intelligence mean that organisations must consider how the two regimes intersect and affect their operations. When addressing bias and discrimination, for instance, companies must ensure not only that their AI systems are fair and unbiased, as required by the EU AI Act, but also that they process personal data in a way that respects individuals’ rights under the GDPR. This means implementing robust measures to detect and mitigate bias in AI systems, while remaining transparent about data processing activities and giving individuals meaningful control over their personal data.
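
To make this concrete, the sketch below shows one way an organisation might monitor a simple fairness metric over its model outputs. It is an illustrative Python example only, not a prescribed method: the column names, the demographic-parity metric, and the 0.2 threshold are all assumptions chosen for the illustration.

```python
# Illustrative sketch: a minimal demographic-parity check across a protected
# attribute. Column names ("approved", "group") and the threshold are hypothetical.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, outcome: str, group: str) -> float:
    """Return the largest difference in positive-outcome rates between groups."""
    rates = df.groupby(group)[outcome].mean()
    return float(rates.max() - rates.min())

decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
})

gap = demographic_parity_gap(decisions, outcome="approved", group="group")
if gap > 0.2:  # threshold chosen for illustration only
    print(f"Parity gap {gap:.2f} exceeds threshold: review the model and its training data")
```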

Furthermore, when conducting risk assessments, companies must consider both the risks to individuals’ personal data and the broader societal risks associated with AI systems. This requires a comprehensive approach that takes into account the consequences of AI-driven decisions for individuals and for society as a whole. By carefully mapping the requirements of both regulations, organisations can ensure compliance and avoid pitfalls such as reputational damage, regulatory fines, and legal liability. Organisations should also consider the benefits of integrating GDPR and EU AI Act requirements, such as greater trust and confidence in their AI systems, improved data quality, and more efficient data processing operations. By adopting a holistic approach to compliance, organisations can turn regulatory requirements into a competitive advantage and establish themselves as leaders in the responsible use of AI and data protection.

The intersection of the GDPR and the EU AI Act raises complex questions about the processing of special category data, which includes sensitive information such as racial or ethnic origin, health data, and biometric data. The GDPR’s Article 9 imposes strict prohibitions on the processing of such data, allowing for exceptions only in specific circumstances. However, the European Court of Justice’s recent ruling in Case C-184/20 has expanded the scope of special category data to include information that can be used to infer or deduce sensitive characteristics, even if it does not explicitly reveal them. This decision has significant implications for machine learning applications, where proxy variables may be considered special category data under the GDPR. In contrast, the EU AI Act appears to provide a more permissive approach, exempting providers of high-risk AI systems from the Article 9 prohibition when processing special category data is necessary for bias monitoring, detection, and correction. However, this exemption is subject to the implementation of “appropriate” safeguards, which may be open to interpretation.

The potential conflict between the two regulations creates a challenging landscape for entities that must navigate both frameworks. On one hand, the GDPR’s strict prohibitions on special category data processing may require entities to implement robust safeguards and obtain explicit consent from individuals. On the other hand, the EU AI Act’s exemption may enable entities to process special category data for bias monitoring and correction, but only if they can demonstrate that such processing is strictly necessary and subject to appropriate safeguards. To reconcile these conflicting requirements, entities must conduct a careful analysis of their data processing activities and implement a nuanced approach that balances the need to ensure fairness and transparency in AI systems with the need to protect sensitive personal data. This may involve implementing additional safeguards, such as data anonymization or pseudonymization, and ensuring that data subjects are informed about the processing of their special category data. Ultimately, entities must prioritize transparency, accountability, and data protection by design to ensure compliance with both regulations and maintain trust with their stakeholders.
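
As one example of such a safeguard, the sketch below pseudonymizes a direct identifier with a keyed hash so that records remain linkable for bias monitoring without exposing the raw identifier. This is a minimal Python illustration under assumed requirements; key management, and whether pseudonymization is a legally sufficient safeguard in a given context, are separate questions.

```python
# Illustrative sketch of pseudonymization as one possible safeguard: replace a
# direct identifier with a keyed hash so records stay linkable for bias
# monitoring without exposing the raw identifier. The key and field names are
# hypothetical; real key management is outside the scope of this sketch.
import hmac
import hashlib

SECRET_KEY = b"store-this-key-separately-from-the-data"  # hypothetical key

def pseudonymize(identifier: str) -> str:
    """Return a stable pseudonym for an identifier using HMAC-SHA256."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_id": "alice@example.com", "ethnicity": "recorded for bias audit"}
record["customer_id"] = pseudonymize(record["customer_id"])
print(record)
```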

The interplay between the GDPR and the EU AI Act creates a complex landscape for risk management and assessment, particularly with regard to data protection impact assessments (DPIAs). Article 35 of the GDPR requires data controllers to conduct DPIAs when processing is likely to result in a high risk to individuals’ rights and freedoms. However, the EU AI Act introduces a new layer of complexity, as providers of AI systems may not always be able to assess all possible uses of a system. This means that a provider’s initial risk assessment for the purposes of determining whether a system is high-risk under the AI Act may not be sufficient to exclude the need for a subsequent DPIA by the user. In fact, the same system could be subject to different risk management requirements and classifications under each law, depending on the specific use case and context.

This creates a challenge for organisations that must navigate both frameworks, as they may need to conduct multiple risk assessments and implement different risk management measures to comply with both regulations. For example, a provider of an AI system may conduct an initial risk assessment and determine that the system is not high-risk under the AI Act, but the user of the system may still need to conduct a DPIA under the GDPR if they plan to use the system in a way that could result in a high risk to individuals’ rights and freedoms. To manage this complexity, organisations will need to develop a nuanced understanding of both regulations and implement a risk management framework that takes into account the specific requirements of each law. This may involve conducting regular risk assessments, implementing robust risk mitigation measures, and ensuring that all stakeholders are aware of the potential risks and benefits associated with the use of AI systems. By taking a proactive and transparent approach to risk management, organisations can ensure compliance with both regulations and build trust with their stakeholders.
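
The sketch below captures this point in simplified form: the provider’s AI Act classification and the user’s GDPR DPIA trigger are treated as independent checks on the same use case. The boolean criteria are placeholders for the actual legal tests, which are considerably more involved.

```python
# Minimal sketch: the provider's AI Act classification and the user's GDPR DPIA
# trigger are separate questions, answered for a concrete use case. The criteria
# below are simplified placeholders, not the legal tests themselves.
from dataclasses import dataclass

@dataclass
class UseCase:
    high_risk_under_ai_act: bool           # provider's classification of the system
    likely_high_risk_to_individuals: bool  # user's assessment of this deployment

def obligations(use_case: UseCase) -> list[str]:
    duties = []
    if use_case.high_risk_under_ai_act:
        duties.append("AI Act high-risk obligations (risk management, human oversight, ...)")
    if use_case.likely_high_risk_to_individuals:
        duties.append("GDPR Article 35 DPIA before processing")
    return duties or ["Document why neither regime's heightened duties apply"]

# A system the provider classed as non-high-risk can still trigger a DPIA for the user.
print(obligations(UseCase(high_risk_under_ai_act=False, likely_high_risk_to_individuals=True)))
```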

The intersection of the GDPR and the EU AI Act raises important questions about the role of human oversight in decision-making processes that involve automated processing. Article 22(1) of the GDPR establishes a right for individuals not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. This provision is echoed in the AI Act, which requires human oversight of high-risk systems. However, the AI Act’s risk-based approach creates a potential loophole: it appears to allow low- or minimal-risk AI systems to make solely automated decisions that could still have significant effects on individuals. This raises concerns about biased or discriminatory decision-making, particularly in contexts such as credit scoring, where AI systems are increasingly used to evaluate individuals’ creditworthiness.
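
A minimal sketch of what such a safeguard could look like in practice follows: decisions with legal or similarly significant effects are routed to a human reviewer rather than issued solely by the model. The field names, threshold, and routing logic are hypothetical and shown only to illustrate the idea of a human-in-the-loop gate.

```python
# Illustrative human-in-the-loop gate: decisions with legal or similarly
# significant effects are referred to a human reviewer instead of being issued
# solely by the model. All names and thresholds here are hypothetical.
def decide(application: dict, model_score: float, reviewer_queue: list) -> str:
    significant_effect = application.get("produces_legal_effect", True)
    if significant_effect:
        # Article 22-style safeguard: a human makes or confirms the final decision.
        reviewer_queue.append({"application": application, "model_score": model_score})
        return "referred_to_human_review"
    return "approved" if model_score >= 0.5 else "declined"

queue: list = []
print(decide({"produces_legal_effect": True}, model_score=0.72, reviewer_queue=queue))
print(len(queue))  # 1: the case awaits a human reviewer
```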

The European Court of Justice’s consideration of whether creating a credit score constitutes a decision in and of itself will have significant implications for the development and deployment of AI systems that score, rank, or assess individuals. If the Court determines that credit scoring is a decision requiring human oversight, the consequences could be far-reaching for the use of AI in a range of applications, from employment screening to healthcare diagnosis. If, on the other hand, the Court determines that credit scoring is not a decision, it could set a precedent for solely automated decision-making in other contexts, potentially undermining the protections established by the GDPR. Either way, the outcome will shape the future of AI regulation in the EU and the development of AI systems that involve decision-making. As the use of AI continues to grow, it is essential that regulators, policymakers, and industry leaders work together to ensure these systems are designed and deployed in ways that respect the rights and dignity of individuals.
