
AI Impact Assessments: Merging DPIA Requirements with EU AI Act Obligations

A combined methodology for conducting data protection impact assessments and AI Act-required conformity assessments simultaneously.

Regulation

EU AI Act

Max Penalty

Up to EUR 35M or 7% of global annual turnover

Enforcing Authority

EU AI Office and national market surveillance authorities

Official Source

digital-strategy.ec.europa.eu

Executive Summary

  • The EU AI Act mandates AI Impact Assessments that align with GDPR DPIA requirements.
  • Organizations must categorize AI systems based on risk levels and conduct thorough assessments.
  • Non-compliance can result in penalties up to EUR 35 million or 7% of global turnover.
  • A robust compliance program should integrate DPIA and AIA processes while fostering stakeholder engagement.
  • Ongoing training and monitoring are essential for adapting to the evolving regulatory landscape.

The EU AI Act introduces a comprehensive framework for regulating artificial intelligence within the European Union, requiring organizations to conduct AI Impact Assessments (AIAs) that align with Data Protection Impact Assessments (DPIAs) under the General Data Protection Regulation (GDPR). This guide outlines the key compliance requirements, penalties, and practical implementation strategies for organizations navigating these intertwined obligations.


What Is the EU AI Act?

The EU AI Act, proposed by the European Commission, aims to establish a regulatory framework for artificial intelligence that ensures safety and fundamental rights while fostering innovation. It categorizes AI systems based on risk levels — from minimal to unacceptable — and imposes varying compliance obligations accordingly. The act emphasizes transparency, accountability, and human oversight, particularly for high-risk AI applications that could significantly impact individuals’ rights and freedoms.

The act is part of a broader strategy to position the EU as a leader in the ethical and responsible use of AI technologies. Organizations deploying AI systems must assess their compliance with the act, particularly concerning the implications for data protection and privacy. This intersection with the GDPR necessitates a nuanced understanding of both frameworks to effectively manage compliance risks.

Who Must Comply

The EU AI Act applies to organizations that place AI systems on the EU market or put them into service in the EU, to deployers established in the EU, and to providers and deployers outside the EU whose AI systems' outputs are used within the Union. This covers both public and private sector entities, regardless of size. The act reserves its most stringent obligations for high-risk AI systems, defined by their potential to cause significant harm to individuals or society, so organizations must first evaluate whether their AI systems fall into this category.

Moreover, organizations that process personal data in conjunction with AI systems must also adhere to GDPR requirements, particularly Article 35, which mandates DPIAs for high-risk processing activities. This dual obligation necessitates a comprehensive approach to compliance, integrating the requirements of both the EU AI Act and the GDPR.

Core Compliance Requirements

Risk assessment and categorization. Organizations must conduct a thorough risk assessment to categorize their AI systems based on the potential risks they pose. This assessment should consider the likelihood and severity of harm to individuals, as well as the broader societal implications of the AI system’s deployment.
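As a rough illustration of the categorization step, the Act's four risk tiers can be sketched as a decision function. The trigger questions below are deliberate simplifications, not the Act's actual legal tests (those sit in Articles 5 and 6, Annex III, and the transparency provisions), so treat this as a triage aid rather than a determination.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def categorize(prohibited_practice: bool,
               annex_iii_use_case: bool,
               interacts_with_humans: bool) -> RiskTier:
    """Illustrative triage: map three simplified trigger questions
    onto the Act's four tiers, most severe first."""
    if prohibited_practice:        # e.g. social scoring (Article 5)
        return RiskTier.UNACCEPTABLE
    if annex_iii_use_case:         # e.g. employment, credit, education
        return RiskTier.HIGH
    if interacts_with_humans:      # transparency obligations apply
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A real assessment would replace each boolean with a documented legal analysis; the point of the sketch is only that categorization is an ordered check, severest tier first.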

AI Impact Assessments. For high-risk AI systems, organizations must evaluate the potential impact on fundamental rights and freedoms; for certain deployers, the act formalizes this as a fundamental rights impact assessment (Article 27). This process should align with DPIA requirements under the GDPR, ensuring that data protection considerations are integrated into the assessment of AI risks.

Documentation and record-keeping. Comprehensive documentation is essential for demonstrating compliance with both the EU AI Act and GDPR. Organizations must maintain records of their AI Impact Assessments, risk assessments, and any measures taken to mitigate identified risks. This documentation should be readily available for review by regulatory authorities.

Stakeholder engagement. Engaging with stakeholders, including affected individuals and relevant experts, is crucial for identifying potential risks and impacts associated with AI systems. Organizations should establish mechanisms for ongoing dialogue and feedback to ensure that their assessments remain relevant and effective.

Human oversight and control. High-risk AI systems must incorporate mechanisms for human oversight to ensure that decisions made by AI systems can be challenged and reviewed. Organizations should implement processes that allow for human intervention, particularly in cases where AI decisions significantly affect individuals’ rights.
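A minimal human-in-the-loop gate might look like the following sketch. The routing rule, field names, and the 0.9 threshold are illustrative assumptions, not requirements from the act: the idea is simply that low-confidence or contested decisions are diverted to a human reviewer rather than auto-applied.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    confidence: float   # model's confidence in its own output, 0..1
    contested: bool = False  # has the affected person challenged it?

def route(decision: Decision, threshold: float = 0.9) -> str:
    """Queue a decision for human review if the model is uncertain
    or the affected individual has contested the outcome."""
    if decision.contested or decision.confidence < threshold:
        return "human_review"
    return "automated"
```

The design choice worth noting: a contested decision always goes to a human regardless of confidence, which is what makes AI outcomes challengeable in practice.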

Penalties and Enforcement

The EU AI Act imposes significant penalties for non-compliance: fines for prohibited AI practices reach up to EUR 35 million or 7% of an organization's global annual turnover, whichever is higher, with lower tiers (up to EUR 15 million or 3%) for most other violations. Day-to-day enforcement sits primarily with national market surveillance authorities, while the newly established EU AI Office supervises general-purpose AI models and coordinates enforcement across member states.

Organizations must be aware that the act’s enforcement mechanisms are designed to ensure accountability and transparency in AI deployment. Non-compliance not only risks substantial financial penalties but can also damage an organization’s reputation and erode public trust. Therefore, proactive compliance measures are essential to mitigate these risks.

Building a Defensible Compliance Program

To effectively navigate the complexities of the EU AI Act and GDPR, organizations should establish a robust compliance program. This program should encompass the following steps:

  1. Conduct a comprehensive inventory of all AI systems in use within the organization.

  2. Categorize AI systems based on risk levels — minimal, limited, high, or unacceptable.

  3. Develop and implement AI Impact Assessments for all high-risk AI systems.

  4. Integrate DPIA processes to ensure alignment with GDPR requirements.

  5. Establish documentation protocols to maintain records of assessments and compliance measures.

  6. Engage with stakeholders to gather insights and feedback on AI system impacts.

  7. Implement human oversight mechanisms for high-risk AI systems.

  8. Regularly review and update compliance programs to reflect changes in regulations and technology.
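The eight steps above can be tracked as a simple programmatic checklist. This is a sketch of one possible tracking structure, not a prescribed format; the step descriptions paraphrase the list.

```python
from dataclasses import dataclass

@dataclass
class ProgramStep:
    number: int
    description: str
    complete: bool = False

PROGRAM = [
    ProgramStep(1, "Inventory all AI systems in use"),
    ProgramStep(2, "Categorize systems by risk level"),
    ProgramStep(3, "Run AI Impact Assessments for high-risk systems"),
    ProgramStep(4, "Integrate DPIA processes for GDPR alignment"),
    ProgramStep(5, "Establish documentation protocols"),
    ProgramStep(6, "Engage stakeholders for feedback"),
    ProgramStep(7, "Implement human oversight mechanisms"),
    ProgramStep(8, "Review and update the program regularly"),
]

def outstanding(steps):
    """Return the steps not yet completed, in order."""
    return [s for s in steps if not s.complete]
```

Keeping the program in a machine-readable form makes it easy to surface outstanding work in audits and to timestamp when each step was last revisited.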

Practical Implementation Priorities

Integration of DPIA and AIA processes. Organizations should streamline their compliance efforts by integrating DPIA and AI Impact Assessment processes. This approach will reduce duplication of efforts and ensure that both data protection and AI-specific risks are adequately addressed.
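One way to integrate the two processes is a single assessment record that carries both the DPIA elements required by GDPR Article 35(7) and AI-specific fields alongside them. The AI-side field names below are illustrative assumptions, not terms defined in the act.

```python
from dataclasses import dataclass

@dataclass
class CombinedAssessment:
    # DPIA elements (GDPR Art. 35(7))
    processing_description: str
    necessity_and_proportionality: str
    risks_to_data_subjects: list
    mitigation_measures: list
    # AI-specific elements (illustrative field names)
    fundamental_rights_impact: str
    human_oversight_mechanism: str
    affected_groups: list

    def is_complete(self) -> bool:
        """Reviewable only when the narrative fields are filled and at
        least one risk and one mitigation are documented."""
        return all([
            self.processing_description,
            self.necessity_and_proportionality,
            self.risks_to_data_subjects,
            self.mitigation_measures,
            self.fundamental_rights_impact,
            self.human_oversight_mechanism,
        ])
```

A shared record like this means one stakeholder consultation, one risk register, and one sign-off feed both compliance obligations instead of two parallel paper trails.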

Training and awareness. It is essential to provide training for employees on the requirements of the EU AI Act and GDPR. Raising awareness about compliance obligations and best practices will foster a culture of accountability and responsibility within the organization.

Monitoring and auditing. Organizations should establish ongoing monitoring and auditing processes to evaluate the effectiveness of their compliance programs. Regular audits will help identify potential gaps and areas for improvement, ensuring that compliance efforts remain effective and up-to-date.

Collaboration with legal and compliance teams. Engaging legal and compliance experts is crucial for navigating the complexities of the EU AI Act and GDPR. Organizations should foster collaboration between technical teams, legal advisors, and compliance officers to ensure a holistic approach to compliance.

Adapting to evolving regulations. The regulatory landscape surrounding AI is rapidly evolving. Organizations must stay informed about changes to the EU AI Act and related regulations, adapting their compliance strategies accordingly to mitigate risks associated with non-compliance.

Run a Free Privacy Scan

Before building a compliance program, an automated scan of your public-facing properties identifies the gaps that carry the most immediate regulatory risk — undisclosed trackers, consent mechanism failures, data sharing without adequate notice, and policy misalignments. BD Emerson’s privacy scanner produces a detailed findings report against EU AI Act requirements within minutes.

Run your free scan or speak with a privacy expert to discuss your compliance obligations under the EU AI Act and build a prioritized remediation plan.

Regulatory Crosswalk

Organizations subject to this regulation often operate under these overlapping frameworks: GDPR Art. 35, ISO 42001, NIST AI RMF. BD Emerson maps controls across frameworks to reduce duplicated compliance effort.


Evaluate your compliance posture now

BD Emerson's automated scanner audits your public-facing properties against your applicable regulations in minutes, not weeks.