The European Union has taken a historic step forward with political agreement on the Artificial Intelligence Act (AI Act or Act), heralding a new era of digital governance.

This landmark legislation is poised to establish the most comprehensive AI regulatory framework to date, with profound implications for Artificial Intelligence (AI) development and deployment within the EU and beyond. We await sight of the final approved text of the AI Act, but the political agreement reached on 8 December 2023 represents a very significant step forward. The following summary is based on our current understanding of the AI Act and is subject to change pending the publication of the final version.

Prohibited AI Systems

The following systems will be prohibited, with just six months for companies to ensure compliance:

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent people's free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

High-Risk AI Systems

The AI Act identifies two categories of high-risk AI systems.

The first category, identified in Annex II, covers AI systems that are safety components of products subject to specified EU regulations.

The second category comprises the use cases listed in Annex III. These include:

  • AI systems used for biometric identification and categorisation, including emotion recognition, but excluding certain cases outlined in Article 5.
  • AI systems used to manage critical infrastructure, such as traffic, digital infrastructure, and utility supply.
  • AI applications in education and vocational training used for significant decisions regarding admissions, assessments, and the monitoring of student behaviour.
  • AI systems used in employment settings for recruitment, employment decisions, and performance monitoring.
  • AI applications used by public authorities to assess eligibility for public assistance, healthcare services, and emergency response.
  • Law enforcement uses of AI for individual risk assessments, emotional state detection, deep fake detection, and criminal profiling.
  • AI systems used in migration, asylum, and border control for emotional state detection, risk assessment, and document verification.
  • AI systems aiding judicial authorities or influencing democratic processes, such as electoral outcomes and social media recommender systems.

High-Risk AI Systems Provider Obligations

Providers of high-risk AI systems will be subject to several key requirements:

  1. Risk Management: The AI Act mandates a rigorous risk management framework that encompasses the identification and mitigation of risks anticipated from both the intended use and potential misuse of AI systems.
  2. Registration: High-risk AI systems must be registered within a publicly accessible database to enhance regulatory supervision.
  3. Data Governance: The Act requires the establishment of data governance measures that ensure monitoring for biases and the use of suitable, representative data sets that accurately reflect the demographics of the intended user base. High-risk AI systems must be designed and developed to manage biases effectively, ensuring that they are non-discriminatory and respect fundamental rights.
  4. Transparency: The Act calls for transparency in AI operations, mandating that user guidelines and in-depth technical documentation are readily available.
  5. Human Oversight: The Act enforces the principle of human oversight over high-risk AI systems, which must incorporate features such as the capacity for AI explanations and the creation of detailed operational logs, ensuring that human discretion remains part of the AI system's deployment and that risks are minimised.
  6. Accuracy, Robustness, and Cybersecurity: The Act enforces standards of precision, resilience, and digital security, obligating the execution of stringent testing and continuous monitoring, along with the adoption of robust cybersecurity measures.
  7. Record Keeping: Automated logging of events will be required for high-risk AI systems. Providers of high-risk AI systems must maintain thorough documentation to demonstrate their compliance with the regulation. This includes records of programming and training methodologies, data sets used, and measures taken for oversight and control.
  8. Data Protection Impact Assessments and Fundamental Rights Impact Assessments: All high-risk AI systems must undergo Fundamental Rights Impact Assessments to confirm their alignment with human rights principles and must conform to GDPR rules on carrying out data protection impact assessments where necessary.

High-Risk Systems User Obligations

Under the AI Act, users of high-risk AI systems also have obligations. While the Act primarily targets providers of AI systems, users are also subject to certain rules, particularly when they are utilising high-risk AI applications. Many organisations may erroneously believe that just because they are using an AI product as a subscriber or user, the AI Act will not apply to them. This is not the case. Users of high-risk AI systems will have the following obligations under the AI Act:

  1. Adherence to Human Oversight Requirements: Users must comply with the human oversight requirements stipulated by the AI Act.
  2. Competence and Training of Oversight Personnel: Users must confirm that personnel responsible for human oversight are competent, properly qualified and trained, and that they have access to necessary resources for effective AI system supervision in line with Article 14.
  3. Cybersecurity and Robustness Measures: Users are responsible for ensuring that robustness and cybersecurity measures are in place, are regularly monitored for effectiveness, and are updated as needed.
  4. Data Relevance and Representativeness: Where users have control over the input data used by the high-risk AI system, they must ensure that the data is relevant and sufficiently representative.
  5. Operational Monitoring and Reporting: Users must monitor the operation of the high-risk AI system according to its usage instructions and promptly inform providers about any operational issues as per Article 61. If such use gives rise to potential risks, users must inform the provider or distributor and the relevant national supervisory authorities, and cease using the system until the issue is resolved.
  6. Log Retention and Compliance: Users will need to preserve logs generated by the high-risk AI system, as far as they are within their control, to demonstrate compliance with the AI Act, for post-use audits of malfunctions, incidents, or misuse, and to ensure proper functioning throughout the system's lifecycle. These logs should be retained for at least six months or as determined by industry standards and legal obligations.
  7. Worker Consultation and Notification: Before deploying a high-risk AI system in the workplace, users must consult with worker representatives to reach an agreement per Directive 2002/14/EC and inform affected employees about the system's implementation.
  8. Data Protection Impact Assessments: Users must use the technical information provided with the AI system when carrying out data protection impact assessments (per Article 35 of the GDPR or Article 27 of Directive (EU) 2016/680), and publish a summary suited to the specific use and context of the AI system.
  9. Informing Individuals Subject to AI Systems: Users will need to inform individuals subject to high-risk AI systems about the system's use, its purpose, and the nature of the decisions it assists in making, along with their right to an explanation.

Foundation Models and General Purpose AI (GPAI)

GPAI and foundation models must adhere to specific and rigorous standards reflecting their wide-ranging applications and possible effects. These include comprehensive transparency protocols, the requirement that models with high-risk functionality be evaluated for systemic risks, and the duty to clearly communicate to users when they are engaging with generative AI systems. This represents a significant step back from what the EU Parliament initially proposed in June 2023. Foundation models will be regulated based on compute power. Following the approach of President Biden's Executive Order, the systemic-risk obligations will apply to models whose training required more than 10^25 floating-point operations (FLOPs) of compute, a threshold currently met only by the largest of the large language models.
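To make the compute threshold concrete, the sketch below estimates training compute using the common industry rule of thumb of roughly 6 FLOPs per parameter per training token. That approximation and the example model sizes are illustrative assumptions, not part of the AI Act; a provider's actual assessment would rest on the final text and any accompanying guidance.

```python
# Minimal sketch: estimating whether a model's training compute crosses the
# 10^25 FLOP threshold reported in the political agreement.
# The "6 * parameters * training_tokens" formula is a widely used rule of
# thumb, not part of the AI Act; the model figures below are hypothetical.

THRESHOLD_FLOPS = 1e25  # threshold reported in the political agreement


def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: about 6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens


def crosses_threshold(parameters: float, training_tokens: float) -> bool:
    """Return True if the estimated training compute meets or exceeds the threshold."""
    return estimate_training_flops(parameters, training_tokens) >= THRESHOLD_FLOPS


# Hypothetical 70-billion-parameter model trained on 2 trillion tokens:
print(crosses_threshold(70e9, 2e12))   # ~8.4e23 FLOPs -> False
# Hypothetical 1-trillion-parameter model trained on 10 trillion tokens:
print(crosses_threshold(1e12, 10e12))  # ~6.0e25 FLOPs -> True
```

On this rough estimate, only very large frontier-scale models would trip the threshold, which is consistent with the Act's stated focus on the largest general-purpose models.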

Penalties and Enforcement: Upholding the AI Act

The AI Act introduces a stringent penalty regime for non-compliance, with fines of up to €35 million or 7% of global annual turnover for violations involving prohibited AI systems. Lesser, yet still substantial, fines apply to other violations, with caps in place to protect small and medium-sized enterprises (SMEs).
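As a simple illustration of how the headline cap scales with company size, the sketch below computes the maximum potential fine for a prohibited-AI violation, assuming the cap is the higher of €35 million and 7% of global annual turnover. That "whichever is higher" reading reflects reports of the political agreement and remains subject to the final text; the SME-specific caps are not modelled here.

```python
# Minimal sketch of the headline fine ceiling for prohibited-AI violations.
# Assumes the ceiling is the higher of EUR 35 million and 7% of global annual
# turnover (per reports of the political agreement, subject to the final text).

def max_fine_prohibited_ai(global_annual_turnover_eur: float) -> float:
    """Return the headline maximum fine in euros for a prohibited-AI violation."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)


# Example: a company with EUR 2 billion in global annual turnover
print(f"EUR {max_fine_prohibited_ai(2_000_000_000):,.0f}")  # EUR 140,000,000
```

For a company with €2 billion in global turnover, the 7% limb governs, giving a ceiling of €140 million; for companies with turnover below roughly €500 million, the €35 million figure is the larger.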

Business Impacts and Strategic Shifts

Businesses that are heavily invested in prohibited technologies, such as biometric categorisation and emotion recognition, may need to consider major strategic shifts. Additionally, enhanced transparency requirements might challenge the protection of intellectual property, necessitating a balance between disclosure and maintaining trade secrets.

Companies may also need to invest in higher-quality data and advanced bias management tools, potentially increasing operational costs but enhancing AI systems' fairness and quality.

The documentation and record-keeping requirements will impose a significant administrative burden, potentially affecting the time to market for new AI products.

Integrating human oversight into high-risk AI systems will require system design and deployment changes, along with potential staff training.

The substantial fines for non-compliance represent a significant financial risk.

Timelines

Implementation periods will commence when the final wording of the text is approved by the EU, which is expected to happen in early 2024. The timelines currently suggested are:

  • six months for compliance with the rules on prohibited AI systems;
  • 12 months for GPAI and foundation models;
  • 24 months for high-risk systems based on Annex III;
  • 48 months for high-risk systems based on Annex II.

Conclusion

The AI Act sets a new global standard for the ethical development and use of AI technologies. With its comprehensive scope, explicit prohibitions, and strong enforcement mechanisms, the Act not only reshapes the European AI landscape but also signals a shift in the global dialogue on AI governance. As companies prepare for the changes the Act necessitates and the EU moves from agreement to implementation, with the final text expected in early 2024, the AI Act promises to usher in a future where AI is developed and used with the highest regard for fundamental rights.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.