The European Commission's (the Commission) decision establishing the AI Office enters into force on 21 February 2024.

The AI Office will implement the future AI Act at EU level. It is intended to become the central coordination body for AI policy, cooperating with other Commission departments, EU bodies, Member States and the stakeholder community. It will promote the EU approach to AI governance and contribute to the EU's international activities on AI. More generally, the AI Office should build up knowledge and understanding of AI and foster AI uptake and innovation.

The AI Office will also play a key role in the preparation of secondary legislation, guidance, standards and codes of practice to facilitate the uniform application of the AI Act, as well as in the promotion of innovation ecosystems and the development and uptake of trustworthy and beneficial AI in the EU.

The AI Office will collaborate with a range of actors and bodies, such as a scientific panel of independent experts, an advisory forum of stakeholders, the AI Board, the European Data Protection Supervisor, the European Centre for Algorithmic Transparency, and international partners and initiatives on AI governance.

The AI Office: a new body for AI governance

The AI Office is a new body within the European Commission that will be responsible for the implementation and enforcement of the AI Act, the comprehensive and ambitious law that will regulate AI across the EU. The AI Act was politically agreed in December 2023 and has since been working its way through the remaining procedural votes; it is expected to be formally signed into law in or around April 2024. It will introduce a risk-based approach to AI, imposing stricter rules and obligations on high-risk AI systems that affect fundamental rights, safety or security.

The AI Office will play a pivotal role in the enforcement architecture of the AI Act, as it will have exclusive powers to supervise and investigate providers of General Purpose AI (GPAI) models, and to request or impose measures to ensure compliance with the AI Act, such as risk mitigation, withdrawal or recall. GPAI models are AI models that can perform multiple tasks across different domains and contexts, such as OpenAI's GPT-4, which powers ChatGPT. GPAI models, particularly those designated as posing systemic risk, are considered to present a high level of risk and complexity, as they can affect a wide range of areas and applications and may be difficult to understand, control or predict.

The AI Act also introduces the following governance structures in addition to the AI Office:

  • The European Artificial Intelligence Board: Comprising representatives from the Member States, this board plays a crucial role in guiding the implementation of the AI Act at both national and EU levels.
  • A Scientific Panel: Integrating the scientific community into the governance process, this panel of independent experts is tasked with providing impartial and objective insights into AI's evolving landscape.
  • An Advisory Forum: This entity facilitates stakeholder input, ensuring that the voices of those impacted by the AI Act are heard and considered.

Codes of Practice

Central to the EU's governance framework is a collaborative approach that encourages the drawing up of codes of practice to enhance compliance. This not only involves regulatory bodies and AI providers but also opens the door for broad stakeholder engagement, ensuring that the regulatory environment is both comprehensive and adaptable.

The AI Office will have a supporting role in the preparation of secondary legislation, guidance, standards and codes of practice to facilitate the uniform application of the AI Act. It will provide the secretariat for the AI Board, a body composed of representatives of the Member States that will advise and assist the Commission on AI matters. It will also collaborate with a scientific panel of independent experts and an advisory forum of stakeholders to integrate their input and expertise into AI governance. The scientific panel will provide scientific and technical advice on AI, such as the definition and classification of AI systems, the identification and assessment of risks, and the development and evaluation of methods and techniques. The advisory forum will provide opinions and recommendations on AI, taking into account the views and interests of various groups, such as civil society, academia, industry, consumers, workers and public authorities.

The codes of practice are designed to be more than just guidelines; they are envisioned as actionable frameworks with clear objectives, commitments, and key performance indicators. This structured approach aims to translate broad regulatory goals into tangible outcomes, fostering a culture of accountability and continuous improvement among AI practitioners.

The AI Office, together with the AI Board, is tasked with ensuring that the codes of practice not only address specific obligations outlined in the AI Act but also tackle broader issues such as systemic risks, information accuracy, and risk management strategies. This approach underscores the EU's recognition of the multifaceted challenges posed by AI, from evolving market and technological landscapes to the nuanced nature of systemic risks within the EU.

Participants in the codes of practice are expected to regularly report on their implementation efforts, providing a basis for ongoing evaluation by the AI Office and AI Board. This feedback loop is essential for assessing the codes' effectiveness and their contribution to the overarching goal of regulatory compliance. The AI Office's role in facilitating review and adaptation of the codes in response to emerging standards further exemplifies the EU's commitment to agile governance, responsive to technological advancements.

Furthermore, the AI Office and Member States are encouraged to promote codes of conduct for AI systems not classified as high-risk. These voluntary codes aim to extend ethical guidelines and best practices to a broader range of AI applications, highlighting the EU's holistic approach to AI governance.

GPAI Models

The AI Office plays a pivotal role in ensuring the compliance, monitoring and governance of GPAI models within the EU's regulatory framework. Tasked with overseeing the adherence of GPAI models to established codes of practice, the AI Office engages a broad spectrum of stakeholders, including providers, national authorities, civil society and academia, in the collaborative development of these codes to address the specific challenges and obligations associated with GPAI models. It is responsible for evaluating compliance through regular reporting and key performance indicators, facilitating standardised regulatory templates, and raising awareness of obligations under the AI Act. Furthermore, the AI Office collaborates with national market surveillance authorities to enforce the rules, responding to alerts of systemic risks posed by GPAI models with the necessary investigative and enforcement actions. This comprehensive approach underscores the AI Office's central role in balancing innovation with safety, transparency and accountability in the deployment of GPAI technologies across the EU.

To address the challenges posed by general-purpose AI models, particularly those with systemic risks, the AI Act establishes a framework for monitoring, compliance, and enforcement. This includes:

  • Scientific Panel of Independent Experts: Tasked with supporting the AI Office's monitoring activities, this panel can issue alerts on potential risks associated with AI models.
  • Joint Investigations: The AI Act facilitates joint activities between market surveillance authorities and the Commission to promote compliance and identify non-compliance across Member States.
  • Centralised Supervision: For AI systems based on general-purpose models provided by the same entity, supervision is centralised at the EU level through the AI Office, streamlining the regulatory process and avoiding overlapping competences.

Regulatory Collaboration

Collaboration is at the heart of the AI Office's operations. By working closely with stakeholders, including experts from the scientific community and AI developers, the AI Office aims to harness collective expertise to advance best practices. This collaborative ethos extends to the Commission's Directorates-General and relevant EU bodies, fostering cross-sectoral cooperation that underscores the multifaceted nature of AI and its implications across different policy domains.

The AI Office will also coordinate enforcement of the AI Act for AI systems embedded in services already covered by other EU legislation, such as social media platforms and search engines, which are subject to the Digital Services Act and the Digital Markets Act, or online advertising and content moderation, which are subject to the e-Commerce Directive and the Audiovisual Media Services Directive. The AI Office will ensure consistency and coherence between the AI Act and these other EU laws, and will cooperate with the national competent authorities and the other EU bodies responsible for their supervision and enforcement.

A key feature of this regulatory strategy is its inclusive nature. The AI Office is set to invite a diverse range of participants to contribute to the codes of practice, from AI model providers to national authorities, civil society, industry stakeholders, and academia. This collaborative effort is crucial for capturing the breadth of challenges and opportunities presented by AI, ensuring that regulations are both effective and reflective of the ecosystem's diversity.

The AI Act also encourages cross-border cooperation and the establishment of regulatory sandboxes, fostering innovation while ensuring regulatory oversight. By making information on AI sandboxes publicly available, the EU aims to stimulate interaction and learning across borders, enhancing the collective understanding and management of AI technologies.

Market Monitoring

An integral part of the AI Office's role is to keep a vigilant eye on the evolution of AI markets and technologies. This includes developing tools for evaluating AI models, especially those posing systemic risks, and monitoring their implementation and potential infringements. The AI Office is also charged with the critical task of identifying unforeseen risks, ensuring that AI systems adhere to the EU's legislative framework, and supporting the enforcement of rules on prohibited AI practices and high-risk systems.

The AI Act delineates a collaborative framework for supervision and enforcement, empowering the AI Office to monitor compliance and engage with national market surveillance authorities. This cooperative model is designed to ensure that AI systems, especially those posing high risks, meet the EU's stringent requirements, safeguarding public interest.

Conclusion

The establishment of the AI Office marks a significant milestone in the EU's journey towards a harmonised AI regulatory framework that balances innovation with accountability. This forward-looking initiative reflects not only the EU's commitment to leading in AI governance but also its dedication to securing a future where AI serves the common good, reinforcing the principles of trust and safety that are paramount in the digital age.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.