What is Artificial Intelligence?

Put simply, artificial intelligence (AI) refers to the ability of machines to learn and make decisions based on data and analytics. One of the first sectors to deploy AI was the financial services sector. AI has been, and continues to be, used to monitor transactions for suspicious activity and fraud, support anti-money laundering efforts, carry out risk assessments, assess a customer's creditworthiness and offer tailored products and investment advice.
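To make the idea concrete, the short sketch below (illustrative only, using the open-source scikit-learn library) shows the basic pattern behind AI-driven transaction monitoring: a model learns what "normal" transactions look like from historical data, and new transactions that deviate from that pattern are flagged for human review. The features, data and parameters are all hypothetical.

```python
# Illustrative sketch only: the pattern behind AI-based transaction
# monitoring. Data, features and parameters are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical history of "normal" transactions: [amount_eur, hour_of_day].
history = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=1000),  # typical amounts
    rng.integers(8, 20, size=1000),                 # typical business hours
])

# "Learning from data": fit a model of normal customer behaviour.
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# "Making decisions": score new transactions; -1 marks an outlier to review.
new_transactions = np.array([[55.0, 14.0], [25000.0, 3.0]])
for tx, label in zip(new_transactions, model.predict(new_transactions)):
    status = "flag for review" if label == -1 else "ok"
    print(f"amount={tx[0]:>9.2f} EUR, hour={int(tx[1]):02d} -> {status}")
```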

While the economic and societal benefits of AI are widely recognised, so too are the inherent risks, including regulatory risks, and negative consequences associated with the technology. To balance these considerations, the EU Commission proposed the draft Artificial Intelligence Act (AI Act), which is currently going through the EU legislative process. See our latest article on the AI Act here.

The AI Act, once in force, is expected to have broad applicability. It will apply to all sectors, public and private. It will also apply to natural or legal persons who are providers, deployers, importers and distributors of AI systems. Further, it will apply to product manufacturers and authorised representatives of providers of AI systems, as well as to persons adversely impacted by AI systems.

The proposed rules will be enforced through a governance system at Member State level, building on already existing structures, and a cooperation mechanism at European Union level. In the paragraphs below, we consider this further, as well as the regulatory implications for regulated entities and the interplay between regulated entities and regulators.

The AI Act and Governance Implications

1. Regulators

From the regulators' perspective, the AI Act will establish the European Artificial Intelligence Board (EAIB). The EAIB will be composed of representatives from the Member States and the Commission. The purpose of the EAIB is to facilitate the harmonised implementation of the regulation by contributing to effective cooperation between national supervisory authorities and the European Commission (Commission). It will provide advice, recommendations, guidance and expertise to the Commission on specific questions related to AI.

The AI Act also seeks to enhance the role of National Competent Authorities (NCAs). Member States will be under an obligation to ensure that their NCAs have adequate financial and human resources to fulfil their tasks. NCAs must have a sufficient number of personnel permanently available whose competencies and expertise include an in-depth understanding of AI technologies, data and data computing, fundamental rights, health and safety risks, and knowledge of existing standards and legal requirements.

The recently leaked final draft of the AI Act (see our recent article here) refers to the AI Office. The AI Office will encourage and facilitate the drawing up of codes of practice at EU level to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content. The AI Office may invite the providers of general-purpose AI models, as well as relevant NCAs, to participate in drawing up codes of practice. Civil society organisations, industry, academia and other relevant stakeholders, such as downstream providers and independent experts, may support the process. The AI Office will aim to ensure that participants in the codes of practice report regularly to it on the implementation of their commitments, the measures taken and the outcomes. The AI Office will also develop and maintain a single information platform providing information for all operators in the EU and will organise campaigns to raise awareness about obligations under the AI Act.

The AI Act will also seek to involve existing authorities such as Data Protection Authorities (DPAs), Market Surveillance Authorities (MSAs) and sector-specific regulators. The Commission will also have substantial responsibilities in overseeing the consistent application of the AI Act across Member States. This structure will necessitate close coordination across various regulatory bodies, each with distinct yet interconnected roles in overseeing the AI ecosystem.

2. Regulated Entities

The AI Act will introduce a risk-based classification system under which the level of regulatory scrutiny corresponds to the level of risk posed by an AI system. High-risk AI applications will undergo stringent conformity assessments and continuous post-market monitoring, while lower-risk applications will be subject to less onerous transparency and information obligations.
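By way of illustration, the simplified sketch below shows how an organisation might represent this tiered scheme for internal triage. The tier names follow the Act's general approach, but the obligation summaries are illustrative paraphrases rather than the statutory text.

```python
# Simplified sketch of a risk-tier-to-obligations mapping for internal triage.
# The tiers reflect the AI Act's general scheme; the obligation summaries are
# illustrative paraphrases, not the statutory text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: [
        "conformity assessment before placing on the market",
        "continuous post-market monitoring",
        "technical documentation, logging and human oversight",
    ],
    RiskTier.LIMITED: ["transparency and information obligations"],
    RiskTier.MINIMAL: ["no additional AI Act obligations"],
}

def triage(system_name: str, tier: RiskTier) -> None:
    """Print the simplified duty list for one system in an AI inventory."""
    print(f"{system_name} ({tier.value}):")
    for duty in OBLIGATIONS[tier]:
        print(f"  - {duty}")

# Creditworthiness assessment is among the high-risk use cases under the Act.
triage("credit-scoring model", RiskTier.HIGH)
```

Mapping each AI system in an internal inventory to a tier in this way is one pragmatic starting point for scoping compliance work.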

Regulated entities will be required to maintain extensive documentation and records, ensuring their AI systems are transparent and allow for human oversight. The emphasis on high-quality, non-discriminatory datasets for training AI systems means that entities will need to invest in robust data governance practices. The AI Act also imposes obligations to address biases and prevent discrimination, creating a need for new design methodologies.
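As a concrete, and deliberately simplified, illustration of what a bias check might look like in practice, the sketch below computes the gap in approval rates between two applicant groups (a "demographic parity" style metric). The data is hypothetical, and a real data governance programme would apply a broader set of metrics alongside legal review.

```python
# Illustrative bias check: demographic parity difference between two groups.
# The decision data here is hypothetical.
def demographic_parity_difference(outcomes_a, outcomes_b):
    """Difference in approval rates between group A and group B (0 = parity)."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return rate_a - rate_b

# Hypothetical loan-approval decisions (1 = approved) for two applicant groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

gap = demographic_parity_difference(group_a, group_b)
print(f"approval-rate gap: {gap:.2f}")  # a large gap may warrant investigation
```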

Moreover, the AI Act's cross-border nature means that entities operating in multiple EU Member States will need to ensure that their AI systems comply with a consistent set of rules. Compliance with the AI Act could therefore become a competitive advantage, as adherence to ethical AI use is increasingly valued by consumers.

3. Financial Services in the AI Act

Streamlining AI Quality Management with Financial Governance

Article 17(3) of the AI Act encapsulates a pragmatic approach to quality management systems for AI, specifically for financial institutions already under the purview of EU financial services legislation. This provision acknowledges the robustness of existing financial governance frameworks, providing that compliance with internal governance rules will, with certain exclusions, be deemed to satisfy the AI quality management requirements. It represents an attempt to avoid duplicative regulatory burdens, recognising the stringent standards to which financial institutions are already subject. The reference to harmonised standards further underscores the commitment to a high level of quality in AI systems deployed within financial services, ensuring they are both safe and effective.

Maintaining Documentation and Logs within Established Frameworks

Articles 18(2) and 20(2) extend this integrated approach to the maintenance of technical documentation and logs generated by high-risk AI systems. By embedding these requirements within the documentation frameworks already mandated for financial institutions, the AI Act leverages existing compliance mechanisms. This strategy not only facilitates a streamlined regulatory process but also enhances the traceability and accountability of AI systems, crucial aspects in managing the risks associated with their deployment in sensitive financial operations.
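A hypothetical sketch of what one such log entry might look like is set out below. The field names and storage details are assumptions for illustration; the point is that structured, attributable records of AI decisions can slot into the record-keeping frameworks financial institutions already operate.

```python
# Sketch of a structured audit-log entry for a high-risk AI decision, of the
# kind that could slot into an institution's existing record-keeping
# framework. All field names and values are illustrative.
import json
from datetime import datetime, timezone

def log_ai_decision(system_id: str, model_version: str,
                    inputs_digest: str, decision: str, reviewer: str) -> str:
    """Serialise one decision record; in practice this would be written to
    tamper-evident storage under the institution's retention rules."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "inputs_digest": inputs_digest,   # hash of inputs, not raw personal data
        "decision": decision,
        "human_reviewer": reviewer,       # supports the human-oversight duty
    }
    return json.dumps(record)

print(log_ai_decision("credit-scoring-v2", "2024.02.1",
                      "sha256:ab12...", "refer_to_manual_review", "analyst-07"))
```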

Monitoring and Reporting: Ensuring AI System Integrity

The obligations for deployers (users), as outlined in Article 29, highlight a vigilant stance on monitoring high-risk AI systems and reporting anomalies. For financial institutions, these duties are deemed fulfilled through adherence to the governance arrangements under financial services legislation. This provision recognises the comprehensive nature of financial regulatory frameworks in safeguarding against risks, while also mandating a proactive approach to identifying and mitigating any AI-related issues. The emphasis on immediate reporting and the suspension of AI systems upon identification of serious incidents or risks underscores the commitment to consumer protection and system integrity within the financial sector.
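The monitor, report and suspend pattern described above can be sketched in simplified form as follows. The incident categories, thresholds and notification mechanics are hypothetical; in practice, reporting would flow through the institution's existing governance channels.

```python
# Minimal sketch of the monitor -> report -> suspend pattern described above.
# Incident types, thresholds and the notification channel are all hypothetical.
SERIOUS_INCIDENTS = {"discriminatory_outcome", "safety_failure", "data_breach"}

class AISystemMonitor:
    def __init__(self, system_id: str):
        self.system_id = system_id
        self.suspended = False

    def report(self, incident_type: str, detail: str) -> None:
        # In practice: notify the relevant authority through existing
        # financial-services governance channels.
        print(f"[REPORT] {self.system_id}: {incident_type} - {detail}")
        if incident_type in SERIOUS_INCIDENTS:
            self.suspend()

    def suspend(self) -> None:
        # Serious incident: take the system out of use pending review.
        self.suspended = True
        print(f"[SUSPEND] {self.system_id} taken out of service pending review")

monitor = AISystemMonitor("credit-scoring-v2")
monitor.report("latency_degradation", "p99 latency above target")     # logged only
monitor.report("discriminatory_outcome", "approval-rate gap breach")  # suspends
```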

Facilitating Consistency and Minimising Burden

Articles 61(4) and 63(4) address the integration of post-market monitoring and the role of market surveillance authorities, respectively. By allowing financial institutions to integrate AI system monitoring within existing frameworks and designating financial supervisory authorities as the relevant market surveillance bodies, these provisions ensure a consistent and sector-specific approach to AI regulation. This tailored oversight mechanism minimises additional burdens on financial institutions, facilitating a smoother integration of AI technologies while maintaining a vigilant regulatory stance.

Implications and Path Forward

The AI Act's provisions on financial services seem to exemplify a regulatory ethos that values integration over duplication, recognising the comprehensive nature of existing financial governance frameworks. This approach not only facilitates the seamless adoption of AI technologies in financial services but also ensures that the regulatory focus remains sharply on managing the unique risks posed by AI. For financial institutions, this means navigating a regulatory landscape that is both familiar and evolving, requiring a deep understanding of both financial and AI-specific regulations.

As AI continues to transform financial services, the AI Act offers a blueprint for how regulatory frameworks can adapt to technological advancements without compromising on safety, efficacy, or consumer protection. The success of this approach will hinge on the continuous dialogue between regulators, financial institutions, and AI developers, ensuring that as AI technologies evolve, so too will the frameworks that govern them. In this dynamic interplay, the goal remains clear: harnessing the potential of AI to enhance financial services while safeguarding against its risks.

Collaboration between Regulators and Regulated Entities

The collaborative dynamics between regulators and regulated entities under the AI Act will be vital. Entities will likely need to engage in ongoing dialogue with regulators, adapting to guidance and ensuring that their AI systems are aligned with legal requirements. This engagement will be critical, not only for compliance, but also for shaping a market environment that fosters innovation while protecting fundamental rights and safety.

The AI Act thus represents a dual paradigm shift: regulators will need to expand their capabilities and collaborate to enforce the AI Act effectively, while regulated entities will have to embed compliance into their AI development and deployment processes, ensuring they meet the EU's ethical and technical standards for AI systems.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.