EU AI Act's 'Deployers' Definition Has Wide-ranging Significance For Life Sciences

Identifying businesses as deployers of high-risk AI systems is crucial for understanding new regulatory responsibilities

The European Union's pioneering legal framework for artificial intelligence (AI), which was approved by the Council of the EU in May, has signalled a significant shift in AI regulation internationally.

Unlike previous legislation on industrial goods, the EU AI Act extends its reach to include system deployers and imposes a comprehensive set of obligations, particularly affecting the life sciences and healthcare sectors. The new legislation's oversight extends beyond product delivery to monitoring AI system performance in real-world healthcare settings.

Deployers defined

An AI deployer is defined as any individual or entity that uses an AI system within their professional scope, excluding personal and non-professional activities. This expansive definition captures a wide array of businesses that utilise AI, whether as part of their core operations or for ancillary activities such as organisational management or recruitment.

Healthcare professionals and healthcare institutions will fall within this new concept when they use AI, as will any other organisation deploying AI, including contract research organisations, contract manufacturing organisations, and pharma or medtech businesses at large.

The EU AI Act is designed to be comprehensive, applying to deployers established within the EU as well as to deployers in third countries where the AI system's output is used within the EU.

In the context of life sciences, AI has the potential to revolutionise areas such as patient selection for clinical trials or clinical investigations and the analysis of data critical for marketing authorisation and CE-marking processes. The scope of AI applications is vast, spanning the entire medicinal product and medical device lifecycle from development to post-market analysis.

Regulatory landscape for deployers

Deployers are tasked with ensuring that their teams possess an adequate level of AI literacy, taking into account the teams' technical knowledge and the context in which the AI systems will be employed. The regulatory burden intensifies with the deployment of high-risk AI systems, in line with the Act's risk-based approach.

Deployers of high-risk AI systems are obligated to adhere to the systems' instructions for use and to implement technical and organisational measures to that effect. This approach is in sharp contrast with current EU-wide pharmaceutical and medical device legislation, which does not typically place usage obligations upon the end users of regulated health products. It remains to be seen how this new requirement will align with the occasional off-label use of health technologies in healthcare settings, which typically falls under the discretion of physicians or hospitals.

Deployers are also required to ensure human oversight, reflecting the legislator's intent to maintain a human-centric AI framework. These duties are in addition to other legal responsibilities under EU or national law.

For high-risk AI systems, deployers must ensure that the input data under their control is relevant and representative. They are also responsible for ongoing post-market monitoring, reporting any risks to providers and keeping detailed logs of AI system usage. Employers have a duty to inform their workforce about the deployment of high-risk AI systems. Additionally, private entities in the healthcare sector providing public services must conduct a thorough fundamental rights impact assessment before deploying high-risk AI systems.

Legislative rationale

The EU AI Act addresses the burgeoning use of AI in medicine, preparing stakeholders for the regulatory challenges ahead. Deployers are uniquely positioned to monitor AI systems in action, identifying risks that may not have been apparent during the development phase, thus ensuring the safety and protection of fundamental rights.

The legislation promotes a widespread understanding of AI beyond the realm of providers, equipping deployers with the knowledge necessary to make informed decisions about AI in their professional activities. This includes interpreting AI outputs and comprehending the implications of AI-assisted decisions for affected individuals.

Balancing compliance with clarity

To mitigate the compliance burden on deployers, the EU AI Act emphasises the importance of transparency, especially for high-risk AI systems.

Providers must communicate effectively, enabling deployers to fully grasp how the AI system operates, assess its functionality, and understand its capabilities and limitations.

Providers are encouraged to maintain a continuous, iterative risk management system throughout the AI system's lifecycle, incorporating performance metrics and benchmarks to foster transparency and fairness.

The instructions for use supplied by the provider must be clear and easily understood, avoiding any potential for misunderstanding or misleading statements, thus ensuring deployers are well-informed when utilising high-risk AI systems.

Osborne Clarke comment

The broad definition of "deployer" necessitates a vigilant assessment of high-risk AI uses in pharma and medtech. Any life sciences business using a high-risk AI system under its authority will be considered a deployer, except where the AI system is used in the course of a personal non-professional activity. Clinical professionals, regulatory teams, quality specialists, medical affairs representatives, data scientists, marketing departments and even legal teams within healthcare businesses will be impacted.

The obligations imposed upon deployers are extensive. They range from complying with instructions for use and assigning qualified individuals to oversee the systems, to maintaining control over the relevance of input data. Other requirements include monitoring AI system operations, reporting risks and serious incidents promptly, and keeping system-generated logs for at least six months. Deployers must inform workers about AI use in the workplace, comply with registration requirements applicable to public authorities, and use the information supplied by providers to conduct any necessary data protection impact assessments.

With the EU AI Act, the pharmaceutical industry faces a novel regulatory landscape, contrasting with the medical devices sector's existing familiarity with such frameworks. Both types of businesses are advised to proactively review their AI product portfolios, with a focus on high-risk categories, to devise strategies that are resilient to the unfolding AI regulation narrative.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
