On 18 June 2021, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) issued a joint opinion on the European Union's proposed AI Regulation, which the European Commission published earlier this year.

While welcoming the proposal, the EDPB and EDPS have also called for several significant modifications to the existing draft. These include addressing issues connected to the future supervision of the AI Regulation, introducing tougher prohibitions on the use of AI in certain contexts, and achieving better alignment with the existing data protection framework.

The most important recommendations put forward in the opinion include:

  • Data protection authorities should be responsible for supervision

In its current form, the AI Regulation envisages that each Member State will designate a national supervisory authority with primary responsibility for supervising the law within its jurisdiction. The proposal does not specify which body this should be, leaving Member States free to decide whether to confer the relevant powers on an existing authority or to create a new one. However, the opinion suggests that data protection authorities (DPAs) should take on this role across the EU, in order to ensure better harmonisation with existing data protection laws.

  • Improved alignment with the data protection framework

The need for improved harmonisation between the AI Regulation and the EU's data protection framework is a common theme throughout the opinion. The EDPB and EDPS highlight how strongly interconnected the two areas are and call for a clearly defined relationship between them. This includes embedding the protection of personal data and fundamental rights into the proposal, with data protection compliance acting as a pre-condition for a declaration of conformity to be issued in respect of high-risk AI systems.

  • Organisations should be able to benefit from a one-stop-shop

One of the most notable gaps in the current proposal is the absence of a robust cooperation mechanism for determining the competent authority in cross-border matters and for governing how authorities in different Member States will work together. The opinion suggests addressing this through the introduction of a "single point of contact" for organisations and, where an organisation has operations in more than half of the EU Member States, by designating a sole national supervisory authority as competent. However, it is currently unclear how the EDPB and EDPS envisage these two concepts working in conjunction with each other.

  • Biometric identification in public spaces should be prohibited

The EDPB and EDPS take a stricter view on the scope of the current prohibition on the use of remote biometric identification (e.g. facial recognition) in public spaces. In their view, all such technologies have the potential to cause "irreversible and severe" effects on populations and can never be considered proportionate. Therefore, while the current proposal includes a number of exceptions to the general prohibition, the opinion calls for these to be removed. This would result in a complete ban on any use of AI for the automated recognition of individuals in public spaces.

  • Additional prohibitions should be introduced

The opinion also recommends that other prohibitions relating to certain AI systems be expanded and strengthened. This includes suggesting that the ban on the use of "social scoring" applications for determining trustworthiness be expanded to apply to private companies, as well as public authorities. There are also proposals for additional prohibitions on the use of AI systems for inferring the emotions of individuals and for clustering individuals into groups (e.g. based on their ethnicity) using biometric data.

  • Additional AI systems should be considered high-risk

Under the existing proposal, AI systems that the Commission considers to be sufficiently high-risk are explicitly designated as such. Examples include certain examination-marking software, credit scoring/assessment applications, and recruitment tools. The EDPB and EDPS propose that this list of high-risk systems be expanded further to include AI systems used to determine insurance premiums, to assess medical treatments, and for health research purposes.

  • Users should be expected to undertake granular risk assessments

At present, the most onerous obligations in relation to high-risk AI systems apply to providers of the technology, as opposed to the organisations that use it. The opinion asks for greater alignment between the roles of the parties under the AI Regulation and the concepts of controller and processor under the GDPR. Equally, it suggests that, because developers of AI systems will generally not have control over, or be aware of, the specific purposes for which a system will be used, the users should be expected to undertake more detailed risk assessments that take into account their specific use case. If adopted, this would result in a significant transfer of responsibility from providers to users.

In conclusion, this opinion from the EDPB and EDPS further emphasises the significance of the AI Regulation and how closely it is interconnected with current data protection laws. The extent to which these recommendations are taken on board by the European Commission will therefore be of great interest to the many organisations that are using or developing artificial intelligence technologies, particularly where those technologies involve the use of personal data.

The Commission's current public consultation phase on the proposed AI Regulation remains open until 6 August 2021.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.