On May 19, 2021, as part of our Advance™ program, McCarthy Tétrault hosted the second event in our two-part series, "Deep Learning: AI Regulation Comes Into Focus". The first event, summarized here, focused on emerging trends and issues in regulating artificial intelligence ("AI"). The second event, summarized below, focused on the European Commission's proposed AI Regulation (the "Proposed Regulation") (which we summarized in an earlier blog post).

This well-attended event, moderated by Charles Morgan, national co-leader of our Cyber/Data Group and a Partner in our Montreal office, featured a panel discussion with two world-leading experts on AI and AI regulation: John Buyers, Partner and Head of AI/Machine Learning at the UK law firm Osborne Clarke LLP; and Patricia Shaw, CEO and Founder of Beyond Reach and the incoming Chair of the UK Society for Computers and Law.

John and Patricia offered their insights on the history, content, and implications of the Proposed Regulation, particularly as it relates to businesses that participate in the AI ecosystem. Below, we highlight some of the key points and themes that emerged from this informative discussion.

Policy Background and History of the Proposed Regulation

Patricia traced the history of the Proposed Regulation, which dates back to March 2018, when the EU began moving towards a framework for trustworthy AI. Since then, we have seen a progressive evolution from voluntary ethical principles to proposed mandatory regulations with serious teeth. Over the course of this evolution, we have seen a lively debate over the ethics of AI, how - if at all - to regulate AI, and in particular how to mitigate risk while fostering innovation. While many questions remain unanswered, one point is clear: the "race to regulate" AI is on.

High-Level Overview of the Proposed Regulation

John provided a high-level overview of the Proposed Regulation, focusing on key points of interest to businesses that participate in the AI ecosystem.

Risk-Based Approach. The Proposed Regulation adopts a risk-based approach with separate rules tailored to three categories of AI: prohibited AI; high-risk AI; and other AI:

  • Prohibited AI includes practices deemed unacceptable given EU values. These include subliminal distorting techniques, exploitation of vulnerable groups, and social scoring by public authorities.
  • High-risk AI is the broadest category of AI and the main focus of the Proposed Regulation. It encompasses AI in products such as machinery, toys, and medical devices, as well as AI in educational and vocational training systems, employment systems, credit assessment systems, and more. High-risk AI is subject to stringent compliance obligations that may entail high compliance costs. These obligations include lifecycle risk management and quality management, testing and conformity assessments, data governance, human oversight, transparency, logging by design (record keeping), accuracy, and robustness.
  • Other AI includes any forms of AI not listed above. These forms of AI may be subject to voluntary codes of conduct intended to replicate the mandatory rules for high-risk AI.

Transparency Requirements. The Proposed Regulation imposes transparency requirements on all AI. These rules require notification where AI is interacting with natural persons, involved in emotion recognition or biometric categorization, or used to produce "deepfake" content.

Extraterritorial Effect. The Proposed Regulation applies not only to EU companies, but also to foreign companies that participate in the AI supply chain leading to the availability or use of AI in the EU, including providers, importers, distributors, and users.

Regulatory Sandbox. The Proposed Regulation provides for the creation of a regulatory sandbox to promote innovation and enable businesses to trial AI systems.

Enforcement and Penalties. The Proposed Regulation gives enforcement authorities full access to AI system data and documents to facilitate enforcement. In addition, it contemplates significant penalties: up to €30 million or 6% of worldwide turnover for the use of prohibited AI or a failure to meet data quality standards, and up to €20 million or 4% of worldwide turnover for other breaches.
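To put these figures in perspective, the short Python sketch below estimates the maximum potential fine for a hypothetical company. It is illustrative only: the tier amounts are those cited above, while the assumption that the applicable cap is the higher of the fixed amount and the turnover percentage mirrors the GDPR's model and would need to be confirmed against the final text of the Proposed Regulation.

```python
def max_penalty_eur(worldwide_turnover_eur: float, prohibited_ai_or_data_quality_breach: bool) -> float:
    """Rough upper bound of a potential fine for a hypothetical company.

    Assumption: the applicable cap is the greater of the fixed amount and the
    percentage of worldwide turnover, mirroring the GDPR's approach.
    """
    if prohibited_ai_or_data_quality_breach:
        fixed_cap, turnover_rate = 30_000_000, 0.06  # prohibited AI / data quality failures
    else:
        fixed_cap, turnover_rate = 20_000_000, 0.04  # other breaches
    return max(fixed_cap, turnover_rate * worldwide_turnover_eur)

# Example: a firm with €2 billion in worldwide turnover that deploys prohibited AI
# faces a potential cap of €120 million (6% of turnover), well above the €30 million floor.
print(f"€{max_penalty_eur(2_000_000_000, True):,.0f}")  # €120,000,000
```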

GDPR. The Proposed Regulation is designed to operate harmoniously with the EU General Data Protection Regulation ("GDPR"), which regulates the collection, use, and disclosure of personal data in the EU.

The Path Ahead. The Proposed Regulation marks the beginning, not the end, of a lengthy process that could take years to complete. EU countries have not been unanimous in their views on how to regulate AI, so we can expect this debate to continue as the Proposed Regulation moves through the various stages of discussion, amendment, and approval.

Panel Discussion

Following Patricia and John's opening remarks, both speakers participated in a panel discussion moderated by Charles that focused on three topics relating to the Proposed Regulation: (i) policy critiques; (ii) practical implications; and (iii) geopolitical implications.

Policy Critiques

Achievements. Beginning with achievements, Patricia observed that the Proposed Regulation is a bold move that puts human flourishing at the heart of AI applications, establishes outcomes-based requirements without dictating the means by which those outcomes are to be achieved, and promotes good governance. Similarly, John observed that the Proposed Regulation sets out a systematic vision that creates a level playing field and avoids regulatory inconsistency, advances individuals' fundamental rights, and reflects a level of best-practice maturity.

Critiques. Turning to critiques, John noted that the Proposed Regulation establishes certain requirements that, given the current state of technology, may be practically unachievable; adopts the approach reflected in the GDPR without addressing some of its shortcomings; and imposes significant compliance costs, which may hinder innovation. Similarly, Patricia noted that the Proposed Regulation raises questions about how it will be adhered to in practice, creates a real risk of duplicated effort between organizations, imposes significant compliance costs (which may be particularly difficult for small- and medium-sized enterprises to bear), and does not appear to provide a space for meaningful end-user stakeholder engagement.

Practical Implications

"Providers". John observed that the Proposed Regulation's focus on "providers" is already outmoded because markets that involve the supply of modular open-source machine learning tools, rather than full end-to-end AI systems, are increasingly gaining favour. The proposed Regulation risks creating overlapping compliance obligations for the providers, integrators, and users of these tools.

Compliance. Both Patricia and John remarked that the Proposed Regulation would impose significant compliance obligations and, by extension, significant compliance costs. In addition, it would create a heightened focus on risk management, quality assurance, testing, risk and bias mitigation, AI audits, training, and more.

Penalties. Both Patricia and John also remarked that the Proposed Regulation contemplates significant penalties. While uncertainty remains, these penalties would likely apply to a single entity, rather than a group of entities. However, a single entity may be exposed to penalties under both the GDPR and the Proposed Regulation for the same alleged breach. Businesses will want to manage their risk, and one way of doing so may be to set up a special purpose vehicle - not only to limit exposure, but also to avoid duplication of compliance obligations.

Geopolitical Implications

Shaping Global Standards. John and Patricia both observed that the Proposed Regulation, like the GDPR, represents an effort by the EU to shape global regulatory standards. In the GDPR's case, this effort proved largely successful. For example, the GDPR has had a major influence in the United States, where many large companies have in-house GDPR compliance teams. However, there is a risk that the Proposed Regulation may result in a multi-tiered market, with businesses developing different AI tools for different markets based on their relative compliance obligations. Moreover, there is a risk that some non-EU countries that lack AI regulations may come to serve as jurisdictional testing grounds for AI, which would raise ethical concerns. Only time will tell how the AI "race to regulate" will unfold.

Implications for the UK. John noted that the UK currently has no plans to adopt the Proposed Regulation. However, as noted, UK businesses that participate in the AI supply chain leading to the availability or use of AI in the EU would be subject to the Proposed Regulation.

Conclusion

AI regulation - like AI itself - is a rapidly developing area. The Proposed Regulation marks the first major regulatory foray into AI and has the potential to influence global standards in the AI space, much as the GDPR has influenced global standards in the data/privacy space. Canadian businesses involved in the AI ecosystem would be well advised to stay up to date on developments in this area as the global regulatory landscape continues to evolve.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.