This article is part three of a four-part series by Mayer Brown on the latest trends in digital transformation. Read part one here and part two here.

The rapid advancement of AI technologies in recent years means that regulators are engaged in a game of catch-up. While the existing regulatory landscape contains few AI-specific regulations, lawyers should seek to understand how different AI systems work and begin building a framework for evaluating AI systems, apportioning risk, and ensuring their AI solutions comply with the existing laws that do govern their use and deployment.

[This article analyzes these issues from the perspective of a customer licensing an AI solution, though many of them are equally relevant to the technology companies developing these technologies. Distinct considerations can apply when AI is used to process, infer, or make decisions directly about consumers.]

Look Beyond the Label to Identify AI

One can debate which came first: the rise in the use of artificial intelligence systems or the omnipresent marketing of software as "AI". But the reality is that the "AI" label is not a reliable way to determine whether you are contracting for a solution that raises AI-specific regulatory issues. "AI" describes an ecosystem of software, tools, and technologies that incorporate various techniques, such as machine learning and deep learning. Machine learning has emerged as one of the most popular technical methodologies underpinning modern "AI" solutions, and it is the subject of some of the most intense regulatory scrutiny. Machine learning involves training an algorithm, often on enormous quantities of data, to draw connections and find patterns. From there, the trained model can be used to make predictions or inform decisions when evaluating new datasets.
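As a purely illustrative sketch of the "train, then predict" pattern described above (the data, feature names, and library choice here are our own invention, not part of the original article), a model is first fit to historical data and then applied to new data it has never seen:

    # Illustrative only: a toy machine-learning workflow using the open-source
    # scikit-learn library. All numbers and feature names are made up.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Historical training data: each row is [order_volume, lead_time_days];
    # each label is 1 if a spare-part shortage occurred, else 0.
    X_train = np.array([[120, 5], [80, 12], [200, 3], [60, 20], [150, 8]])
    y_train = np.array([0, 1, 0, 1, 0])

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)      # the model "learns" patterns from the data

    # The trained model is then used to evaluate new, previously unseen data.
    X_new = np.array([[90, 15], [180, 4]])
    print(model.predict(X_new))      # predicted shortage (1) or no shortage (0)

The point of the sketch is simply that the model's behavior is derived from the training data rather than from rules a developer wrote by hand, which is what drives many of the regulatory questions discussed below.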

To determine whether the software or service at issue in a particular instance is truly "AI" and raises distinct, AI-specific regulatory issues, the lawyer advising on the transaction needs to determine whether the offering actually leverages a model that is continuously learning or one that relies on static decision trees. It is imperative to understand how the solution works, what data the tool collects, how it is trained, and what role (if any) humans have in the decisions made by the AI. Typically, this analysis requires in-depth discussions with technical experts to help evaluate a particular AI tool. As technology continues to evolve and, for example, deep learning gains greater adoption, lawyers advising on AI contracts will need to remain agile in considering how these changes affect the approach to contracting and the related regulatory landscape.
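As a rough, hypothetical illustration of that distinction (again, not drawn from the article), compare a hard-coded rule, whose behavior never changes, with a model that is periodically refit as new operational data arrives; only the latter raises the "continuously learning" questions described above:

    # Illustrative contrast between a static rule and a learning component.
    from sklearn.tree import DecisionTreeClassifier

    def static_rule(order_volume, lead_time_days):
        # A fixed, human-authored decision rule: its behavior never changes
        # unless a developer edits the code.
        return 1 if lead_time_days > 10 and order_volume < 100 else 0

    # A learning component: each retraining run can change how it decides.
    model = DecisionTreeClassifier()

    def retrain(new_X, new_y):
        # In a "continuously learning" deployment, this would run as fresh
        # data accumulates, so the model's outputs can drift over time.
        model.fit(new_X, new_y)

The legal significance is that the static rule can be reviewed once and documented, while the learning component's behavior depends on what data it sees after deployment.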

Assess the Regulatory Landscape

While the concept of machine learning in general, and algorithm-driven decision-making in particular, raises significant regulatory concern, the United States does not yet have a comprehensive AI regulatory framework. (While the federal government itself has an AI strategy and is working to develop private-sector resources, that strategy is not necessarily directly applicable to the broader development and deployment of AI in the private sector.) Companies deploying AI technologies are well advised to ensure that their proposed use of these technologies does not run afoul of restrictions set out in laws and regulations relating to privacy, anti-discrimination, data security, and other related frameworks.

Taken together, these frameworks make clear that "AI compliance" cannot be outsourced, any more than it can be for any other technology. Companies that use AI systems may be held accountable for the actions of those systems. For example, if finance software produced discriminatory lending decisions, or an AI system used across an industry operated as an anti-competitive algorithm, the companies using the software could face legal action.

In the US, one of the key agencies thinking about AI issues is the Federal Trade Commission, which has served in recent years as the de facto technology and data regulator at the federal level. While the FTC's statutory mandate does not include direct oversight of AI, the agency has leveraged existing authorities, including "rulemaking by blog post," to demonstrate its interest in the field, including how it may extend those authorities to these new contexts. This guidance is instructive as to the FTC's thinking, but not (yet) binding in the absence of formal rulemaking.

The European Union has started to think about AI as a set of technologies that can present significant risks, including from a human rights perspective. Existing EU frameworks, including the General Data Protection Regulation (GDPR), establish requirements that people cannot be subject to a decision with a legal effect on them if that decision is made solely by automated processes. The word "solely," however, carries a lot of weight; the presence of a "human in the loop" can reduce the likelihood that a decision informed by AI technologies will result in a finding of non-compliance with the GDPR's restrictions on "automated decision-making."

While the GDPR is the most directly applicable law currently on the books in the EU, the EU announced in April 2021 that it is considering directly regulating the use of these technologies through a proposed "Artificial Intelligence Act", which would adopt a risk-based framework for imposing rules on the deployment of AI tools in light of "union values" such as individual rights. The proposed AI Act, which faces an uncertain legislative future, would carry with it the potential for steep regulatory penalties.

Lead a Multi-Disciplinary Team to Identify and Address High-Value and High-Risk Regulatory Issues in the Contract

Technology lawyers advising on AI contracts have a unique leadership opportunity because the regulatory and other issues raised in these agreements tend to be intertwined and not clearly aligned with existing corporate functions such as procurement, compliance, or IT. Consider an AI tool designed to help optimize spare parts inventory to address supply chain risks, which makes predictions by analyzing data from various companies in the same industry about sales of the products that use these parts and the related spare part inventory needs (shortage or surplus). Information related to the customers might raise data privacy concerns and questions as to the right to use customer information in this manner. Availability of competitive pricing information (either through the original or a future use case, or as part of training the AI algorithm) may raise competition issues, resulting reputational risk, and, ultimately, the question of whether a particular AI tool is appropriate for its intended purpose. The issues in this example (which are representative of what we are seeing with other AI contracts) include legal, compliance, procurement, and business issues, as well as questions regarding PR and marketing (pertaining to the use of customer data in this manner) and corporate strategy (whether to permit competitors to benefit from data regarding the company's sales). As such, lawyers should consider when to bring in relevant stakeholders so that implementation of, and contracting for, AI solutions are considered in a comprehensive manner.

With respect to evolving regulatory issues related to AI, lawyers are usually expected to advise the business on compliance strategy and the appropriate allocation of risks. Since, as noted above, compliance relating to AI cannot be outsourced, it usually falls on the customer to identify all of the steps needed to achieve regulatory compliance with respect to AI, and then to clearly delineate the areas in which the customer will rely on the technology provider to achieve compliance (e.g., compliance with certain data security standards, segregation of competitive information, etc.). With respect to risk allocation, the best practice is to allocate most of the liability to the party that is best able to understand the risk and implement compliance. For example, if most of the data used to train the system comes from the customer, the customer may be in a better position to understand the risks involved in using that data, and it would be fair for the customer to assume most of that risk. However, if the technology company is gathering similar data from many customers, the corresponding risk should fall on the technology company.

Given the breadth of potential issues and the expanding use of AI, we recommend that, to counsel the business effectively, technology lawyers lead a collaborative, multi-disciplinary effort to develop, and then regularly (e.g., every six months) revisit, an agile risk-based framework that can help identify areas of high risk and, potentially, high value for the company that may involve AI technology now or in the near future. By looking for signals of risk and working backwards to their source, lawyers can be better prepared to mitigate those risks with respect to the specific AI technology the company is looking to leverage. While it can be difficult to evaluate an AI system holistically in the short timeframe available to complete a contract for an AI solution, such a framework can help quickly evaluate and address risks as they develop.

Marina Aronchik is a partner in Mayer Brown's Chicago office, where she is a member of the Technology Transactions practice and the U.S. leader of Mayer Brown's Global Chemical Industry Group. Marina advises leading companies on critical technology and sourcing agreements.

Vivek Mohan is a partner in Mayer Brown's Cybersecurity & Data Privacy practice in Northern California, advising clients across industry sectors on legal, regulatory, compliance, and policy issues on a global scale.

Adam Cusick is an associate in Mayer Brown's Palo Alto office.

Reprinted with permission from the January 04, 2022 edition of Legaltech News © 2022 ALM Properties, Inc. All rights reserved. Further duplication without permission is prohibited.

To read this complete article, visit Law.com (subscription required).

Visit us at mayerbrown.com

Mayer Brown is a global legal services provider comprising legal practices that are separate entities (the "Mayer Brown Practices"). The Mayer Brown Practices are: Mayer Brown LLP and Mayer Brown Europe - Brussels LLP, both limited liability partnerships established in Illinois USA; Mayer Brown International LLP, a limited liability partnership incorporated in England and Wales (authorized and regulated by the Solicitors Regulation Authority and registered in England and Wales number OC 303359); Mayer Brown, a SELAS established in France; Mayer Brown JSM, a Hong Kong partnership and its associated entities in Asia; and Tauil & Chequer Advogados, a Brazilian law partnership with which Mayer Brown is associated. "Mayer Brown" and the Mayer Brown logo are the trademarks of the Mayer Brown Practices in their respective jurisdictions.

© Copyright 2020. The Mayer Brown Practices. All rights reserved.

This Mayer Brown article provides information and comments on legal issues and developments of interest. The foregoing is not a comprehensive treatment of the subject matter covered and is not intended to provide legal advice. Readers should seek specific legal advice before taking any action with respect to the matters discussed herein.