Advancements in artificial intelligence (AI) have driven innovation across many aspects of our society and economy, including industry verticals such as healthcare, transportation, and cybersecurity. Recognizing that AI also carries limitations and risks that must be addressed, regulators and legislators worldwide have turned their attention to the technology.

In 2020, Congress directed the National Institute of Standards and Technology (NIST) to develop an AI Risk Management Framework in collaboration with the public and private sectors. Last week, pursuant to that mandate, and following the requests for information and workshops on AI it held in 2021, NIST released two documents relating to its broader efforts on AI. First, it published an initial draft of the AI Risk Management Framework on March 17; public comments on the framework are open through April 29, and the agency is holding a public workshop March 29-31. Second, it updated a special publication, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. While it is unclear whether NIST's efforts will lead to a broader consensus or federal legislation on AI, the Federal Trade Commission (FTC) and state legislatures are already focused on AI in the immediate term.

As we have previously reported on CPW (here), the FTC is focused on AI and has indicated it is considering promulgating AI-related regulations. Though statements by Commissioner Wilson appear to have cast doubt on the likelihood of the Commission issuing AI-focused regulations in the first half of this year, its recent settlement in the Weight Watchers case reinforces the agency's commitment to consumer privacy and related issues, and to addressing the effects AI has on them.

AI and State Privacy Laws

AI is also a focus at the state level. Starting in 2023, AI, profiling, and other forms of automated decision-making will become regulated under the sweeping privacy laws in California, Virginia, and Colorado, which provide corresponding rights for consumers to opt out of certain processing of their personal information by AI and similar processes. We can expect to see AI and profiling concepts fleshed out substantially in regulations promulgated pursuant to the California Privacy Rights Act (CPRA). As of now, the CPRA is light on details regarding profiling and AI, but it will seemingly require businesses, in response to consumer requests to know/access, to include "meaningful information about the logic involved in such decision-making processes" – in other words, information about the algorithms used in AI and automated decision-making. We can also expect to see regulations issued pursuant to the Colorado Privacy Act (in Virginia, it is less clear, as the Attorney General was not given rulemaking authority). Organizations should understand the requirements as to AI, profiling, and automated decision-making under these quickly approaching privacy regimes, and continue to pay attention as rulemaking in California and Colorado progresses.
