As artificial intelligence (AI) becomes increasingly embedded in products, services, and business decisions, state and local lawmakers have been considering and passing a range of laws addressing AI. These range from laws that promote AI to more regulatory approaches that impose obligations on its use in specific areas. In a development that parallels the evolution of privacy law, states and localities have moved ahead with initiatives of their own. Unlike in privacy, however, where a settled set of legislative approaches has been debated for years, approaches to AI have been far more varied and scattershot. If this patchwork continues, it may complicate regulatory compliance for many uses of AI across jurisdictions.

States and Localities Are Beginning to Move Forward with a Piecemeal Approach to AI

In 2021, five jurisdictions – Alabama, Colorado, Illinois, Mississippi, and New York City – enacted legislation specifically directed at the use of AI. Their approaches varied, ranging from creating bodies to study the impact of AI to regulating its use in contexts where governments have been concerned about an increased risk of harm to individuals.

Some of these laws have focused on promoting AI. For instance, Alabama's law establishes a council to review and advise the Governor, the legislature, and other interested parties on the use and development of advanced technology and AI in the state. The Mississippi law implements a mandatory K-12 curriculum that includes instruction in AI.

Conversely, some laws are more regulatory and skeptical of AI. For example, Illinois has adopted two AI laws – one that establishes a task force to study the impact of emerging technologies, including AI, on the future of work, and another that mandates notice, consent, and reporting obligations for employers that use AI in hiring. Under existing Illinois law, an employer that asks applicants to record video interviews and uses AI to analyze them must: (1) notify the applicant that AI may be used to analyze the applicant's video interview and consider the applicant's fitness for the position; (2) provide each applicant with information explaining how the AI works and what general types of characteristics it uses to evaluate applicants; and (3) obtain the applicant's consent. The law also limits the sharing of the videos and gives applicants a right to have them deleted. A 2021 amendment imposes reporting requirements on employers that rely solely on an AI analysis of a video interview to determine whether an applicant will advance to an in-person interview. The state Department of Commerce and Economic Opportunity must annually analyze the demographic data reported and advise the Governor and General Assembly whether that data discloses racial bias in the use of AI.

Colorado's law takes a sectoral approach, prohibiting insurers from using external consumer data and information sources, as well as algorithms or predictive models, in a way that produces unfair discrimination. Unfair discrimination includes "the use of one or more external consumer data and information sources, as well as algorithms or predictive models using external consumer data and information sources, that have a correlation to race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression, and that use results in a disproportionately negative outcome for such classification or classifications, which negative outcome exceeds the reasonable correlation to the underlying insurance practice, including losses and costs for underwriting." This law comes in addition to Colorado's comprehensive privacy law, the Colorado Privacy Act, set to take effect on July 1, 2023, which gives consumers a right to opt out of the processing of their personal data for purposes of targeted advertising, the sale of personal data, or automated profiling in furtherance of decisions that produce legal or similarly significant effects.

In late 2021, New York City enacted a notable algorithmic accountability law, becoming the first jurisdiction in the United States to require that algorithms used by employers in hiring or promotion be audited for bias. The law bars the use of AI hiring tools that have not passed an annual audit checking for race- or gender-based discrimination. It also requires employers using such tools to disclose the job qualifications and characteristics the tool will use, and gives employment candidates the option of requesting an alternative process for the review of their application. The law imposes fines on employers or employment agencies of up to $1,500 per violation.
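
Notably, the text of the law does not prescribe how a bias audit must be conducted. For illustration only, one widely cited benchmark for this kind of disparity analysis is the EEOC's "four-fifths" rule of thumb, under which a group's selection rate below 80% of the highest group's rate is conventionally treated as evidence of adverse impact. The minimal Python sketch below shows what such a check could look like; the data format, function names, and threshold are illustrative assumptions, not requirements of the New York City law.

```python
from collections import defaultdict

# Illustrative sketch only. The New York City law does not prescribe this
# methodology; the data format, function names, and 0.8 threshold are
# assumptions borrowed from the EEOC's "four-fifths" rule of thumb.

FOUR_FIFTHS_THRESHOLD = 0.8

def selection_rates(outcomes):
    """Compute the selection rate for each demographic group.

    `outcomes` is an iterable of (group, selected) pairs, where
    `selected` is True if the tool advanced the candidate.
    """
    totals = defaultdict(int)
    advanced = defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        if selected:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's selection rate relative to the highest group's rate."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical audit data: (group, advanced-to-interview?)
    outcomes = ([("A", True)] * 40 + [("A", False)] * 60
                + [("B", True)] * 25 + [("B", False)] * 75)
    rates = selection_rates(outcomes)
    for group, ratio in impact_ratios(rates).items():
        flag = "potential adverse impact" if ratio < FOUR_FIFTHS_THRESHOLD else "ok"
        print(f"group {group}: rate={rates[group]:.2f}, ratio={ratio:.2f} ({flag})")
```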

California's Privacy Regulations May Also Target AI

The California Privacy Protection Agency (CPPA), the new agency charged with rulemaking and enforcement authority under the California Privacy Rights Act (CPRA), is expected to issue regulations governing AI by 2023. The statute specifically addresses a consumer's right to understand and opt out of automated decision-making technologies such as AI and machine learning. In particular, the agency is charged with "[i]ssuing regulations governing access and opt-out rights with respect to businesses' use of automated decisionmaking technology, including profiling and requiring businesses' response to access requests to include meaningful information about the logic involved in those decisionmaking processes, as well as a description of the likely outcome of the process with respect to the consumer."

In September 2021, the CPPA released an Invitation for Preliminary Comments on Proposed Rulemaking (Invitation) and accepted comments through November 8, 2021. The Invitation asked four questions regarding the interpretation of the agency's automated decision-making rulemaking authority:

  1. What activities should be deemed to constitute "automated decisionmaking technology" and/or "profiling";
  2. When consumers should be able to access information about businesses' use of automated decision-making technology, and what processes consumers and businesses should follow to facilitate access;
  3. What information businesses must provide to consumers in response to access requests, including what businesses must do to provide "meaningful information about the logic" involved in the automated decision-making process; and
  4. The scope of consumers' opt-out rights with regard to automated decision-making, and what processes consumers and businesses should follow to facilitate opt-outs.

While the statute calls for final rules to be adopted by July 2022, at a February 17, 2022 CPPA board meeting, Executive Director Ashkan Soltani announced that the draft regulations would be delayed. As we've previously discussed, this effort in California to regulate certain automated decision-making processes may open the door to greater regulation of AI and should be watched closely.

Even as the federal government looks more closely at AI, some states and localities appear poised to jump ahead. Indeed, many other states continue to debate AI proposals in 2022. Companies developing and deploying AI should continue to monitor this area as the regulatory landscape develops.

© 2022 Wiley Rein LLP
