Colorado Enacts AI Consumer Protection Legislation

Jones Day

On May 17, 2024, Colorado enacted S.B. 24-205 (the "Act"), which imposes a duty of reasonable care on developers and deployers of high-risk artificial intelligence ("AI") systems to protect consumers from risks of algorithmic discrimination.

Colorado is the first U.S. state to enact comprehensive legislation governing high-risk AI systems. Effective February 1, 2026, the Act imposes sweeping compliance requirements on developers and deployers of high-risk AI systems. The Act will be enforced exclusively by the Colorado Attorney General ("AG"); violations will constitute unfair and deceptive trade practices under Colorado's Consumer Protection Act.

A high-risk AI system is any AI system that makes, or is a substantial factor in making, a "consequential decision," meaning a decision that has a material legal or similarly significant effect on the provision or denial to a Colorado-resident consumer of, or the cost or terms of, education, employment, financial services, essential government services, health care, housing, insurance, or legal services. AI systems intended to perform narrow procedural tasks or to detect decision-making patterns or deviations from prior patterns, as well as certain enumerated technologies such as antivirus software and firewalls, are not considered high risk.

Developers and deployers must use reasonable care to protect consumers from any known or foreseeable risks of algorithmic discrimination arising from intended and contracted uses of high-risk AI systems. Compliance with the Act creates a rebuttable presumption that the developer or deployer used reasonable care.

The Act imposes different requirements on developers and deployers. A developer must comply with notice and documentation requirements, including making available the documentation and information deployers need to complete impact assessments. A deployer must implement a risk management policy and program to identify and mitigate known or reasonably foreseeable risks of algorithmic discrimination; in assessing the reasonableness of that program, deployers may look to nationally or internationally recognized guidance, such as the AI Risk Management Framework from the National Institute of Standards and Technology. Deployers also must conduct impact assessments, review their deployment of each high-risk AI system annually, and provide certain notices and rights to consumers. Finally, except where it would be "obvious" to a reasonable person, the Act requires deployers of AI systems (not just high-risk AI systems) that are intended to interact with consumers to disclose to each such consumer that they are interacting with an AI system.

Developers and deployers have separate reporting requirements. A developer must report to the Colorado AG within 90 days after learning of any known or reasonably foreseeable risks of algorithmic discrimination arising from a deployed high-risk AI system. A deployer must report to the Colorado AG within 90 days after discovering that a high-risk AI system has caused algorithmic discrimination.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
