Colorado Enacts First-In-The-Nation Legislation Comprehensively Regulating Development And Use Of Artificial Intelligence

Davis Graham & Stubbs LLP

On May 17, 2024, Governor Jared Polis signed the Consumer Protections for Artificial Intelligence Act (SB 24-205) (the “Act”) into law. Colorado is the first state in the nation, and one of the first jurisdictions in the world, to enact comprehensive legislation regulating high-risk artificial intelligence (“AI”) systems. The Act goes into effect on February 1, 2026, but businesses subject to the Act's compliance scheme will need to begin preparing much sooner given the law's complexity and scope.

This legislation targets developers and deployers of AI systems that are deemed “high-risk.” High-risk systems are those that make, or are a “substantial factor” in making, decisions that materially affect the provision, cost, or terms of education, employment, housing, health care, financing, essential government services, insurance, or legal services. Businesses that develop or use AI in decision-making for any such services or operational areas are subject to the comprehensive oversight, disclosure, and transparency requirements that the Act imposes.

Algorithmic Discrimination

The Act's stated purpose is to prevent “algorithmic discrimination,” defined as any condition in which the use of an AI system results in unlawful differential treatment of, or an impact that disfavors, individuals or groups on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under state or federal law. To achieve this purpose, the Act imposes on developers and deployers of high-risk AI systems a duty to take “reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.” Developers and deployers are entitled to a rebuttable presumption that they have taken such reasonable care if they comply with a host of reporting, oversight, and transparency requirements set out in the Act. Notably, algorithmic discrimination does not include using a high-risk AI system to expand a participant pool to increase diversity or redress historical discrimination.

Scope

The Act regulates any person doing business in Colorado who develops, deploys, or intentionally and substantially modifies a high-risk AI system. Notably, “doing business in the state” is generally interpreted broadly, and the analysis is fact-specific, turning on the nature and duration of the business activity. Even companies with no physical presence in Colorado that engage in business activities in the state will therefore likely need to comply with the Act. For deployers, some (but not all) of the compliance requirements are waived if the deployer has fewer than 50 full-time equivalent employees and does not use its own data to train the high-risk AI system.

Developer Requirements

Developers of high-risk AI systems must disclose certain information both to deployers using those systems and to the broader public.

  • Disclosures to deployers and other developers: Developers of a high-risk AI system must provide deployers and other developers of the system with information about the known or reasonably foreseeable risks associated with the system, along with other information such as its intended benefits and uses and a summary of the data used to train it. Developers must also provide documentation regarding how the system was evaluated for risks of algorithmic discrimination, related mitigation measures, data governance measures, and how the system should be used and monitored in connection with consequential decision-making.
  • Public disclosures: Developers must provide public disclosures on their websites or in a public use case inventory. These disclosures must include the types of AI systems the developer has developed and how the developer manages risks associated with any high-risk AI systems.
  • Disclosures and self-reporting to the Attorney General: Developers must self-report to the Colorado Attorney General (the “AG”) when they know that their high-risk AI system has caused or is “reasonably likely” to cause algorithmic discrimination, or when they receive a credible report from a deployer that the system has caused algorithmic discrimination. The AG may also, at the AG's discretion, require the developer to submit additional documentation to ensure compliance.

Deployer Requirements

The Act requires deployers of high-risk AI systems to create processes to mitigate and manage risks associated with using high-risk AI systems. It also requires deployers to make general public disclosures, as well as disclosures directly to consumers who are impacted by the deployers' use of those systems.

  • Risk management framework: Deployers must create and implement a risk management policy and program to govern their use of the high-risk AI system, and must regularly review and update that policy and program. The Act sets out various requirements for the policy and program and further requires that they be reasonable in light of industry guidance and standards, such as the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology, as well as the size and complexity of the deployer, the nature and scope of the high-risk AI system, and the sensitivity and volume of data processed in connection with its use.
  • Impact assessment: Deployers must complete annual impact assessments of the high-risk AI systems they deploy. The Act provides minimum requirements for the information that deployers must include in their impact assessments. This information includes disclosures about the nature of their use of the systems, a risk analysis, descriptions of the data inputs and outputs, transparency measures, and the safeguards that the deployer has instituted.
  • Public disclosures: Deployers must provide public disclosure on their websites of their use of high-risk AI systems and the nature of the information used.
  • Notice to consumers: When a deployer uses a high-risk AI system to make a consequential decision concerning a consumer, it must provide notice to that consumer. The notice must include a statement about the nature of the use and information about how the consumer may opt out of the processing of personal data in certain circumstances. If the consequential decision is adverse to the consumer, the deployer must provide additional information about the decision-making process and the types of data used, and must give the consumer an opportunity to correct any inaccurate personal data and to appeal the adverse decision.
  • Self-reporting violations: Deployers must self-report to the AG any identified cases of algorithmic discrimination within 90 days of discovery. As with developers, the AG can request from deployers additional disclosures to ensure compliance.

Enforcement

Noncompliance with the Act constitutes an unfair trade practice under C.R.S. § 6-1-105(1)(hhhh). However, only the AG can bring enforcement actions, meaning neither private individuals nor district attorneys can bring a lawsuit to enforce the Act. A developer or deployer who engages in algorithmic discrimination could still potentially face lawsuits filed by private individuals or other government enforcement agencies under state or federal anti-discrimination laws.

Extraterritorial Impacts

While the Act will most directly impact companies doing business in Colorado, it is likely to have significant extraterritorial effects as well. Although Colorado's law is the first of its kind, many other state legislatures and regulators are developing their own AI regulatory frameworks, and some will incorporate or mimic elements of Colorado's law. Until those efforts mature, Colorado's law sets the floor on which companies that do business throughout the country will build their compliance efforts.

The only comparable regime already enacted is the European Union (“EU”) AI Act, which will apply to businesses operating in any EU jurisdiction. With more states and nations likely to develop their own compliance regimes, businesses operating across borders face a high risk of confronting a patchwork of regulations.

Next Steps in Colorado

Public officials in Colorado have indicated that changes are likely to be made to the Act before it becomes effective in 2026. Indeed, facing a backlash from the business community following enactment, the bill's primary sponsor, along with Colorado's Governor and AG, issued a joint statement promising potentially significant changes in the 2025 legislative session. The statement identified several areas for amendment efforts, including narrowing the definition of high-risk AI systems, focusing the regulatory scheme on developers rather than deployers (especially small businesses), reducing the proactive disclosure requirements, and modifying the consumer right to appeal.

In addition to potential legislative amendments, the Act will see significant regulatory development through rule-making by the AG, to whom the Act delegates broad rule-making authority. Both the legislative and rule-making processes will give stakeholders an opportunity to provide input regarding the Act and help shape whatever final product emerges from those processes.

Meanwhile, there are many steps businesses can and should take now to ensure they will be prepared to comply once the Act takes effect and to mitigate and address other operational risks associated with the use of AI.

Should you have any questions about the content of this Legal Alert, please contact Mark Champoux, Caitlin Cronin Woodward, or a member of the DGS AI Group.1

Footnote

1. This article was authored with the assistance of DGS Summer Law Clerk, Sarah Walker. Ms. Walker is a 2L at the University of Colorado School of Law.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
