Proceed With Caution When Taking The Human Out Of Human Resources: The Colorado Artificial Intelligence Act Will Have Immediate Impact On Employers

Foley & Lardner LLP

The role of artificial intelligence (AI) has been increasing in our daily lives, from customer service chatbots and digital assistants to ubiquitous smart home devices. Likewise, the technology's reach is expanding quickly into human resources. Many companies use AI to track employee performance; for example, AI can evaluate employees' sales performance through electronic data or assess project outcomes. It is more common than ever for employers to use AI tools to screen applicants or promotion candidates, including by reviewing resumes and applications. The EEOC reports that up to 99% of Fortune 500 companies use some form of AI to screen or rank candidates for hire. While these tools can often save time and streamline information access, some uses are coming under scrutiny from federal agencies (such as the Department of Labor) and state legislatures.

Colorado entered the chat on May 17, 2024, when Governor Jared Polis signed the Colorado Artificial Intelligence Act (CAIA). Set to go into effect on February 1, 2026, the law broadly addresses the use of AI in consumer settings, including employment. The CAIA includes provisions that require both developers and deployers of AI tools to use reasonable care to avoid discrimination through the use of "high risk" AI systems. To understand the considerations, some statutory definitions are helpful:

  • "Deployers" include those doing business in Colorado that utilize a "high risk" AI system
  • A "high risk" system is any AI system that "makes, or is a substantial factor in making, a consequential decision," including decisions with respect to employment or employment opportunities
  • A "substantial factor" refers to the use of an AI system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis to make a "consequential decision" regarding that consumer
  • "Consumer" means an individual who is a resident of Colorado

The CAIA applies to all Colorado employers, exempting only those with fewer than 50 employees that do not use their own information or data to teach or improve the AI system in use, or those who utilize systems that meet certain criteria. One goal of the law is to ensure that AI algorithms do not return discriminatory results based on actual or perceived age, color, disability, ethnicity, or other protected characteristics. Users of AI systems in recruiting and hiring are charged with using reasonable care to protect applicants and candidates from any known or reasonably foreseeable risks of discrimination. While this is in line with current laws requiring employers to use nondiscriminatory hiring practices, the CAIA goes further, requiring employers who use AI in this process to take certain affirmative steps.

Specifically, the CAIA imposes a number of action items on deployers who use AI and are not otherwise exempt. Some of the more prominent obligations require that a deployer:

  • Implement a risk-management policy and program to govern the use of the AI system, and regularly review and update that policy.
  • Complete an impact assessment for the AI system.
  • Notify the consumer that AI is being used to make a consequential decision before the decision is made and disclose the purpose of the system, contact information for the deployer, a description of the program, and how to access additional information.
  • Notify the attorney general within 90 days of discovering that the AI system has caused a discriminatory result.

In the event of a decision adverse to the consumer, the deployer must, among other things:

  • Disclose to the consumer the principal reason(s) for the decision, including:
    • The degree to which and the manner in which the AI system contributed to the decision
    • The type of data that was processed by the AI system relating to the decision
    • The source of the data considered
  • Provide an opportunity to correct any incorrect data used by the AI system
  • Offer an opportunity to appeal the decision using human review

Importantly, where a deployer complies with its obligations under the CAIA, it will be entitled to a rebuttable presumption that it did, in fact, use reasonable care to protect consumers from foreseeable risks of discrimination through the AI system.

The CAIA also provides for public enforcement mechanisms; however, the law expressly states that individuals do not have a private right of action for violations of the CAIA. Colorado's attorney general is charged with enforcing the statute. Deployers who act promptly to cure violations and otherwise maintain their systems in compliance with generally accepted AI risk management practices may have an affirmative defense to enforcement actions. When challenged, the deployer bears the burden of establishing its entitlement to these defenses.

While the law is slated to go into effect in early 2026, it is still subject to further rulemaking. Governor Polis, notably a dot-com entrepreneur who founded an internet access provider and several well-known online retailers before entering politics, also advised the legislature that the long lead time before the law takes effect is intended to permit the government to further consider and refine the statute. Compliance with the law is inherently complicated, both technically and in navigating a potentially complex landscape of national regulation.

Employers who use AI tools in recruiting or employing Colorado residents should use the next 18 months to fine-tune their systems, approaches, and processes for avoiding discrimination through technology. Time flies, especially when compliance deadlines loom. Taking a proactive view will help avoid potential AI pitfalls in recruiting.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
