Many companies now use AI-powered technologies in hiring, promotion, and other employment decisions. Arnold & Porter attorneys explain what employers need to understand about new guidance from the EEOC and the Justice Department on avoiding disability discrimination when using these tools and look at enforcement trends in the states and abroad.

Artificial intelligence offers the prospect of freeing decisions from human biases. All too often, however, it can wind up unintentionally reflecting and reinforcing these biases despite its presumed objectivity.

On May 12, 2022, the Equal Employment Opportunity Commission and the Department of Justice released guidance addressing disability discrimination in the use of artificial intelligence to make employment decisions. The EEOC's guidance is part of its larger initiative to ensure that AI and "other emerging tools used in hiring and other employment decisions comply with federal civil rights laws that the agency enforces."

AI-Enabled Employment Technologies Can Violate the ADA

Many companies now use AI-powered technologies in hiring, promotion, and other employment decisions. Examples include tools that screen applications or resumes using certain keywords; video-interviewing software that evaluates facial expressions and speech patterns; and software that scores "job fit" based on personality, aptitude, or skills.
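To make the mechanics concrete, consider a deliberately simplified sketch of a screen of the first type. Every keyword, threshold, and rule below is invented for illustration; no actual vendor product works exactly this way.

```python
# Deliberately simplified resume screen. The keywords, the gap
# threshold, and the rule itself are invented for illustration.

from datetime import date

REQUIRED_KEYWORDS = {"inventory", "forklift", "scheduling"}
MAX_GAP_MONTHS = 6  # hypothetical rule: reject long employment gaps

def months_between(earlier: date, later: date) -> int:
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def passes_screen(resume_text: str, last_job_end: date, next_job_start: date) -> bool:
    """Reject resumes missing a keyword or showing a long employment gap."""
    text = resume_text.lower()
    has_keywords = all(kw in text for kw in REQUIRED_KEYWORDS)
    gap_ok = months_between(last_job_end, next_job_start) <= MAX_GAP_MONTHS
    return has_keywords and gap_ok

# A fully qualified candidate whose 15-month gap reflects medical leave
# is rejected before any human reviews the application.
print(passes_screen("Forklift operator; led inventory and scheduling",
                    date(2020, 3, 1), date(2021, 6, 1)))  # False
```

Even in this toy version, a facially neutral rule (the employment-gap cutoff) can operate as a proxy for disability, which is precisely the risk the guidance addresses.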

The Americans with Disabilities Act prohibits businesses with 15 or more employees from discriminating on the basis of disability. Among other things, the ADA requires employers to provide reasonable accommodations to individuals with disabilities, including during the application process, unless doing so would create an undue hardship. An AI-enabled system may violate the ADA if it improperly screens out an individual based on a disability, whether or not the employer intends this result.

First, the employment technology may itself be difficult for someone with a disability to use. Employers should accordingly provide reasonable accommodations to applicants or employees as necessary so the technology can evaluate them fairly and accurately. If a computer-based test requires candidates to write an essay, for instance, the employer might need to offer voice-recognition software to visually impaired candidates.

Second, technology that seeks medical or disability-related information may also implicate the ADA. Inquiries in connection with employment decisions that are "likely to elicit information about a disability," such as seeking information about the applicant's physical or mental health, are especially problematic.

Finally, AI-powered technologies might violate the ADA by screening out applicants with disabilities who could do the job with or without a reasonable accommodation. For example, hiring technologies that predict future performance by comparing applicants to current successful employees may unintentionally exclude fully qualified candidates with disabilities (a risk that also applies to candidates from other protected classes if employers are not careful).
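A stylized example shows how this can happen. Suppose a tool scores "job fit" as similarity to the average profile of current top performers; the feature set and numbers below are invented, and real products are far more complex:

```python
# Stylized "job fit" scoring by similarity to current top performers.
# The features and figures are invented for illustration.

from math import sqrt

# Features: [years_experience, certifications, unbroken_tenure_years]
INCUMBENT_PROFILE = [5.0, 2.0, 5.0]  # average of current "successful" employees

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norms

def fit_score(applicant: list[float]) -> float:
    return cosine_similarity(applicant, INCUMBENT_PROFILE)

# Two equally experienced, equally certified applicants; the second's
# tenure was interrupted by disability-related leave.
print(round(fit_score([5.0, 2.0, 5.0]), 2))  # 1.0
print(round(fit_score([5.0, 2.0, 1.0]), 2))  # ~0.84
```

Because "unbroken tenure" entered the incumbent profile, the second applicant is penalized on a feature that says nothing about ability to do the job.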

Avoiding ADA Violations

Whether AI-powered employment technology is developed internally or externally, employers must ensure that it does not unlawfully screen out individuals with disabilities. As the EEOC explains, employers should only "develop and select tools that measure abilities or qualifications that are truly necessary for the job—even for people who are entitled to an on-the-job reasonable accommodation."

An employer that develops its own test may be able to protect itself by consulting experts on various types of disabilities throughout the development process, a reminder that diversity of backgrounds among those developing and reviewing AI guards against bias.

An employer using purchased technology may still be liable if the tool's assessment process violates the ADA. The guidance suggests speaking with the vendor to confirm that appropriate precautions against disability discrimination are in place.

As we explained in a Bloomberg Law Perspective, AI systems frequently consist of multiple components created by different people or companies. Fully understanding the risk posed by a developer's product may require "peeling the onion" layer by layer. Procurement contracts should clearly allocate responsibility for assessing and mitigating risk, and purchasers should satisfy themselves of the rigor of the vendor's risk-management processes.

The EEOC recommends "promising practices" for avoiding ADA violations. Employment technology should: (a) clearly indicate that reasonable accommodations are available to people with disabilities; (b) provide clear instructions for requesting accommodations; and (c) ensure that requesting a reasonable accommodation does not diminish the applicant's opportunities.

The agency also suggests that, before the assessment, an employer should provide full information about the hiring technology, including (i) which traits or characteristics the tool is designed to measure, (ii) the methods of measurement, and (iii) factors that may affect the assessment.

The Broader Context

The EEOC and DOJ guidance on the ADA is part of a rising tide of global regulation of algorithms and AI. The Federal Trade Commission is planning a rulemaking "to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination." The White House Office of Science and Technology Policy is formulating an "AI Bill of Rights."

States and localities have also begun to oversee algorithmic decision-making more aggressively. Illinois requires employers that use AI to evaluate video interviews to notify job applicants and obtain their consent. New York City recently passed its own law requiring automated employment decision tools to undergo an annual "bias audit" by an independent auditor.
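Although the precise audit methodology is left to regulators and auditors, bias audits of selection tools commonly compare selection rates across demographic groups, echoing the "four-fifths" rule of thumb from the EEOC's Uniform Guidelines on Employee Selection Procedures. A minimal sketch of such an impact-ratio calculation, with invented numbers:

```python
# Illustrative impact-ratio check of the kind a bias audit might run.
# Group names and counts are invented for the example.

applicants = {"group_a": 300, "group_b": 200}
selected = {"group_a": 120, "group_b": 45}

rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "review" if impact_ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```

Here group_b's selection rate is 56% of group_a's, below the four-fifths benchmark, which would prompt closer review of the tool even though no single decision was intentionally discriminatory.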

AI-enabled employment applications may also be subject to privacy laws, including ones governing automated decision-making, collection of biometric or other personal information, or use of facial-recognition technology.

Abroad, the European Union is considering a comprehensive AI Act, which was proposed in April 2021. The UK government will publish its plan for governing AI this year.

In addition, the Cyberspace Administration of China adopted its Internet Information Service Algorithmic Recommendation Management Provisions in January and is completing another regulation on algorithmically created content, including technologies such as virtual reality, text generation, text-to-speech, and "deep fakes." Brazil, too, is developing AI regulation.

The EEOC and DOJ guidance reminds us that even longstanding laws like the ADA have plenty to say about the proper use of cutting-edge technologies.

Previously published in Bloomberg Law.
