New York, N.Y. (January 3, 2022) - As artificial intelligence (AI) continues to cement itself in society, an increasing number of employers are relying on smart technology in their hiring and recruitment processes. Because AI can analyze the skills and behavior of applicants, some employers are using it to review applicants' resumes and video interviews.

AI has also been the subject of some critical commentary given concerns that algorithms that act as the arbiters of hiring decisions may unfairly disadvantage applicants based on race, ethnicity, and gender. As a result of these concerns, there has been a push for transparency so that employers, employees, and applicants can understand the processes used to analyze individuals' skills and personalities. New York City has gotten out ahead of the pack to address potential discrimination caused by the use of AI in hiring and other employer decision-making.

In an effort to mandate transparency and minimize the potential for AI to exacerbate social inequalities, the New York City Council passed Int. No. 1894-A (the Law) on November 10, 2021. The Law, which takes effect on January 2, 2023, amends the New York City Administrative Code to regulate employers' use of “automated employment decision tools,” with the goal of combating bias in hiring decisions by weeding out technology that enables unlawful discrimination. Those tools are defined as technology that uses “machine learning, statistical modeling, data analytics, and artificial intelligence… to substantially assist or replace discretionary decision making.” The Law will regulate employers' use of AI in the hiring, recruitment, and promotion processes.

The Law bans NYC employers and employment agencies from using an automated employment decision tool either to screen a candidate or to make an employment decision about a current employee (e.g., a promotion), unless the employer or agency can show that doing so will not discriminate against an individual based on their race, ethnicity, or gender.

The Law imposes certain requirements on employers that wish to continue using automated employment decision tools such as AI. To use such tools, the employer or employment agency must undertake an annual bias audit, defined as an “impartial evaluation by an independent auditor,” to assess whether the tool has a disparate impact on, or discriminates against, persons in protected categories. The employer or employment agency must make the results and date of the audit publicly available on its website. An employer or employment agency does not need to publicly disclose the type of data collected; however, the source of the data and the employer's or agency's data retention policy must be made available to a requesting candidate or employee within 30 days of the request.
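The Law does not prescribe a methodology for these bias audits beyond requiring an “impartial evaluation by an independent auditor.” Purely by way of illustration, auditors often quantify disparate impact by comparing selection rates across demographic groups. The sketch below is not mandated by or drawn from the Law; it assumes hypothetical group labels and screening outcomes and uses the conventional four-fifths rule of thumb from U.S. EEOC guidance as a flagging threshold.

```python
# Illustrative sketch only: the Law does not specify how a "bias audit" must be
# performed. This computes selection rates and impact ratios across demographic
# groups from hypothetical screening results, one common way auditors quantify
# disparate impact. The group labels and sample data are invented for illustration.
from collections import defaultdict

def impact_ratios(records):
    """records: iterable of (group, selected) pairs, where selected is a bool.
    Returns {group: (selection_rate, ratio_to_highest_rate)}."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1

    rates = {g: selected[g] / totals[g] for g in totals}
    highest = max(rates.values())
    return {g: (rate, rate / highest if highest else 0.0) for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: (demographic group, passed automated screen?)
    sample = (
        [("group_a", True)] * 40 + [("group_a", False)] * 60 +
        [("group_b", True)] * 25 + [("group_b", False)] * 75
    )
    for group, (rate, ratio) in impact_ratios(sample).items():
        # The 4/5 (80%) threshold comes from EEOC guidance, not from the NYC Law.
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Any actual audit would, of course, need to be designed and carried out by an independent auditor against the employer's real applicant data, and the Law leaves the governing standard to that auditor's “impartial evaluation.”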

Additionally, employers and employment agencies using these tools must notify the affected candidates or employees, at least 10 business days before use, that AI is being used and which job qualifications and characteristics it will evaluate. Employers must also give candidates and current employees the option of opting out of the automated process in favor of an alternative review process, such as human review.

By imposing these audit and disclosure requirements on employers and employment agencies, the Law aims to achieve transparency by forcing the creators and vendors of automated employment decision tools to disclose how their processes work. That transparency is intended to trickle down to candidates and employees alike, who will become aware of how their data is being used.

Penalties for violations can be imposed on employers and employment agencies. Monetary penalties run up to $500 for a first violation of the bias audit or notice requirements and up to $1,500 for each violation thereafter.

Even at this early stage, some commentators have expressed concern that the Law does not go far enough because it does not encompass age and disability discrimination. Critics also express concern over the Law's lack of specificity regarding the standard for compliant bias audits, which need only consist of an “impartial evaluation.”

Employers and employment agencies that use AI should reach out to their AI vendors to confirm that the hiring, recruitment, and promotion tools they use meet the Law's standards. Employers are placed in the difficult position of having to ensure compliance on pain of penalty, even though it is the vendors who are ultimately responsible for making their products compliant.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.