During its latest open meeting, the Federal Trade Commission introduced and voted 4-1 to publish a report to Congress warning about the use of artificial intelligence to combat various online harms and urging policymakers to "exercise great caution" in mandating or over-relying on these tools.

According to the FTC, while the deployment of AI tools intended to detect or otherwise address harmful online content is accelerating, "it is crucial to understand that these tools remain largely rudimentary, have substantial limitations, and may never be appropriate in some cases as an alternative to human judgment." Reflecting the subject's importance, in November 2021, FTC Chair Lina M. Khan announced that the agency had hired its first-ever advisors on AI.

As background, in the 2021 Appropriations Act, Congress directed the Commission to examine how AI "may be used to identify, remove, or take any other appropriate action necessary to address" a wide variety of specified "online harms," including online fraud, impersonation scams, fake reviews and accounts, bots, media manipulation, illegal drug sales and other illegal activities, sexual exploitation, hate crimes, online harassment and cyberstalking, and misinformation campaigns aimed at influencing elections. Before discussing each harm listed by Congress, the FTC noted that only a few fall within its consumer protection ambit and indicated a preference to defer to other government agencies on topics where those agencies are more engaged and knowledgeable.

Ultimately, the FTC's report cautions against relying on AI as a policy solution and notes that its broad adoption could introduce a range of additional harms, including:

  • Inaccurate results. According to the FTC, AI tools' ability to detect online harms is significantly limited by inherent flaws in their design, such as unrepresentative datasets, faulty classifications, failure to identify new phenomena (e.g., misinformation about COVID-19), and lack of context and meaning.
  • Bias and discrimination. The FTC report found that AI tools can reflect biases of their developers that may lead to unfair results and discrimination against protected classes of people.
  • Invasive surveillance. AI tools may incentivize and enable invasive commercial surveillance and data extraction practices because they require vast amounts of data to be developed, trained, and used.

Although Congress instructed the FTC to recommend laws that could advance the use of AI to address online harms, the report instead urged lawmakers to consider focusing on the development of legal frameworks that would ensure that AI tools do not cause additional harm.

Among other key considerations, the FTC report advises that human oversight is still needed to monitor the use and decisions of AI tools; AI use must be meaningfully transparent, especially when people's rights or personal data are involved; platforms that rely on AI tools must be accountable for both their data practices and their results; and data scientists and their employers who build AI tools should strive to hire and retain diverse teams to help reduce inadvertent bias or discrimination. According to the FTC, "[p]utting aside laws or regulations that would require more fundamental changes to platform business models, the most valuable direction in this area - at least as an initial step - may be in the realm of transparency and accountability," which are "crucial for determining the best courses for further public and private action."

Chair Khan and Commissioners Slaughter, Bedoya and Wilson voted in favor of sending the report to Congress, issuing separate statements. Commissioner Phillips issued a dissenting statement, "generally agree[ing] with the topline conclusion" but expressing concern that the report does not sufficiently grapple with the benefits and costs of using AI to combat online harms as tasked.


This alert provides general coverage of its subject area. We provide it with the understanding that Frankfurt Kurnit Klein & Selz is not engaged herein in rendering legal advice, and shall not be liable for any damages resulting from any error, inaccuracy, or omission. Our attorneys practice law only in jurisdictions in which they are properly authorized to do so. We do not seek to represent clients in other jurisdictions.