On July 21, 2023, the White House announced that it had secured commitments from the leading artificial intelligence companies to manage the risks posed by AI. As stressed in the press release and in news articles since, these commitments are just the beginning of a longer process to ensure the "safe, secure, and transparent" development of AI.

The press release (and articles) also emphasized the voluntary nature of the commitments, noting that the Administration is currently developing an executive order and will pursue bipartisan legislation, presumably to expand on the commitments and make them compulsory. Advocacy groups and some members of Congress, in turn, heralded the announcement as a "good first step" but stressed the need for guardrails that would actually be enforceable.

Not enforceable? Actually, the FTC can enforce these pledges. True, the commitments leave wiggle room, using words like "developing" and "prioritizing" and, in some cases, reflecting practices already common among these companies. (See this critique in the New York Times.) And true, the tech companies only agreed to commitments they wanted to agree to; other issues may have been left on the cutting room floor. For example, there don't appear to be any commitments regarding the data inputs that "teach" the algorithm how to "think."

However, the FTC can still enforce these pledges for what they are, using its authority under the FTC Act to challenge statements shown to be false or misleading to consumers. (I should note here that the States have virtually identical authority under their so-called "UDAP" laws.)

Consider the following:

  • High-level officials from each company (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) stood at the White House and publicly affirmed their agreement to eight principles published on the White House's website. While many of the principles are indeed vague or refer to future actions, at least some of them are actionable now, such as the commitment to perform internal and external testing for a host of listed risks, and the commitment to publicly report system capabilities and limitations to users. (Note that the White House's press release links to a more specific list of commitments.)
  • At least some of the companies announced the commitments on their own websites, thus amplifying them and/or explaining how they apply to that particular company. See the Microsoft website (includes commitments about, e.g., testing, cybersecurity, transparency, and compliance with the NIST AI Risk Management Framework); Google (discusses various frameworks and programs it has put in place to promote safe and secure AI); OpenAI (posts the commitments and explains their importance).
  • Under the FTC Act, the Commission can take action against companies that make promises to consumers (whether in a privacy policy, terms of service, blog post, public forum, or other means of communication) and then fail to deliver on them. This includes promises to adhere to voluntary principles. For example, the FTC has brought numerous cases against companies that falsely claimed they complied with the (now-defunct) US-EU Safe Harbor and Privacy Shield programs governing the transfer of EU citizens' data to the US. Similarly, the FTC has challenged companies' statements that they complied with self-regulatory principles governing advertising. (See here and here.) The FTC's ability to challenge a company's failure to adhere to voluntary pledges also underlies the FTC-administered Safe Harbor program under the Children's Online Privacy Protection Act (COPPA).
  • Finally, when interpreting the statements that companies make to consumers, the FTC will consider both "express" and "implied" claims; view such claims from the perspective of a "reasonable consumer"; and analyze the "net impression" of the statement(s) made. (See the FTC's Policy Statement on Deception.) In other words, even if there is wiggle room in the language, the FTC will examine the overall message conveyed to an ordinary consumer (not to a contracts lawyer). That is how the FTC has been able to bring hundreds of cases challenging statements in all of those privacy policies famous for being opaque and/or overly complex. (Admittedly, though, most of the FTC's privacy cases are settlements.)

Now, I'm not saying that the voluntary commitments made by these AI companies are a substitute for legislation, regulation, or more specific requirements covering the full set of issues raised by AI. I'm just saying that the FTC can find ways to enforce them, and probably will. After all, the FTC has emphasized repeatedly, in one way or another, that it has the tools to regulate AI and that it intends to use them. See, for example, the Joint Statement by DOJ, CFPB, EEOC, and FTC on AI; Lina Khan's New York Times op-ed; and the press leak revealing that the FTC is investigating OpenAI.
