Artificial Misconduct: Morals Clauses In The Age Of AI

Pryor Cashman LLP

One of the key considerations when engaging talent (a celebrity, actor, singer, or other public figure) is that the value of the association between a company or person and that talent depends wholly on the talent's reputation. Of course, when entering into an agreement, the talent's reputation can be vetted and may appear intact; it is what might happen later, over the course of the relationship, that is cause for concern. To shield against that risk, talent agreements of all kinds (endorsement, co-branding, license, and sponsorship agreements), especially since the #MeToo era, typically include a "Morals" or "Conduct" clause (the "Morals Clause").

The Morals Clause's purpose is to provide an express remedy against talent whose conduct adversely affects the value of the association and, accordingly, the potential success of a motion picture, television series, marketing campaign, or product (thus jeopardizing the accompanying investment and the goodwill associated with a corporate brand).

The dawn of artificial intelligence (AI) technology has seen the birth of "deepfakes" (or, as SAG-AFTRA calls them, "digital replicas"), which are, at a high level, AI-generated likenesses of a celebrity – typically a realistic-seeming video or photograph in which the celebrity appears to express a particular view or to encourage consumers to unwittingly fall victim to a scam. The possibility of a deepfake complicates what constitutes (or, rather, should constitute) "conduct" if the purpose of the Morals Clause is to be met.

For the first time, bad actors can create extremely authentic-seeming photographs, videos, and audio/voice recordings in a matter of seconds using AI – and can easily disseminate misleading or misrepresentative content without the talent having actually engaged in any "conduct" per se. Take, for example, the deepfakes of Taylor Swift and Selena Gomez announcing to fans that there was excess inventory of Le Creuset Dutch ovens and encouraging fans to act quickly to claim one by clicking a button and providing certain information (including a credit card number for the shipping cost). Numerous consumers believed these videos to be real and provided the information. When they did not receive the product, they blamed Le Creuset, damaging the company's goodwill, among other consequences. Or consider the recent example of a high school athletics director in Baltimore who was arrested for allegedly using AI to fake a recording of the school's principal engaging in offensive hate speech.

What does this mean for Morals Clauses? Here are three likely scenarios:

1. A Challenge for Talent

It seems inevitable that most famous talent will be subject to some kind of AI misrepresentation – ranging from the humorous and relatively benign, to the sensitive and intrusive (which may unfortunately disproportionately affect women; e.g., deepfake nudes), to the downright offensive and reputation-threatening. Some of this could certainly fall within a typical clause's trigger language, subjecting the talent and their employers to "public disrepute, contempt, scandal, or ridicule."

Expect to see talent push for ever-tighter legislation and regulation (state and federal, as well as at the guild level) to protect them. Contractually, it seems likely that talent reps will seek express language to ensure that their clients are not unfairly penalized for misuse of AI by third-party bad actors.

2. A Challenge for Companies Engaging Talent

Companies will likely also think more expansively about how a Morals Clause is drafted and what events it should anticipate. Because reputational harm is the cornerstone of the Morals Clause, and such harm can now be created without any actual conduct by the talent, we expect to see a push for broader, wider-reaching Morals Clauses that cover public accusations and perceptions of misconduct, not just actual misconduct.

We can also expect companies to try to expand the timeframe during which the accusations or alleged conduct took place. That is, AI deepfakes can be made to appear to depict events in the far past, yet be disseminated, and the fake "conduct" allegedly "discovered," only today. Similarly, the power of AI as a search tool might change how easily one can uncover truthful past misconduct otherwise buried on the Internet. This might help companies better vet the talent with whom they work or, alternatively, provide another consideration about past conduct to be addressed in the Morals Clause.

3. A Complication for Investigations

An alleged violation of a Morals Clause (particularly one involving discrimination, harassment, or sexual misconduct) is usually followed by an investigation. AI threatens to complicate those investigations. Take the example of talent who is alleged to have said something discriminatory or threatening that was purportedly recorded by a third party. An obvious defense is that the talent never made the statement and that the recording was synthesized via AI – easy to do for any public figure, given the hours, or even just minutes, of easily accessible performances and interviews available as source material.

Accordingly, investigators will need to familiarize themselves with AI technology and will likely partner with technology companies that can detect AI usage (or the lack thereof) through technical analysis and the use of watermarks. Expect this to become a reasonably routine part of investigations (and, potentially, any related litigation).
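
By way of illustration only, the sketch below (in Python) suggests the flavor of such automated triage: a crude scan for embedded C2PA "Content Credentials" provenance metadata in a media file. The marker byte signatures and function names are our own simplified assumptions, not any vendor's actual detection method; real forensic analysis relies on full C2PA and watermark verification toolchains rather than substring scans.

    # Illustrative sketch only: a crude first-pass check for embedded C2PA
    # "Content Credentials" provenance metadata in a media file. The marker
    # signatures below are simplified assumptions for demonstration purposes.
    from pathlib import Path

    # Byte patterns associated with C2PA manifests and the JUMBF boxes that
    # carry them (simplified; treat any hit as a cue for deeper expert review).
    C2PA_MARKERS = (b"c2pa", b"jumb")

    def has_provenance_metadata(path: str) -> bool:
        """Return True if the file contains byte patterns suggesting an
        embedded C2PA manifest. Absence proves nothing: most fakes carry no
        provenance data at all, and markers can be stripped or forged."""
        data = Path(path).read_bytes()
        return any(marker in data for marker in C2PA_MARKERS)

    if __name__ == "__main__":
        import sys
        for media_file in sys.argv[1:]:
            status = ("provenance metadata found"
                      if has_provenance_metadata(media_file)
                      else "no provenance metadata")
            print(f"{media_file}: {status}")

Even in a more sophisticated form, a check like this would merely flag a file for expert review; the legal weight of any finding would still turn on the investigation's overall record.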

Setting the Record Straight

Even if it is proven that damaging content was generated by AI, the damage to a brand, company, or production may already be done – especially given how quickly misinformation can spread on the Internet. Accordingly, we are starting to see examples of brands requiring that talent in influencer, sponsorship, and endorsement contracts take reasonable remedial measures to correct the misinformation via social media channels or elsewhere. Typically, we can expect to see an obligation that talent actively engage in public discussions about the misinformation and that they reasonably cooperate with the company in undertaking any remedial campaign necessary.

The major question becomes: at whose cost should such additional services be performed? Unlike a typical breach-of-contract situation, where the burden of "curing" misconduct and all associated costs can equitably be imposed on the party who engaged in that behavior, where the harm is caused by a third party, notions of fairness call for sharing the burden between company and talent.

The advent of AI has brought new challenges for talent and for the companies of all kinds that engage them. Each must recognize that creating false and deceptive information about or involving talent – information that can harm the talent's reputation – has become not just possible but remarkably easy. As a result, it is now more important than ever for both parties to an agreement involving the use of someone's persona to anticipate and guard against such concerns and to preemptively establish remedial protocols to address these issues in the unfortunate event they occur.

