Artificial Intelligence Use Cases Are Trending But There Are More Questions Than Answers

Withers LLP

Contributor

Trusted advisors to successful people and businesses across the globe with complex legal needs

A "use case" focuses on the practical applications of a technology. The age of use cases for generative artificial intelligence (AI) seems to be upon us. The emergence of OpenAI and ChatGPT-4, and the impact they are having, is everywhere. The latest research on generative AI and productivity suggests that generative AI has the potential to generate value equivalent to $4.4 trillion in profits annually.

Generative AI, such as the generative pre-trained transformer (GPT), a type of artificial intelligence based on the transformer architecture, and other machine learning models pose various legal questions and challenges that must be addressed.

Clear answers will propel innovation and productivity. Left unanswered, these issues will deflate profits and make generative AI implementation legally and financially problematic. Let's examine the licensing and potential liability of generative AI content providers and users.

Licensing the use of models

While licensing should facilitate generative AI implementation, issues are arising around how AI models are accessed and utilized. For instance, OpenAI's terms of service restrict certain uses of its API: users may not use "output to develop models that compete with OpenAI". Determining who owns the intellectual property (IP) of an AI model can be complex, and licensing agreements will need to address IP ownership and usage rights clearly.

AI models use large amounts of data, which may include sensitive information. Compliance with data protection regulations like the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA) is crucial. There's always a risk that an AI model might not perform as expected or might produce harmful outcomes. Licensing agreements need to address liability issues, including who is responsible for damages caused by the AI's actions or inaccuracies.

AI models raise ethical concerns, such as bias in generated content, and licenses must allocate responsibility for addressing those concerns. Licensing will also need to differentiate between open-source and proprietary AI models. Open-source AI models comprise AI software that is free to modify and enhance; proprietary AI models are owned by an organization, and the source code is kept secret. Open-source models come with fewer restrictions but require adherence to open-source licenses, while proprietary models might offer more support but come with stricter usage terms. AI models often require updates for maintenance and improvement, and licensing agreements should address how these updates are handled and who is responsible for them.

Liability

One of the most important issues to address is what to do if AI-generated content harms someone or causes damage. Who should be held liable? Should the responsibility lie with the user, the AI developer, the platform, or a combination of the three? Liability can depend on the nature of the harm, the actions of the user, the AI developer, and the platform the AI runs on, and the applicable legal framework.

If the user directly uses the AI-generated content in a way that causes harm (e.g., improper use of autonomous driving functionality), they could be held liable. Much depends on how the user deploys the AI's output. Developers could be liable if flaws or negligence in the design and development of the AI system contributed to the harm. For instance, if the AI was programmed without adequate safeguards, the developer might be responsible. The platform hosting the AI service might also be held liable if it fails to implement reasonable controls over the content generated by its AI. However, many platforms have legal protections, especially in jurisdictions like the United States under laws such as Section 230 of the Communications Decency Act, which generally protects platforms from liability for user-generated content.

Determining who is legally responsible for AI-generated content can be challenging. There have been several recent reports of documented hallucinations by various AI products, including made-up legal citations in court briefs and fabricated histories of individuals, which caused significant reputational and other harm.

In May 2023, the law firm of Levidow, Levidow & Oberman admitted to using a generative AI platform that produced six non-existent court decisions cited during its representation in a personal injury case against Avianca Airlines. U.S. District Judge P. Kevin Castel of the Southern District of New York wrote in his opinion that "Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance ... but existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings". See Mata v. Avianca, Inc., No. 1:2022cv01461 - Document 54 (S.D.N.Y. 2023). At the same time, AI companies appear to be taking the position that they are not responsible for such hallucinations or for any of the problems the hallucinations cause.

Although AI software and its use is generally considered a "service", AI-generated content might be considered a "product", and principles of product liability could apply, e.g., to Tesla Full Self-Driving (FSD). If the AI-generated content is defective and causes harm, the principle of strict liability might hold the manufacturer or distributor responsible, regardless of fault. Liability can also arise if any party involved in the creation, distribution, or use of AI-generated content breaches a duty of care; that breach must be a direct cause of the harm for liability to be established. If there are agreements in place, such as terms of service or licensing agreements, those documents might specify liability in the event of harm caused by AI-generated content.

Two kinds of lawsuits are being filed against Tesla: lawsuits claiming that the company made misleading promises concerning Autopilot to purchasers who paid extra for the feature, and lawsuits involving people who have been injured or killed while using Autopilot. Tesla has prevailed in each case because the users did not follow the Tesla "terms of service" for safe and proper use. In summary, liability for harm caused by AI-generated content could fall on the user, the AI developer, the platform, or a combination of these, depending on the circumstances. The determination of liability will involve a multifaceted analysis of legal principles like negligence, product liability, contractual obligations, and specific regulations applicable to AI.

Conclusion

Realizing productivity and profits from AI, while staying in compliance with legislative and judicial developments, requires an approach that includes both strategic and ethical considerations. Users of AI must keep abreast of AI-related legislation and regulations. Developers need to build AI systems with ethical considerations in mind: avoid bias and discrimination, prioritize transparency, and ensure that AI models adhere to fairness and accountability principles. All who use or develop AI must comply with data protection laws like the GDPR, implement robust security measures to protect AI systems from cyber threats, and ensure implementers are well educated about AI ethics and regulations. Organizations that use AI must establish a responsible AI governance framework.

To survive, all must measure the impact of AI on productivity and profits. By proactively addressing legal and regulatory considerations and prioritizing ethical AI development, organizations can position themselves to realize the productivity and profitability benefits of AI. The bottom line is that AI can be profitable through its strategic and effective use: harnessing its capabilities to enhance productivity, reduce costs, improve decision-making, and drive revenue growth.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

