The launch of ChatGPT by OpenAI in 2022 initiated a transformative phase in the domain of Artificial Intelligence (AI). AI language models like ChatGPT exemplify this profound shift, given their ability to generate human-like text and their wide-ranging applications in customer service and content creation.1 Nevertheless, as AI technology progresses, the legal implications associated with its usage also evolve.

There are various ways in which an individual can initiate legal action against ChatGPT. The most frequent basis for such claims has been the infringement of legal rights. Some of the potential concerns are:

  • Violation of copyright
  • Data privacy breaches
  • Presence and reproduction of biased or erroneous information

Due to its extensive training on an enormous dataset gathered from the Internet, it is virtually impossible to trace the sources of ChatGPT's information. In essence, this implies that the model itself could potentially be responsible for various instances of copyright infringement. This was exhibited when authors Mona Awad and Paul Tremblay jointly filed a class action complaint for copyright infringement at the US District Court for the Northern District of California against OpenAI, the company which created the AI tool ChatGPT. The plaintiffs alleged that OpenAI violated copyright laws by "training" its AI model, i.e. ChatGPT, on novels without obtaining prior authorization from the authors.2

OpenAI faced a further legal challenge in the form of an extensive lawsuit, which alleged that its AI models, ChatGPT and DALL-E, underwent training utilizing the data of hundreds of millions of individuals without obtaining proper consent. The lawsuit contended that OpenAI collected personal data directly from individuals who interacted with its AI systems and other applications that incorporated ChatGPT.3 The complainants argued that such data collection and usage were in violation of privacy laws,4 particularly concerning the data of children.

It is important to highlight that ChatGPT retains conversations as training data for potential use in future models to provide a better user experience. This practice could result in legal challenges if users input confidential or sensitive information that is later reproduced by the AI tool. For example, if a junior lawyer were to use ChatGPT to draft a contract, personal data such as clients' names, addresses, and other confidential information could be saved as future training material and subsequently made available to other users of ChatGPT, thereby breaching such clients' right to privacy.

Due to the limitations of its training data and available information, ChatGPT is prone to inaccuracies. The AI tool can only provide responses based on the information it has at a given time, lacking a deep understanding of the subject matter. In this regard, OpenAI's website contains a heading titled "Limitations" which specifically states that the tool sometimes generates plausible-sounding yet incorrect or nonsensical answers.5 Recently, ChatGPT incorrectly labelled Australian politician Brian Hood as a criminal.6 In the United States, OpenAI is facing a lawsuit involving radio host Mark Walters.7 ChatGPT falsely identified Walters as being accused of embezzling funds from a non-profit organization called the Second Amendment Foundation, despite no such accusation ever having been made against him.

Furthermore, all information or data uploaded to ChatGPT, as well as any output generated by the system, remain the property of the user, as long as they adhere to OpenAI's terms of use. Upon perusal of OpenAI's Terms of Use, it is apparent that OpenAI bears no responsibility or liability for any of ChatGPT's outputs. The sole responsibility for any resulting outputs lies with the user, including any potential liability towards OpenAI itself.

The Government of Bangladesh is yet to implement any regulations to govern the use of AI in line with local laws. Although there are no specific laws solely dedicated to regulating AI usage, some existing laws, such as the Digital Security Act 2018, offer guidance on regulating digital activities and address issues like the misuse of digital devices, cyber-terrorism, hacking, and the dissemination of false information through digital media.

As mentioned before, the sole responsibility for any resulting outputs lies with the users of ChatGPT. Therefore, should a user employ ChatGPT in a manner resulting in any form of infringement of rights, they shall be held accountable for such actions. Sections 25 and 26 of the Digital Security Act 2018 address specific offenses related to digital data and information. Section 25 pertains to the transmission, publication, or propagation of offensive, false, or threatening data via digital means, while Section 26 deals with the unauthorized collection, use, or possession of identity information without lawful authority. Complying with these legal provisions is crucial to ensure responsible usage of ChatGPT and avoid any legal repercussions.

Footnotes

1. Siddharth K; 'Explainer: ChatGPT – what is OpenAI's chatbot and what is it used for?'; Reuters; available at https://www.reuters.com/technology/chatgpt-what-is-openais-chatbot-what-is-it-used-2022-12-05/

2. Tremblay et al v. OpenAI, Inc.

3. P.M. v. OpenAI LP

4. Gerrit De Vynck; 'ChatGPT maker OpenAI faces a lawsuit over how it used people's data'; The Washington Post; available at https://www.washingtonpost.com/technology/2023/06/28/openai-chatgpt-lawsuit-class-action

5. https://openai.com/blog/chatgpt

6. https://www.washingtonpost.com/technology/2023/04/06/chatgpt-australia-mayor-lawsuit-lies/

7. Mark Walters v. OpenAI, LLC (https://www.courthousenews.com/wp-content/uploads/2023/06/walters-openai-complaint-gwinnett-county.pdf)

Originally published 07 August 2023

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.