"Artificial intelligence is one of the most profound things we're working on as humanity. It is more profound than fire or electricity."

- Sundar Pichai, CEO of Alphabet Inc. and Google

"We're at the beginning of a golden age of AI."

- Jeff Bezos, Founder and Executive Chairman of Amazon

"Artificial intelligence is the new electricity."

- Andrew Ng, Co-founder of Google Brain

Artificial Intelligence (AI) is a rapidly evolving facet of modern technology that has already had a significant impact on the way we live our lives. Its use is already present in a number of industries, such as retail, healthcare, finance and even law. AI models have been created for a whole host of applications: systems that generate images, systems that write complex software code, and even systems that compose songs and melodies based on input provided by users. The applications for AI seem endless, and there is no end to this innovation in sight. Despite this, no other AI bot in recent memory has garnered quite as much attention and widespread use as ChatGPT, which reportedly amassed more than one million registered users within one month of release.1

ChatGPT is a large language model that was reportedly trained on a vast amount of data and, as a result, is able to generate human-like responses to user prompts. It can write you a poem, or an article on string theory, and do so in a way that makes its output nearly indistinguishable from human work. When asked to introduce itself, ChatGPT's response was:

"I am ChatGPT, a language model developed by OpenAI. I am trained to understand and generate human language in order to assist with a wide range of tasks, such as answering questions, generating text, and more. I am here to help with any questions or information you may need".

While this new technology and its potential applications are incredibly exciting, it also presents a number of novel issues, both ethical and legal, that will need to be considered in the coming years as development and use continue. Questions such as:

  1. Who owns the output from these systems?
  2. What happens when these systems are trained on content subject to copyright restrictions or when that content is unknown?
  3. Is the use of ChatGPT ethical?
  4. How do educational institutions deal with the increasing use of these systems to plagiarise content for assessment?

Among these issues, perhaps the question you might be asking is: can I use ChatGPT, and if so, what consequences might I face?

The answer, as with most things, is: it depends. There are a number of factors you should consider before using AI to create content, or before using that content.

Who Owns the Output of an AI Bot?

If you are looking to use ChatGPT to write an article for you, it is important to consider whether you will actually be the author of that work, and by extension whether you actually own it or have the rights to use it.

The question of ownership of AI-generated works has not been extensively considered in Australia. The current position is that AI, as something that is not a natural person or a citizen of Australia, is unable to own copyright.2 Beyond this, the law in Australia may not yet be sufficiently developed to deal with the issue of ownership of, and copyright in, AI-generated content beyond the confines of current copyright law.

Could an argument be made that the person who directs the AI with the input is the owner? Possibly. There are certainly some examples of this occurring in everyday life. Phone cameras, for example, utilise some AI technology when taking photos to assist the user; despite this, it is the person who took the photo who would own the copyright. The important distinction between this and having ChatGPT write you an entire adventure novel is that the phone's AI is simply assisting in the creation of the content, being the photo. ChatGPT and other AI language models like it essentially do the work for you, with very little input required from the human. As such, it may be difficult to argue that the two situations are the same.

While there have been some significant cases considering this specific topic in recent years, Australia has yet to catch up. Even so, the ownership of AI-generated content will no doubt be a hot topic in legislatures and the courts in the years to come.

For now though, it is important to consider if the AI is being used under any specific service agreements, or terms and conditions, and if so what those say regarding the ownership of the content. If you are at all unsure, you should seek legal advice prior to attempting to claim ownership or use any AI generated content for commercial purposes.

Am I Plagiarising?

Possibly. If you choose to submit an article written by ChatGPT to your university or school claiming that it is your own work, you are more than likely engaging in academic misconduct, among other things. This is an issue that educational institutions are facing all over the world right now, with some turning to more in-class or in-person assessment pieces to combat the use of AI to complete homework. A number of schools have reportedly banned the use of ChatGPT on their servers and computers, with some even threatening immediate expulsion should students be caught using the AI.3

A number of programs have already been released that claim to be able to detect the use of AI in drafting a piece of text; however, their effectiveness and application in this context remain untested.4

Am I Infringing Copyright?

Once again, possibly. One of the major issues that has arisen since the release and widespread use of ChatGPT is the ownership of copyright in the content that the bot was trained on. If the AI was trained using content protected by copyright, or content subject to licensing terms such as Creative Commons, and has subsequently reproduced or adapted this content for the user to distribute or use however they wish, is the user then infringing copyright without the authority of the true owner? The answer may well be yes; the usual rules around copyright infringement would apply, as no special rules exist for ChatGPT. However, how do you know whether you are infringing if you do not know what content was used to train the AI bot in the first place?

There has already been movement in US courts on this issue, with actions being brought against the developers and owners of AI systems for copyright infringement or similar causes of action involving copyrighted work. One example is the recent action brought against GitHub, an open-source code repository, together with Microsoft and OpenAI, regarding the AI program Copilot, which assists users by suggesting code.5 It is claimed that some of the source code used by Copilot came from GitHub and is protected by copyright. No doubt, this is just the tip of the iceberg in terms of the legal action to come on this particular issue.

Is It Ethical?

What makes ChatGPT so impressive is its unprecedented ability to write in a way that is barely distinguishable from human writing. To achieve this, the AI was reportedly trained on billions of words pulled from the internet (and possibly elsewhere) and curated by OpenAI employees and contractors. The problem this method posed was how to weed out any potentially harmful, explicit or inappropriate content, or content that is simply incorrect. Allegations have now been made that OpenAI outsourced much of the work of finding and labelling inappropriate or unwanted content to a firm in Kenya, whose employees were to trawl through thousands upon thousands of excerpts of this 'bad' text.6 It is reported that some employees were tasked with viewing horrific material, including descriptions of murder, self-harm, torture and child sexual abuse, and were paid only between $1 and $2 per hour to do so. Claims have also been made that these workers were not afforded adequate counselling or psychological support.

These allegations draw attention to the fact that the advancement of technologies such as this may have been achieved through the use (and possibly misuse) of an enormous amount of human labour. This is a serious issue that, if the allegations prove true, needs to be addressed by the developers, distributors and even users of these systems.

Is ChatGPT Accurate and Correct?

ChatGPT does not currently provide citations for the information it delivers to you, so how do you know whether the information is accurate? The answer is, you don't. We have not, as yet, been given access to the databank used to train ChatGPT, or to the proximal policy optimisation model used to fine-tune the AI trainers' rankings, so it is impossible to verify how correct its output really is. Additionally, the AI's knowledge base extends no later than 2021 (although that is likely to change shortly), so if you were to ask it a question involving an event since then, it may not hold any answer, let alone the correct one. As such, there is no guarantee whatsoever, as acknowledged by OpenAI, that the output you are getting from ChatGPT is truthful or accurate.

It is important to note that ChatGPT is a powerful language prediction device, not an AI knowledge bank. It uses language patterns to predict and put together words and sentences, but it does not actually hold knowledge per se. So, while the language and text it was trained on might contain the correct knowledge, it might equally be incorrect.7 It is also prone to producing answers that sound accurate, even when they are not.

What Does ChatGPT Have to Say?

We asked ChatGPT what it thought users should be mindful of when using it, or other AIs like it, to create content, and it provided the following response:

"When using ChatGPT or other large language AI models to create content, you should consider the following ethical considerations:

  1. Accuracy: AI models can generate content that is not entirely accurate or relevant, so it's important to review and fact-check the output before publishing or using it.
  2. Bias: AI models can exhibit biases learned from the data they were trained on. Be mindful of these biases and take steps to reduce their impact, such as by using diverse training data.
  3. Ownership and control: The content generated by AI models may be subject to copyright and other intellectual property laws. Be aware of these laws and obtain the necessary permissions before using the content.
  4. Responsibility: When using AI-generated content, it's important to take responsibility for its impact and consequences, especially if the content is harmful or offensive.
  5. Privacy: When using AI models, be mindful of privacy considerations and take steps to protect personal information and sensitive data.

By being aware of these considerations and taking steps to address them, you can use AI models like ChatGPT ethically and responsibly".

It may be informative for the readers of this article to pose similar questions to ChatGPT by going to ChatGPT: Optimizing Language Models for Dialogue (openai.com) and registering for an account.

To Conclude

With the widespread release of ChatGPT, we are entering uncharted territory. The issues brought about by AI-generated content have not yet been widely considered, and as such the relevant legal rules and principles that will eventually govern them do not yet fully exist. Countries all over the world are currently in the process of considering and even creating specific legislation on these issues. Practically speaking, how does this apply to you? If you intend to use AI, or creations realised through or by AI, it is important that you first evaluate the current laws and regulations surrounding copyright ownership and AI in the relevant jurisdiction, to ensure that you can effectively protect any intellectual property you may hold in this content.

Footnotes

1 As reported by DMR at https://expandedramblings.com/index.php/chatgpt/.

2 s 32 Copyright Act 1968 (Cth).

3 Here are the schools and colleges that have banned the use of ChatGPT over plagiarism and misinformation fears (msn.com).

4 Cheaters beware: ChatGPT releases AI detection tool to catch cheaters in schools and universities (firstpost.com).

5 Doe v. GitHub, 22 Civ. 6823 (N.D. Cal. Nov. 10, 2022).

6 OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time

7 OpenAI states on the ChatGPT: Optimizing Language Models for Dialogue (openai.com) website that "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers" and "While we've made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior".

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.