The Courts & Tribunals Service has recently published guidance on Artificial Intelligence ("AI") for judicial office holders (the "Guidance").

This is the first time guidance of this nature has been published to assist judicial office holders. Its publication suggests that the Courts are alive to the risks posed by the unsupervised and unsanctioned use of AI by Court staff and users alike, and that they intend to adopt a precautionary approach, at least in the near term, particularly in relation to novel applications of AI in litigation and the courtroom.

Summary of the Guidance

The Guidance includes a glossary of common terms, including a broad definition of AI. It focuses not only on the potential risks where AI is used by the judiciary, but also on the need for the Courts to be increasingly alive to the use of AI by legal teams and litigants in person. The key issues covered in the Guidance are summarised below.

  • Understanding the limitations of AI – public generative AI systems (i.e. chatbots such as ChatGPT) do not draw on authoritative databases; rather, they are trained on publicly available information from the internet, which tends to be tilted towards US law. Nor do they necessarily provide accurate answers: their focus is on producing plausible, probabilistic answers, not accurate ones.
  • Confidentiality and privacy – public AI systems are not confidential. Any information entered into a public AI chatbot should be treated as being published to all the world – the information input to an AI chatbot is used to train it and may be used to respond to questions by other users.
  • Accountability and accuracy – as above, public generative AI is probabilistic and prone to making factual errors. These systems are therefore ill-suited to legal research and analysis.
  • Bias – because public AI is trained using publicly available data, it will inevitably reflect inaccuracies and biases in that data. Judicial office holders should be alive to the need to correct such biases.
  • Data security – this includes the need to use work devices and work email addresses when using public AI systems for judicial work.
  • Accountability and responsibility – while generative AI may be a useful secondary tool, judicial office holders are personally responsible for material which is produced in their name and should therefore appropriately supervise clerks and judicial assistants, including in the use of AI tools.
  • Awareness that Court and Tribunal users may be using AI tools – while certain AI tools, such as technology-assisted document review, image recognition and predictive text, have been in use for years, the current generation of generative AI has not.
    • Therefore, until the legal profession becomes familiar with these latest tools, it may be necessary to remind individual lawyers of their obligations to the Court, including the obligation to ensure that all material put before it is accurate and appropriate. Lawyers may use AI, but they must do so responsibly, and its output should be verified and checked.
    • Generative AI is increasingly being used by litigants in person, who may not be aware that such systems are prone to error and may lack the ability to verify their output. It may be necessary to make enquiries of litigants in person, and the Courts should be alive to faked materials. Deepfake technology is a new area, but one the Courts should be equipped to handle.

Conclusions and Implications

The Guidance is timely, following a recent case in which a litigant in person in the First-tier Tribunal (Tax Chamber) apparently used generative AI (such as ChatGPT) to produce at least parts of her legal submissions, which resulted in fictitious case citations. This is an example of what is known as an AI "hallucination": because the AI system is focused on producing plausible, rather than necessarily accurate, results, it can generate factually inaccurate output.

For an example of generative AI being used appropriately, look no further than Lord Justice Birss, who recently confirmed that he used ChatGPT to summarise an area of law with which he was already familiar, for inclusion in a judgment. That is consistent with the overall tenor of the Guidance: AI may well be used to achieve efficiency gains in areas in which the user is already qualified, but it is unlikely to be appropriate to use publicly available AI systems to bridge a knowledge or skill gap.

The Guidance alludes to the possibility of the Courts adopting AI systems in the future (AI is currently being trialled in some courts to produce transcripts of oral hearings). The judicial working group which prepared the Guidance is considering the scope of its future work to support the judiciary as AI systems become increasingly sophisticated and widely used. An FAQ document to support the Guidance is under consideration, no doubt with more to follow.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.