Zhang v. Chen, 2024 BCSC 285 [Zhang] is a landmark decision addressing the misuse of artificial intelligence ("AI") tools in a British Columbia legal proceeding. This decision serves as an important reminder of counsel's ethical obligations when employing such tools—including, critically, the duty of competence and the duty to verify outputs produced by AI. It also underscores the lack of official guidance on the use of AI tools in BC's courtrooms, in particular on whether disclosure of such use is required.

The Decision

In Zhang, counsel mistakenly filed a notice of application containing non-existent legal authorities that had been "hallucinated" (i.e., fabricated) by ChatGPT. The lawyer later withdrew the fabricated citations and gave evidence at the hearing that she did not know ChatGPT could generate fake authorities.1

Opposing counsel, after discovering that the authorities were fake, requested that "special" (i.e., elevated) costs be awarded against the lawyer personally, both as a sanction for her use of fake case authorities and as a means of compensating the opposing party for the time it expended trying to locate them. Justice Masuhara declined to order special costs against the lawyer, finding that such costs are appropriate only in extraordinary cases involving reprehensible conduct or an abuse of the court's process.2 Although he described the lawyer's conduct as "alarming", he found that she had no intent to deceive the court.3 He also noted that the error would have been discovered before the authorities were relied on in court, as the requirement to produce books of authorities would have "exposed the non-existence of the cases" while the parties were preparing for the hearing.4

However, Masuhara J. did hold counsel personally liable for the ordinary costs of the application, to compensate for the additional effort and expense opposing counsel incurred in researching and addressing the fake cases. Additionally, he ordered counsel to conduct a mandatory review of all of her files before the BC Supreme Court and commented that, going forward, it would be "prudent" for her "to advise the court and the opposing parties when any materials she submits to the court include content generated by AI tools such as ChatGPT."5

Implications for the Use of AI in BC Courts: To Disclose or Not to Disclose

This is the first case in Canada to comment on the use of "hallucinated" legal authorities in court proceedings, but it is likely that more guidance will follow in BC and in other jurisdictions. The court highlighted that legal hallucinations are "alarmingly prevalent"6, noting some of the specific risks:

[38] The risks of using ChatGPT and other similar tools for legal purposes was recently quantified in a January 2024 study: Matthew Dahl et al., "Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models" (2024) arXiv:2401.01301. The study found that legal hallucinations are alarmingly prevalent, occurring between 69% of the time with ChatGPT 3.5 and 88% with Llama 2. It further found that large language models ("LLMs") often fail to correct a user's incorrect legal assumptions in a contrafactual question setup, and that LLMs cannot always predict, or do not always know, when they are producing legal hallucinations. The study states that "[t]aken together, these findings caution against the rapid and unsupervised integration of popular LLMs into legal tasks."

In spite of Masuhara J.'s comments about the prudence of disclosing reliance on generative AI tools, the BC courts have not released any official guidance on the use—or disclosure of use—of AI tools in court. This contrasts with a number of jurisdictions that have mandated not only disclosure of the fact that generative AI tools were used to prepare court-filed materials, but also disclosure of how those tools were used (e.g., the Supreme Court of the Yukon, the Court of King's Bench of Manitoba, and the Provincial Court of Nova Scotia). These directives have generated criticism for creating risks around solicitor-client privilege that are arguably "unnecessary in view of counsel's professional obligations".7

On the other hand, the Federal Court recently published a directive—after a public consultation process—that requires litigants to make a uniform declaration stating that AI "was used to generate content in this document", without requiring parties to disclose how AI was used. The Federal Court directive also urges lawyers and litigants to use caution when employing AI tools in a legal context and emphasizes the need to maintain a "human in the loop" to verify AI-generated materials. In response to criticism that such a rule is unnecessary, the Federal Court noted that while counsel have professional obligations as officers of the court, self-represented litigants do not, and "[i]t would be unfair to place AI-related responsibilities only on ... self-represented individuals, and allow counsel to rely on their duties". Similarly, the Courts of Alberta have issued a combined notice urging practitioners and litigants to observe the principles of "caution" and of maintaining a "human in the loop", but without requiring disclosure of AI-generated material.

It remains to be seen what approach BC courts will take to this issue, and whether they will regard disclosure of AI-generated content as prudent (as Masuhara J. suggests) or as an unnecessary risk to privilege (as the Canadian Bar Association has elsewhere advocated).8 Ultimately, the framework that each court adopts regarding AI—if any—is likely to reflect a complex balancing of risks, and the solution is far from simple.

Law Society of British Columbia Guidance

As lawyers await further guidance from the BC courts, reference may be had to the Law Society of British Columbia's Practice Resource (November 2023), "Guidance on Professional Responsibility and Generative AI". At present, this Practice Resource does not counsel disclosure to the court; it merely cautions that lawyers should "check with the court, tribunal, or other relevant decision-maker to verify whether you are required to attribute, and to what degree, your use of generative AI". This guidance reflects a departure from the initial caution by the Law Society in July 2023, cited in Zhang, which counselled disclosure as the "prudent" choice:

[34] In this regard, I note that the Law Society sent out guidance to the profession in July 2023 stating:

... Where materials are generated using technologies such as ChatGPT, it would be prudent to advise the court accordingly. The Law Society is currently examining this issue in more detail and we expect that further guidance to the profession will be offered in the coming weeks.9

The Law Society of British Columbia's Practice Resource also sets out important guiding principles for the responsible and ethical use of generative AI by lawyers. It echoes the cautionary remarks in Zhang that "generative AI is still no substitute for the professional expertise that the justice system requires of lawyers", and that competence when selecting and using such tools remains "critical".10

Indeed, this month, the Law Society of British Columbia also added new commentary to Rule 3.1-2 [Competence] of the Code of Professional Conduct for British Columbia, which underscores that lawyers have obligations to be technologically competent and to know the risks of technology they employ in practice (including risks to confidentiality). These new commentary items read as follows:

[4.1] To maintain the required level of competence, a lawyer should develop an understanding of, and ability to use, technology relevant to the nature and area of the lawyer's practice and responsibilities. A lawyer should understand the benefits and risks associated with relevant technology, recognizing the lawyer's duty to protect confidential information set out in section 3.3 [Confidentiality].

[4.2] The required level of technological competence will depend upon whether the use or understanding of technology is necessary to the nature and area of the lawyer's practice and responsibilities and whether the relevant technology is reasonably available to the lawyer. In determining whether technology is reasonably available, consideration should be given to factors including:

(a) the lawyer's or law firm's practice areas;

(b) the geographic locations of the lawyer's or firm's practice; and

(c) the requirements of clients.

Footnotes

1. Zhang v. Chen, 2024 BCSC 285 at paras. 16-17.

2. Zhang v. Chen, 2024 BCSC 285 at para. 26.

3. Zhang v. Chen, 2024 BCSC 285 at para. 31.

4. Zhang v. Chen, 2024 BCSC 285 at para. 30.

5. Zhang v. Chen, 2024 BCSC 285 at paras. 41-42.

6. Zhang v. Chen, 2024 BCSC 285 at para. 38.

7. Canadian Bar Association Letter to Honourable Justice Michael D. Manson re: Draft Federal Court Practice Direction on AI Guidance (November 24, 2023), online.

8. Canadian Bar Association Letter to Honourable Justice Michael D. Manson re: Draft Federal Court Practice Direction on AI Guidance (November 24, 2023), online.

9. Zhang v. Chen, 2024 BCSC 285 at para. 34, citing Law Society of British Columbia guidance to the profession (July 2023).

10. Zhang v. Chen, 2024 BCSC 285 at para. 46.
