Is A Decision-Maker's Use Of AI Unfair?

Torkin Manes LLP

The Canadian administrative state is under strain.

Across the country, tribunals and decision-makers face increased workloads, pressure to render decisions justly, and the need to balance efficiency against the importance of the issues they are adjudicating.

Artificial intelligence ("AI"), including large language models ("LLMs") such as ChatGPT and BERT, holds much promise for alleviating the burden on decision-makers.

Algorithms could synthesize significant amounts of data to help a tribunal reach the most judicious outcome. LLMs have the potential to reduce the time it takes for adjudicators to render reasons for decision.

But the use of AI in administrative decision-making also threatens to compromise a hearing's fairness.

Because AI is largely trained on data culled from the Internet, the biases and partiality ingrained in online data may well be reflected in algorithmic outcomes or in the linguistic outputs produced by LLMs.

Moreover, tribunals and adjudicators could be motivated, intentionally or not, to abdicate their decision-making role to AI.

Canadian case law governing the procedural fairness implications of employing AI for administrative decisions remains in its infancy.

Use of Non-AI Software is Fair

Whether reliance on AI results in procedural unfairness depends, in the first place, on whether the software employed by the decision-maker constitutes AI at all.

Two recent decisions of the Federal Court illustrate the point.

In Haghshenas v. Canada (Minister of Citizenship and Immigration), 2023 FC 464 and Kumar v. Canada (Minister of Citizenship and Immigration), 2024 FC 81, the applicants challenged both the substantive reasonableness of, and the procedural fairness associated with, an immigration officer's use of the Microsoft Excel-based "Chinook" software.

According to the Government of Canada, Chinook is used by Immigration, Refugees and Citizenship Canada ("IRCC") to condense an applicant's information. IRCC states that Chinook "does not utilize artificial intelligence (AI), nor advanced analytics for decision-making, and there are no built-in decision-making algorithms".

Despite the Government's characterization of Chinook, Canadian Courts have found that decisions relying on Chinook have input "assembled by AI".

In Haghshenas, the Federal Court rejected the applicant's argument that a decision denying a work permit under the Immigration and Refugee Protection Act, S.C. 2001, c. 27 was unreasonable because Chinook was used in reaching it.

Applicant's counsel argued, in effect, that the decision remained opaque "unless it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes".

The Court held that the decision was made by a human visa officer, not AI.

While the Court agreed that the decision "had input assembled by artificial intelligence", the question of whether an administrative decision was substantively reasonable turned on the decision itself, not whether artificial intelligence was used in achieving it.

A similar line of reasoning was employed in the recent decision of Kumar, supra.

The Court dismissed an application for judicial review which challenged a visa officer's refusal to issue a study permit. The application was based in part on the fact that the visa officer's notes referenced the assistance of Chinook 3+, "without further explanation or context".

Citing Haghshenas, the Court in Kumar reiterated that the decision in question was made by a visa officer, not by software.

The decisions above illustrate a few key points:

  1. there appears to be some confusion between the Courts and the Government about whether administrative decisions are being made, at a minimum, with input from AI; and
  2. where the decision is ultimately made by a human, it will not be prima facie unreasonable simply because it was arrived at with the help of algorithmic or AI contributions.

Not Inherently Unfair for Decision-Makers to Use AI

In April 2024, the Federal Court released its decision in Luk v. Canada (Minister of Citizenship and Immigration), 2024 FC 623, echoing the theme that the mere use of AI, in and of itself, does not render an administrative decision unfair.

Luk concerned an application for judicial review of a visa officer's decision refusing the applicant a study permit to pursue a diploma in baking and pastry arts at a community college in Canada.

The officer declined the application on the basis that they were not satisfied that the applicant would leave the country following their studies.

The applicant advanced two arguments relating to the use of AI.

First, they alleged that the decision was made by an "inanimate object" or what the applicant characterized as a "computer inside a computer".

Second, the decision was signed by "KL", who, the applicant argued, might not be an actual human visa officer but rather "an administrative assistant filling in for the computer".

In upholding the visa officer's decision, the Court noted that there was no evidence that AI or an algorithm was used in the decision-making process. Rather, the record showed that a visa officer made the decision and provided reasons for it.

The Court also held that the mere use of AI or an algorithm, in and of itself, would not render the decision procedurally unfair:

...Even if there was evidence that the decision had been made with the assistance of [AI] or an algorithm, I am not satisfied that any such assistance, on its own, constitutes a breach of procedural fairness. Whether or not there has been a breach of procedural fairness will turn on the particular facts of the case, with reference to the procedure that was followed and the reasons for decision...When those factors are considered, I find that the Applicant has failed to demonstrate any breach of his procedural fairness rights.

Need for Clarity on the Use of AI in Decision-Making

As Canadian Courts continue to navigate the effect of AI on natural justice, a few themes emerge.

The use of AI and algorithms to assist the decision-making process is not prima facie unfair.

The question of natural justice and procedural rights depends on context.

This means that administrative decisions should continue to be made by humans. Where AI is employed to assist in that process, the person affected by the decision is arguably entitled to notice that an algorithm or LLM, for example, was relied on in reaching the adjudicator's conclusions.

Of course, procedural fairness concerns regarding AI do not end with notice.

While no Canadian Court has yet determined the question, whether a tribunal's reliance on AI outputs gives rise to a reasonable apprehension of bias remains a live issue.

Perfunctory dependence on AI tools trained on data from the Internet imports the problematic biases inherent in online information. Decision-makers tempted to delegate their adjudicative function to AI could find themselves, wittingly or unwittingly, perpetuating bias and stereotypes against historically disadvantaged groups.

The solution, it seems, is clear guidance in the form of tribunal policies or case law.

In the absence of such direction, tribunals and adjudicators risk tainting their decisions with potential bias and natural justice violations.

