Ars Boni Et Aequi Or Artificial Boni Et Aequi? Why AI Cannot Replace Lawyers

Most lawyers are familiar with the maxim attributed to the Roman jurist Celsus: "Ius est ars boni et aequi" – the law is the art of equity and goodness. These words have become the basis of the axiology of law in European legal culture, obliging both those who practice law and those who create it to always keep equity and goodness in mind as fundamental values that thoroughly permeate the legal system.

It is therefore not without reason that the legal system is not a closed one: in many places it refers to normative orders separate from and external to the law (moral and customary ones), as exemplified by the Civil Code's frequent references to "established custom" in relations of a given kind, or to the principles of social coexistence. The law thus not only deliberately operates with concepts whose definitions cannot be found within the legal system itself, but also, in a somewhat veiled way, reminds us that it is a profoundly humanistic field: it attempts to encompass the multitude of phenomena occurring between individual people, phenomena which determine someone's fate from an individual perspective, yet become abstract from the perspective of written norms. This abstractness of legal norms creates an apparent resemblance between law and the rules governing mathematics or linguistic syntax, and with it a temptation to automate the navigation of legal norms using the increasingly popular artificial intelligence systems based on machine learning algorithms.

Indeed, we are witnessing a wave of attempts, more or less successful, to automate particular aspects of legal work. AI-based algorithms are being developed that analyze documents, select relevant information from them, and pick up repetitive patterns. Legal information systems have long used AI to facilitate information retrieval. AI is also being used to create legal documents based on ready-made templates. Let us leave aside the wild imagination of some lawyers who try to write pleadings using the popular ChatGPT chatbot.

But is it a good idea to apply machine learning algorithms to processes that require decision-making, i.e., adjudication? Can artificial intelligence effectively create pleadings and argue for the interests of individuals?

In order to answer the questions posed this way, it is worth looking at the achievements of the philosophy of mind and citing the now classic thought experiment of the American philosopher John Searle.1

Let's imagine that in a closed room we place a man who speaks English and does not know Chinese. This man is equipped with a written set of Chinese syntax rules and is tasked with generating responses to Chinese messages delivered to the room. The man processes the Chinese characters received and compiles them according to the rules available to him, then generates responses in Chinese in such a convincing manner that a native Chinese speaker might assume that he is communicating with another Chinese person. Can it then be claimed that the person in this "Chinese room" understands Chinese?

The answer is no: the person is merely manipulating signs whose meaning he has no idea of. The same is true of a computer algorithm - it is designed to know a set of rules governing a language or, in the case of a hypothetical legal artificial intelligence, a system of law. The algorithm thus receives input, a set of symbols (letters and numbers), interprets and compiles them according to the rules, and creates a response consisting of those same symbols.
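
To make the analogy concrete, here is a minimal Python sketch of such a rule-follower (the messages and the rule book are invented for illustration). The program produces fluent-looking replies by pure lookup, with nothing in it representing what the symbols mean:

```python
# A toy "Chinese room": the program follows purely formal rules,
# mapping input symbols to output symbols with no access to meaning.
RULES = {
    "你好吗": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "你叫什么名字": "我叫小明。",    # "What is your name?" -> "My name is Xiaoming."
}

def chinese_room(message: str) -> str:
    # The "person in the room" consults the rule book and copies out
    # the prescribed reply; the characters are opaque shapes to him.
    return RULES.get(message, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗"))  # a fluent reply, produced without any understanding
```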

Like the person in the "Chinese room," an algorithm can generate a response of such quality that a person using the algorithm may be convinced that he or she is actually in contact with another human being. In the case of machines, such a situation is referred to as passing the "Turing test."2 However, this does not mean that the algorithm understands the meaning of the message it generates.

This is because both the computer algorithm and the human in the closed room are deprived of any external referent. Receiving the word "apple" as part of its input, for example, the algorithm may be equipped with the definition "an apple is a sweet, hard fruit" and may, of course, use the word in the correct context. However, it has no awareness of what an apple is beyond the definition contained in its internal instructions, because the program does not and cannot interact with an apple. No one will point a finger at the fruit to show the algorithm that this is what is meant by "apple". The algorithm will not take an apple in its hand, nor will it taste what it tastes like.
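
The point can be sketched in a few lines of Python (the definition and the function names are invented for the example): the program below "knows" an apple only as a string of symbols mapped to other symbols, and there is no layer beneath the string.

```python
# The program's entire "knowledge" of an apple is this string.
DEFINITIONS = {"apple": "an apple is a sweet, hard fruit"}

def describe(word: str) -> str:
    # Correct usage by lookup: the symbol "apple" maps to more symbols.
    return DEFINITIONS.get(word, f"no definition of {word!r} available")

print(describe("apple"))
# There is nothing behind the definition: no perception, no taste,
# no object the word refers to. The referent simply does not exist here.
```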

Thus, it becomes apparent that machine learning algorithms, while operating with a variety of concepts, lack any understanding of them: they have mastered the syntactic rules of putting symbols together, while semantics remains alien to them. This phenomenon can be humorously exposed with ChatGPT. When ChatGPT is asked to tell a joke (even when given a scenario as input, e.g., "two men walk into a bar"), it often generates absurd answers.

Returning to more serious matters: if artificial intelligence does not understand the meaning of simple concepts, then abstract concepts such as the aforementioned "goodness," "equity," "established custom," or "principles of social coexistence" are all the more alien to it. How, then, could it navigate the legal system's references to other, external systems, such as a system of values? How could it construct an argument based on considerations of equity, or make a decision based on such considerations?

These questions remain open, and it is far from certain that technological developments will change anything here. Meanwhile, it is not uncommon for AI-based systems not only to fail to implement the basic values of the legal order, but to act in a manner directly opposed to them: AI is known to generate racist, xenophobic and homophobic content. This is because artificial intelligence reproduces what is contained in the input data it receives. That data comes from humans, so it is not free of what is inherent in the worse part of human nature. A system operating on such data replicates this content in a way that we would, in human terms, describe as "mindless."
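
The mechanism is easy to illustrate. A model that learns word-transition frequencies from its input data will reproduce whatever patterns that data contains, desirable or not. Here is a minimal Python sketch of such a frequency model (the toy corpus is a stand-in for real training data):

```python
import random
from collections import defaultdict

# Toy bigram model: it learns which word follows which in the input data.
def train(corpus: str) -> dict:
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model: dict, start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        # The model "mindlessly" replicates the patterns it was fed.
        out.append(random.choice(options))
    return " ".join(out)

model = train("the law protects the good the law serves the people")
print(generate(model, "the"))  # output echoes the input data, for better or worse
```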

It is undoubtedly not the best idea to incorporate artificial intelligence into the more complex and more "humanistic" aspects of practicing law.

Footnotes

1. https://en.wikipedia.org/wiki/Chinese_room

2. https://en.wikipedia.org/wiki/Turing_test
