Part 16: How Companies Can Ensure Transparency For Their Use Of AI


According to certain data protection experts, data protection always requires it; the EU AI Act explicitly requires it only in certain cases; and in "ethics" it is a basic requirement: transparency in the use of AI. But when and how must or should a company be transparent? In this part 16 of our AI blog series, we discuss the legal and ethical foundations of transparency – and a practical approach to ensuring it.

There are a number of reasons why transparency in the use of AI is demanded today. Transparency should allow those affected to adapt to decisions and other results of AI, to evaluate them and possibly even to take action against them. Another reason, however, is undoubtedly the widespread uncertainty about what this technology can do and what it does to each and every one of us. In connection with its use, we feel a loss of control over the processing of our data and over decisions made about us by others with the help of a technology that many of us cannot begin to understand, let alone control. This leads to a feeling of unease, and we look for measures to deal with it. Because we don't really have any, we fall back on the supposed silver bullet that we like to use in such situations: the call for transparency.

Transparency is not a silver bullet

However, transparency is neither a silver bullet nor a panacea. Although it can increase our awareness of AI as a potential source of danger, it often will not make up for the loss of control. Even if you were to find out what AI technologies are being used to monitor us on the internet, you as an individual can only defend yourself against this to a limited extent. And only a few of us make any serious effort to inform ourselves about these things anyway. Unlike with certain other technologies, experts believe that the public's interest in understanding and learning about AI is stagnating or even declining. There is no lack of information about how AI works, where it is used and what it can (and cannot) really do.

There are many reasons for this lack of interest. The amount of information required keeps increasing, the possibilities for intervention are decreasing, people are becoming more accustomed to the technology and, as a result, the psychological strain is also decreasing. We have all had this experience with the processing of our data and the corresponding privacy statements: although the law requires these statements, the majority of people do not see them as a contribution to data protection but as an alibi exercise. Data subjects are usually not interested in them. Only the supervisory authorities keep imposing ever more absurd requirements for privacy statements.

A similar development is emerging in the field of artificial intelligence. Data protection supervisory authorities such as the Swiss Federal Data Protection and Information Commissioner (FDPIC) are publicly demanding that every time AI is used, it must be disclosed why this is being done, what exactly is happening and what data is being used for this purpose. Although the FDPIC claims there is a legal basis for such a broad requirement, there is none. The requirement is far too absolute, makes no sense and will not be complied with – probably not even by the FDPIC itself. AI is used in so many places (and has been for years) that full disclosure would go beyond what is reasonable.

The demand for total transparency also fails to recognize that "AI" as a technology initially only means that a system has not only been programmed by a human, but has also been trained using examples. This means that it is not only able to calculate results "linearly", but also to recognize "non-linear" patterns, which in simple terms leads to it being considered partly "autonomous" and, thus, qualifying as AI in the legal sense (e.g., under the EU AI Act). This initially says nothing about the associated risk of an application for affected persons – which is the central element: Many anti-spam scanners have been AI-based for years, and nobody would think of specifically labelling their use and making it transparent in the required sense. There are also many other everyday AI applications that certainly do not need to be disclosed – from text translation with tools such as "DeepL" to optical character recognition (OCR), which is now found in numerous office scanners and in our PDF readers, to photography and image editing software on today's mobile phones. AI technologies have been around for decades, and in many places we take them so much for granted that we don't give them a second thought (we will explain how these technologies work in another blog post using the example of a large language model). New developments in certain areas, such as generative AI, have only recently begun to be marvelled at by a wider public. However, there are also applications that are already very familiar to us all and are considered harmless, such as the previously mentioned text translation systems, which are also generative AI in its purest form.

But why is transparency not a silver bullet in the field of AI either? The concept of problem-solving through transparency in the field of AI is based on the understanding that a person affected by AI has a realistic chance of avoiding the AI or influencing it – a kind of "algorithmic" self-determination. In practice, this often does not really exist, and even where it does, it is hardly possible for a person of average understanding to make a sensible decision, because they will normally lack the technical understanding needed to grasp how the AI works and what its consequences are. This often cannot really be communicated in a practicable form for everyday use either, quite apart from the fact that most people are not interested in it at all. Added to this is the fact that in some AI applications today not even experts know why the systems do exactly what they do. So the most that can be conveyed under the heading of transparency is a very rough picture of what the AI does and roughly how its decisions are made. This in turn means that it ultimately becomes a purely "gut" decision whether someone decides for or against, for example, submitting a job application to a company that pre-checks it with an AI. Of course, the possibility of making such a gut decision can already be a gain in self-determination, but the HR example shows that it does not solve the actual problems that such a use of AI entails, such as the lack of reliability of the outputs of an AI. There will certainly be applicants who, knowing that an AI is reviewing their application, will feel powerless and therefore worse off than if only a human HR person were to screen their application, even if that human were to review the documents not only with bias but also influenced by other random factors.

For its part, the call for transparency as a protective measure harbours risks: it can give the impression that responsibility is being shifted to those affected – according to the motto: "They are informed and can, therefore, now defend themselves if they don't like it". This is, by the way, a fundamental principle of Swiss data protection law. Similarly, an excessively far-reaching transparency obligation leads to an inflation and overflow of information, which in turn leads to "attention fatigue" and, thus, a diminishing awareness of those affected: They no longer take in all the information, which means that the really important information is lost. The obligation to provide privacy statements and cookie banners has demonstrated this very well. Legal requirements defined with good intentions are not always good ones. This also applies to some transparency requirements in connection with AI.

A sense of proportion is required. The decisive factor must therefore generally be what a system is used for and what risk is associated with it, not whether it is based on a specific technology. Information should be provided where it really matters, so that those affected can adapt to it. A "scattergun approach" is not a valid concept for determining transparency. For that reason, we assume that extreme demands, such as those postulated by the FDPIC, will disappear again as people become more used to AI and the hype surrounding generative AI recedes.

Where and when transparency is required

Legally, transparency about the use of AI is mandatory under European data protection law in principle in the following four cases (for US law, we have found the blog post of our colleagues at Orrick informative):

  • As part of the duty to inform (Art. 13/14 GDPR, Art. 19 ff. CH FADP): This concerns the data protection statement, which, apart from certain exceptions, is always required when personal data is obtained, i.e. collected in a planned manner. Aspects such as the purposes of processing and the categories of data obtained via third parties must be specified, but not the computer technology used to process this data. Whether a programmed or a trained (i.e. AI) computer system processes data has never played a relevant role in the past and still does not. Of course, it must be disclosed as a purpose if a customer's data is to be used by a company not only for performing a contract, but also for training an AI model, but it is the (additional) intended purpose of use, not the technology that triggers the transparency requirement. There are four caveats here: Firstly, the use of AI can lead to the collection of new personal data, which must be disclosed. Anyone who uses generative AI to create an analysis of a natural person collects new personal data about that person, not directly from them, but indirectly. This means that this category of personal data must be disclosed in the privacy statement. Secondly, there is an obligation to provide information about certain automated individual decisions (according to the EU General Data Protection Regulation, GDPR, including profiling). This can concern AI, but just as easily other technologies. This is not new. Also keep in mind that such automated decisions/profiling may trigger additional data subject rights (namely for human review). Most privacy statements inform about this, too. Thirdly, as a special provision, the Swiss Data Protection Act (CH FADP) stipulates that all other information to ensure "transparent data processing" must also be listed in the privacy statement. However, this is a regulation for exceptional cases that rarely applies. The use of AI does not trigger it per se, and Art. 19 CH FADP does not generally go any further than Art. 13/14 GDPR. Fourthly, we assume that certain data protection authorities will come up with the idea of having to disclose the involvement of AI service providers such as OpenAI or Microsoft by name, just as they require this when using third-party providers as part of a company's online activities. Neither the GDPR nor the CH FADP require providers to be named as recipients of personal data in a privacy statement; it is sufficient to specify categories. However, where providers themselves use the company's activities to collect personal data for their own purposes, an explicit mention will generally be appropriate. If the provider is purely a processor, we see no legal reason to mention it by name.
  • Transparency of the purpose of data processing and of other important parameters (Art. 5(1)(a) and (b) GDPR, Art. 6 para. 2 and 3 CH FADP): Personal data may normally only be processed for those purposes that were evident when they were obtained or that are compatible with such purposes. If we disregard cases such as the training of an AI, the processing purpose is normally independent of the technology used. Nevertheless, the principle of good faith and fairness, as the case may be, can also require transparency with regard to other important parameters of a data processing activity, so that the data subjects can decide whether to provide their personal data or even object to its processing. Based on the circumstances of a particular case, this is where one can validly argue that data protection requires transparency about AI. That said, the use of AI does not automatically lead to a transparency obligation of this kind. For example, it is quite conceivable that the fact that a data subject's data is processed using AI (instead of other methods) is objectively irrelevant to that person or that they must assume such processing because it is evident from the circumstances. Anyone who carries out transactions via a bank must today assume that the bank will also use AI to analyze these transactions for indications of money laundering and illegal activities. Anyone who sends a text to a company must assume that this text will be checked for malware using AI, possibly translated using generative AI and that its content will also be analyzed or summarized. None of these processes require separate disclosure. The question is therefore: Do data subjects have to expect that AI will be used in this specific way without special notice? If the answer is objectively yes, then no disclosure is required. It must be assumed that what is considered "normal" is constantly evolving. In other words: If genAI is one day in use everywhere, it must then also be expected to be in constant use.
  • In the context of consent (Art. 4 No. 11 GDPR, Art. 6 para. 6 CH FADP): Related to the foregoing principle of transparency is the requirement that consent is only valid if it is given on an informed basis, i.e. it is clear to the data subject what they are consenting to. Here too, the technology used for processing (i.e. AI) will not per se lead to a disclosure obligation. In practice, the purpose for which data is processed will be much more important. The counterexample will primarily be processes in which an AI is used to make or prepare important decisions, for example when it comes to obtaining consent from an applicant for a job to have their dossier checked by an AI. In this case, the applicant will be interested to know whether a human or a machine is assessing their data. This is what transparency in data protection is all about: the data subject should be able to decide whether they want their data to be processed by a controller for a specific purpose. If it is objectively relevant to know whether and how an AI does this in a significant way, transparency is also required. This is not necessarily the case when examining the application provided by a job candidate, at least if GenAI only supports a human recruiter (even where this is a "high-risk" AI system under the EU AI Act).
  • In the context of the right of access (Art. 15 GDPR, Art. 25 et seq. CH FADP): This is the counterpart to proactive information in the context of a privacy statement. It sometimes goes a little further, because it is necessary to respond to specific questions and this may require a higher degree of specification of the information. For example, under the GDPR, the ECJ has now clarified that a company must also provide information about the specific recipients of personal data, not just their categories. This means that if a data subject wants to know whether their personal data is being transferred to a specific AI service provider, they can request information about this.

In all cases, we see in practice that, in relation to the transparency obligation, companies should not only think about the AI itself but also about the data processing that takes place in the context of or because of the AI. If a company launches a chatbot for its employees and records all input and output (audit trails, logs) and possibly even evaluates them for its own training purposes, then employees should be informed about this. They will then think carefully about what private or personal information they want to entrust to the company's chatbot.

Furthermore, transparency may be indirectly required by law where it is needed vis-à-vis those who are to operate an AI system and therefore must know about the risks it may harbour in order to use it in a legally compliant manner. Even if the use of an AI system does not have to be explained to the persons affected by it, generative AI, for example, can produce errors that employees must pay particular attention to when using such systems. If they fail to do so, the principle of accuracy of personal data may be violated, for example if conversations are incorrectly summarized, or incorrect figures may find their way into a company's accounting records if an AI-based solution that extracts numbers from accounting documents does not work correctly.

Laws other than data protection law may of course also require transparency regarding the use of AI. However, this is the exception today, and such provisions are not specific to AI; they simply also cover the use of AI as soon as their conditions are met:

  • One example is Ordinance 3 to the Swiss Labour Act, which prohibits, among other things, behavioral monitoring in the workplace (Art. 26); if monitoring systems are required for other reasons, it may be necessary to inform and consult employees (Art. 6). However, whether such monitoring takes place with AI or without AI, e.g. for safety reasons, does not change the duty to inform.
  • Similarly, unfair competition law does not provide for a standard obligation to disclose the use of AI, as we explained in part 9 of our blog series. Today, there is not even a general obligation to disclose the use of deep fakes; such an obligation will only be introduced by the EU AI Act (see below). However, the non-transparent use of AI can, of course, violate existing legislation where it serves, for example, to deceive the audience.

We assume that transparency regarding AI could be prescribed selectively in future regulation, for example in highly regulated areas such as insurance law, in teaching and research, e.g. in examination regulations (transparency about the use of AI in writing papers), or in submissions to government agencies (e.g., the U.S. Copyright Office requires the disclosure of AI in works submitted for registration).

Of course, the EU AI Act contains some provisions that require transparency about the use of AI, for example in the case of emotion recognition in the workplace, where humans interact with an AI (and would otherwise not know that they are communicating with a machine), or by setting forth an obligation to identify deep fakes and AI-generated or manipulated content (see part 7 of our blog series). Incidentally, these transparency rules apply irrespective of whether an AI system is considered high-risk.

Why voluntarily provide for transparency about AI

That said, there are some companies that voluntarily want to go further in terms of transparency than what is (currently) required by law. In the end, this is about "business ethics", i.e. the supra-legal rules that a company voluntarily imposes on itself in order to fulfil the expectations of certain stakeholders. In our experience, companies do this primarily out of fear of losing trust and reputation (or, conversely, because they hope to achieve marketing effectiveness by announcing such guidelines) and not because transparency is intrinsically important to them as a value. However, this does not diminish the positive effects that it can have if transparency about the use of AI is created even where it would not be legally mandatory.

But what are these positive effects? Beyond mere compliance, what speaks in favour of creating transparency? Here are a few examples:

  • Transparency can enable a data subject to decide how they should be advised or otherwise treated with regard to a particular topic (e.g., by an AI system or by a human). Such a choice does not necessarily have to be "against the machine". Science shows that in many areas, computer-aided decisions are less random and therefore qualitatively better than those made by humans, who are not only subject to bias like poorly trained AI but are often also influenced by other unrelated factors ("noise"). However, the quality of a decision is not the only relevant criterion. It can also be a question of human dignity that a person is not required to submit to the dictates of a machine rather than a human in important matters. As already mentioned above, it should of course be borne in mind that those affected will often not really have the choice to decide on the use of AI. Nevertheless, in such cases, transparency can convey a sense of self-determination.
  • Transparency can help a person to better assess the results produced by an AI and accept weaknesses. Anyone who knows that a certain text has been produced by an LLM and knows how an LLM works can understand the quality of the content and knows that they should not be deceived by the perfect wording and that they must check the text carefully for accuracy. In this sense, transparency can take the form of a disclaimer or warning, which manages expectations and thus protects the provider of an AI solution from legal liability because it has told the user that the machine's answers may contain errors or prejudices, however convincing they may sound.
  • Transparency can help to strengthen the trust of users or the public. If people understand how decisions are made or how AI is used and why, they may be more likely to trust this technology or the company relying on it. Experience shows that very few people will want to understand the technology itself. However, they want to have the impression that companies using this technology know what they are doing with it and are also taking appropriate measures. However, other aspects can also become relevant here. For example, consideration should be given not only to providing information about the use of AI, but also to providing information about where AI is not being used or where AI is only being used to a lesser extent. The media should be mentioned in this context because media users consider the quality and credibility of AI-generated content to be lower than that of human-generated content. It can therefore be interesting for a media company to emphasize that it does not use AI for certain purposes or to make a de-facto binding commitment that it will disclose the use of AI in certain areas.
  • Transparency can channel responsibility: companies can assume responsibility themselves or assign it to others by publicly committing to the use of AI in certain areas. Just as in data protection it must be specified who the "controller" of a data processing activity is (and who can then also be approached or sued by data subjects), it is also possible in the area of AI to define responsibility for the use of certain applications, for example within a group or with various partners, i.e. to tell data subjects who is responsible if their data is (also) processed using AI and who is not. The EU AI Act also takes up this principle and links responsibility, among other things, to whose name or trademark an AI system is offered or marketed under. This can mean, for example, that it can be legally advantageous for a company to clearly label the chatbot on its website that is operated by another company as a chatbot of this other company ("Powered by ...") in order to avoid being deemed a provider itself.

And another point that belongs halfway in this "voluntary transparency" chapter: ask yourself whether you want to do something in the future with the data you obtain today that would justify an adjustment to the privacy statement today. This could be the use of personal data for training AI models, for example. We assume that many companies will develop such a need and then ask themselves whether they are allowed to do this with the data they have already collected in the past. If you mention this in your privacy statement today, it will be easier to claim later that the data collected in the meantime can be used for this purpose (we will discuss what needs to be considered for the training of AI models in a separate blog).

But how do you actually provide information in practice?

If a company decides of its own accord to inform about its use of AI in certain applications, three main methods are used in practice (which can also be combined):

  • Expand the privacy statement: The existing privacy statement is used to also provide information about the use of AI. This is necessary anyway where data protection requires additional information, such as the use of personal data for AI purposes (including for training AI models, potentially only in the future), the generation of new data about a person or where information about certain data recipients should or must be provided. Consideration should also be given to any applicable AI regulations in foreign target markets in which the company operates (in the United States, for example, various states provide special 'opt-out' rights with regard to AI-based decisions, and it is also recommended to specify when data is used for AI training). However, the privacy statement can also be used to present voluntary information about the use of AI, for example in a separate section on the topic (just as separate information is typically provided about data processing on websites). This provides an overview of the areas in which AI is used and for what purposes. If you want to go further, you can also specify which data is used for this and possibly even which techniques or models (although experience has shown that these change constantly). The company should also consider specifying the things it does not want to do, such as passing on data to external providers for their AI model training – this is what the public is particularly afraid of. However, the inclusion of such additions in a privacy statement needs to be carefully considered, especially as they are voluntary: The company is thereby committing itself in an area where it does not know where it, as a user of the technology, or the technology itself will stand in a year's time, and what it will want to do then. Although the company can adapt its privacy statement at any time, a "step backwards" (first promising that data will not be used in a certain way and then doing so anyway) can have a much more negative impact on its reputation than if nothing had been communicated from the outset. In addition, such an adjustment only takes effect for the future with regard to the (personal) data collected in the meantime. Moreover, a company must be able to stand by the statements made in a privacy statement, at least with regard to personal data. If these turn out to be incorrect or incomplete, fines are conceivable, even if the information concerned things that did not have to be disclosed at all. One argument in favour of using the privacy statement for transparency about AI applications is that it already exists and many data subjects are likely to consult it.
  • Information in the context of the AI application itself (or its output): If specific AI applications are operated or data is collected for such applications, information about this can be provided in direct interaction with the data subjects at the point of the application. For example, anyone offering a chatbot on their website or intranet can use a short disclaimer for this, which also states, for example, that the dialogues are recorded but not used for training. Further information can be provided on a separate page. Anyone who assesses job applicants using AI and comes to the conclusion that they must disclose this (which does not have to be the case in certain constellations) can, for example, indicate this in a job advertisement with the keyword "AI checks" and, if necessary, refer to further information; in such a context, most applicants will understand what the reference means and will seek further information if they are interested. Since the reference ultimately has to be made at the time when the job applicant submits their application and is thus exposed to the AI, it can also be made only in the application portal where the applicant uploads their documents. As there are no specific requirements as to which aspects must be disclosed, the company is free to adapt to the specific case and the associated risk (for the persons concerned). If AI images or other content are generated, a small, unobtrusive notice such as "AI-gen" will suffice to fulfil the transparency expectations. This example also shows that the use of AI is not always a question of data protection: the transparency requirement in relation to deep fakes is not aimed at protecting individuals from the processing of their data, but is primarily intended to prevent the public from being deceived.
  • Use of an "AI statement": This refers to a statement similar to the privacy statement but communicated separately from it. It can be published on the website alongside the privacy statement. It describes in general terms where and how the company uses AI technologies and then singles out individual use cases and, where necessary, specific technologies and situations. There is no prescribed content, as such a statement is not yet required by law. In this way, however, the company can communicate to the public how innovative it is in its use of AI and at least suggest that it is using it responsibly by creating transparency and assuming responsibility. In our opinion, the credibility of such a commitment depends on whether the AI statement consists of nothing but platitudes or whether the company is specific and conveys the impression that, in addition to the transparency it has created, it is also limiting its use of AI beyond what the law requires, or at least regulating it. In our opinion, an AI statement is not a suitable instrument for companies that provide information but also make it clear that they want to push the boundaries with regard to AI, because in such cases the statement will convey negative associations to the public, as it is then primarily a disclaimer. Practical studies have shown that even well-drafted privacy statements do not usually convey a positive feeling to their readers, as these readers realize how much is being done with their data. If an online media outlet discloses in its cookie consent banner that, upon consent, more than 800 advertising partners will receive the user's data (which is not uncommon in the realm of real-time bidding), this will undoubtedly put users off. An AI statement can also address any rights the company wishes to grant data subjects; here it is advisable to clearly separate these from those under the applicable data protection law, as the EU AI Act does, which provides for only one additional, separate data subject right (however, the increasing number of AI regulations in other countries in which the company operates must also be taken into account).

Further recommendations

Each company will have to determine its own path and approach – and possibly a combination of the above options. There are a few practical tips to bear in mind:

  • Most companies come across the topic of transparency because they are dealing with rules for the internal use of AI. Many companies are quick to issue directives stating, for example, that the use of AI will be transparent. Such a requirement sounds good and will also meet with broad approval – until the company has to operationalize it. It will then realize that unconditional transparency, as demanded by the FDPIC, cannot be implemented in a reasonable manner, because AI is currently used in many places where transparency is neither necessary nor sensible. For this reason, a company should prescribe at most "appropriate" transparency. It must then be determined for each application how far the company needs to go. But even this will ultimately go too far for some once they think it through. They will realize that if they make tools such as ChatGPT, Copilot or similar available to their employees, they don't even know what they are supposed to communicate to third parties. In such cases, the concept of the AI statement (or a supplement to the privacy statement) can be the lowest common denominator: The company does not issue a general transparency requirement to its employees, but provides an AI statement as a substitute. Even if this may seem like an alibi or fig leaf, the company can claim that it provides information about its use of AI and thus goes further than others.
  • An AI statement need not be restricted to explaining what is being done with AI. It can also serve as a showcase for the internally issued rules for the "responsible" use of AI. If transparency ultimately serves the goal of building trust, this is entirely in line with the objective. It not only communicates what the company is doing, but also emphasizes that it is aware of the risks and is dealing with them. We ourselves are of the opinion that internal directives should not be set out in an outside-facing document; anyone who makes an AI statement is already showing the public that the company goes further than others. Anyone who discloses their own rules and internal directives is unnecessarily restricting themselves: if an incident occurs, the company may even have to be measured against these rules and may have (unnecessarily) created a basis for liability.
  • An AI statement should only contain examples. It is neither expected nor realistic for it to be exhaustive. Nor should the company give this impression. AI is used in too many places today. Of course, it would make sense to present the applications that are particularly risky from the perspective of the data subjects (this term is not only to be understood in terms of data protection, but also includes the people affected by an AI), but this requires a corresponding risk analysis and a well-functioning compliance organization, which most companies do not yet have in this area. In other words, they are not even in a position to guarantee a complete overview.
  • As in the case of a privacy statement, all three approaches listed above raise the question of how the information should be updated in the event of changes. Typically, the responsibility for this will lie with the owner of the respective application.
  • Remember: Even if you decide in favour of a separate AI statement, the privacy statement must be correct and complete, i.e. if necessary, you cannot avoid making adjustments where information on the use of AI is to be provided under data protection law.

We offer a sample "AI statement" for download here in English and German:



