Answer
(a) Healthcare
The fields of application of AI in medicine are numerous and include:
- computer-assisted surgeries;
- remote patient monitoring;
- intelligent prostheses;
- diagnostic assistance; and
- personalised treatments.
The reports listed in question 1.2 deal with AI in the healthcare sector, among other things. Most of them highlight the importance of preserving doctors’ decision-making power with regard to AI, and recommend developing AI-based technical devices that assist in making medical decisions, rather than imposing on doctors or patients a decision made by algorithms. They also emphasise the importance of training healthcare professionals to understand how AI broadly operates, so that they can identify its limits and assess the recommendations and solutions it affords.
In November 2016 the High Authority for Health issued Good Practice Guidelines on Health Apps and Smart Devices (Mobile Health or mHealth), with the aim of guiding and promoting the use of connected applications and objects, and strengthening confidence in this regard. The guidelines set out good practices covering the reliability of health content, data protection and cybersecurity. Such guidelines are also relevant for AI software.
The French Data Protection Act also contains several provisions dedicated to the processing of health data, which may apply to the processing of health data in the context of developing or running medical software that includes AI elements (eg, computer-assisted surgery, remote patient monitoring, diagnostic assistance). However, these provisions are not specific to AI.
The Act of 26 January 2016 on the modernisation of the healthcare system led to the creation of the National Health Data System, which brings together the main public health databases and sets out the rules on the use of such data. Under certain conditions, this data may be used by companies to conduct research that could contribute to or result in the development of software incorporating AI.
The current French bill on bioethics addresses the algorithmic processing of genetic data. For example, the bill aims to ensure that:
- the patient is properly informed when a medical act involves the algorithmic processing of massive data; and
- a healthcare professional is involved in adjusting the settings of that processing, so that the principle of a human guarantee in the use of AI is respected.
The specific legal issues that are not yet addressed by the current regulatory framework include the following:
- Medical liability regime: Is the current medical liability regime sufficient to address the specific issues that AI might raise, or is a dedicated liability regime necessary?
- Eugenics: Is the current legal framework (Article 214-1 of the French Criminal Code) sufficient to address the issues that might be raised by AI (transhumanism and augmented humans), or are amendments necessary?
(b) Security and defence
In France, the number of AI military applications is increasing; they include computer vision, intelligent robotics, distributed intelligence, automatic language processing, semantic analysis and data cross-referencing. On 13 September 2019 the Ministry of the Armed Forces issued a report outlining a roadmap for the deployment of AI on battlefields, with concrete examples, and raising the ethical issues related to its use (AI at the Service of Defence). The authors of the report identified several “priority areas of effort”, including decision support, robotics, cyber defence, intelligence, logistics and support, as well as maintenance in operational condition and “collaborative combat”. To this end, a new organisation within the ministry specifically dedicated to AI was created on 1 September 2018 by Florence Parly, minister of the armed forces. The purpose of this Defence Innovation Agency is to ensure the coordination and consistency of all the ministry’s innovation initiatives.
Examples of specific AI military applications include the following:
- in the field of communications, use of a system that can adapt in real time, based on data from monitoring satellites, to avoid failures and automatically re-plan operations during a mission if necessary; and
- AI-based tools and weapons to support on-site missions – drones that adapt in real time to the situation on-site, fighter planes equipped with virtual voice assistants and battle tanks that can be accompanied by semi-autonomous robots with the ability to operate in complex environments. As far as lethal autonomous weapons are concerned, France has no plans to develop fully autonomous systems that are totally beyond human control in the definition and execution of their mission.
The Ministry of Defence has called for “trusted AI” and the establishment of international standards. In its report, the ministry highlighted the need for France to strike the right balance between benefiting from what large private and often foreign digital groups can offer in terms of AI, without becoming dependent on them, and developing its own military applications.
The use of AI in the field of security and defence presents specific difficulties and challenges. Beyond the issues of national sovereignty, the regulations applicable to classified information can be a constraint on the use of AI. Indeed, Articles R 2311-1 and 2 of the Defence Code classify information considered as defence and national security secrets as follows:
- Top secret defence: Reserved for information and materials that concern government priorities in defence and national security, and whose disclosure is likely to very seriously harm national defence;
- Defence secret: Reserved for information and materials whose disclosure is likely to cause serious harm to national defence; and
- Defence confidential: Reserved for information and materials whose disclosure is likely to harm national defence or could lead to the discovery of a secret classified at top secret or defence secret level.
Classified information can be accessed only with a security clearance, which differs according to the classification level of the information. It is well known that AI-based systems can become intelligent only if they have enough relevant data to learn from: without data, an algorithm is blind; and without an algorithm, data is mute. Although nothing prevents classified data from being injected into a deep learning process, the restricted nature of access to such data may limit the number of players able to develop AI-based technologies in the defence sector.
(c) Autonomous vehicles
The rules on the traffic conditions of autonomous vehicles (AVs) for experimental purposes can be found in several instruments:
- the Act on the Energy Transition for Green Growth, dated 17 August 2015;
- the Order Relating to the Experimentation of Delegated Driving Vehicles on Public Roads dated 3 August 2016;
- the Decree Relating to the Experimentation of Delegated Driving Vehicles on Public Roads dated 28 March 2018;
- the Order Relating to the Experimentation of Delegated Driving Vehicles on Public Roads dated 17 April 2018; and
- the so-called Loi Pacte, dated 22 May 2019.
These texts authorise the circulation on public roads of vehicles with total or partial delegation of driving authority, for experimental purposes (ie, in order to carry out technical tests or to evaluate the performance of these vehicles). Experimentation is subject to prior authorisation. Issuance of the authorisation is subject to the condition that the delegated driving system may be neutralised or deactivated by the driver at any time. In the absence of a driver on board, it is necessary to provide evidence that a driver located outside the vehicle, responsible for supervising the vehicle and its driving environment during the experiment, will be ready to take control of the vehicle at any time, in order to take the necessary steps to ensure the safety of the vehicle, its occupants and road users.
The Loi Pacte further clarifies the criminal liability regime applicable in the event of accidents that occur during experiments on AVs. This act exempts the driver from criminal liability during periods when the system of driving delegation is activated if the following cumulative conditions are met:
- The driver must have activated the driving delegation system in accordance with its conditions of use; and
- The driving delegation system must be in operation and must inform the driver, in real time, that it is in a position to observe traffic conditions and carry out any manoeuvre independently without delay (instead of the driver).
According to these provisions, instead of the vehicle driver, criminal liability will be borne by the holder of the prior authorisation for experimentation, which will have to pay any fines imposed and any damages awarded in case of accidents.
Currently, this liability regime is limited to the situations of experimentation outlined above. French commentators have thus questioned whether the current French liability regime for road traffic accidents (the so-called loi Badinter, dated 5 July 1985) is sufficient to address the issues raised by AVs (with slight adaptations where necessary – for example, on the notion of ‘driver’), or whether a dedicated liability regime would be necessary, under which the liability of different stakeholders (eg, driver, car manufacturer, AI provider) might be triggered.
(d) Manufacturing
There is no dedicated regulation in this sector as yet.
(e) Agriculture
There is no dedicated regulation in this sector as yet.
(f) Professional services
There is no dedicated regulation in this sector as yet.
(g) Public sector
The French Act for a Digital Republic (Act 2016-1321 dated 7 October 2016) introduced the principle of transparency of public algorithms used as a basis for individual administrative decisions. Its provisions have been transposed into the French Code of Relationships between the Public and the Administration. According to these provisions, whenever an individual decision is taken on the basis (even partially) of an algorithm, the administration must:
- include an “explicit statement” on the relevant documents (eg, notices, opinions) informing the user that the decision concerning him or her has been taken on the basis of an algorithm. This statement must outline:
  - the purposes of the processing;
  - the user’s right to know about the “main features” of this processing; and
  - how this right can be exercised (Articles L311-3-1 and R311-3-1-1); and
- explain, at the request of the individual, how the relevant algorithm works (Articles L311-3-1-2 and R311-3-1-2), by providing the following information:
  - the degree and mode of contribution of the algorithm to the decision making;
  - the data processed and its sources;
  - the processing settings and their weighting applied to the situation of the data subject; and
  - the operations carried out by the processing.
Administrations with at least 50 agents or employees must also provide general information (Article L312-1-1-3), which involves publishing online the “rules defining the main algorithmic processing used in the accomplishment of their missions” – provided, once again, that these form the basis of individual decisions.
These provisions are complemented by Article 47 of the Data Protection Act, which provides that the data controller must ensure that it remains in control of the algorithmic processing and its development, so that it can explain to the data subject, in detail and in an intelligible form, the manner in which the processing has been carried out on him or her.
To assist administrations in complying with these obligations, in March 2019 Etalab (a French public body) published a guide, together with a practical factsheet on the explicit statement that must be provided to the user.
(h) Other
Numerous AI applications have emerged in the legal sphere. Many legal tech firms are using machine learning processes to develop software and applications for tasks such as:
- analysing case law;
- assessing the chances of success;
- anticipating the result of litigation; and
- assisting with due diligence.
The development of such software is facilitated by the open data policies implemented pursuant to the obligations set forth in the Act for a Digital Republic.
The introduction of AI software in the judicial system is also being explored. For example, over a three-month period, judges of the Courts of Appeal of Douai and Rennes tested software that aims to predict judicial decisions.
Some provisions have been inserted in the General Data Protection Regulation and the Data Protection Act in order to protect data subjects from the adverse consequences that the sole utilisation of algorithms may have on judicial decisions concerning them. In this respect, Article 47 of the Data Protection Act provides that: “No judicial decision involving an assessment of a person’s conduct may be based on the automatic processing of personal data intended to evaluate certain aspects of that person’s personality.”