It is nearly impossible to find any aspect of work or life that artificial intelligence has not impacted. Consider ChatGPT, the AI chatbot that debuted in November 2022 and quickly disrupted higher education in the United States.1 Already, some learning institutions have revamped their curricula to accommodate its use by students.2 Others have banned it outright.3

As AI technology grows more sophisticated and more organizations adopt it in their operations, issues are sure to follow. Already, the European Union ("EU") is preparing to roll out a set of policies that will have broad implications across the global business spectrum.

The Artificial Intelligence Act ("AI Act") — referred to by some as the "mother of all AI laws" — is the first major attempt to regulate the use of AI by businesses.4 The goal of the AI Act is to provide a regulatory framework for the development, commercialization and use of AI-driven products. The new policies, first proposed in 2021, are still being drafted, so it may be a few years before businesses are required to comply with them.

The AI Act's proposed restrictions on companies' use of facial recognition would apply to approximately 447 million people across 27 countries.5

Here's a glimpse of what organizations can expect as the new policies become law.

A Risk-Based Approach

The first question on the minds of many business leaders is how the AI Act will differ from existing regulations such as the General Data Protection Regulation ("GDPR"). The AI Act is expected to expand on the GDPR's focus on individual privacy rights, as well as its concepts of fairness and transparency.6

A defining aspect of the AI Act is its risk-based approach, which divides AI systems into three categories of potential risk: unacceptable, high, and low or minimal.7 Unacceptable risk covers systems that manipulate human behavior, assign social scores, perform mass surveillance and more.8 These are strictly banned.9 High-risk systems, such as those governing access to education, employment or essential services, must meet strict requirements.10 Systems posing low or minimal risk, like chatbots or spam filters, are largely unregulated, although in some instances they are held to specific transparency standards.11
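The three-tier structure can be pictured as a simple mapping. The sketch below is illustrative only: the system names and tier assignments reflect the examples above, not the statutory definitions, which are far more detailed and still subject to amendment.

```python
# Illustrative-only mapping of the AI Act's proposed risk tiers to example
# systems; the actual legal classification depends on the final text.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted only under strict requirements"
    MINIMAL = "largely unregulated; some transparency duties"


# Hypothetical example systems, assigned per the categories described above.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "mass surveillance": RiskTier.UNACCEPTABLE,
    "hiring screening tool": RiskTier.HIGH,
    "exam scoring system": RiskTier.HIGH,
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.MINIMAL,
}
```

A mapping like this can help teams triage an AI inventory early, even before the category definitions are finalized.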

In December 2022, the Council of the European Union adopted a proposed amendment that reexamines the definitions of these categories.12 The amendment awaits finalization by the European Parliament.

Authorities Take Action

The impact of the AI Act will be far-reaching, and regulatory authorities around the globe are already following its lead. In the United Kingdom, for instance, bodies like the Digital Regulation Cooperation Forum ("DRCF") are leading the charge by bringing several regulators together to define common areas of interest and concern. The DRCF — which comprises the Information Commissioner's Office, the Competition and Markets Authority, the Financial Conduct Authority and Ofcom — has already taken a stance on algorithmic processing, a practice likely to come under scrutiny with the new regulations.13

There are several exclusionary practices that organizations like the DRCF will seek to stamp out once the AI Act comes into effect.14 One such practice is self-preferencing, in which a platform's algorithms give preferential treatment to the firm's own competing service or product.15 Practices like this will attract the attention of enforcement authorities, so it is important that organizations work with regulators to ensure they are operating in accordance with the new policies.

An October 2022 study by University of Cambridge researchers showed that artificially intelligent hiring tools remain subject to variability and are not yet sophisticated enough to produce results free of bias risk.16

New Risks around Non-Compliance

Research shows that brand loyalty and consumers' willingness to share data with companies directly correlate with trust.17 If an organization is perceived as too lax with data, for example, that perception can directly affect its bottom line. In numerous data breaches, organizations have suffered share price losses and leadership turnover due to mismanagement.

Companies may now also run the risk of additional scrutiny if pricing and other practices are deemed discriminatory under the AI Act. Imagine an organization that owns a taxi-hailing app able to detect when a user's smartphone battery is low. The app's AI may charge that user more because of the urgency of the situation. In such a case, the company will need to address any bias underlying its AI application to ensure its policies and business model are compliant and fair to end users.18
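A disparity like the battery-level scenario can often be surfaced with a simple audit. The sketch below is a minimal, hypothetical fairness check, not a prescribed method: the field names, sample data and 5% threshold are all illustrative assumptions.

```python
# Hypothetical audit: compare the fares quoted to users whose phone battery
# was low against fares quoted to everyone else, and flag large gaps.

def price_disparity(rides, threshold=0.05):
    """Return (ratio, flagged): ratio is the mean fare for low-battery
    users divided by the mean fare for other users; flagged is True if
    the gap exceeds the threshold (5% by default)."""
    low = [r["fare"] for r in rides if r["battery_low"]]
    other = [r["fare"] for r in rides if not r["battery_low"]]
    if not low or not other:
        return 1.0, False  # not enough data to compare
    ratio = (sum(low) / len(low)) / (sum(other) / len(other))
    return ratio, abs(ratio - 1.0) > threshold


# Illustrative sample of ride records.
rides = [
    {"fare": 12.0, "battery_low": True},
    {"fare": 11.5, "battery_low": True},
    {"fare": 10.0, "battery_low": False},
    {"fare": 9.8, "battery_low": False},
]
ratio, flagged = price_disparity(rides)
```

Here low-battery users pay roughly 19% more on average, so the check flags the pricing model for review; real audits would, of course, control for legitimate pricing factors such as time and distance.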

Organizations must strike the right balance between supporting innovation and managing compliance and ethical requirements. Data science is only one part of that equation, however. To develop and adopt policies that manage compliance without stifling innovation, cross-functional collaboration (for instance, among legal teams, developers and data scientists) is crucial. This may include creating opportunities to automate tasks related to regulatory monitoring and reporting.

An Updated Toolkit for Monitoring AI

Regulators will expect organizations to perform a reasonable and proportionate risk assessment to ensure they are aligned with the new laws as they come into force. This includes maintaining records that demonstrate and explain rigorous testing of AI applications.

While there is no one-size-fits-all approach to demonstrating compliance, there are three areas where organizations should direct their focus:

  • Workflows and Approvals: These should be clearly defined and well organized.
  • Quality of Data: Leaders must be confident that the data is accurate and comes from a trusted source.
  • Model Monitoring and Documentation: Adopting model monitoring and automated documentation tools signals that the company takes the new policies seriously.
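The documentation point above can begin as something very lightweight. The sketch below shows one possible shape for an audit record of a deployed model; every field name and value is an illustrative assumption, not a format mandated by the AI Act.

```python
# Minimal sketch of a regulator-facing audit record for an AI model,
# assuming a simple JSON-per-model convention. All names are hypothetical.
import json
from datetime import datetime, timezone


def build_model_record(name, version, data_source, test_results, reviewer):
    """Assemble a timestamped record documenting a model's provenance
    and the tests it passed, suitable for an audit trail."""
    return {
        "model": name,
        "version": version,
        "data_source": data_source,      # provenance of the training data
        "test_results": test_results,    # e.g. accuracy and bias checks
        "reviewed_by": reviewer,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }


record = build_model_record(
    name="fare-estimator",
    version="2.3.1",
    data_source="rides_2022_q4 (internal, consent-logged)",
    test_results={"accuracy": 0.93, "disparity_ratio": 1.02},
    reviewer="model-risk-committee",
)
print(json.dumps(record, indent=2))
```

Records like this, generated automatically at each release, are exactly the kind of evidence of rigorous testing that regulators are likely to ask for.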

There is still time to take action before the AI Act comes into full effect. However, if there is one thing organizations have learned from GDPR compliance, it's that preparedness and proactive actions are critical to staying on the right side of the law.


1. Kalley Huang. "Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach." The New York Times (January 16, 2023).

2. Ibid.

3. Ibid.

4. Melissa Heikkilä. "A quick guide to the most important AI law you've never heard of." MIT Technology Review (May 13, 2022).

5. Ibid.

6. "The EU's new AI Act — What We Can Learn from the GDPR." Netskope (July 26, 2022).

7. "Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts." European Commission, page 12 (April 21, 2021).

8. Ibid.

9. Ibid.

10. Ibid.

11. Ibid.

12. Laura De Boel. "Council of the EU Proposes Amendment to Draft AI Act." Wilson Sonsini (December 22, 2022).

13. "The benefits and harms of algorithms: a shared perspective from the four digital regulators." Gov.UK (updated September 23, 2022).

14. "Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts." Page 21.

15. Ibid. Page 57.

16. Eleanor Drage and Kerry Mackereth. "Does AI Debias Recruitment? Race, Gender, and AI's 'Eradication of Difference.'" Springer Link (October 10, 2022).

17. Press Release from DotDigital. "Consumers turn away from brands that do not protect their data." Retail Dive (July 19, 2021).

18. "Emerging Fairness and Transparency Considerations in Artificial Intelligence." FTI Consulting webinar. (June 14, 2022).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.