The AI Act And The Role Of Artificial Intelligence In Digital Transformation And Sustainability

Withers LLP
Artificial intelligence (AI) is at the core of the technological and digital revolution, with potential benefits spanning a multitude of sectors and industries. However, the unethical use of AI can have serious consequences for society and the fundamental rights of its citizens. To address the risks associated with this technology, the European Union has introduced the AI Act to promote and develop AI systems that are trustworthy, human-centred and responsible in environmental, social and governance terms.

What is the AI Act?

The AI Act is the first legislation specifically governing artificial intelligence. It applies to providers, users, importers and distributors of AI systems placed on the market or used in the EU, regardless of where they have their registered office. The legislation sets out different rules and obligations based on an AI system's level of risk, with the aim of protecting citizens' health, safety and fundamental rights while promoting environmental sustainability and social responsibility.

Risk Levels and Obligations

AI systems posing an unacceptable risk to health, safety or fundamental rights are prohibited. Examples of such systems include emotion recognition in the workplace and social scoring systems.

For high-risk AI systems, such as those involving biometric identification, toys, medical devices or hiring processes, the AI Act requires compliance with key requirements, including a conformity assessment, data quality standards, documentation and traceability, transparency, human oversight, and security and cybersecurity measures.

Timing of Application

The European Parliament adopted the final text of the AI Act in March 2024; the Act will enter into force on the 20th day following its publication in the Official Journal of the European Union (likely during the summer). The deadlines for the application of its rules are phased: within 6 months, the bans on AI systems posing an unacceptable risk will take effect; within 12 months, the obligations for general-purpose AI systems; and within 24 to 36 months, all remaining rules of the AI Act, including the obligations for high-risk AI systems, will become effective.

Recommended Actions for Businesses

Companies should take several steps now to ensure compliance with the AI Act and bolster trust in AI technology, with a focus on environmental, social and governance responsibility. Here are some recommended actions that our firm can assist you with:

  • Identification and Cataloging of AI Systems: Assistance in identifying and classifying AI systems based on risk.
  • Impact Assessment and Compliance: Assessing the impact of AI systems on fundamental rights and regulatory compliance, including the implementation of bias-free algorithms and development practices that ensure accessibility and non-discrimination.
  • Legal Risk Management: Assessment and management of the legal risks associated with AI systems, including intellectual property regulations, the GDPR and product liability, promoting solutions that minimise environmental impact.
  • Tech Due Diligence: Assistance in technological due diligence for extraordinary transactions, investment agreements and strategic contracts concerning AI.
  • Protection of Intellectual Property Rights: Assistance in the protection and enhancement of intellectual property rights relating to AI systems, software and algorithms.
  • Drafting and Negotiation of Contracts: Support in the drafting and negotiation of contracts for the development, licensing, acquisition and distribution of AI systems.
  • Data Protection Impact Assessment: Assistance in data protection impact assessment, governance and data security measures in case of use of AI for the processing of personal data.
  • Transparent Governance: Adoption of transparency and accountability policies, including traceability of automated decisions to ensure that they can be understood or duly challenged.
  • Training and Company Policies: Training and drafting of policies for the ethical and responsible use of AI systems within companies and organisations.

Adopting these practices will not only ensure regulatory compliance and protect your AI investments, but will also help build trust in AI technology, which is essential for its effective and responsible development and use.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
