Essential Legal Due Diligence For Investing In AI Companies: Top Tips For Investors And Legal Advisors

INTRODUCTION

Investment banks and private equity firms are increasingly attracted to the potential of artificial intelligence ("AI") companies as lucrative investment opportunities. However, the unique and evolving nature of AI technology and developing regulations demands comprehensive due diligence to evaluate the risks and opportunities associated with such investments. Legal due diligence tailored to the highly specialized aspects of AI companies plays a critical role in the overall assessment of AI company investment opportunities.

This article highlights key considerations for potential investors and their legal advisors when conducting legal due diligence on AI companies. The considerations discussed below are not exhaustive; rather, they focus on a few key diligence items that should be assessed when looking to invest in the AI space.

INTELLECTUAL PROPERTY RIGHTS

Intellectual property ("IP") is a critical asset for AI companies, making a thorough assessment of the IP rights underpinning the target company's technology imperative. The company's patents, copyrights, trade secrets, and trademarks (including registered and unregistered IP) should be reviewed to determine their validity, scope, and enforceability. It is essential to identify potential infringement risks, ongoing IP litigation, and any licensing agreements that may affect the company's technology and market position. A comprehensive understanding of the company's IP assets helps evaluate the investment's long-term value and competitive advantage.

It is critical to assess the company's and potential third parties' intellectual property rights in the inputs used by the AI system, the outputs of the AI system, and the technology itself. Investors and their counsel must understand the diversity, quantity, and source of input data used for training AI models, as well as the company's underlying data acquisition practices. If the company uses third-party data without a license (even if the data is publicly available), copyright infringement concerns may arise, especially if the data was acquired through data scraping or similar methods. In addition, if unlicensed data is used to train the AI system, then outputs generated from that training data may themselves infringe third-party rights in the underlying data. Multiple class action lawsuits have been filed around the world by artists claiming that generative AI system operators used the artists' original works without license to train their AI systems and to generate outputs that are mere derivatives of those original works, in violation of copyright laws. Absent wholesale changes to these laws, clear rights to all relevant IP used by the AI system are necessary to avoid third-party infringement claims.

One should also assess the involvement of different participants in the AI system's development. In Canada, authors of copyrighted works hold certain 'moral rights' in their works, which cannot be assigned or licensed but can be waived. Confirming that moral rights have been waived by employees, independent contractors, and other third parties involved in developing the AI system is necessary to ensure that the AI company retains adequate rights.

The 2023 Thaler v Perlmutter ("Thaler") decision, which held that the US Copyright Act requires human authorship, bears on whether AI-generated outputs can be protected by copyright. If an output involves human skill and judgment, it may be protectable; a purely AI-generated work may not be. While the Canadian Copyright Act does not define "author", it implies that an author must be a natural person, making it unlikely to support an AI machine as a sole author. In 2021, the Canadian Intellectual Property Office ("CIPO") permitted the first copyright registration for a work listing an AI (non-human) author as co-author with a human author. It remains to be seen how the Thaler decision will influence Canadian copyright law, and it is unclear how CIPO would treat a copyright application without a human co-author. The degree of AI and human involvement in generating outputs should be considered in light of this ambiguity.

REGULATORY COMPLIANCE

AI companies operate in a complex and rapidly evolving regulatory landscape, necessitating a comprehensive review of compliance. Investors and their counsel should assess the company's compliance with relevant laws and regulations, such as data privacy and protection regulations, cybersecurity protocols, and industry-specific guidelines, particularly in sectors like FinTech, RegTech, biomedical, and autonomous vehicles. Potential liability associated with the company's AI technology, and any required regulatory approvals, should also be considered.

The global AI-specific regulatory landscape is still in its infancy, evolving continuously with rapid advances in AI technology and shifts in the general public's perception of AI. Key regulatory regimes to watch include:

  • European Union AI Act: Approved in May 2024, the EU AI Act introduces a comprehensive legal framework for AI, categorizing AI systems by risk level, such as "limited risk", "high risk" or "unacceptable risk", and imposing obligations accordingly. AI systems deemed to pose an "unacceptable risk" are prohibited outright. The Act has extraterritorial effect: global entities must comply if their AI systems, or the outputs of those systems, are used in or have an effect in the EU. The text of the EU AI Act is in the process of being formally adopted and translated. It will enter into force 20 days after its publication in the Official Journal and will be fully applicable two years later, with some exceptions.
  • The Bletchley Declaration: Signed in November 2023 by 28 nations, including the US, UK, EU and Canada, it signals a collaborative effort to develop a harmonized approach to AI regulation, focusing on identifying and understanding AI safety risks.
  • US Executive Order: Issued in October 2023, the Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence is meant to promote the responsible use of AI while mitigating the substantial risks AI poses to society. It also creates tangible obligations on (i) governmental bodies, to create standards and potential regulations covering the entire AI lifecycle, and (ii) developers of AI systems, to share safety test results.
  • Canada's AIDA: Canada's proposed Artificial Intelligence and Data Act ("AIDA"), introduced in 2022 as part of Bill C-27, would apply to high-impact AI systems, imposing obligations across their lifecycle to address the risks associated with such systems. Bill C-27 is currently making its way through the legislative process.
  • Canada's Voluntary Code: Canada's Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (the "Voluntary Code") has been in effect since September 27, 2023. The Voluntary Code encourages the responsible development and management of generative AI systems by identifying measures that organizations should take; the core principles underlying these measures are accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness.

Lawyers should review and monitor the regulatory frameworks noted above, among others, and provide a long-term risk assessment evaluating how the AI company is positioned relative to both current and anticipated regulatory requirements.

DATA PRIVACY

Assessing an AI company's privacy and data protection safeguards starts with reviewing its data privacy policies. It is important to ensure these policies meet applicable privacy standards, particularly where highly sensitive data is involved.

If AI models use personal data in training or as input, confirming that appropriate privacy consents were obtained is crucial. Even with de-identified data, AI systems can infer personal information, increasing re-identification risks when the data is combined with other data sets. This risk is heightened if the information contains confidential or commercially sensitive data. Investors and their counsel should understand the risks associated with the information collected and used by the company's AI models in order to assess the AI company's risk profile.

Data scraping poses another privacy concern for AI companies and has become prevalent with the rise of generative AI systems trained on publicly available internet data. Canadian privacy regulators have targeted companies engaging in unlawful data scraping of publicly available personal information. It is essential to recognize that "publicly available" personal information remains protected under privacy laws in most jurisdictions, making data scraping by AI companies a potential breach of data and privacy laws.

ETHICAL CONSIDERATIONS

Ethical implications should not be overlooked in the diligence process when evaluating AI investments. Investors and their counsel must scrutinize company policies regarding ethical guidelines, fairness, human rights, transparency, accountability, and sensitive data handling, as well as potential biases in AI algorithms and mechanisms for addressing ethical concerns. Due diligence on ethical implications should consider new AI regulations like the EU AI Act, and identify high-risk AI systems based on the underlying regulatory principles.

The significance of ethical implications, and the likelihood of an AI system being considered high-risk, are elevated where the system is intended to make decisions or predictions affecting individuals' access to services or employment, uses biometric data or facial recognition for identification and inference, is integrated into health and safety functions, or has the potential to influence human behaviour, emotion, and expression at scale.

CONCLUSION

Comprehensive due diligence is essential when assessing investments in AI companies and technologies. In this context, it is critical to conduct a thorough analysis of intellectual property rights, regulatory compliance, data privacy, and ethical considerations, among a host of other legal due diligence matters. By evaluating these key aspects, lawyers can provide valuable insights, identify potential risks and opportunities, and enable investors to make well-informed decisions in the dynamic and growing AI landscape.

Acknowledgment

The authors would like to thank Allison Choi for her assistance with this publication.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
