ARTICLE
6 October 2023

Generative AI Development: Canada Releases Voluntary Code Of Conduct

BJ
Bennett Jones LLP

Contributor

Bennett Jones is one of Canada's premier business law firms and home to 500 lawyers and business advisors. With deep experience in complex transactions and litigation matters, the firm is well equipped to advise businesses and investors with Canadian ventures, and connect Canadian businesses and investors with opportunities around the world.

The federal government has recently released its voluntary Code of Practice (the Code) relating to advanced generative artificial intelligence (AI) systems. The Code identifies measures that organizations are encouraged to adopt when developing generative AI systems, aligned with six core principles:

  • Accountability: Organizations will implement a clear risk management framework proportionate to the scale and impact of their activities.
  • Safety: Organizations will perform impact assessments and take steps to mitigate risks to safety.
  • Fairness and equity: Organizations will assess and test systems for biases.
  • Transparency: Organizations will publish information on systems and ensure that AI systems and AI-generated content can be identified.
  • Human oversight and monitoring: Organizations will ensure that systems are monitored and that incidents are reported and acted on.
  • Validity and robustness: Organizations will conduct testing to ensure that systems operate effectively and are appropriately secured against attacks.

Organizations seeking to use, develop, and manage such systems are encouraged to integrate the principles of the Code into their operations, and by doing so, take steps to ensure that risks associated with the use of and reliance on AI are appropriately identified and mitigated.

The federal government's release of the Code came shortly after it published a Guide on the use of Generative AI for government institutions and opened a consultation on a proposed Code of Practice for generative AI systems. Bennett Jones has previously blogged about both—Generative Artificial Intelligence (AI): Canadian Government Continues to Clarify Use of Generative AI Systems and Artificial Intelligence—A Companion Document Offers a New Roadmap for Future AI Regulation in Canada.

Against the background of these developments, Bill C-27, which includes draft legislation on AI—the Artificial Intelligence and Data Act—is expected to be passed into law relatively soon, although it is worth noting that Bill C-27 has been under consideration since June 2022. This draft legislation imposes substantial compliance obligations in connection with the design, development and deployment of AI systems in the private sector, with corresponding exposure to penalties for non-compliance. Its focus is on addressing potential harm (physical, psychological, damage to property, or economic loss) arising from the use of AI systems. In its current state, the draft legislation lacks clarity as to which activities involving the use of AI will be defined as "high risk" (a relevant standard for the imposition of obligations and penalties). At present, pending Bill C-27 being passed into law, regulation of AI in the private sector is governed by federal privacy legislation (the Personal Information Protection and Electronic Documents Act).

While the Code is voluntary, its underlying principles will likely serve as a framework for assessing regulatory compliance, and therefore provide a loose roadmap of how AI will be regulated. However, the precise manner in which those principles are interpreted will be critical to defining what compliance looks like in practice. Likewise, how the concept of "high risk" activities is ultimately defined will be significant in understanding the relevant compliance standards.

In short, at present, there is no clearly defined roadmap from the federal government to guide organizations in the design, development and deployment of AI. Absent this roadmap, organizations seeking to deploy AI in their business operations may inadvertently expose the business to regulatory scrutiny and penalties. Careful navigation is required to reap the benefits of AI while effectively managing exposure.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
