On December 7, 2023, the federal Office of the Privacy Commissioner of Canada (OPC), jointly with all Canadian provincial and territorial privacy regulators, released new guidance entitled "Principles for responsible, trustworthy and privacy-protective generative AI technologies" (the Principles). The guidance interprets existing Canadian privacy legislation and principles in the context of generative AI, and it applies to businesses that develop, provide, or use generative AI systems.
While the Principles do not bind the regulators, their content is likely to influence future regulatory decisions, investigations, and policy statements.
What you need to know
- While there are some distinct requirements for developers/vendors of generative AI systems compared to organizations that make use of these systems, both groups must generally comply with privacy law principles with respect to data governance, consent, and transparency.
- The Principles emphasize the protection of vulnerable groups in the development, provision, and use of generative AI, and in particular ensuring that systems do not produce discriminatory outputs.
- The Principles outline a number of concrete practices that help document generative AI compliance with privacy laws. They focus on preventing inappropriate uses of AI and ensuring that end users are provided with both sufficient information about systems they interact with and mechanisms to enforce their privacy rights.
Summary of principles for the responsible use of generative AI
The Principles are applicable to both public and private organizations, and they relate to the application of both public- and private-sector Canadian privacy laws. The Principles apply many existing requirements of Canadian privacy legislation to the use of generative AI.
Below, we have mapped the Principles to central privacy law requirements (including as reflected in Schedule 1 of PIPEDA) to demonstrate how the Principles will influence the application of privacy laws in novel contexts related to generative AI.
| Key privacy law requirements | Corresponding recommended course(s) of action in the Principles |
| --- | --- |
| Requirement 1: Accountability | Accountability. A more operationally onerous requirement for developers and providers: ensure that generative AI outputs are "traceable and explainable," meaning that organizations or individuals using the system should know how it works and should be able to access a rationale for how it arrived at a particular output. |
| Requirement 2: Identifying Purposes | Appropriate Purposes. Importantly, anticipated "no-go zones" include (but aren't limited to): |
| Requirement 3: Consent | Legal Authority and Consent |
| Requirements 4 and 5: Limiting Collection, Use, Disclosure and Retention | Limiting Collection, Use and Disclosure |
| Requirement 6: Accuracy | Accuracy |
| Requirement 7: Safeguards | Safeguards |
| Requirement 8: Openness | Openness |
| Requirement 9: Individual Access | Individual Access |
| Requirement 10: Challenging Compliance | This requirement, represented in PIPEDA as one of 10 key privacy principles, is not specifically addressed in the Principles, though the applications of the openness and access principles as outlined above require that individuals be given mechanisms to gain more information about decisions made about them using generative AI systems. |
Vulnerable groups remain a special consideration
Guidance and best practices regarding the use of AI have consistently emphasized the importance of human rights and non-discrimination considerations in the development and deployment of AI, as we discussed in detail in our Guide to AI regulation in Canada. In this guidance, the regulators have made it clear that organizations have a responsibility to identify and prevent risks to vulnerable groups by ensuring the fairness of generative AI systems, especially in "highly impactful contexts" such as health care, employment, education, policing, immigration, criminal justice, housing, or access to finance. Children and young people are identified as being at particularly high risk of significant negative impacts from generative AI.
Practical considerations for businesses that develop, provide, or use generative AI
The Principles also recommend a number of practical steps. Organizations should:
- Use adversarial or red team testing to identify potential inappropriate or "no-go zone" uses of generative AI systems
- Implement appropriate use policies to which individuals or organizations using the generative AI system must agree in advance
- Publish documentation about the datasets used to develop or train the generative AI system, including sources and the legal authority for its collection and use (for developers and providers)
- Meaningfully identify outputs that could have a significant impact on a person or group as having been created by generative AI
- Conduct privacy impact assessments (and/or algorithmic impact assessments for government entities) to mitigate potential or known privacy impacts
- Disclose accuracy issues and limitations to users (for developers and providers); for user organizations, assess whether a generative AI system should be used in light of the accuracy issues or other limitations disclosed by its provider or developer
- Allow individuals to access or correct personal information contained within an AI model
- Ensure that a group is adequately and accurately represented in the system's training data if the system is going to be used in relation to that specific group
- Implement safeguards that protect against novel data security threats for generative AI
- Evaluate the data used to develop and train generative AI systems to ensure that the systems do not replicate or amplify "historical or present" biases in the data, or introduce new biases, to reduce the risk of discriminatory outcomes for marginalized groups based on race, gender, or other characteristics
- Establish oversight and review of the outputs of the AI systems, or enhanced monitoring for potential discriminatory or other adverse effects
While these recommendations are not themselves strict legal requirements, they align with AI best practices. Incorporating them as appropriate can help reduce risk under existing legal requirements and can facilitate future compliance in the dynamic AI regulatory environment. While dynamic in nature, that environment is starting to coalesce around certain core tenets, as evidenced by Canada's Bill C-27 (including recently proposed amendments) and the political agreement reached on the EU AI Act in December 2023. This coalescence makes the early adoption of best practices a more attractive option for many organizations, despite uncertainty about the specifics.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.