1 Legal and enforcement framework

1.1 In broad terms, which legislative and regulatory provisions govern AI in your jurisdiction?

Canada has a federal system, which means that jurisdiction over policy areas relevant to AI – including privacy, health, education, environment and transportation – is shared with, or rests entirely with, provincial/territorial governments. The federal government works with provincial/territorial governments to develop national standards and policy frameworks; but depending on the policy area, adoption of those frameworks rests with the provincial/territorial governments.

For instance, the federal government's Personal Information Protection and Electronic Documents Act (PIPEDA) sets the national standard for the private sector; but in Alberta, British Columbia and Quebec, provincial privacy laws that are ‘substantially similar' to PIPEDA apply instead. Similarly, Transport Canada released the Automated and Connected Vehicles Policy Framework for Canada (2019), which provides guiding principles for transportation policies at the federal, provincial/territorial and municipal levels; but it is up to those governments to create regulations that align with it.

Today, Canada has no comprehensive federal legislation or regulatory measures specific to AI. Parliament recently tabled the Digital Charter Implementation Act (DCIA), which aims to reform Canada's private sector data protection legislation and proposes specific provisions to regulate the industry's use of automated decision-making systems. Additional initiatives include department or sector-specific guidelines. For instance, the Treasury Board Secretariat released the Directive on Automated Decision-Making (DADM) and an accompanying Algorithmic Impact Assessment tool to guide the use of automated decision making within the federal public service.

AI governance is still largely left to self-regulation by the industry, informed by principles and frameworks such as the Canadian-led Montreal Declaration for a Responsible Development of Artificial Intelligence and the Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems. In October 2019, the CIO Strategy Council published the world's first national standards on AI, followed by other standards on data governance and digital trust and identity.

1.2 How is established or ‘background' law evolving to cover AI in your jurisdiction?

Canadian laws have not evolved to cover AI specifically. However, recent legislation such as the DCIA signals important developments in how legislation can enforce and promote more ethical and accountable data use. The DCIA generally aligns with Europe's General Data Protection Regulation (GDPR) while distinguishing Canada in important respects – namely, in its requirements for algorithmic accountability, its processes and procedures for using de-identified data and its principles-based approach to data protection.

The government has also published guidance documentation and white papers to help inform how laws need to evolve to cover specific AI-related opportunities and challenges.

1.3 Is there a general duty in your jurisdiction to take reasonable care (like the tort of negligence in the United Kingdom) when using AI?

Determining the established standard of care when using AI is a timely but unsettled question in Canada. The current state of the law may be summarised as follows: one need not use the latest tools or techniques to meet the standard of care, but neither can these be ignored once they have found their way into everyday use. Whether this strikes the proper balance between encouraging innovation and cautioning against adopting AI too quickly remains an important open question.

1.4 For robots and other mobile AI, is the general law (eg, in the United Kingdom, the torts of nuisance and ‘escape' and (statutory) strict liability for animals) applicable by analogy in your jurisdiction?

The potential liability regimes that may be the most directly applicable in the context of AI-related tort claims include:

  • general tort liability;
  • product liability;
  • vicarious liability; and
  • strict liability.

1.5 Do any special regimes apply in specific areas?

Through the DCIA and the DADM, Canada seeks to establish specific regimes and requirements around the use of automated decision making. The requirements proposed for automated decision making are based not on any specific application, but rather on the technology itself. The DCIA defines ‘automated decision-making systems' as "any technology that assists or replaces the judgment of a human decision-maker using techniques such as rule-based systems, regression analysis, predictive analytics, machine learning, deep learning, and neural nets".

This definition includes a wide range of possible computer systems and a broad scope of applicability. Arguably, this could be the broadest and farthest-reaching definition of ‘automated decision-making systems' ever legislated. For instance, Article 22 of the GDPR, which is widely cited as the leading regulation of automated decision systems, limits its application to decisions made "solely" by an automated system and to those "which [produce] legal effects concerning him or her or similarly significantly [affect] him or her". In Canada, this definition will implicate a much larger and growing number of algorithms and systems used for decision making.

1.6 Do any bilateral or multilateral instruments have relevance in the AI context?

Over the past few years, there have been several bilateral and multilateral initiatives on AI, which have largely served as forums for discussing larger directions, information sharing and policy coordination. These initiatives at the G7, the G20 and the Organisation for Economic Co-operation and Development (OECD) have resulted in key ministerial statements on AI, such as the G7 Innovation Ministers' Statement on Artificial Intelligence (2018), the G20 AI Principles (2019) and the OECD Principles on AI (2019), which reflect a consensus among developed economies on the need to promote ‘responsible AI' and on general policy directions for AI governance. While these forums do not impose ‘hard' regulations, they provide the space to develop consensus on broader principles and shape global norms, which eventually translate into concrete legislation and regulatory measures at the national level.

Canada has been not only an active participant, but also a leader in these international initiatives. For instance, Canada hosted the G7 Multi-stakeholder Conference on Artificial Intelligence in 2018 and, building on that work, launched the Global Partnership on AI (GPAI) in partnership with France in June 2020. The GPAI is a multi-stakeholder initiative that seeks to become "a global reference point for specific AI issues" and has 15 founding members, including the European Union and the United States. In this context, Canada's domestic policies on AI are expected to align with the directions set by forums such as the G7, the G20 and the OECD.

It will also be important to watch how AI regulations in the European Union and the United States affect Canada. The European Commission proposed the "first-ever legal framework on AI" in late April 2021 and the US Federal Trade Commission expressed its intent to play a greater regulatory role on AI around the same time. Canada regards the European Union and the United States as key, like-minded partners, collaborating with both in the initiatives mentioned above; and there are economic and regulatory issues intertwined in existing free trade agreements (the Canada-United States-Mexico Agreement and the EU-Canada Comprehensive Economic and Trade Agreement). Major regulatory moves in either the European Union or the United States will have significant ramifications for Canada.

Finally, trade agreements will also indicate future directions that may have relevance for Canada's AI agenda – specifically through provisions on AI, data flows and e-commerce. For instance, as of April 2021, Canada is engaged in exploratory discussions about potential participation in the Digital Economy Partnership Agreement (DEPA), initiated by Chile, New Zealand and Singapore. DEPA is the first international trade policy instrument solely dedicated to digital trade and addresses AI, digital identities and digital inclusion.

1.7 Which bodies are responsible for enforcing the applicable laws and regulations? What powers do they have?

Today, there is no single body designated to enforce AI-related policies, and the relevant laws and regulations largely address the use of data. In this context, the Office of the Privacy Commissioner of Canada (OPC) and the Competition Bureau are the two federal entities that will play the most significant role in regulating AI until more AI-specific regulations are introduced.

The purpose of the OPC is to protect the privacy rights of individual Canadians in certain contexts, and it is therefore the main enforcer of data regulation in Canada. More specifically, the OPC oversees the collection, use and disclosure of personal information in commercial activities under PIPEDA and parts of Canada's Anti-Spam Legislation, and has jurisdiction over personal information held by government departments and agencies under the Privacy Act.

In 2020, the OPC conducted a public consultation on PIPEDA reforms so that the law could serve a greater regulatory role on AI. The final recommendations, submitted in November 2020, suggest that PIPEDA should:

  • allow the use of personal information for commercial and socially beneficial uses of AI;
  • authorise the uses of personal information within a rights-based framework that entrenches privacy as a human right;
  • include provisions specific to automated decision making; and
  • require businesses to demonstrate accountability.

If these recommendations are accepted, then the OPC is expected to play a greater role in the regulation of AI in Canada.

The OPC does not have complete jurisdiction over privacy issues in Canada, as there are also provincial privacy laws and commissions. In Alberta, British Columbia and Quebec, provincial laws that are ‘substantially similar' to PIPEDA are applied instead of PIPEDA.

Finally, the Competition Bureau has the power to enforce proper uses of AI through the regulation of competitive activities. The bureau oversees, among other things:

  • the use of personal information for deceptive marketing practices;
  • the formation of anti-competitive cartels; and
  • increasingly, the formation of ‘data-opolies' through mergers and acquisitions of data-rich companies, under the Competition Act.

1.8 What is the general regulatory approach to AI in your jurisdiction?

As is the case in other parts of the world, Canada is still in the early stages of developing regulations on AI and much is left to self-regulation by the industry. There are numerous discussions and initiatives on responsible AI within the industry and the civil society sectors. For instance, the CIO Strategy Council published the world's first national AI standards in October 2019.

However, the government recognises the need for more concrete regulations and there are several initiatives underway. Canada released the Digital Charter in May 2019, consisting of 10 principles that create a broader framework of trust and responsible innovation and reflect Canadian values of respect, fairness, control and choice in a digital era. In other words, the Digital Charter will serve as a values blueprint for relevant policies to come.

Two key issues to watch are the DCIA and the PIPEDA reforms. The DCIA, if passed, would establish a new privacy law for the private sector (the Consumer Privacy Protection Act), providing Canadians with greater control over their data. The legislation was introduced and received first reading in November 2020. As for the PIPEDA reforms, the OPC made its final recommendations to the government in November 2020.

2 AI market

2.1 Which AI applications have become most embedded in your jurisdiction?

Insights and predictive modelling: These include the following:

  • using AI to analyse and predict outcomes and effectiveness, undertake comparative analysis and inform decision making;
  • using talent analytics to match individuals to suitable jobs, gauge and optimise productivity, or assess and manage performance; and
  • using AI in financial/business management to analyse patterns in accounting, cost forecasting, compliance and resource mix allocations.

Machine interactions: These include any techniques to improve interactions with users, such as semantic analysis, natural language processing, speech recognition and rule-based pattern matching. Potential applications include:

  • chatbots and virtual agents to help answer questions, provide step-by-step instructions and improve how information is communicated;
  • smart routing to determine the best communication channels and the right resources required; and
  • search optimisation and targeted content distribution.

Cognitive automation: This includes any technique that could further automate repetitive tasks or information-intensive processes. Potential examples include:

  • automated decision systems to process and review information, classify cases in terms of risk and priority, make recommendations and/or render decisions;
  • automated content generation to summarise and compare notes, write backgrounders or take meeting scenario notes; and
  • speech, audio and visual recognition capabilities.

2.2 What AI-based products and services are primarily offered?

  • Automated decision-making systems and recommendation engines;
  • AI-enabled interconnected devices; and
  • User-facing digital service platforms.

2.3 How are AI companies generally structured?

AI companies in Canada range from small start-ups to large international companies such as Facebook, Google, Samsung and NVIDIA, which have set up research centres in Montreal and Toronto. Top AI companies in Canada, such as Folio3, Trigma and Synergo, generally provide services developing AI systems for clients in a broad range of sectors.

2.4 How are AI companies generally financed?

Like all companies in Canada, AI companies generally begin with seed capital generated by the owners and their personal circles. They may also receive financing from venture capital and private equity firms. Private sector funding for Canadian AI companies jumped from C$67 million to C$658 million, and the number of deals increased from 15 to 57, between 2014 and 2019. Further, Canada's AI sector has been successful in attracting foreign direct investment from multinational corporations such as Google, Samsung and Fujitsu across the country.

Another key source of funding for AI companies is government. According to a March 2021 research report on AI funding, the government of Canada has been a key source of funding for for-profit organisations: it invested a total of C$1.1 billion in AI between 2007 and 2020 through 1,335 federal grants and contributions, of which 72% went to for-profit organisations. Government contracts for private sector companies and contributions from provincial/territorial governments must also be considered.

2.5 To what extent is the state involved in the uptake and development of AI?

Canada's research funding programmes through the 1980s and 1990s have been credited with attracting AI researchers to Canadian institutions, resulting in Canada's leadership in basic R&D today. Recognising this strength, the government of Canada invested C$125 million in the Pan-Canadian Artificial Intelligence Strategy through Budget 2017 and further increased its funding by C$443.8 million in Budget 2021.

The strategy, managed by the Canadian Institute for Advanced Research, has primarily focused on attracting and retaining highly skilled AI researchers and supporting their research projects in the three nationally designated clusters in Edmonton, Montreal and Toronto. The increased funding in 2021 expands the scope of the strategy to AI commercialisation and development of AI standards.

In addition, the government of Canada has invested significantly to support its innovation sector and boost the commercialisation of emerging technologies, including AI, through the Strategic Innovation Fund (SIF) and Innovation Superclusters Initiative (ISI). The recently announced Budget 2021 allocated additional funding for both initiatives, with SIF getting C$7.2 billion and ISI receiving C$60 million for COVID-19-related spending, in addition to the initial C$950 million pledged in 2017.

As of April 2021, Innovation, Science and Economic Development Canada has contributed C$3.4 billion to the SIF, leveraging C$46.6 billion of investment across 85 innovation-sector projects spanning R&D, firm expansion, the attraction of large-scale investment and collaborative demonstration projects across Canada. Since 2017, the government of Canada has also invested C$515 million (an amount matched by industry partners) in the ISI, which has supported over 285 projects with over 900 partners in the five superclusters across the country:

  • digital technology (BC);
  • protein industries (Prairies);
  • advanced manufacturing (Ontario);
  • scale AI (Quebec); and
  • ocean (Atlantic Canada).

3 Sectoral perspectives

3.1 How is AI currently treated in the following sectors from a regulatory perspective in your jurisdiction and what specific legal issues are associated with each: (a) Healthcare; (b) Security and defence; (c) Autonomous vehicles; (d) Manufacturing; (e) Agriculture; (f) Professional services; (g) Public sector; and (h) Other?

(a) Healthcare

The Canadian Institute for Advanced Research's AI for Health Task Force has identified the potential of AI to:

  • improve the effectiveness, efficiency and safety of healthcare delivery; and
  • contribute to better public health policies and the development of innovative tools and treatments in Canada.

Today, AI in healthcare in Canada primarily consists of machine learning in image-based applications, such as diagnostic imaging/radiology. Image-based AI applications are heavily dependent on the availability of quality data; without it, there is a risk of perpetuating existing biases. However, Canada is seen as falling behind its peers in the development of health ‘infostructure': access to health data is limited because healthcare delivery falls under provincial jurisdiction, local privacy laws prevent the free flow of data and there are also issues of data interoperability.

Health Canada has identified AI applications in healthcare as advanced therapeutic products (ATPs) – those that are personalised, developed at the point of care and manufactured, distributed and used in ways different from traditional health products. Based on new provisions in the Food and Drugs Act, Health Canada has been working to introduce a ‘pathway' that would allow the authorisation of new ATPs through a ‘regulatory sandbox' approach by the end of 2021. This approach would allow Health Canada to authorise new products in an agile manner by testing them and developing tailored regulations for them.

(b) Security and defence

Autonomous weapons systems are referenced only indirectly in Canada's defence policy, by way of a reference to maintaining "appropriate human involvement" in the use of military capabilities that can exert lethal force. Foreign Minister François-Philippe Champagne's mandate letter specifically listed advancing international efforts to "ban the development and use of fully autonomous weapons systems". However, the government of Canada has not clarified how AI will be used in security and defence. Defence Research and Development Canada released a Military Ethics Assessment Framework in 2017 that could be applied to the use of AI in the military, but this serves as a guideline rather than concrete regulation. For now, Canada does not have regulations specific to the use of AI in security and defence.

(c) Autonomous vehicles

While fully autonomous vehicles (self-driving cars) are rarely spotted on the roads in Canada, many current vehicles have AI-powered safety features such as automatic emergency braking, automated steering and adaptive cruise control. Federal, provincial/territorial and municipal governments have differing policy jurisdictions over transportation. In this context, Transport Canada has published several guidance materials, including Testing Highly Automated Vehicles in Canada: Guidelines for Trial Organizations (2018) and Automated and Connected Vehicles Policy Framework for Canada (2019). Local governments across Canada also have pilot projects, such as Ontario's 10-year pilot programme (2016) and the City of Edmonton's Autonomous Vehicle Pilot project (2018). However, Transport Canada does not currently have any uniform standards for these technologies.

(d) Manufacturing

Some 10% of Canada's economy depends on manufacturing and the government of Canada has invested in advanced manufacturing to maintain its global competitiveness. For instance, one of the five superclusters (see question 2.5) is dedicated to advanced manufacturing, with total funding of up to C$230 million. In advanced manufacturing, AI can be integrated into all stages, from production to quality control. Currently, Canada does not have regulations specific to the use of AI in the manufacturing sector.

(e) Agriculture

Agriculture is another key economic sector for Canada and the government will be investing up to C$153 million in the protein industries supercluster, based in the Canadian prairie provinces. AI could be applied in all stages of agricultural production (planting, seeding and harvesting) and farm management. Specifically, AI applications in predictive analytics could help with weather predictions, farm efficiency and early detection of risks. The application of AI in agriculture raises potential ethical concerns over algorithmic transparency, data and privacy, animal welfare and the environment, which could lead to regulatory issues. However, Canada does not have regulations specific to the use of AI in the agricultural industry.

(f) Professional services

The use of AI in professional services includes risk mitigation analysis, applied in industries such as consulting and regulatory compliance. As of April 2021, British Columbia and Ontario have approved regulatory sandbox pilots for innovative legal services, which would allow companies to get a provisional approval for operation, subject to risk-based monitoring and reporting requirements. However, Canada does not have regulations specific to the use of AI in the professional services sector.

(g) Public sector

The government of Canada released a Directive on Automated Decision-Making, which sets out the purposes, objectives and goals of employing AI to make or assist in making administrative decisions to improve service delivery, and an accompanying Algorithmic Impact Assessment tool. While the directive offers helpful insights, Canada does not currently have regulations specific to the use of AI in the public sector.

(h) Other

A subset of the use of AI in the public sector is AI in law enforcement – for example, the use of facial recognition software and of algorithms to predict future crime. As a result of the public concerns raised by these types of police applications and the recent Clearview AI breach, there is increasing public pressure to ban facial recognition technology in policing.

4 Data protection and cybersecurity

4.1 What is the applicable data protection regime in your jurisdiction and what specific implications does this have for AI companies and applications?

The Personal Information Protection and Electronic Documents Act (PIPEDA) is the main legislation governing data protection in the private sector across Canada. The Privacy Act governs a person's right to access and correct personal information that the Canadian government holds about him or her. In British Columbia, Alberta and Quebec, private sector privacy is governed by provincial privacy laws that have been deemed ‘substantially similar' to PIPEDA. Canadian companies face a lack of guidance on algorithmic technologies, which can process data in mass quantities in ways that traditional technologies cannot.

The Copyright Act may have specific implications for AI companies and applications, as Canada does not currently have a fair dealing exception for the use of copyright-protected materials in algorithmic training. One of the recommendations in the 2019 report on the Statutory Review of the Copyright Act, prepared by the federal Standing Committee on Industry, Science and Technology, is to allow such an exception through an amendment to the Copyright Act. However, as of April 2021, no such amendment has been made.

4.2 What is the applicable cybersecurity regime in your jurisdiction and what specific implications does this have for AI companies and applications?

Under PIPEDA, businesses must provide appropriate physical, administrative and technical security safeguards. PIPEDA does not prescribe specific requirements for security safeguards, but provides that the degree of protection they must afford depends on the context, such as the sensitivity of the information.

5 Competition

5.1 What specific challenges or concerns does the development and uptake of AI present from a competition perspective? How are these being addressed?

As companies continue to leverage AI in their service delivery models and operations, positive and negative market impacts emerge. Examples include the following:

  • False advertising and misinformation: the use of AI to track, influence and manipulate behaviour for a market advantage is a negative risk.
  • Quality signalling: demonstrable adherence to good AI governance and data protection can be used as a positive quality indicator that influences consumer behaviour.

In Canada, the impacts that AI presents from a competition perspective would likely be addressed via policies from the Competition Bureau and/or the Office of the Privacy Commissioner of Canada.

6 Employment

6.1 What specific challenges or concerns does the development and uptake of AI present from an employment perspective? How are these being addressed?

Globally, not just in Canada, AI is transforming how organisations conduct their business and the types of talent they will need to meet the demands of an increasingly digital and data-driven marketplace. Some studies suggest that nearly 50% of companies expect that automation will lead to a reduction in their full-time workforce, while more than half of all employees will require significant reskilling and upskilling. The private and public sectors have an important role to play in proactively ensuring that all workers are employable, productive and able to thrive in a digital workspace.

The government of Canada has made policy commitments to address some of the emerging labour issues related to AI through Budget 2021. First, the Canada Labour Code will be amended to improve labour protections for gig workers employed through digital platforms. Second, the government will invest C$960 million over three years in the Sectoral Workforce Solutions Program, which will provide training in skills identified by sector associations and employers. In addition, the government will invest C$1.4 billion in the Canada Digital Adoption Program to train 28,000 young Canadians (a ‘Canadian technology corps') and provide access to relevant skills, training and advisory services for small and medium-sized enterprises.

Finally, Canadians will have to tackle emerging issues at the intersection of labour and AI. Following the pandemic, employers have increasingly turned to AI-powered hiring platforms, raising concerns about fairness, privacy and bias, as well as digital surveillance of employees.

7 Data manipulation and integrity

7.1 What specific challenges or concerns does the development and uptake of AI present with regard to data manipulation and integrity? How are they being addressed?

The use of AI in certain sectors, such as law enforcement, carries the risk that machine-learning systems will reinforce existing biases (eg, racial, gender, class), as their training data may reflect such biases and structural injustices. To mitigate this risk, developers should ensure that their systems do not perpetuate existing biases, and should introduce preventive measures against intentional attempts to manipulate these systems.

However, current laws and regulations do not specifically address the AI-related aspects of harm caused by data manipulation, as the AI regulatory landscape in Canada is still in its early stages. The government of Canada has created the Advisory Council on Artificial Intelligence to advise it on balancing interests in trust and accountability in AI while protecting democratic values, processes and institutions. Recommendations such as the Montreal Declaration for a Responsible Development of Artificial Intelligence also provide further guidance on how AI can be developed responsibly in broader society.

8 AI best practice

8.1 There is currently a surfeit of ‘best practice' guidance on AI at the national and international level. As a practical matter, are there one or more particular AI best practice approaches that are widely adopted in your jurisdiction? If so, what are they?

In the last five years, there has been an explosion of ethical AI frameworks and principles intended to help inform the safe and responsible use of AI. In Canada, the Montreal Declaration for a Responsible Development of Artificial Intelligence is the most widely circulated made-in-Canada ethics framework. Canada is also a signatory to the Organisation for Economic Co-operation and Development AI Principles, which underpin the Canada-led Global Partnership on AI. In terms of more concrete guidelines, the CIO Strategy Council published the world's first national AI standards in October 2019; and the government of Canada's Algorithmic Impact Assessment tool has been referenced frequently.

In the absence of clear regulatory and policy guidance, there are concerns over ‘ethics washing' and ‘ethics shopping'. In translating ethical principles into practice, it is important to acknowledge that they are not an alternative to legal analysis. A framework that does not combine sound ethical principles with demonstrable legal analysis cannot provide assurance that the technology is safe, responsible and defensible.

8.2 What are the top seven things that well-crafted AI best practices should address in your jurisdiction?

At a high level, responsible AI practices can be grouped into three main categories, as follows:

  • Organisational readiness and culture awareness:
    • Has the organisation developed principles and values to inform its AI strategy?
    • Is there an AI governance framework in place to ensure clear roles and responsibilities?
    • Are there oversight mechanisms to monitor, audit and ensure accountability?
  • Algorithmic impact assessment (a minimal scoring sketch follows this list):
    • Has the organisation developed a repeatable and consistent approach to assessing AI risk?
    • Does the assessment consider AI-related risks along multiple verticals (ie, data quality, individual harm, operational and reputational risk) and related legal issues?
    • Does the algorithmic impact assessment identify metrics, key performance indicators and other measurable indicators?
  • Algorithmic and system controls:
    • Have the appropriate standards for algorithmic explainability and transparency been identified?
    • Are there recourse mechanisms or other processes to obtain user input?
    • Are monitoring and testing processes in place and well documented?

8.3 As AI becomes ubiquitous, what are your top tips to ensure that AI best practice is practical, manageable, proportionate and followed in the organisation?

The landscape around responsible data use is changing. New laws, such as the proposed Digital Charter Implementation Act in Canada, will elevate accountability requirements for transparency and explainability, especially when using AI. There is a heightened expectation that privacy management programmes should be more robust to include a full suite of privacy, security and AI policies and procedures. More robust data governance is becoming imperative. Getting ready now will help prepare businesses for smoother, more compliant and more trustworthy processes.

The elements referred to in question 8.2 could help an organisation to maximise the benefits of AI while demonstrating the required due diligence that could help mitigate risks and enhance consumer trust.

9 Other legal issues

9.1 What risks does the use of AI present from a contractual perspective? How can these be mitigated?

Companies increasingly rely on the expertise of technology suppliers to augment their internal AI capacity and enhance service delivery models. As organisations generally cannot effectively evaluate and understand the behaviour of algorithms on their own, companies need to develop mechanisms to ensure transparency, explainability and accountability throughout the procurement process. Best practices in AI governance include:

  • revising procurement policies to promote accountability in understanding how the AI solution works; and
  • developing specific vendor assessment processes to ensure trustworthiness and ethically aligned AI and data suppliers.

9.2 What risks does the use of AI present from a liability perspective? How can these be mitigated?

As discussed in question 1.4, determining which liability regime should apply for damages caused by AI systems is challenging and still an emerging area. Factors such as the opacity, autonomy and self-learning capabilities of AI systems, and the plurality of actors often involved in their supply chains, add to the challenge. The best mitigation strategy is to conduct early AI liability assessments for higher-risk systems to determine lines of responsibility, identify potential liabilities and develop appropriate strategies to ensure legal defensibility.

9.3 What risks does the use of AI present with regard to potential bias and discrimination? How can these be mitigated?

Algorithms are trained on existing data, which may itself contain biases. Systems designed to mimic human decision making can thus generate biased and discriminatory outcomes, intended or unintended. For instance, a school admissions system may mimic human admission decisions with great accuracy and still be biased. Historical applications used to train talent acquisition systems may favour applicants of a certain background, leading the system to teach itself to prefer certain candidates and types of qualifications over others. In medical practice, patients of certain backgrounds may be taken less seriously when seeking medical advice, leading systems trained on that data to undervalue symptom complaints from those patients.

Mitigating AI risk is not dissimilar to mitigating human risk: risks and biases must first be identified before they can be addressed. AI systems must be monitored and checked throughout the development process for potential biases, flagging where live data and outcomes diverge from the training data, true to the idea of keeping a human in the loop. Developers should make a point of validating the predictions of their diagnostic tools against real-world outcomes, measuring accuracy across different genders, races, ages and socio-economic groups, as the sketch below illustrates.
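
As a minimal illustration of what such monitoring can look like in practice, the following sketch computes a model's accuracy separately for each demographic group and flags large gaps. The labels, predictions, group memberships and disparity threshold are all hypothetical:

    # Minimal sketch of a disaggregated ("per-group") accuracy check.
    # All data, group labels and the disparity threshold are hypothetical.
    from collections import defaultdict

    def accuracy_by_group(y_true, y_pred, groups):
        """Compute accuracy separately for each group label."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for truth, pred, group in zip(y_true, y_pred, groups):
            total[group] += 1
            correct[group] += int(truth == pred)
        return {g: correct[g] / total[g] for g in total}

    def flag_disparity(per_group, max_gap=0.05):
        """Flag when the gap between best- and worst-served groups exceeds max_gap."""
        gap = max(per_group.values()) - min(per_group.values())
        return gap > max_gap, gap

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    per_group = accuracy_by_group(y_true, y_pred, groups)
    flagged, gap = flag_disparity(per_group)
    print(per_group, "gap:", round(gap, 2), "flagged:", flagged)
    # -> {'A': 0.75, 'B': 0.5} gap: 0.25 flagged: True

The same pattern extends to other per-group metrics (false positive rates, precision, recall) and is a simple, auditable first step before more sophisticated fairness tooling is adopted.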

10 Innovation

10.1 How is innovation in the AI space protected in your jurisdiction?

AI innovation is largely protected under IP law in Canada. Like other technologies, AI innovations may be protected by a combination of patents, trademarks, copyrights and trade secrets. Under current IP law, it is relatively clear how algorithms and systems are protected, but less clear how the products/works of AI systems should be treated, which is an emerging area in Canadian law.

The federal government launched the C$85.3 million Intellectual Property Strategy, which includes the goals of amending IP legislation to remove barriers to innovation, enhancing IP literacy and providing tools to Canadian entrepreneurs. Ongoing initiatives under the strategy may have significant implications for the relationship between AI and IP.

10.2 How is innovation in the AI space incentivised in your jurisdiction?

See question 2.5.

11 Talent acquisition

11.1 What is the applicable employment regime in your jurisdiction and what specific implications does this have for AI companies?

The Canadian Human Rights Act provides a broad prohibition on discrimination based on gender, race, ethnicity and other grounds. The Employment Equity Act requires federally regulated organisations and businesses to provide equal employment opportunities to women, Indigenous peoples, people with disabilities and members of visible minorities. Provincial governments also have specific employment legislation and regulations.

AI is often used in talent acquisition at various stages of the recruitment process, most notably to help HR filter applicants down to the most suitable candidates. AI technologies can help reduce bias in workplace recruitment by removing the implicit biases that HR personnel may hold unconsciously towards factors such as race, gender and schooling. However, AI is not immune to recruitment bias. For example, Amazon abandoned a recruitment tool that was discovered to be biased against women: the tool filtered for certain patterns in CVs, but because most Amazon applicants whose CVs were used to train the algorithm were men, the system taught itself that male candidates were preferable.

Workplaces must also ensure that their use of AI complies with privacy laws in Canada. Applicants' information must be kept confidential, with security safeguards appropriate to the risks. AI can process greater quantities of data, in far greater detail, than human recruiters; the resulting analysis of applicants' profiles may be more invasive than the information actually submitted, and the security safeguards must account for that.

For more information about the federal government's initiatives to upskill and create employment opportunities relevant to AI, please see question 6.1.

11.2 How can AI companies attract specialist talent from overseas where necessary?

National AI strategies worldwide unanimously highlight the shortage of AI talent as a critical issue to address. In this context, Canada has a special comparative advantage with its world-class AI researchers and research institutes such as Amii (Alberta), MILA (Montreal) and the Vector Institute (Toronto), as well as the emerging AI cluster in British Columbia (Vancouver). The Pan-Canadian AI Strategy has recognised this strength and has sought to further solidify Canadian leadership in this space by focusing on the retention and attraction of top AI talent.

Efforts to support the commercialisation of innovative research from these clusters – such as the C$950 million Innovation Superclusters Initiative and support for technology adoption by small and medium-sized enterprises through programmes such as the Canada Digital Adoption Program – are critical for attracting talent and preventing an AI brain drain. When top firms in Silicon Valley or New York seek to hire top AI specialists, Canada must be able to offer competitive economic opportunities to attract and retain talent.

In 2017, the government of Canada launched the Global Skills Strategy, which created the Global Talent Stream, allowing skilled workers, largely in the computing science/digital industry, to obtain employment visas within two weeks of application. The strategy was designed to address skills shortages in Canadian industry. In addition, the anti-immigration policies and restrictions on international students and researchers under the Trump administration in the United States made Canada a more attractive destination for tech talent.

Between 2015 and 2019, Canada's ranking on the AI Skills Migration Index increased by 20 spots to fourth place among 55 countries, which underscores the positive impact of the talent attraction policies for the AI industry.

12 Trends and predictions

12.1 How would you describe the current AI landscape and prevailing trends in your jurisdiction? Are any new developments anticipated in the next 12 months, including any proposed legislative reforms?

In April 2021, the European Commission introduced its proposal for AI regulations and the US Federal Trade Commission indicated its intention to play a greater regulatory role over AI. Following these global trends, Canada is expected to introduce more concrete data and AI regulations in the next 12 months.

The Digital Charter Implementation Act (DCIA) is just the beginning of what we can expect to see in the regulation of data and emerging technologies such as AI. If passed, the DCIA would establish in law the importance of good data governance practices, and there will be a heightened expectation that data management programmes include a full suite of privacy, security and AI policies and procedures. The Office of the Privacy Commissioner also submitted concrete recommendations for PIPEDA reforms to the government in November 2020. Further, several sector and department-specific regulatory reforms – such as the development of a regulatory sandbox approach for advanced therapeutic products – and local government initiatives might reach completion or emerge over the next year or so.

13 Tips and traps

13.1 What are your top tips for AI companies seeking to enter your jurisdiction and what potential sticking points would you highlight?

Companies and organisations that can demonstrate sound AI governance will have a competitive edge. Increasingly, demonstrable adherence to ethical best practices serves as a quality indicator that can significantly influence consumer behaviour. As Canada continues to position itself as a global leader in responsible AI, companies would do well to implement strong data practices to address consumer trust deficits and signal their commitment to responsible innovation.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.