With remote business gaining significant traction in the last few years, many companies have looked to transform and advance their digital strategies, including through online commerce enhancements and digital offerings in the media industry. While the expanding digital reach has created new opportunities for industry participants, it has also opened new areas of risk.

Online platforms and Canadian courts

As a constantly evolving centre for communication and expression, the internet continues to pose challenges for the Canadian justice system.

The combination of online platforms' wide reach to readers across the world and users' ability to publish content anonymously provides an opportunity for the exchange of ideas to flourish worldwide. At the same time, the risk of potential harm is virtually limitless.

In an attempt to strike the right balance, Canadian courts have grappled with the liability of internet intermediaries for content distributed by third parties on their platforms. Over the past year, several novel claims have raised important questions, and the resulting decisions have begun to shape the law. The trends that have emerged suggest that there may be a greater willingness by Canadian courts and Parliament to regulate online platforms.

This article summarizes the recent evolution of online platform liability in Canadian law. We begin by reviewing the existing case law, which has recognized an online platform's liability for a third party's defamatory content in certain circumstances. While the case law continues to evolve, the courts have regularly acknowledged that some degree of liability for online platforms is warranted.

We then turn to recent developments in both case law and proposed legislation that increase the risks for online platforms, not only in defamation but also in breach of privacy, hate speech, breach of human rights legislation and breach of contract. Parliament appears to be taking a very different direction from the United States, which provides online platforms with broad-ranging immunity from liability. Section 230 of the U.S. Communications Decency Act (CDA 230) bars online platforms from being considered a "publisher or speaker" of third-party content and shields a platform from liability for moderating its content in good faith1. Canada has no such legislation.

Finally, to address these emerging potential risks, we provide practical strategies to mitigate liability.

Where we are: platforms may be held liable for defamatory comments

Starting in 2005, a series of cases recognized that in certain circumstances online platforms could be liable for defamatory expression posted by third parties. A significant motivator for imposing liability was the threat to reputational interests through anonymous postings left indefinitely in cyberspace. In Carter v. B.C. Federation of Foster Parents Assn., the British Columbia Court of Appeal described the policy rationale for recognizing liability where a defendant fails to remove offensive material in a timely manner as follows:

[I]f defamatory comments are available in cyberspace to harm the reputation of an individual, it seems appropriate that the individual ought to have a remedy. In the instant case, the offending comment remained available on the internet because the defendant respondent did not take effective steps to have the offensive material removed in a timely way2.

An Ontario court agreed. In Baglow v. Smith, the Ontario Superior Court found that the operators of an online message board could be liable for defamatory comments posted there by a third party. The operators conceded that they were publishers by disseminating third-party content, but argued they were a "passive instrument" that merely made the comments available. The Court rejected that argument. The operators had notice of the impugned posts but refused to delete them. Furthermore, the posters on the message board were generally anonymous, leaving a plaintiff with no other recourse3.

More recently, in Pritchard v. Van Nes, the British Columbia Supreme Court considered whether someone who posts defamatory content on Facebook is liable for third-party defamatory comments on the post. The Court recognized that liability for third-party defamatory content was still an emerging legal issue, and proposed a three-part test for establishing liability for a third party's defamatory post4. A plaintiff must demonstrate that the defendant:

  1. had actual knowledge of defamatory material posted by the third party;
  2. made a deliberate act (which can include inaction in the face of actual knowledge); and
  3. had power and control over the defamatory content5.

None of Carter, Baglow or Pritchard definitively holds an online platform liable for the defamatory posts of a third party. However, the law is likely to evolve along the lines expressed in Pritchard: a party that has both knowledge and control risks not being considered merely a passive instrument.

Where we're going: growing focus on internet platforms and willingness to intervene

While the courts have not had a recent opportunity to weigh in on the merits, recent procedural decisions (outlined below) suggest a somewhat inconsistent approach. Meanwhile, Parliament has begun to take steps toward regulating online platforms.

Courts continue to grapple with the scope of liability

Several procedural decisions from the past year demonstrate the courts' inconsistent approach to online platform obligations regarding third-party content. Despite the inconsistencies, however, no court has held that online platforms bear no responsibility for third-party content posted on their websites.

In early 2021, the British Columbia Supreme Court released its decision in Giustra v. Twitter Inc6. In short, Twitter challenged the Court's jurisdiction to hear a defamation action against it, asserting that the claim should proceed in California, where Twitter has its headquarters. It argued that it should not be expected to defend defamation actions in every jurisdiction in which a tweet can be accessed and where the plaintiff has a reputation to protect. The Court disagreed. While it expressly declined to comment on the substantive merits of the claim, it held that the law in Canada with respect to online platforms is unsettled. Moreover, the Court relied on the fact that the allegedly defamatory tweets had been brought to Twitter Canada's attention and that the plaintiff had a significant reputation in British Columbia, which was sufficient to allow the claim to proceed to the merits. The Court also rejected Twitter's argument that California was the more convenient forum, because all parties agreed that the claim would be dismissed there pursuant to CDA 230.

Shortly after Giustra v. Twitter Inc., the Québec Superior Court released its decision in Lehouillier-Dumas c. Facebook inc.7, declining to authorize a class action in defamation against Facebook on behalf of individuals who had been named in a Facebook group as alleged sexual abusers. The plaintiff alleged that Facebook had an obligation to remove content that was "potentially" defamatory. The Court rejected that argument, holding that Facebook's policies required it to remove only content that is illegal or that a court has deemed defamatory, not content that is merely offensive or unpleasant. Because the plaintiff had provided Facebook with insufficient information to determine whether the content defamed him, the Court concluded that no obligation to act had been triggered.

While Lehouillier-Dumas may provide a route to limit liability for online platforms, the principle on which it was decided may prove to be narrow: the plaintiff had failed to demonstrate the defamatory nature of the content to Facebook, meaning that Facebook had no obligation to act. Lehouillier-Dumas also demonstrates the utility of clear and consistent terms of use in expressly limiting a platform's liability for the acts of third parties.

Outside the defamation context, we have also seen claims this year against online platforms challenging both their content promotion and their content removal policies:

  • In Beaulieu c. Facebook, the Québec Superior Court declined to authorize a class action against Facebook for breaches of Québec's Charter of Human Rights and Freedoms on behalf of people who were searching for employment or housing but who, "as a result of race, sex, civil status, age, ethnic or national origin, or social condition were excluded by Facebook's advertising services from receiving advertisements for employment or housing opportunities, or who were explicitly excluded from eligibility for these opportunities through advertisements posted on Facebook"8. The Court held that the plaintiff had made out the evidentiary and legal threshold requirements to establish a cause of action against Facebook, but that the allegations of discriminatory advertising would require context-specific determinations differing from advertisement to advertisement and user to user, rendering a class action inefficient. The Court also held that Facebook's argument that it could not be held liable as an intermediary raised complex determinations of fact and law, leaving open the question of the scope of platform liability9.
  • A recent claim filed in the Ontario Superior Court alleges that Twitter's content removal policy interferes with the plaintiff's free speech. The plaintiff alleges that a trailer for his documentary was rejected under Twitter's content policy for being too political and for constituting cause-based advertising, and that Twitter gave no further details about the basis for its decision. The claim, framed as a breach of the duty of good faith in contract, seeks a declaration that Twitter's content policy violates the doctrine of public policy under Canadian contract law. While the claim does not suggest that the Charter of Rights and Freedoms imposes on Twitter an obligation not to interfere with its users' freedom of expression, it does allege that the Charter applies to the common law that governs the parties and their contract. The case has not yet been considered by the courts, but it reflects the difficulties online publishers may face if they are expected to balance the harmful nature of content against its expressive value when determining whether it should be removed from the platform.

Parliament considers online harms and the role of internet platforms

Prior to the 2021 federal election, Parliament's focus in regulating internet platforms was on eliminating five types of illegal content: child pornography, terrorist content, incitement to violence, hate speech and the non-consensual sharing of intimate images. In July 2021, the Department of Canadian Heritage published a technical paper presenting a proposed framework to regulate online platforms with respect to those harms10. The framework includes:

  • imposing an obligation on online platforms to take all reasonable measures to identify harmful content communicated on their websites and to make the harmful content inaccessible to persons in Canada within 24 hours;
  • creating a "Digital Safety Commissioner" who, among other things, would oversee the regulations and who could impose penalties on platforms for not complying with the legislation; and
  • creating a complaints regime overseen by the Digital Safety Commissioner, giving the Commissioner the power to investigate, hear submissions and release a decision on the complaint.

Parliament has also demonstrated a desire to regulate the communication that occurs on online platforms. Bill C-10 proposed amendments to the Broadcasting Act that would have extended it to online media and allowed the CRTC to regulate broadcasts on online platforms, including by requiring a minimum level of Canadian-produced content. The bill passed the House of Commons but had not yet passed the Senate when Parliament was dissolved for the 2021 election.

Canada's international obligations with respect to online platforms

Canada has not enacted legislation like CDA 230. However, on July 1, 2020, the United States-Mexico-Canada Agreement (USMCA) came into force. The USMCA requires that Canada, Mexico and the United States provide online platforms with broad protection against liability for hosting third-party content. It does not go as far as CDA 230 because it does not prevent platforms from being considered a "publisher or speaker", but it does bar an online platform from being treated as the content provider "in determining liability for harms". Commentators have suggested that this distinction leaves open the ability to enforce equitable remedies against online platforms11.

It is not clear what impact the USMCA provisions will have on online platforms in Canada. International treaties are only enforceable once they are incorporated into domestic law, and the USMCA online platform provisions have not been incorporated into Canada's implementing legislation12. In its statement on implementation13, the Canadian government advised that the provisions do not affect the ability to impose measures to address harmful online content or to enforce criminal law. It also advised that the issue will primarily be addressed through judicial interpretation of legal doctrines, including defamation. While it is unclear exactly how courts will treat the issue in the absence of an actual change to domestic law, it is at least possible that the USMCA provisions will have an impact on cases against online platforms like those in Baglow and Pritchard.

What does it all mean? Mitigating potential liability

In the absence of legislation which defines or reduces the scope of an online platform's liability for the content on its website, online platforms should continue to be cautious. The courts have not clearly explained when liability may arise, but have recognized that platforms may ultimately have some responsibility for the content posted on their websites. Two general themes have emerged from the case law:

  1. The point at which liability crystallizes may differ based on the nature of the online platform: for example, whether the platform has a role in creating content or in promoting or pushing content to certain users, the nature of the content hosted on the platform, or the degree to which the platform moderates its content. Completely passive online platforms, like ISPs and search engines, will not be liable for the information they transmit or index.
  2. A platform's awareness or knowledge of the impugned content continues to be a requirement to establish liability.

With these two themes in mind, we suggest the following principles to mitigate the risk of liability:

  1. Registration and terms of use. Users of online platforms should be required to register with the platform using their real first and last names and a valid email address or other form of contact information. Users should also be required to agree to terms of use which specifically prohibit posting comments or information that is defamatory, illegal or may breach another person's privacy. While these measures will not absolve a platform from all potential liability, both policies will assist the platform where it had no knowledge of the defamatory or illegal content that was posted.
  2. Complaint investigation and response. Once a complaint is made, an investigation of the impugned material should occur promptly and according to pre-defined criteria. The relevant decision-making criteria should be made available on the platform and a record should be kept of the criteria that inform the ultimate decision regarding the complaint. That record should be maintained and provided to interested parties who request the reasons for the decision. A consistent and pre-determined process will assist in avoiding claims alleging arbitrary removal of material, such as the recent lawsuit against Twitter.
  3. Removal of defamatory or illegal content once a complaint is lodged. Removal should occur as quickly as reasonably possible. The guidance from Parliament's online harms framework suggests 24 hours is the standard for removal of the most severe online harms. However, the online harms considered by Parliament do not include defamatory content, which may take longer to investigate to determine whether there is a factual basis for removal (as the court recognized in Beaulieu). To mitigate risk, the more harmful or sensitive the content, the faster it should be removed. A simplified sketch of how a severity-based timeline and complaint record might be tracked follows this list.

  4. Active monitoring of the platform. Monitoring is particularly important where a platform takes an active role in determining the content that is promoted or "pushed" to certain users; in such cases, the platform likely has an obligation to monitor or regulate what is posted. Monitoring may range from a method of "flagging" suspicious content to the administrator at the low end, to employing software or individuals to review material before it is published at the high end. Where a platform falls on that spectrum will be determined in part by both the sensitivity and the expressive value of the content featured on the platform. The monitoring of content is a difficult balancing act: while some degree of moderation may reduce exposure to liability, platforms must also be careful not to over-interfere and risk litigation similar to the recent claim against Twitter. This latter risk can be mitigated by having clear and well-defined policies in place regarding active monitoring.
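For platform operators, the sketch below is a minimal illustration, in Python, of one way to record a complaint against pre-defined criteria and track a severity-based removal deadline, as items 2 and 3 above suggest. It is not drawn from the case law or proposed legislation discussed in this article: the class names, severity tiers and timelines (other than the 24-hour figure taken from the proposed online harms framework) are illustrative assumptions only.

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
from typing import List, Optional


class Severity(Enum):
    # Hypothetical severity tiers. The 24-hour tier mirrors the proposed online
    # harms framework; the other tiers and timelines are illustrative assumptions.
    SEVERE_HARM = timedelta(hours=24)
    OTHER_ILLEGAL_CONTENT = timedelta(days=3)
    ALLEGED_DEFAMATION = timedelta(days=7)


@dataclass
class Complaint:
    # A record of a complaint and of the pre-defined criteria behind the decision,
    # kept so that reasons can be provided to interested parties on request.
    content_id: str
    complainant_contact: str
    received_at: datetime
    severity: Severity
    criteria_applied: List[str] = field(default_factory=list)
    decision: Optional[str] = None
    decided_at: Optional[datetime] = None

    @property
    def removal_deadline(self) -> datetime:
        # Target date for action, driven by the severity tier.
        return self.received_at + self.severity.value

    def record_decision(self, decision: str, criteria: List[str]) -> None:
        # Store both the outcome and the criteria that informed it.
        self.decision = decision
        self.criteria_applied = criteria
        self.decided_at = datetime.utcnow()


# Example: an alleged defamation complaint assessed against published criteria.
complaint = Complaint(
    content_id="post-1234",
    complainant_contact="complainant@example.com",
    received_at=datetime.utcnow(),
    severity=Severity.ALLEGED_DEFAMATION,
)
complaint.record_decision(
    decision="removed",
    criteria=["statements identified by complainant", "no factual basis provided by poster"],
)
print(complaint.removal_deadline, complaint.decision)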

Footnotes

1. Communications Decency Act of 1996, 47 U.S.C. § 230.

2. 2005 BCCA 398 at para 20.

3. 2015 ONSC 1175 at paras 195-196.

4. 2016 BCSC 686 at para 91.

5. 2016 BCSC 686 at para 108; see also Holden v. Hanlon, 2019 BCSC 622.

6. 2021 BCSC 54.

7. 2021 QCCS 3524.

8. 2021 QCCS 3206 at para 39.

9. At para 116: "The time to weigh defences as against the allegations of the motion for authorization that are assumed to be true is, as a general rule, at trial. Furthermore, the determination of Facebook's intermediary liability is not a pure question of law and raises complex determinations of fact and law which should be examined at a later stage."

10. https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/technical-paper.html

11. Vivek Krishnamurthy et al., CDA 230 Goes North American? Examining the Impacts of the USMCA's Intermediary Liability Provisions in Canada and the United States, July 2020 at 7.

12. See Vivek Krishnamurthy et al., CDA 230 Goes North American? Examining the Impacts of the USMCA's Intermediary Liability Provisions in Canada and the United States, July 2020, for an in-depth analysis of the potential impact of the USMCA on Canadian law.

13. https://www.international.gc.ca/trade-commerce/trade-agreements-accords-commerciaux/agr-acc/cusma-aceum/implementation-mise_en_oeuvre.aspx?lang=eng#82

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.