On July 29, the federal government announced that it intends to address criminal content and other egregious and reprehensible forms of harmful content circulating on social media and other online services. Five categories of content are targeted:

  1. Online child sexual exploitation content, including all content associated with child pornography and other sexual offences relating to children;
  2. Terrorist content, including any content that actively encourages terrorism and is likely to result in terrorism;
  3. Content that incites violence, including any content that actively encourages or threatens violence and is likely to result in violence;
  4. Content resulting from the non-consensual sharing of intimate images, including intimate images or videos shared of a person who did not give their consent or whose consent cannot be ascertained;
  5. Content consisting of hate speech as defined in the Canadian Human Rights Act (as amended by Bill C-36) that meets the criteria established by the Supreme Court of Canada in its hate speech jurisprudence.

It should be noted that these definitions are inspired by the Criminal Code but certainly go beyond it; in fact, this is expressly stated in the published documents. Outside the criminal law, remedies for alleged harm caused by public statements, including damages and injunctions requiring their removal, fall under civil law, which is a matter of provincial jurisdiction. Parliament should therefore be cautious in regulating speech that goes beyond what is contemplated by the Criminal Code, so as not to leave itself open to a constitutional challenge based on the division of powers.

The initiative presented is intended to supplement Bill C-36 in the fight against hate speech and hate propaganda. Among this bill's proposals is an amendment to the Canadian Human Rights Act to add a section on the communication of hate speech:

Communication of Hate Speech

13 (1) It is a discriminatory practice to communicate or cause to be communicated hate speech by means of the Internet or other means of telecommunication in a context in which the hate speech is likely to foment detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination.

Definition of Hate Speech

(9) In this section, hate speech means the content of a communication that expresses detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination.

Clarification: Hate Speech

(10) For greater certainty, the content of a communication does not express detestation or vilification, for the purposes of subsection (9), solely because it expresses mere dislike or disdain or it discredits, humiliates, hurts or offends.


It must be noted that the federal government's objective is to combat criminal content and the most egregious and reprehensible types of harmful content online; hurtful, humiliating or offensive speech would not be covered. The importance of protecting and respecting freedom of expression and free debate is recognized and reiterated several times in the publicly available documents presenting this initiative.

That is why the content targeted by the definition of "hate speech" will have to meet the criteria established by the Supreme Court of Canada in its jurisprudence on this subject.

The Supreme Court has provided guidance concerning hate speech in a number of decisions. In 2013, it held that in order for legislation restricting hate speech to be a reasonable limit on freedom of expression, the expression captured by such legislation must rise to a level beyond merely impugning individuals: it must seek to marginalize the group "by affecting their social status and acceptance in the eyes of the majority" (Saskatchewan (Human Rights Commission) v Whatcott, 2013 SCC 11, para 80). The harm suffered must be greater than purely individual harm; it must be harm inflicted on the group as a whole (Whatcott, para 81). The feelings or emotions of the publisher of the speech or of the victims are not relevant in assessing harm; the assessment must be as objective as possible and focus instead on "the likely effect of the hate speech on how individuals external to the group might reconsider the social standing of the group" (Whatcott, para 82).

With respect to the implementation of the initiative presented, the government aims to present a new legislative and regulatory framework in the fall that will impose binding obligations on social media services and other online services, with steep fines for non-compliance, and create various regulatory bodies to administer the new regime.

The proposal would require social media and other online communication service providers (referred to by the government as "OCSPs") to comply with certain obligations, including mandatory oversight, measures for processing reports or notifications of content, and removal of content found to be harmful. Under the proposed scenario, once a user flags content, the OCSP would have to assess whether the content meets the definitions established by the law and, if so, make the content inaccessible in Canada within a specific timeframe: the government has proposed 24 hours, although different timeframes could be established for different types of content.

The OCSP would then be required to inform both the author of the content and the person who flagged it of its decision, and give them an opportunity to request reconsideration of that decision and to make representations for the purpose of the reconsideration.

Everything would take place within the regulated entities, which would have to establish appropriate internal systems to allow for content to be flagged, for reports to be assessed, and for decisions to be made, communicated and reconsidered.

There is also an informational aspect to the obligations that OCSPs would have to comply with: OCSPs would have to collect certain information, including the volume and type of content dealt with at each stage of the process, and report to the public periodically. OCSPs would also be required to preserve certain information and evidence that might be useful in an investigation. Certain specific content (content considered to be a threat to public safety, for example) would be subject to additional measures; it would not simply be removed and would have to be reported to law enforcement and/or the Canadian Security Intelligence Service (CSIS).

For users who are not satisfied with an OCSP's ultimate decision (the decision to remove or keep the flagged content, including the result of the internal reconsideration), the government is proposing to create an independent federal tribunal that would act as an appeal body for decisions made by OCSPs: the Digital Recourse Council of Canada. The Recourse Council would issue binding public decisions on whether the content under appeal qualifies as harmful content, having regard to the regulatory definitions. If the content were found to be harmful, the Recourse Council would order the OCSP to remove it and would inform the body responsible for enforcing that decision, the Digital Safety Commissioner.

The Digital Safety Commissioner would be established to oversee and improve online content moderation, and ensure compliance by OCSPs with their obligations under the new regime. If the Digital Safety Commissioner found non-compliance on the part of an OCSP, it could issue a compliance order and recommend whether an administrative monetary penalty should be imposed on the non-complying OCSP. The recommendation as to whether to impose an administrative monetary penalty would be made to the Personal Information and Data Protection Tribunal, which is to be created under another bill, Bill C-11. That Tribunal would hear appeals from the findings and orders made by the Digital Safety Commissioner and would decide whether or not to impose the administrative monetary penalties.

The penalties could be substantial and amount to as much as $10 million, or 3% of the gross global revenue of the OCSP in question, whichever is higher.

As an alternative to these administrative monetary penalties, criminal prosecution could be instituted for a violation. The resulting offences could lead to penalties of up to the higher of $25 million and 5% of the OCSP's gross global revenue or, in the case of summary conviction, the higher of $20 million and 4% of that revenue.

The decisions of each of these three bodies - the Digital Recourse Council, the Digital Safety Commissioner and the Personal Information and Data Protection Tribunal - could be filed with the Federal Court of Canada, making their orders enforceable as judgments of that Court.

An Advisory Board would also be put in place to inform and advise those various bodies on emerging industry trends and technologies and on content-moderation practices and norms.

The published documents reflect a desire on the part of the federal government to combat criminal content and the most egregious and reprehensible types of harmful content circulating on social media. The obligations are extensive and binding, and the penalties for non-compliance are substantial. A set of bodies is proposed to ensure that the proposed framework, rather than being a wish list, is put into operation. Provisions are even included to prevent these measures and authorities from quickly becoming technologically outdated.

It should nonetheless be noted that the proposals target categories of speech that for the most part are narrowly circumscribed by the Supreme Court. The initiative will therefore have a correspondingly limited scope. While this is good news for freedom of expression, it will obviously not solve all the problems created by certain content found on social media. That is notably the case for harassing or defamatory speech, for which proceedings will still have to be brought under the applicable civil law rules and legislation.

There are other elements that warrant attention to ensure that freedom of expression is preserved. Because the contemplated penalties are substantial, social media platforms will certainly have an incentive to err on the side of removing flagged content rather than defending content that, at the end of the day, is not their own. The 24 hours allowed to take note of and assess flagged content, and to remove it where applicable, is a very short turnaround for some types of content targeted by this new framework. Terrorist content or child sexual exploitation content may be more readily identifiable, but the determination becomes more sensitive and subtle when it comes to hate speech. As pointed out above, the criteria established by the Supreme Court for speech to be considered hate propaganda are very stringent. That assessment could require more than 24 hours from the time the content is flagged.

A public consultation on the proposed initiative is underway. The public and interested parties have until September 25 to participate and submit their comments.
