Open letter re: Online harms legislation
The following statement is addressed to the federal government of Canada. It raises concerns with the July 29 proposal released by the federal government regarding the regulation of five distinct "online harms": “terrorist content,” “content that incites violence,” “hate speech,” “non-consensual sharing of intimate images,” and “child sexual exploitation content.”

If you would like to sign, please fill out the form which follows the letter. ***If signing as an individual, please include a short description or affiliation (affiliations listed for identification purposes only).*** The list of signatories will be periodically updated.

*A French version of the letter is available here: You are welcome to sign the letter in French using the form below.*

If you are having trouble adding your name, please email Tim McSorley at or Azeezah Kanji at

*** Please make sure to fill out all fields on the form exactly as you would like them to be listed in the letter ***



As organizations and individuals with expertise in anti-racism, we are profoundly concerned by the government’s proposed “online harms” legislation – purporting to address “terrorist content,” “content that incites violence,” “hate speech,” “non-consensual sharing of intimate images,” and “child sexual exploitation content.” [1]

While the proposal is billed as protecting marginalized groups from “hate, harassment, and violent rhetoric online,” we fear that, as currently formulated, it risks exacerbating the existing, well-documented pattern of online speech policing and removal targeting Indigenous, Black, Palestinian, and other colonized and racialized communities. [2]

From an anti-racism perspective, aspects of particular concern in the proposed legislative framework include:

1) Incentivization of over-removal, produced by: the short timeline for required response after content is flagged (24 hours); the obligation for online communication service providers (OCSPs) to take proactive measures to identify harmful content, including through the use of automated systems (repeatedly shown to be susceptible to amplifying existing biases [3]); vague definitions that will lead platforms to be over-inclusive in order to be “safe”; and significant financial penalties for non-compliance.

2) Conflation of very different types of online harms – for example, “hateful” or “terrorist” content with “child sexual exploitation” or “non-consensual sharing of intimate images” [4] – under a single regulatory regime. This is particularly problematic given the existing deployment of the categories of “hate speech” and “terrorist speech” to censor Black and Palestinian content online [5] – abetted, in the Palestinian case, by efforts to institutionalize the International Holocaust Remembrance Alliance definition of antisemitism, which has been widely critiqued for conflating criticism of Israeli policy with antisemitism. [6]

3) Increased information-sharing with law enforcement and security agencies regarding possibly harmful content. As law and technology scholar Michael Geist observes, this may “lead to the prospect of [artificial intelligence] identifying what it thinks is content caught by the law and generating a report to the RCMP” [7]  – likely intensifying the current state of over-policing and -surveillance of colonized and racialized communities. [8]

4) Sweeping search powers for “inspectors” to verify compliance with the legislation, secret hearings, and new information-gathering powers for CSIS – allocating further police-like capacities to CSIS.

5) Absence of adequate transparency, accountability, and redress measures – with no clear mechanisms for publicly assessing whether Internet companies are fulfilling their obligation to prevent discriminatory treatment in content removal and reporting to law enforcement and CSIS; the protection of companies from criminal and civil liability for notifications to law enforcement and CSIS made in “good faith”; and no requirement to restore content found to be wrongfully removed, deferring instead to Internet companies’ own community standards. As three UN Special Rapporteurs recently noted, “such terms of service or community standards do not reference human rights and related responsibilities, thereby creating the possibility of an ‘escape route’ from human rights oversight.” [9]

According to Daphne Keller, Director of the Program on Platform Regulation at Stanford’s Cyber Policy Center, Canada’s proposal is “like a list of the worst ideas around the world – the ones human rights groups … have been fighting in the EU, India, Australia, Singapore, Indonesia, and elsewhere.”  [10]

Our concerns are compounded by troubling deficiencies in the government’s ongoing consultation process organized to validate the proposed legislation. Expert perspectives on addressing harmful speech online while protecting civil liberties have reportedly been disregarded. [11]  Planned consultation meetings with community representatives have been cancelled due to the election, yet the deadline for the consultation period remains as previously advertised, September 25 – just five days after the election.

Given the serious risks posed by the proposed “online harms” legislation – including to the very communities it is represented as protecting – we call on the government to suspend any implementation until a full, fair, open, and responsive consultation with anti-racism, human rights, and civil liberties experts has taken place, and the problems and pitfalls identified have been rectified.


Azeezah Kanji, journalist and legal academic
Dania Majid, Arab Canadian Lawyers' Association
Corey Balsam, Independent Jewish Voices
Tim McSorley, International Civil Liberties Monitoring Group



