Policy Brief: Combating Online Hate
The Jewish community holds the unfortunate distinction of being the most frequently targeted minority when it comes to hate crime. According to Statistics Canada’s latest hate crime data (2017), an antisemitic hate crime takes place on average every 24 hours in our country. As the horrific October 2018 attack on a Pittsburgh synagogue demonstrates, antisemitism can be lethal – and online hate can foreshadow mass violence.
The Government of Canada should launch a national strategy to combat online hate, consisting of four steps: defining hate, tracking hate, preventing hate, and intervening to stop hate.
1. Defining Hate
This initiative should begin with a parliamentary study to examine the scope of the challenge and define the parameters of a national strategy to combat it. Several federal bodies should be enlisted to support the strategy in their respective realms, including the Department of Justice, the Department of Canadian Heritage, Public Safety Canada, and the Canadian Radio-television and Telecommunications Commission (CRTC). This process should also include consultations with stakeholders engaged in combating hate and in preventing the spread of online hate and recruitment: not-for-profits, academics, social media companies, internet service providers, and experts in new media and technology – including encryption software and artificial intelligence.
The national strategy should clearly define what constitutes hate, beginning with the adoption of the International Holocaust Remembrance Alliance (IHRA) definition of antisemitism. The IHRA definition is a practical tool that should be used by Canadian authorities in enforcing the law and by social media providers in implementing policies against hateful content.
2. Tracking Hate
A national strategy will only succeed with strengthened monitoring and reporting of online hate through strategic partnerships between the federal government and technology companies. Tech against Terrorism (TaT), a UN-mandated initiative that works with online companies to prevent their platforms from being exploited by extremists, is one successful model among several that Canada can draw on in developing a made-in-Canada approach.
3. Preventing Hate
As demonstrated in recent high-profile cases of radicalized Canadians, young people are particularly susceptible to digital misinformation and extremism. A national strategy should include the creation of tools to help young Canadians resist the lure of extreme ideologies while improving their internet literacy and critical thinking. As a component of this initiative, parents should be empowered with practical knowledge to identify signs of online radicalization and extremism among youths and with appropriate methods for intervention.
4. Intervening to Stop Hate
Freedom of expression is a core Canadian value. At the same time, authorities must act in exceptional circumstances to protect Canadians from hate speech and incitement, especially given the clear link between vicious rhetoric and violent crime. In 2013, Section 13 of the Canadian Human Rights Act – an effective but flawed tool in combating online hate speech – was repealed by an act of Parliament. Its absence has left a gap in the effort to protect Canadians from hate speech, which can be resolved in several ways.
The government could introduce legislation to replace Section 13 with a provision that effectively balances free speech and protection from hate. Alternatively, the federal government could offer training and guidelines to help provincial attorneys general, prosecutors, and police to enforce Criminal Code hate speech provisions more effectively. These guidelines should include greater use of Section 320.1 of the Criminal Code, which allows judges to issue warrants seizing online hate propaganda based on “reasonable grounds.” Used more effectively, this law would enable authorities to take relatively swift action to disrupt the activities of those promoting toxic ideologies.
Also worthy of consideration is Germany’s Network Enforcement Act (NetzDG), which levies financial penalties on social media companies that fail to remove illegal hate content as required by law.
Lastly, the short limitation period – currently six months from the crime’s occurrence – can make it difficult to lay charges for “willful promotion of hatred.” While Bill C-75 will extend this window to twelve months, consideration should be given to extending it further. This is especially crucial with online hate because propaganda posted on the internet can circulate within online communities without drawing mainstream attention for months or even years before it is flagged.
Background: Tracking Online Hate
In 2017, the World Jewish Congress, representing Jewish communities in 100 countries, released a report indicating that 382,000 antisemitic posts were uploaded to social media in 2016. Stated differently, that is one antisemitic post every 83 seconds. The International Holocaust Remembrance Alliance (IHRA) definition of antisemitism was used to determine whether a post was discriminatory against the Jewish population.
According to the Montreal Institute for Genocide and Human Rights Studies (MIGS), little information is available regarding online hate in Canada. However, according to Cision Canada, a Toronto-based PR software and services provider, there was a 600% rise in intolerant hate speech in social media postings by Canadians between 2015 and 2016.
James Rubec, architect of the study, notes that while some of the intolerant or hate speech was generated by bots – identified by their high frequency of posts over a short time – the bots’ language was later mimicked by human users. Rubec also notes that tracking hate speech is a constantly evolving practice, as political realities and related descriptors change. He therefore suggests tracking conspiracy theories in addition to problematic speech – for example, posts dismissing the October 2018 shootings and attempted bombings in the United States as “false flag attacks”. Without comprehensive data, however, it is not possible to track this kind of online narrative, which has been linked to violent hate-based crime.
To address this deficiency, those in the burgeoning field of combating online hate and incitement to violence are applying multiple approaches, including political and corporate collaboration. This is perhaps best-illustrated by Tech against Terrorism (TaT), as referenced above.
TaT advocates “industry self-regulation” alongside collaboration between tech companies, such as Facebook, and government on a systematic approach to combating the dangerous exploitation of online platforms. TaT is particularly concerned with micro-platforms, file-sharing sites, fintech, and startups, since small platforms often resist moderating their users’ content and activities in order to draw in membership. This has become especially pertinent following the activities of the alleged perpetrator of the tragic Pittsburgh murders, who posted his antisemitic messages on Gab, a social media platform that does not moderate hate speech.
Source: Tech against Terrorism Conference, Montreal, March 2018.