1 of 28

Hate Speech & Harassment

Week 9

CJ407/CJ507 Digital Safety © The University of Alabama at Birmingham / Trust and Safety Teaching Consortium

With contributions from Agnès Evrard; Ina Kamenova, University of Massachusetts Lowell; Mark Schneider; Theodora Skeadas; Q. J. Yao, Lamar University

2 of 28

Learning Objectives

  • Learn the definitions and types of online harassment and hate speech
  • Understand how pervasive online harassment and hate speech are
  • Learn how to analyze online harassment and hate speech ethically and legally
  • Learn how platforms and users handle online harassment and hate speech


3 of 28

Online Harassment


4 of 28

Definition

  • “Interpersonal aggression or offensive behavior(s) that is communicated over the internet or through other electronic media.” (Slaughter & Newman, 2022)
  • Besides defining online harassment, this study also presented a new measure of online harassment called the Online Harassment Experience Questionnaire (OHEQ)
  • The OHEQ comprises eight online harassment items, categorized based on traumatic stress theory


5 of 28

The Online Harassment Experience Questionnaire (OHEQ)

  • Non-traumatic items: “I was impersonated by someone.” “I was excluded from an online group.” “Offensive or hurtful comments were directed at me or posted about me, or I was insulted and called names.” “Someone spread untrue rumors about me.”
  • Potentially traumatic items: “Someone threatened to harm me.” “I experienced unwanted sexual attention.” “My personal information was posted online where others could access it.” “Someone hacked, stole, or otherwise gained access to my online account(s) without my permission.”
  • Note: the eight items are measured on a 6-point Likert scale: 0 = never; 1 = less than once a month; 2 = 2-4 times a month; 3 = 2-4 times a week; 4 = daily; 5 = multiple times a day.
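The scale above can be sketched in code. This is an illustrative sketch only: the item short-names and the idea of summing frequency ratings within each category are assumptions for demonstration, not the published scoring procedure from Slaughter & Newman (2022).

```python
# Illustrative OHEQ-style scoring sketch. Item short-names and the
# per-category sum are assumptions, not the authors' published scoring.

NON_TRAUMATIC = ["impersonation", "exclusion", "offensive_comments", "rumors"]
POTENTIALLY_TRAUMATIC = ["threats", "sexual_attention", "doxxing", "account_access"]

# 6-point Likert frequency scale from the slide (0-5)
FREQUENCY = {
    0: "never", 1: "less than once a month", 2: "2-4 times a month",
    3: "2-4 times a week", 4: "daily", 5: "multiple times a day",
}

def oheq_subscores(responses: dict[str, int]) -> dict[str, int]:
    """Sum the 0-5 frequency ratings within each item category."""
    for item, rating in responses.items():
        if rating not in FREQUENCY:
            raise ValueError(f"rating for {item} must be 0-5")
    return {
        "non_traumatic": sum(responses[i] for i in NON_TRAUMATIC),
        "potentially_traumatic": sum(responses[i] for i in POTENTIALLY_TRAUMATIC),
    }

# Example: a respondent reporting monthly rumors and weekly threats
example = {i: 0 for i in NON_TRAUMATIC + POTENTIALLY_TRAUMATIC}
example["rumors"] = 1   # less than once a month
example["threats"] = 3  # 2-4 times a week
print(oheq_subscores(example))  # {'non_traumatic': 1, 'potentially_traumatic': 3}
```

Keeping the two categories separate mirrors the traumatic-stress-theory distinction the instrument is built on.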

Source: Slaughter, A., & Newman, E. (2022). New frontiers: Moving beyond cyberbullying to define online harassment. Journal of Online Trust & Safety, 1(2). https://doi.org/10.54501/jots.v1i2.5


6 of 28

The State of Online Harassment (Pew Research Center, 2021)

  • Surveyed 10,093 American adults in September 2020
  • Roughly four-in-ten have experienced online harassment; about half of those targeted cite politics as a reason, and about half have experienced more severe behaviors


7 of 28


8 of 28

Online Hate & Harassment: The American Experience of 2023 (ADL*)

  • Surveyed 2,139 American adults and 550 teens in Spring 2023
  • Roughly five-in-ten have experienced online harassment; a quarter to a third attributed it to an identity characteristic, and a third of teens experienced a severe level
  • For teens, online harassment led to in-person harassment almost half of the time
  • Nearly all demographic groups, across almost all measures, reported higher levels of harassment than in previous years

*ADL: Anti-Defamation League

Source: https://www.adl.org/resources/report/online-hate-and-harassment-american-experience-2023


9 of 28

Source: https://www.adl.org/resources/report/online-hate-and-harassment-american-experience-2023


10 of 28

Framework of Online-Harassment Assessment for Platform Regulation of Conversations

  • Three criteria to decide if moderation is needed (Foley & Gurakar, 2022):
    • Intensity: severity of the violation
    • Specificity: how narrowly the violation is targeted; the more specific the target, the more dangerous the violation
    • Persistence: frequency of the violation
  • Times when additional moderation is required:
    • When communication between the violator and the platform’s moderators breaks down
    • When platform users or the victim of the violation have no other way to address the violation
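The three criteria above can be sketched as a toy triage rule. This is a hedged illustration: the 0-3 scales, the threshold, and the "any two elevated criteria" rule are assumptions for demonstration, not thresholds from Foley & Gurakar (2022).

```python
# Toy triage sketch of the intensity/specificity/persistence criteria.
# Scales, threshold, and the two-of-three rule are illustrative
# assumptions, not values from the Foley & Gurakar (2022) paper.
from dataclasses import dataclass

@dataclass
class Report:
    intensity: int    # severity of the violation: 0 (mild) to 3 (severe)
    specificity: int  # 0 = broad/diffuse target ... 3 = a named individual
    persistence: int  # 0 = one-off ... 3 = sustained campaign

def needs_moderation(r: Report, threshold: int = 2) -> bool:
    """Flag for review when at least two criteria reach the threshold."""
    elevated = sum(score >= threshold
                   for score in (r.intensity, r.specificity, r.persistence))
    return elevated >= 2

# A severe, highly targeted one-off message is flagged...
print(needs_moderation(Report(intensity=3, specificity=3, persistence=0)))  # True
# ...while a mild, diffuse but repeated complaint is not
print(needs_moderation(Report(intensity=1, specificity=1, persistence=3)))  # False
```

A real system would weigh the criteria rather than count them, but the sketch shows why no single criterion is decisive on its own.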

Source: Foley, T. J., & Gurakar, M. (2022). Backlash or bullying? Online harassment, social sanction, and the challenge of COVID-19 misinformation. Journal of Online Trust & Safety, 1(2). https://doi.org/10.54501/jots.v1i2.31.


11 of 28

Online Hate


12 of 28

Definition

  • “A kind of speech act that contextually elicits certain detrimental social effects that are typically focused upon subordinated groups in a status hierarchy” (Demaske, 2021, p. 1014; United Nations, 2019, p. 2)
  • “Any kind of communication in speech, writing or behavior, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, color, descent, gender or other identity factor.” (UN definition)
  • Definitions vary and are hotly contested: legal concepts differ between the EU, the US, and other jurisdictions, and the landscape keeps shifting

Source: https://www.un.org/en/hate-speech/understanding-hate-speech/what-is-hate-speech


13 of 28

How is hate speech different from harassment? 

The message applies to a characteristic shared by a group of people 

The impact is broader, described as concentric circles of hate impact

The threat is potentially more unpredictable and unavoidable

There can be intersections between harassment and hate speech, such as when targeted harassment includes hate speech, or when online hate speech also leads to harassment campaigns


14 of 28

Explicit vs. Implicit Hate Speech

  • Explicit Hate Speech 
    • Directly identifies the target group and uses explicit attacks against the target group (e.g., slurs, explicit dehumanizing content)
  • Implicit Hate Speech 
    • May not directly identify the target group, instead referring to them indirectly (e.g., as the people who support a certain issue) in a way that lets readers infer who is meant
    • Uses implicit language rather than explicit slurs or attacks
    • Sometimes implicit hate speech is context or geolocation specific
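The explicit/implicit distinction is one reason automated moderation is hard. The sketch below shows why a lexicon (keyword) filter catches explicit attacks but misses implicit ones; the placeholder tokens stand in for a real lexicon, which this example deliberately does not contain, and production systems use trained classifiers rather than keyword lists.

```python
# Toy lexicon filter illustrating the explicit/implicit gap.
# "slur_placeholder" etc. are stand-in tokens, not a real hate lexicon.
import re

EXPLICIT_LEXICON = {"slur_placeholder", "dehumanizing_term_placeholder"}

def lexicon_flag(text: str) -> bool:
    """Flag text containing any lexicon term (catches explicit speech only)."""
    tokens = set(re.findall(r"[a-z_]+", text.lower()))
    return bool(tokens & EXPLICIT_LEXICON)

# An explicit attack using a lexicon term is caught...
print(lexicon_flag("they are all slur_placeholder"))  # True
# ...but an implicit attack that only gestures at the group slips through
print(lexicon_flag("people who support that issue are a problem"))  # False
```

Because implicit hate speech can also be context- or geolocation-specific, even sophisticated classifiers need locale-aware training data, not just a bigger word list.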


15 of 28

Types of Hate Speech

Based on their “religion, caste, age, disability, serious disease, national origin, race, ethnicity, gender, gender identity, or sexual orientation” (Twitter)

Dehumanizing: using various terms to portray a group as less than human

Wishes/Threat/Incitement of violence against an entire group

    • Wishes/threats express what the speaker wants or plans to do
    • Incitement encourages others to commit violent acts

Stereotypes or slurs against a group of people

Demonizing a group as being dangerous

Discriminatory speech against a group


16 of 28

Online Hate Speech Has Unique Features

Anonymity of the speaker

Even when the speaker is not anonymous, there is less risk to the speaker than when speaking directly in public, especially in the speaker’s community. 

Mobility and reach

Speech can be produced in one part of the world and copied and disseminated easily 

Durability

Some speech disappears quickly, while other speech can live on indefinitely, because it is difficult to track down across platforms

Size of audience

Online hate speech can reach a much wider or more niche audience much more easily than offline speech 

Ease of access

There is no need to join a group or physically opt to be in a space accepting of hate speech

Source: Brown, A. (2018). What is so special about online (as compared to offline) hate speech? Ethnicities, 18(3), 297–326. https://doi.org/10.1177/1468796817709846


17 of 28

Impact of online hate speech

Direct harm to targeted people and groups of people 

    • E.g., psychological distress, defamation, social repercussions (e.g., outing someone)

Link with violent attacks 

    • E.g., linked with increasing hate crime and radicalization into hateful ideology, which in turn is thought to be related to increased single-actor attacks precipitated by participation in online channels and other spaces that elaborate on hateful ideologies or directly attack people with targeted characteristics.

Influences Political Discourse 

    • E.g., hate speech online normalizes hateful ideologies and influences electoral processes, and in turn legislation and even judicial decisions.

Disincentivizes others from participating in the discourse 

    • E.g., people with targeted characteristics are less likely to engage in public discourse knowing they may become targets of hate speech and harassment. Others may also refrain because they do not want to read hateful content.


18 of 28

Comparative Legal Frameworks 

  • US 
    • Racist, sexist, and other hateful language online is not a crime
    • Hateful language can still matter in criminal settings: another offense may be categorized as a hate crime, and the offender may receive a hate-crime sentencing enhancement
  • UK & Europe 
    • The UK and Europe have much more stringent laws that make certain kinds of speech crimes in themselves.
    • EU Recommendations: a guiding document for member states to combat digital hate speech
      • https://search.coe.int/cm/Pages/result_details.aspx?ObjectId=0900001680a67955


19 of 28

Additional Legal Frameworks

  • India
    • Government can order platform to take down offensive posts 
  • Japan
    • A national ban on hate speech was established in 2016; it delegates to municipal governments the responsibility to “eliminate unjust discriminatory words and deeds against people from outside Japan” (Laub, 2019)


20 of 28

Freedom of Speech Exceptions in the US

  • The exceptions (e.g., true threats, incitement to imminent lawless action) are very narrow and refer to criminal sanctions
  • Private institutions can regulate speech as they choose and have routinely done so


21 of 28

Topic Evolution


22 of 28

US Public Opinion Toward Hate Speech Leans Toward Moderating Hate Speech, Differs by Age


23 of 28

US Public Opinion is Split Along Political Lines


24 of 28


25 of 28

Platform Response

  • Hate speech, or hateful speech based on identity characteristics, is addressed in various ways in platforms’ content policies; some platforms are specific in their TOS, while others remain vague
    • Check current platform policies for the most up-to-date information, as this is an evolving topic shaped by changing world, cultural, and legal landscapes
    • These policies and practices are also changing as new legislation takes effect in the EU and US
  • A team approach is needed both within a platform AND across industries/disciplines to address and study hate speech


26 of 28

Main Questions Investigated Differ by Discipline

Law

    • In the United States, legal scholars primarily ask questions about free speech and its limits
    • Another currently contested area for legal scholars and legislators is the regulatory expectations of platforms and the extent to which offline speech principles, such as the “public square” doctrine, apply online

Psychology & Sociology

    • Considers the social and individual impacts of hate speech. Research shows effects similar to those of other traumatic events, varying by the type and amount of exposure

Criminology & Security Studies

    • Hate speech can be produced by hate groups as well as by individuals and has been linked with online radicalization, hate crimes, and terrorist acts

Political Science

    • Primarily focused on hate speech as political speech and its effects on political discourse, elections, and legislation

CS & IS

    • Primarily asks questions about automatic detection and moderation of hate speech


27 of 28

Sources of Information

  • Since this is a rapidly developing field, the most recent scholarship and information can be very field-specific. Below is a list of sources specifically dedicated to online Trust & Safety issues, including free speech.
    • Journal of Online Trust and Safety 
    • Arbiters of Truth Series of the Lawfare Podcast 
    • Tech Policy Press
    • All Tech is Human 


28 of 28

Challenges for Further Study & Recent Issues

  • There are repeated calls for interdisciplinary collaboration to make progress on these issues.
    • The magnitude of the issue is difficult to pin down, and scholars are calling for more research.
    • Multi-platform cooperation and academic-industry collaborations are key to approaching the issue with scientific rigor while retaining insight from backend data
  • Moral and legal principles are continuously under debate alongside technological capabilities to execute desired policies for intervention and prevention 
  • Recent Development 
    • Legislation debates 
    • Tech industry layoffs of teams responsible for hate speech detection and moderation 
    • Online regulation in Europe (the Digital Services Act) and the UK (the Online Safety Bill)
