1 of 37

A Deep Dive into Multilingual

Hate Speech Classification

Sai Saketh Aluru, Binny Mathew, Punyajoy Saha, Animesh Mukherjee
Department of Computer Science and Engineering, IIT Kharagpur, India

CNeRG

2 of 37

Warning!

The following presentation contains words and phrases that many consider offensive and hateful.

However, this cannot be avoided given the nature of the work.

3 of 37

Overview

Brief description of our work

4 of 37

Hate speech and its hazards

  • Hate speech is defined as a “direct and serious attack on any protected category of people based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease”.

  • Crimes related to hate speech (e.g., the Rohingya genocide, the Pittsburgh shooting) are increasing in number.

Text | Hate speech?
I f**king hate ni**ers! | Yes
Jews are the worst people on earth and we should get rid of them. | Yes
Mexicans are f**king great people! | No

5 of 37

Work description

  • First large-scale analysis of multilingual hate speech
  • Languages: 9 languages - Arabic, English, German, Indonesian, Italian, Polish, Portuguese, Spanish and French
  • Models used
    1. MUSE embeddings + CNN-GRU
    2. Translation + BERT
    3. LASER embeddings + LR
    4. mBERT (multilingual BERT)
  • Different scenarios: monolingual and multilingual settings, covering both low- and high-resource cases

6 of 37

Overall results - Toward a benchmark

  • Increasing data ⇒ Better performance.

  • What signals do the models pick up? LASER + LR → keyword-based | mBERT → context-based

7 of 37

Further details

SCAN ME! (QR code)

8 of 37

Detailed Description

9 of 37

EFFECTS OF HATE SPEECH

  • Hate speech is increasingly becoming a concerning issue in several countries.
  • The public expression of hate speech promotes the devaluation of minority members.
  • Frequent and repetitive exposure to hate speech can increase an individual’s outgroup prejudice.

Rohingya Genocide

Christchurch Shooting

Sri Lanka riot

Pittsburgh Shooting

11 of 37

Plaguing all platforms

Twitter

👹मुसलमानो को करारा जवाब है हर हिन्दु को शेयर करना चाहिये!!! *😡🚩😠🚩😡आज पता चलेगा कितने हिन्दु एक हो गये है!!!!..........*जागो...हिन्दु.....जागो.....

Translation: A befitting reply to the Muslims; every Hindu should share this!!! Today we will find out how many Hindus have united!!!! Wake up... Hindus... wake up...

Whatsapp

Gab

12 of 37

Our efforts

Spread of hate speech (WebSci'19, CSCW'20)

Explainable hate speech detection (AAAI'21)

Multilingual hate speech detection (ECML-PKDD'20, ACL'20)

Trapping hateful users (HyperText'21)

Spread of fear speech (TheWebConf'21)

Counterspeech and its types (ICWSM'19)

13 of 37

Multilingual hate speech detection

  • First large-scale analysis of multilingual hate speech
  • Languages: 9 languages - Arabic, English, German, Indonesian, Italian, Polish, Portuguese, Spanish and French
  • Models used
    • MUSE embeddings + CNN-GRU
    • Translation + BERT
    • LASER embeddings + LR
    • mBERT (multilingual BERT)
  • Different scenarios: monolingual and multilingual settings, covering both low- and high-resource cases

14 of 37

Why is this necessary?

  • One of the current issues: the majority of hate speech resources and studies are available only for the English language.

* - based on data from hatespeechdata.com

15 of 37

Related works

  • The earlier efforts to build hate speech classifiers used simple methods such as dictionary look up, bag-of-words, etc.
  • Recently, complex classification models using deep learning and graph embedding techniques have become popular.
  • Zhang et al. [1] used a deep neural network combining convolutional and gated recurrent layers, improving results on 6 out of the 7 datasets used.
  • Research into multilingual hate speech is relatively new. Some works, such as Corazza et al. [2], have studied hate speech in 3 languages.

[1] Zhang, Ziqi, David Robinson, and Jonathan Tepper. "Detecting hate speech on twitter using a convolution-gru based deep neural network." In European semantic web conference, pp. 745-760. Springer, Cham, 2018.

[2] Corazza, Michele, Stefano Menini, Elena Cabrio, Sara Tonelli, and Serena Villata. "A multilingual evaluation for online hate speech detection." ACM Transactions on Internet Technology (TOIT) 20, no. 2 (2020): 1-22.

16 of 37

DATASET DESCRIPTION

The majority of the datasets are in English

Most datasets are class-imbalanced

17 of 37

Experimental Setup

LASER[1]:

MUSE[2]:

  • For each language, we combine all the datasets and perform a stratified train/validation/test split in the ratio 70%/10%/20%. We report the macro F1-score to measure performance.
  • LASER embeddings were used to generate sentence-level multilingual representations of the corpus, and MUSE embeddings word-level ones.
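A minimal sketch of this split-and-evaluate protocol, assuming scikit-learn; the texts/labels variables are placeholders for a combined per-language dataset:

```python
# Stratified 70/10/20 split and macro F1 reporting, as described above.
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def split_70_10_20(texts, labels, seed=42):
    """Stratified 70%/10%/20% train/validation/test split."""
    # Carve out the 20% test portion first, stratified on the labels.
    x_rest, x_test, y_rest, y_test = train_test_split(
        texts, labels, test_size=0.20, stratify=labels, random_state=seed)
    # 10% of the full data equals 12.5% of the remaining 80%.
    x_train, x_val, y_train, y_val = train_test_split(
        x_rest, y_rest, test_size=0.125, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)

# After training a model, performance is reported as:
#   f1_score(y_test, y_pred, average="macro")
```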

[1] Mikel Artetxe and Holger Schwenk. “Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond”. In: Transactions of the Association for Computational Linguistics 7 (2019), pp. 597–610.

[2] Lample, Guillaume, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. "Word translation without parallel data." In International Conference on Learning Representations. 2018.

18 of 37

Model Architectures used

CNN-GRU: (architecture figure)

BERT & mBERT: (architecture figure)
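Since the architecture figures do not survive in text form, here is a minimal PyTorch sketch of a CNN-GRU text classifier in the spirit of Zhang et al.; the layer sizes are illustrative assumptions, not the exact values used in the paper:

```python
# Illustrative CNN-GRU classifier over pre-computed word embeddings.
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    def __init__(self, emb_dim=300, n_filters=100, hidden=100, n_classes=2):
        super().__init__()
        # Input: MUSE word embeddings, shape (batch, seq_len, emb_dim).
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=4)
        self.pool = nn.MaxPool1d(kernel_size=4)
        self.gru = nn.GRU(n_filters, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                              # (batch, seq_len, emb_dim)
        h = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, n_filters, L)
        h = self.pool(h)                               # temporal max pooling
        _, last = self.gru(h.transpose(1, 2))          # GRU over pooled features
        return self.fc(last.squeeze(0))                # class logits
```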

19 of 37

Experiments

  1. MUSE + CNN-GRU: For a given input sentence, we first obtain the corresponding MUSE embeddings, which are then passed as input to the CNN-GRU model.
  2. Translation + BERT: The input sentence is first translated into English and then provided as input to the BERT model.
  3. LASER + LR: For a given input sentence, we first obtain the corresponding LASER embeddings, which are then passed as input to a simple Logistic Regression (LR) model.
  4. mBERT: The input sentence is directly fed to the mBERT model.
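As an illustration, pipeline 3 could look like the sketch below. We use the third-party `laserembeddings` package as one convenient way to obtain LASER sentence vectors; the original work may have used the official LASER toolkit instead.

```python
# Minimal sketch of the LASER + LR pipeline (pipeline 3 above).
# pip install laserembeddings, then: python -m laserembeddings download-models
from laserembeddings import Laser
from sklearn.linear_model import LogisticRegression

laser = Laser()

def train_laser_lr(train_texts, train_labels, lang):
    # 1024-dimensional language-agnostic sentence embeddings.
    x_train = laser.embed_sentences(train_texts, lang=lang)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(x_train, train_labels)
    return clf

def predict_laser_lr(clf, texts, lang):
    return clf.predict(laser.embed_sentences(texts, lang=lang))
```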

20 of 37

Hyperparameters

  1. MUSE + CNN-GRU
    • Embedding size: MUSE default
    • Words per sentence: 100
  2. Translation + BERT
    • Translator: Google Translate API
    • BERT token length: 128
  3. LASER + LR
    • Embedding size: LASER default
  4. mBERT
    • Token length: 128
  5. Common: batch size 16, learning rate 2e-5 / 3e-5 / 5e-5, epochs 1-5
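These settings map onto a standard HuggingFace fine-tuning setup roughly as in the sketch below; the optimizer choice (AdamW) is our assumption, since the slide fixes only batch size, learning rate and epochs:

```python
# Sketch of the mBERT settings above using HuggingFace transformers.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

texts = ["example post one", "example post two"]  # placeholder inputs

# Token length: 128
batch = tokenizer(texts, padding="max_length", truncation=True,
                  max_length=128, return_tensors="pt")

# Batch size 16; learning rate from {2e-5, 3e-5, 5e-5}; 1-5 epochs.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
```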

21 of 37

Results - Monolingual

Training: language L → Validation & Testing: same language L

25 of 37

Results - Multilingual

mBERT: train on datasets from all but the target language; fine-tune on the target language dataset in incremental steps; validate & test on the target language dataset.

LASER + LR: train on datasets from all but the target language, with the target language dataset added in incremental steps; validate & test on the target language dataset.
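A schematic of this incremental-steps protocol; the step fractions and the `train_model` / `evaluate` helpers are illustrative assumptions, not the exact procedure from the paper:

```python
# Start from all non-target languages, then add target-language
# training data in incremental steps and re-train each time.
def incremental_runs(source_data, target_train, target_test,
                     train_model, evaluate):
    scores = {}
    for frac in (0.0, 0.25, 0.5, 0.75, 1.0):        # assumed step sizes
        n = int(frac * len(target_train))
        train_set = source_data + target_train[:n]  # all-but-one + partial target
        model = train_model(train_set)              # user-supplied training fn
        scores[frac] = evaluate(model, target_test) # e.g., macro F1
    return scores
```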

28 of 37

Hate speech Benchmarks

Recipes for different languages and resource settings, as obtained in our experiments.

29 of 37

Interpretability (examples)

Interpretability analysis of LASER + LR and mBERT using LIME[3]

[3] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “‘Why should I trust you?’ Explaining the predictions of any classifier”. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016, pp. 1135-1144.
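A minimal sketch of how such LIME word-importance explanations (shown on the next slide) can be produced; `predict_proba` stands in for the probability function of either trained classifier (LASER + LR or mBERT):

```python
# Explaining a single prediction of a text classifier with LIME.
from lime.lime_text import LimeTextExplainer

explainer = LimeTextExplainer(class_names=["normal", "hate speech"])

def explain(text, predict_proba, num_features=6):
    # predict_proba: list[str] -> ndarray of shape (n_samples, n_classes)
    exp = explainer.explain_instance(text, predict_proba,
                                     num_features=num_features)
    return exp.as_list()  # [(word, weight), ...] ordered by importance
```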

32 of 37

Interpretability (examples)

Yellow - LASER+LR

Green - mBERT

Sentences with hate label

das pack muss tag und nacht gejagt werden, ehe sie es mit den deutschen machen !!
Translation: the pack must be hunted day and night, before they do it with the Germans!!

absolument ! il faut l’arraisonner en mer par la marin nationale arrêter tous les occupants expulser les migrant... @url
Translation: absolutely! it must be boarded at sea by the national navy, arrest all the occupants, expel the migrants... @url

Interpretability analysis of LASER + LR and mBERT using LIME[3]

34 of 37

Error Analysis - Types of errors

Confounding Factors (CF): errors caused when the model relies on irrelevant features such as normalized mentions and links.

Annotation Dilemma (AD): ambiguous instances where, in our view, the model predicts correctly but the annotators have labelled it incorrectly.

Hidden Context (HC): errors due to the model failing to capture the context of the post.

Abusive Words (AW): errors caused by the model's over-dependence on abusive words in the input.

35 of 37

Error analysis examples

mBERT:

Sentence: “Könnten wir Schmarotzer und Kriminelle loswerden würde die Asylanten-Schwemme auf beherrschbare Zahlen runtergehen.”
Translation: If we could get rid of parasites and criminals, the asylum-seeker flood would drop to manageable numbers.
Ground truth (GT): 1 | Prediction (P): 0 | Error type (E): HC
Here “parasites” refers to immigrants.

LASER + LR:

Sentence: this movie is actually good cuz its so retarded.
Ground truth (GT): 1 | Prediction (P): 0 | Error type (E): AW
Here “retarded” is used for the movie.

NOTE: For additional examples, please check our paper.

36 of 37

Conclusion

  • In this work, we have analyzed multilingual hate speech using datasets from 16 different sources, comprising 9 different languages.
  • We considered various conditions, such as low- and high-resource settings and monolingual or multilingual cases, for the different languages.
  • We observe that LASER+LR is more effective for low-resource cases, while mBERT is usually more effective for high-resource cases (a toy encoding of this recipe follows below).
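A toy encoding of this recipe; the numeric threshold for "low resource" is an illustrative assumption, not a value from the paper:

```python
# Pick a pipeline following the observed recipe above.
def recommended_pipeline(n_labeled_examples, low_resource_threshold=5000):
    if n_labeled_examples < low_resource_threshold:
        return "LASER + LR"   # tends to win with little labeled data
    return "mBERT"            # benefits from larger training sets
```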

37 of 37

Thank you

GitHub: https://github.com/punyajoy/DE-LIMIT

HuggingFace: https://huggingface.co/Hate-speech-CNERG

Contact us:
Animesh Mukherjee: animeshm@cse.iitkgp.ac.in