According to the platform, this figure has been possible thanks to the company's advances in automated detection and its deployment of XLM, a cross-lingual language model that helps identify offensive language across different languages, along with tools that help analyze large volumes of posts.
“Considering that hate speech depends on language and cultural context, we sent these representative samples to content reviewers in different regions. Based on this methodology, we estimate that the prevalence of hate speech from July 2020 to September 2020 was 0.10% to 0.11%. In other words, out of every 10,000 views of content on Facebook, 10-11 included hate speech,” Kantor said.
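The sampling approach Kantor describes can be sketched in a few lines: draw a random sample of views, have each labeled by reviewers, and take the labeled share as the prevalence estimate. The function and data below are illustrative assumptions, not Facebook's actual pipeline.

```python
import random

def estimate_prevalence(view_labels, sample_size, seed=0):
    """Estimate the share of views containing hate speech from a random sample.

    view_labels: sequence of booleans, True if a reviewer flagged the view
    as hate speech. In practice the labels come from human content reviewers;
    here they are simulated.
    """
    rng = random.Random(seed)
    sample = rng.sample(view_labels, sample_size)
    return sum(sample) / sample_size

# Simulated population matching the reported rate:
# 11 hate-speech views per 10,000 views (0.11%).
population = [True] * 11 + [False] * 9989
rate = estimate_prevalence(population, sample_size=2000)
print(f"Estimated prevalence: {rate:.2%}")
```

With a large enough sample the estimate converges on the true 0.11% rate; real-world reports of this kind also attach confidence intervals, which this sketch omits.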
According to the company, defining hate speech is not a simple task: opinions carry nuances that make posts difficult to classify, and classification also depends on the context, language, religion and cultural norms of each region where Facebook operates.
“Based on input from various experts and leaders worldwide, we define hate speech as anything that directly attacks people based on protected characteristics, such as race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or illness,” Kantor said.
To that end, Facebook has implemented the Reinforced Integrity Optimizer (RIO), a machine-learning framework that helps those involved in content moderation optimize their work.
While Facebook has long banned hate speech, the company’s moderation rules have on occasion led to some not-so-good results. In 2017, ProPublica reported on how Facebook’s rules offered protection to “white men” but not “black children” due to how the company interpreted subsets of protected classes. For this reason, the company has sought to improve its moderation and detection tools.