It happens all the time: You encounter a white nationalist troll spewing hate and threats on social media in clear violation of the platform’s terms of service. You promptly report them to the platform’s monitors, then wait for the posts to be removed and the troll to get the boot, and … nothing happens. Either the monitors write back saying they can’t see the violation, or you hear nothing at all.
You’re hardly alone. Two reports this week—one from the Center for Countering Digital Hate (CCDH), the other from the Anti-Defamation League (ADL)—pull back the curtain on how well major social media platforms respond to user reports of hate speech and threats, particularly of the antisemitic variety. The answer, unsurprisingly: not well at all. The inevitable conclusion is that these companies are capable of monitoring and eliminating this kind of speech … they just don’t want to do so.
The CCDH report, entitled “Failure to Protect: How tech giants fail to act on user reports of antisemitism,” describes how researchers used the platforms’ own reporting tools to file some 714 reports of antisemitic and other hate speech in posts that totaled some 7.3 million impressions. Only 1 in 6 of those complaints was acted upon by the five major social media companies studied, including Facebook and Twitter; overall, the platforms took no action on 84% of the antisemitic posts reported.
Facebook and Twitter, the CCDH found, had the worst rates of enforcement action. Facebook acted on roughly 10.9% of the posts reported to them, while Twitter acted on 11%. YouTube and TikTok were comparatively responsive, acting on 21% and 18.5% of reports respectively.
However, the reach of these videos was significantly higher, with millions of views of the antisemitic content on both YouTube and TikTok, while the views on Twitter and Facebook numbered in the hundreds of thousands.
Instagram, Twitter, and Facebook generated the highest numbers of reports, with 277, 137, and 129 respectively. Instagram was also comparatively responsive, taking action on 18.8% of reports—the second-best rate on the list.
"The study of antisemitism has taught us a lot of things ... if you allow it space to grow, it will metastasize. It is a phenomenally resilient cancer in our society," Imran Ahmed, the CEO of CCDH, told National Public Radio.
The big social media platforms, he said, have been "unable or unwilling" to take effective action against hate speech. He noted that CCDH set out to establish that social media companies are perfectly capable of moderating content—they just choose not to.
Of all five social media platforms CCDH examined, Facebook was the worst offender, failing to act on 89% of antisemitic posts.
"There is this enormous gulf between what they claim and what they do," Ahmed said.
The ADL “report card” on online antisemitism looked at eight social media platforms and reached similar conclusions: Most platforms are slow or utterly fail at removing antisemitic content when it’s reported to them:
ADL investigators found that no platform performed above a B- in addressing antisemitic content reported to it. Also, no platform provided information or a policy rationale for why it did or did not remove flagged content. Reddit and Twitter earned the highest marks for data accessibility through their APIs that can enable ADL or other researchers to study the prevalence of antisemitism on their platforms. However, neither Reddit nor Twitter took action on the content ADL reported through ordinary user channels — Twitter did so through its trusted flagger program, while Reddit did nothing in response either to ordinary user flags or trusted flagger reports.
Both reports flag Facebook as the worst of the lot. The ADL observes:
As the world’s biggest social media platform, Facebook’s responsibility to curb antisemitism on its platforms is even greater than that of other platforms. But as this investigation shows, Facebook’s efforts are not commensurate with its size. The company did not take action on any of the content flagged through ordinary users’ accounts and its data accessibility is heavily limited.
The CCDH report demonstrated how hate speech thrives on all of these social media platforms, largely because of the platforms’ tolerance for anti-Jewish conspiracy theories and for hashtags referencing antisemitic ideas and claims. The platforms were especially slow to respond to conspiracists peddling antisemitic theories such as “Jewish Puppeteer” tropes; tales of global control by the “Jew World Order,” the Rothschild family, or George Soros; and posts claiming Jewish involvement in COVID-19, in the vaccines to combat the virus, and in the 9/11 attacks.
The report notes:
Twitter continues to host hashtags ranging from #holohoax to #killthejews, while TikTok allows hashtags which organize and promote conspiracies such as #synagogueofsatan, #rothschildfamily, and #soros. These posts have gained 25.1 million views on the videosharing platform. Jewish TikTok creators’ comment sections are rife with anti-Semitic abuse. Despite TikTok saying they “do not permit content that contains hate speech,” we found that TikTok closed just 5% of accounts reported for sending racist abuse to Jewish people.
Facebook’s enforcement, the report notes, is wildly inconsistent and incoherent. One outrageously antisemitic post—promoting a story claiming that “the Holocaust of six million Jews is a hoax” and featuring a photoshopped image of the gates of Auschwitz edited to read “muh Holocaust,” a meme popular among white nationalists—was not removed but merely labeled “false information” by Facebook. The article has received over 246,000 likes, shares, and comments across Facebook.
Facebook also is notorious for permitting antisemitic groups to operate even after being exposed. The CCDH identified a number of such groups totaling 37,500 members. Their names include “Expose Soros & Other Far-Left Financiers Public,” “Exposing the New World Order!,” “George Soros: The Enemy Within,” “Official Talmud Exposed,” “Rothschild Zionism,” and “The Rothschild/Jesuit Conspiracy.”
Facebook claims it has been working to fix the problem and will continue to do so. “While we have made progress in fighting antisemitism on Facebook, our work is never done,” Dani Lever, a Facebook spokesperson, told The New York Times, adding that “given the alarming rise in antisemitism around the world, we have and will continue to take significant action through our policies.”
“We were frustrated but unsurprised to see mediocre grades across the board,” said Jonathan Greenblatt, the ADL’s chief executive. “These companies keep corrosive content on their platforms because it’s good for their bottom line, even if it contributes to anti-Semitism, disinformation, hate, racism and harassment.
“It’s past time for tech companies to step up and invest more of their millions in profit to protect the vulnerable communities harmed on their platforms,” he added.