
Refusal to moderate social media misinformation in global languages harms communities of color

by Nick Nguyen and Carmen Scurato


This story was originally published at Prism.


So far the Facebook Papers have led to dozens of stories about how the company knew it was failing to remove hate speech, misinformation, and calls to violence in languages across the globe. As vital as this focus on Facebook’s global harms is, we shouldn’t overlook the role that the social media language gap plays in harming communities within the United States.

On a recent episode of Last Week Tonight, John Oliver discussed online platforms’ failure to curb the spread of misinformation that isn’t in English. While companies like Facebook and YouTube have made some inroads in addressing the problem in English, they’ve allowed misinformation to spread unchecked in other languages, with disastrous results. In the lead-up to the 2020 election, disinformation campaigns targeted marginalized communities to suppress voter turnout. And during the pandemic, cruel disinformants have blanketed the Latino community with blatant falsehoods about the COVID-19 vaccine. The community already makes up a disproportionate share of the essential workforce, and Latino people are four times more likely to be hospitalized with COVID-19 than the general population.

Oliver’s observation that the targeting of misinformation at diaspora communities in the United States is “exacerbated by the fact that there aren’t alternative sources of news” for these communities in their own languages isn’t a new revelation. This is a vital gap that our organizations, Viet Fact Check and Free Press, have long been fighting to fill. Those efforts include pushing social media platforms to crack down on misinformation that isn’t in English: Viet Fact Check has drawn attention to YouTube’s indifference to Vietnamese-language misinformation, and Free Press, along with the National Hispanic Media Coalition and the Center for American Progress, has urged Facebook to remedy the way the spread of Spanish-language conspiracy theories and other lies is fueling hate and discrimination.

We’ve examined how election and health misinformation have harmed our respective communities in the United States. The results confirm a clear pattern of neglect: while the platforms still have a long way to go in enforcing their own policies in English, their enforcement in other languages is far worse. Even though YouTube banned InfoWars, it ignored the Vietnamese-American version of Alex Jones for months; the company only took action after John Oliver’s segment aired. And despite our efforts to directly flag Spanish-language posts with explicit calls to violence, Facebook’s moderators relied on a shoddy English translation to justify their inaction. To put it simply, these companies are not doing nearly enough to keep our people safe.

Facebook and YouTube roadblocks


Public pressure and awareness of this issue are critical to finding a path forward, but they’re not enough. Our efforts to engage directly with the platforms have been frustrated at every turn; both YouTube and Facebook have failed to be transparent about the full extent of the problem. Facebook is also systematically cutting off access for academics and researchers studying how misinformation spreads across the platform.

We’ve run into roadblocks when speaking with staff at the two companies. Neither would say whether anyone is in charge of moderating non-English content within the United States. In our interactions, the companies tried to portray misinformation in other languages solely as an international issue, and therefore none of our concern. Meetings that we pursued for months turned into basic presentations that did little to address whether YouTube or Facebook has built any systems to protect people from misinformation in languages other than English. We kept asking questions, but it was clear the companies were stalling and we wouldn’t get any straight answers.

As misinformation escalates about crucial matters like COVID-19 vaccines, a report from the Institute for Strategic Dialogue identified major gaps in Facebook’s fact-checking program when it comes to other languages. The report found that more fact-checkers are dedicated to English than to other languages, leaving the same viral content to spread unchecked in those languages. The platforms have refused to share any details about what they’re doing to limit the spread of toxic content in other languages. Facebook and YouTube’s responses to a series of letters sent by Sen. Ben Ray Luján, Sen. Amy Klobuchar, and dozens of other members of Congress were evasive, incomplete, and just plain disrespectful.

Most recently, Facebook whistleblower Frances Haugen provided documents detailing how the safety of our communities is not a priority, testifying before Congress about the company’s profits-before-people approach: “It seems that Facebook invests more in the users that make more money, even though the danger may not be evenly distributed based on profitability.” The disparity in moderation practices across languages reflects Facebook’s tunnel vision when it comes to prioritizing growth and profits. And this isn’t the first time Facebook’s failures and unwillingness to protect its users have come to light. Months before Haugen came forward, Sophie Zhang, a Facebook data scientist, spoke publicly about her work combating fake accounts and political unrest in other parts of the globe while leadership at Facebook looked the other way, simply because there was little risk of public relations blowback. The Facebook Papers are further confirmation of the company’s inability to prevent hate and misinformation in other parts of the globe.

In other words, Facebook spends little time and effort protecting users who don’t directly contribute to the company’s profits or to negative press coverage in the United States.

Keeping all communities safe


In the face of mounting evidence, it’s clear these companies have no interest in solving this problem on their own. Solutions to misinformation require that platforms like Facebook and YouTube reject business models that are designed to profit from attention, regardless of how users and their wider communities are affected.

The way misinformation has been allowed to spread on social media is a perfect storm of willful neglect, social engineering, and the prioritization of profit. The platforms constantly collect our personal and demographic data to hyperpersonalize our news feeds and video recommendations. Misinformation is crafted to appeal to the anxieties and vulnerabilities of specific groups with stunning accuracy, driving more clicks, more comments, and therefore more views. This engagement in turn feeds algorithms designed solely to spread content, regardless of whether it’s truthful. As we’ve seen in recent days, the lack of oversight, and of even the most basic investment, in content moderation in other languages creates a vulnerability that allows bad actors to profit, often while flouting the rules the platforms claim to enforce for everyone.

To fully understand the cost of this disinformation to a democratic and open society, we need more clarity on how these algorithms determine what we see. Right now, Facebook and YouTube don’t train their algorithms to tell the difference between a truth and a lie; every click amplifies content that will keep us more engaged. And when clicks and engagement translate directly into dollars, the problem is greater than people posting lies online. The system is built for disinformers, and if their content is compelling enough, it can quickly reach millions. That engagement turns into ad dollars for the platforms and draws in ever-larger audiences. Our communities suffer because lies create profits.

So what’s next? If the platforms want to operate on a global scale, then language shouldn’t be a barrier to keeping communities safe. Congress and the Federal Trade Commission must work together to adopt a privacy framework that protects the civil rights of people living within our multilingual and diverse democracy.

Platforms must also produce regular transparency reports and allow access to independent researchers seeking to understand the depth and breadth of the harms caused by these companies’ engagement-driven business model. Legislation addressing some of these issues already exists in Sen. Ed Markey and Rep. Doris Matsui’s Algorithmic Justice and Online Platform Transparency Act.

Now more than ever, it should be obvious that language discrimination hurts all people in the United States. The health and safety of our communities is not something that should get lost in translation.


Nick Nguyen is a Viet Fact Check co-founder and PIVOT board member.


Carmen Scurato is the associate legal director and senior counsel at Free Press and Free Press Action.


Prism is a BIPOC-led nonprofit news outlet that centers the people, places, and issues currently underreported by national media. We’re committed to producing the kind of journalism that treats Black, Indigenous, and people of color, women, the LGBTQ+ community, and other invisibilized groups as the experts on our own lived experiences, our resilience, and our fights for justice.
 