
As Ethiopia’s civil war becomes an ethnic cleansing event, Facebook again enables genocide

Brexiter

We already know, just from the company’s behavior in the United States, that Facebook seems content to continue to permit the spread of extremist disinformation and organizing on its social media platform, while paying lip service to its responsibilities and taking hollow half-measures to correct the problem—largely because its revenue stream is so powerfully dependent on the features fueling the phenomenon. Facebook’s profits, as whistleblower-provided evidence has established, are fundamentally built on creating social division and real-world strife.

The apotheosis of Facebook-fueled violence is genocide; its undeniable role in helping facilitate the bloody eliminationist campaign by Myanmar’s military against its Rohingya minority population is already well-established. Now, a fresh report from Nick Robins-Early of Vice details how it is replicating those results in Ethiopia, where military leaders and their authoritarian supporters are unleashing genocidal violence in the midst of an ongoing civil war.

The Facebook model for engendering social chaos for profit with which we have all become familiar is on full display in Ethiopia, as the report shows: Just as occurred in Myanmar, the nation’s military leaders have leveraged the spread of disinformation on Facebook to encourage ethnic hatred of a regional minority population and to organize lethal violence against its members. And just as it has everywhere else, the social media giant claims to be taking steps to correct the abuse of its platform, but its measures are ineffective public-relations sops that have done next to nothing to slow the looming genocide.

Ethiopia has been embroiled in a civil war since late 2020, when the federal administration of Prime Minister Abiy Ahmed refused to recognize the Tigray region’s newly elected government, leading Tigrayan forces to attack a federal military base; Abiy responded by launching an all-out military offensive on the region that November. Abiy’s forces, which represent the larger Amhara region, have continued to wreak havoc in Tigray, whose ethnic population represents about 7% of Ethiopia’s total. When Eritrean soldiers aligned with Abiy invaded Tigray, a flood of reports of mass killings by soldiers and gang rapes followed, telling of shallow graves surrounding villages and mutilated bodies floating down rivers. Eritrea withdrew its forces in June.

The tide of the war shifted this past month when a Tigrayan counteroffensive, joined by forces from Ethiopia’s other ethnic minorities, neared Addis Ababa. Abiy declared a state of emergency on Nov. 2, calling on citizens to take up arms.

“There are sacrifices to be made, but those sacrifices will salvage Ethiopia,” Abiy said on Twitter on Saturday. On Facebook, he urged Ethiopians to “bury” the rebels; that post was removed. In Addis Ababa, the city administration called on citizens to use their weapons to defend their neighborhoods. House-to-house searches were conducted in search of Tigrayan sympathizers.

Many of Abiy’s supporters have taken to Facebook to organize the ethnic attacks, as well as to threaten and intimidate minorities. The shape of this online behavior is already familiar: Robins-Early describes how journalist Lucy Kassa was targeted by a deluge of online harassment after she reported on the burn injuries suffered by a teenage girl in an apparent incendiary-weapons attack. A pro-government account posted her photo and address, calling for her arrest. Death threats and sexual harassment followed; yet the Facebook post remains up.

The Facebook-organized ethnic cleansing campaign has spread widely and readily, with the company’s litany of inaction speaking for itself:

Last month a video went viral on Facebook showing a man telling a large crowd of people that anyone who associates with certain ethnic minorities is “the enemy.” It was reposted multiple times before the platform removed it.

The same account that called for Kassa’s arrest also appeared to celebrate the Fano, a notorious Amhara militia, for carrying out an extrajudicial killing. That post remains online.

Another account with over 28,000 followers posted an instructional video on how to use an AK-47, with a caption suggesting every Amhara should watch it. The post has been up since April and has nearly 300,000 views.

In September, a local media outlet published unproven allegations on Facebook that members of the ethnic Qimant minority were responsible for a shooting. That same day a government-aligned militia and mob attacked a Qimant village, looting and burning down homes. The post remains on Facebook.

Facebook continues to insist that it’s taking serious steps to clamp down on posts that violate its terms of service in Ethiopia, saying it has hired more moderation staff there to remove threatening material. “Over the past two years, we have actively focused and invested in Ethiopia, adding more staff with local expertise, operational resources, and additional review capacity to expand the number of local languages we support to include Amharic, Oromo, Somali, and Tigrinya,” the company told Vice. “We have worked to improve our proactive detection so that we can remove more harmful content at scale.”

But researchers told Vice that Facebook’s big talk is hollow: Moderation and fact-checking in Ethiopia, they say, is in fact handled by a “group of volunteers who send Facebook spreadsheets of posts to investigate and frequently have to explain to staffers why content on their platform is dangerous.”

“They completely lack context,” researcher Berhan Taye told Vice. “Every time we talk to them, they’re asking for context. That’s been a big issue—they don’t understand what’s happening in the country.”

The company also routinely ignores researchers when they point out violent or hateful content, telling them that the posts don’t violate Facebook policies.

“The reporting system is not working. The proactive technology, which is AI, doesn’t work,” Taye said.

If this sounds familiar, it should. When the Myanmar military used fake Facebook accounts to organize ethnic-cleansing violence against the Rohingya, Facebook allowed the posts to remain online until The New York Times published an account of the platform’s culpability in the genocidal violence. An independent fact-finding mission established by the United Nations Human Rights Council found that both the specific violence and the ethos that fostered it were spread readily on Facebook: “The Myanmar authorities have emboldened those who preach hatred and silenced those who stand for tolerance and human rights,” the report notes. “By creating an environment where extremists’ discourse can thrive, human rights violations are legitimized, and incitement to discrimination and violence facilitated.”

Facebook responded by taking down the accounts of multiple Myanmar military leaders, including Senior General Min Aung Hlaing, commander-in-chief of the Myanmar military. It also shut down numerous group pages and other networks focused on inciting anti-Rohingya violence, removing 484 pages, 157 accounts, and 17 groups in 2018 alone. However, these takedowns were not for their hateful content, but rather for “coordinated inauthentic behavior.”

This rationale is already familiar to Facebook users in the United States: When the company announced in 2020 that it was taking down large numbers of conspiracist QAnon pages and groups, it likewise did so because of “inauthentic behavior,” rather than because of their extremist content and disinformation. As a result, its pushback against the far-right cult—whose sin, in Facebook’s eyes, was not promoting hate and false information, but gaming Facebook—was a mere drop in the bucket.

Similarly, Facebook claimed that it was eager to fix what it could in Myanmar, but when the government of Gambia filed suit in international court against Myanmar over the Rohingya genocide and demanded access to the data Facebook had retained in its own investigation of the matter, the social-media giant balked, claiming the request was “extraordinarily broad,” as well as “unduly intrusive or burdensome.”

A federal judge in Washington last month ruled that Facebook must release the data. In response, the company complained that the judge’s order “creates grave human rights concerns of its own, leaving internet users’ private content unprotected and thereby susceptible to disclosure—at a provider’s whim—to private litigants, foreign governments, law enforcement, or anyone else.”

But this is a bogus rationale. As Matthew Smith of Harvard’s Carr Center for Human Rights Policy observed at Time:

Facebook might say it’s concerned about setting a dangerous precedent, but sharing information of genocidal intent through a U.S. federal court would seem to be precisely the “precedent” the company should want to set, i.e. to deter State actors from using its platform for criminal purposes. Not to mention that voluntarily complying with The Gambia’s request wouldn’t create any legal precedent, only an internal one at the company.

The Facebook model of engendering social chaos for profit has already had its effect in the United States, coming home to roost most dramatically at the Capitol on Jan. 6; the company’s own internal reports acknowledge that much of that day’s extremism and violence, including disinformation about the 2020 election and the siege of Congress itself, was spread and organized on Facebook.

Now, as the insurrection’s defenders’ longstanding talk of civil war and targeted violence against liberals (“When do we get to use the guns?”) ratchets higher on social media and in real life, it’s becoming clear that what happened in Myanmar can happen anywhere. Ethiopia is only the latest nation to suffer from Facebook’s lethal revenue-generation model.
 