
Study finds YouTube’s algorithms regularly promote content that violates content guidelines

Brexiter

Social media companies like YouTube—as we saw when the video platform suspended and then reinstated Right Wing Watch’s channel—are prisoners of their own algorithms, which are the keys to their profits. Those algorithms, however, are also proving to be the primary drivers of disinformation, conspiracist far-right ideologies, and the online radicalization that results—which means those profits come at the expense of democratic discourse and a stable society.

A recent study demonstrates that YouTube’s recommendations—which send users to videos the algorithm believes the viewer will like—are in fact promoting videos that violate the company’s content policies, including hate speech and disinformation. In many cases, the platform is recommending content that has little or no relation to the video that was watched previously. And the company has made clear it has no intention of changing things.

The study by the Mozilla Foundation, titled “YouTube Regrets,” is based on a web-browser extension the foundation created in 2019 for Firefox and Chrome users called RegretsReporter, which allows users to easily donate data about the videos they regretted watching on YouTube. Some 37,380 people signed up; the study was based on the data they contributed—some 3,362 reports from 1,662 volunteers—between July 2020 and May 2021.

The study’s primary findings:

  • Most of the videos people regret watching—some 71%—come from recommendations.
  • YouTube recommends videos that violate its own policies.
  • Non-English speakers are hit the hardest.
  • YouTube Regrets can alter people’s lives forever.

At the center of the problem, the study found, was YouTube’s algorithm:

  • Recommended videos were 40% more likely to be regretted than videos searched for.
  • Several Regrets recommended by YouTube’s algorithm were later taken down for violating the platform’s own Community Guidelines. Around 9% of recommended Regrets have since been removed from YouTube, but only after racking up a collective 160 million views.
  • In 43.6% of cases where Mozilla had data about videos a volunteer watched before a regret, the recommendation was completely unrelated to the previous videos that the volunteer watched.
  • YouTube Regrets tend to perform extremely well on the platform, with reported videos acquiring 70% more views per day than other videos watched by volunteers.

Despite that, YouTube’s blog boasts: “We set a high bar for what videos we display prominently in our recommendations on the YouTube homepage or through the 'Up next' panel.” Yet the Mozilla study confirmed that YouTube removed some videos for violating its Community Guidelines after it had previously recommended them.

"That is just bizarre," Brandi Geurkink, Mozilla's senior manager of advocacy and coauthor of the study, said in an interview with CNET. "The recommendation algorithm was actually working against their own like abilities to...police the platform."

Since YouTube itself has indicated it is content with the status quo, the study made several recommendations for policymakers to take action, most notably:

  • Require YouTube to release “adequate public data about their algorithm and create research tools and legal protections that allow for real, independent scrutiny of the platform.”
  • Similarly, require YouTube and other companies to publish audits of their algorithm and “give people meaningful control over how their data is used for recommendations, including allowing people to opt-out of personalized recommendations.”

“YouTube needs to admit their algorithm is designed in a way that harms and misinforms people,” Geurkink said in a press release. “Our research confirms that YouTube not only hosts, but actively recommends videos that violate its very own policies. We also now know that people in non-English speaking countries are the most likely to bear the brunt of YouTube’s out-of-control recommendation algorithm.”

As the fiasco with Right Wing Watch’s channel demonstrated, these companies largely hide behind a dictum that treats all political speech as functionally equal, which buries the toxic effects of hate speech and conspiracism within a one-size-fits-all algorithmic approach to enforcing their rules. But behind their steadfast refusal to address those algorithms lies a powerful bottom line: These companies’ revenue models are built on serving up as much of this content as possible.

The top priority at YouTube, as Mark Bergen at Bloomberg News explored in 2019, is “Engagement”: getting people to come to the site and stay there, measured in views, time spent watching, and interactions. Moderating extremist content is often deprioritized when it interferes with that goal.

"Scores of people inside YouTube and Google, its owner, raised concerns about the mass of false, incendiary and toxic content that the world’s largest video site surfaced and spread,” Bergen reported. “Each time they got the same basic response: Don’t rock the boat."

The company announced early in 2019 that it intended to crack down on conspiracy content. However, part of its problem is that YouTube itself created a huge market for these crackpot and often harmful theories by unleashing an unprecedented boom in conspiracism. And that same market is now where it makes its living.

The result has been a steady, toxic bloom of online radicalization, producing an army of “redpilled” young men disconnected from reality and caught up in radical-right ideology. As The New York Times noted in 2019:

[C]ritics and independent researchers say YouTube has inadvertently created a dangerous on-ramp to extremism by combining two things: a business model that rewards provocative videos with exposure and advertising dollars, and an algorithm that guides users down personalized paths meant to keep them glued to their screens.

A 2018 study by Bellingcat researchers found that YouTube, in fact, was the No. 1 factor in fueling that radicalization:

15 out of 75 fascist activists we studied credited YouTube videos with their red-pilling. In this thread, a group of white supremacists debate with a “civic nationalist” who says he won’t judge an entire race by the actions of a few. It is suggested that he watch a video by American Renaissance, a white supremacist publication. The video, “What the Founders Really Thought About Race,” is essentially a history lesson about why the U.S. founding fathers thought race-mixing was bad.

The study also noted: “Fascists who become red-pilled through YouTube often start with comparatively less extreme right-wing personalities, like Ben Shapiro or Milo Yiannopoulos.”

A more recent study by researchers at Raditube found that YouTube’s moderation efforts, such as they are, systematically fail to catch problematic content before it goes viral. This means that even when such content is found and removed, it continues to circulate, taking on a second life on other channels and platforms.

YouTube’s claims to be cleaning up its act notwithstanding, it ultimately remains one of the biggest reservoirs of toxic misinformation and hate speech on the internet, and a powerful engine in the spread of far-right ideologies and activism.

Moreover, the company continues to insist it has the problem in hand. It told CNET that its own surveys find users are satisfied with its recommendations, which generally direct people to authoritative or popular videos. YouTube cannot properly review Mozilla's definition of "regrettable" or the validity of its data, the company added, and it noted that it works constantly to improve its recommendations, including 30 changes in the past year to reduce recommendations of harmful videos.

YouTube asserted to the Daily Dot that recent changes have decreased the number of times “borderline content” is viewed by users through recommendations.

“We constantly work to improve the experience on YouTube and over the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content,” it said. “Thanks to this change, consumption of borderline content that comes from our recommendations is now significantly below 1%.”
 