
‘Game of Whac-a-Mole’: why Russian disinformation is still running amok on social media

Social media companies’ response amid war in Ukraine has been haphazard and confusing, experts say

Kari Paul
Wed 16 Mar 2022 05.00 GMT

People gather for the funeral of military servicemen in Lviv, Ukraine, on Tuesday. Photograph: Alessio Mamo/The Guardian


As the war in Ukraine rages on, Russia is ramping up one of its most powerful weapons: disinformation. Social media companies are scrambling to respond.

False claims about the invasion have been spread by users in Russia as well as official state media accounts. Russia frequently frames itself as an innocent victim and has pushed disinformation including claims that the US was providing biological weapons to Ukraine (denounced by the White House as a “conspiracy theory”) and that victims of an attack on a Ukrainian hospital were paid actors.

In response, companies including Meta, YouTube and Twitter have announced waves of new measures, spurred by pressure from the Ukrainian government, world leaders and the public.

But experts say the tech industry’s response has been haphazard and lacks the range and scope to tackle sophisticated disinformation campaigns. And even when policies exist, observers fear they are poorly and inconsistently enforced.

“By and large, platforms have responded to the challenge of state-backed disinformation campaigns by playing a futile game of Whac-a-Mole,” said Evan Greer, the deputy director of Fight for the Future, a digital rights non-profit group.

Tech companies have in some cases flagged misinformation as being state-sponsored or potentially false rather than removing it, or they have banned individual accounts rather than enacting more sweeping measures against mis- and disinformation.

When they have acted, it has often been too little, too late – YouTube, for example, finally took action on Friday against state-sponsored disinformation following weeks of pressure from human rights advocates, but not before that content was widely shared.

Opaque and inconsistent policies

One of the most recent examples of this on-the-fly approach was Meta’s surprising decision to make exceptions to its longstanding rule against calls for violence, allowing users on Instagram and Facebook in 12 eastern European and western Asian countries to call for death to Russian soldiers. Following some confusion, the company later clarified that users still cannot call for the death of Vladimir Putin or other leaders.

For many, the episode underscored just how opaque the platforms have been when it comes to decision-making on Russia-related policies.

And it’s not just new policies that have lacked cohesion. Facebook, Twitter, and YouTube already had rules about flagging state-sponsored content, but reports have found that these policies are often poorly enforced, allowing widely circulated posts featuring government propaganda to fall through the cracks.

A study released by the Center for Countering Digital Hate examined a sample of 3,593 recent articles posted by Russian state news sources and found Facebook was failing to label 91% of the posts as state-sponsored.

YouTube, for its part, has ramped up efforts to crack down on state-sponsored content, removing more than 1,000 channels and 15,000 videos. But the move came weeks after advocates called on the company to do so, and after videos spreading disinformation racked up thousands of views.

The biological weapons theory has proved a particular vulnerability for the platform. YouTube has not only failed to remove videos spouting these theories, a study from Media Matters for America found, but it has also profited from them through monetized channels.

“Despite their stated policies, many of these platforms are not labeling disinformation and propaganda appropriately, and that’s a big problem,” said Heidi Beirich, an expert on rightwing extremism at the non-profit Global Project Against Hate and Extremism.

At the root of the problem is that the core business model under which these platforms operate makes them ideal for manipulation and abuse, said Greer.

“Instead of calling for more aggressive platform-level censorship, we should focus on monopoly power and the way that big tech platforms are designed,” Greer said.

In other words, as long as these platforms value engagement over all else, there is little incentive to crack down on the lies and sensationalist content that generate a disproportionate amount of traffic.


Frances Haugen, the former Facebook employee turned whistleblower, said as much in her testimony to Congress in October 2021, noting that “there is no will at the top to make sure these systems are run in an adequately safe way”. She added: “Until we bring in a counterweight, these things will be operated for the shareholders’ interest and not the public interest.”

TikTok ‘overlooked’ as a misinformation vector

While Facebook, Twitter, and YouTube are facing increasing pressure to crack down on misinformation, TikTok has emerged as a new and often overlooked frontier.

As a relatively nascent platform, TikTok has seen rapid growth and little regulation. Researchers say that from the start of the invasion of Ukraine they saw how quickly misinformation spread on the platform.

“TikTok by its very design is meant to make it easy to splice videos, images, and sounds together, and that’s a useful tool for those attempting to push disinformation,” said Cindy Otis, a disinformation expert and author.

Some evidence suggests the disinformation may be coordinated by Russia. One study, published by Media Matters for America on 11 March, showed how more than 180 Russian influencers on the platform had participated in a concerted propaganda campaign to promote online support for Russia’s war, sharing hundreds of posts with the hashtag #RussianLivesMatter.

TikTok has been so central to the conflict that the Biden administration recently invited TikTok influencers to the White House to brief them on the realities of the war and involve them in the fight against disinformation.

TikTok shut down its services in Russia more than a week ago to avoid action from the Kremlin, but some influencers there appear to be posting despite the ban.

A spokeswoman for the company said it has policies in place to combat misinformation and that, in response to the war in Ukraine, the company expedited the rollout of its state media policy, which applies labels to Russia-controlled media accounts.

“We continue to respond to the war in Ukraine with increased safety and security resources to detect emerging threats and remove harmful misinformation,” said spokeswoman Jamie Favazza. “We also partner with independent fact-checking organizations to support our efforts to help TikTok remain a safe and authentic place.”

Misinformation on the rise within Russia

The tech response has had a mixed impact on users in Russia, where there is an increasing media blackout. Platforms including Netflix and TikTok have reduced their presence in the country voluntarily, while others such as Instagram have been blocked by the government.

This in turn has increased the Russian state’s monopoly over information, an alarming development in a country where dissent and unbiased news were already under threat.

It’s a situation that has long existed in other digitally closed-off countries, with serious consequences. Experts fear that a Russian shutoff of news outlets and social platforms, imperfect as they are, could further limit people’s ability to organize against state power or fight human rights abuses.

“Soon millions of ordinary Russians will find themselves cut off from reliable information, deprived of their everyday ways of connecting with family and friends and silenced from speaking out,” tweeted Nick Clegg, president of global affairs for Facebook’s parent company, Meta.

China, for example, has long been able to control the narrative on its treatment of Uyghur people, which governments including the US have described as a genocide. Social media companies have been accused of indirectly supporting the human rights abuses by allowing misinformation about them to be shared on their platforms – and in some cases even accepting advertising money from the governments behind the abuses.

“We have all looked the other way with what social media firms are enabling in China, and now we are seeing it happen again,” Beirich said.

“The big question for all of us is: what are the base values we should have around information?” she said. “This is showing why we want protected, free, and fair internet.”

This article was amended on 17 March 2022. An earlier version said Russia had reduced access to Netflix. In fact, Netflix voluntarily took action.