December 26, 2024

Facebook, Twitter, Instagram, YouTube, and TikTok have failed to act on nearly 90 per cent of the anti-Muslim and Islamophobic content reported to them, a new report alleges.

Research from the Centre for Countering Digital Hate (CCDH), published on Thursday, identified and reported 530 posts, viewed a combined 25 million times, that dehumanised Muslims through racist caricatures, conspiracy theories, and false claims.

These included Instagram posts that depicted Muslims as pigs and called for their expulsion from Europe, a comparison of Islam to a cancer that should be “treated with radiation”, overlaid on a photo of an atomic blast, and tweets claiming Muslim migration was part of a plot to change the politics of other countries, among many others.

Many of these posts carried offensive hashtags such as #deathtoislam, #islamiscancer and #raghead, which the CCDH used to identify content to report.

The CCDH reported 125 posts to Facebook, of which only seven were acted on; 227 to Instagram, of which only 32 were acted on; 50 to TikTok, of which 18 were acted on; 105 to Twitter, of which only three were acted on; and 23 videos to YouTube, none of which were acted on.

Facebook also hosted numerous groups dedicated to Islamophobia, with names such as “ISLAM means Terrorism”, “Stop Islamization of America”, and “Boycott Halal Certification in Australia”. Many of these groups had thousands of members, 361,922 in total, predominantly in the UK, US, and Australia. At the time of writing, all of these groups remained online despite being reported to Facebook.

Researchers also identified 20 posts featuring the Christchurch terrorist, of which just six were acted upon, despite Facebook, Instagram and Twitter having made public commitments to remove terrorist and extremist content.

The shooter also published a 74-page manifesto railing against Muslims and immigrants, which quickly spread online.

At the time, Facebook said it had removed 1.5 million videos of the New Zealand mosque attacks in the first 24 hours following the mass shootings.

The video, which was livestreamed on Facebook, was viewed about 4,000 times before it was taken down, and social media sites struggled to remove reuploaded footage.

Many of the uploaders made small modifications to the video, such as adding watermarks or logos to the footage or altering the size of the clips, to defeat YouTube’s ability to detect and remove it.

Facebook’s community standards forbid “a direct attack against people on the basis of… race [or] ethnicity”, as do Instagram’s. Twitter states that users “may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity [and] national origin”. YouTube states that “hate speech is not allowed on YouTube”, and TikTok’s guidelines state: “We do not permit content that contains hate speech or involves hateful behavior, and we remove it from our platform.”

The Independent has contacted all of the social media companies named in the report for comment.

“We welcome this report, which shines an important light on the unacceptable abuse many Muslims receive online every day. Social media companies have to do more to take meaningful action against all forms of hatred and abuse their users experience online”, Kemi Badenoch, the minister for communities and equalities, said in a statement.

Racism against Muslims is not the only hate speech to have slipped through social media companies’ moderation nets. In a report from October 2020, The Independent found that antisemitic conspiracy theories were still receiving millions of views on TikTok, despite the platform banning misinformation about Jewish people.

“We’ve always been open about the fact that we won’t catch every instance of inappropriate content or account activity, and we recognise that we have more to do to meet the standards we have set for ourselves today. This is why we continue to invest at scale in our Trust and Safety operations, which includes both technologies and a team of thousands of people around the world,” TikTok said at the time.

In the same year, researchers found that Facebook posts and pages spreading fascism were being “actively recommended” by its algorithm. In response, Facebook said it was updating its hate speech policies.