Study Reveals Major Social Media Platforms’ Failures in Moderating Suicide and Self-Harm Content

Molly Russell ended her life at age 14 after viewing harmful content on social media.

A study reveals that major social media platforms are struggling to effectively identify and remove dangerous suicide and self-harm content. The Molly Rose Foundation found that out of over 12 million content moderation decisions made by six leading platforms, more than 95% of the removals were handled by just two sites—Pinterest and TikTok. The other platforms examined in the report were Facebook, Instagram, Snapchat, and X, formerly known as Twitter.

The foundation criticized most platforms for their “inconsistent, uneven, and inadequate” responses to harmful content. It pointed out that Meta’s Instagram and Facebook each accounted for only 1% of all detected suicide and self-harm content, while X was responsible for just 700 decisions.

The foundation warned that the Online Safety Act falls short in addressing the systemic failures in content moderation by social media companies. Ian Russell, the foundation’s chairman, urged the government to commit to a new Online Safety Bill to strengthen regulation. The Molly Rose Foundation was established by Mr. Russell and his family in memory of his daughter Molly, who tragically took her life at age 14 in 2017 after viewing harmful content online.

“Nearly seven years after Molly’s death, it’s shocking to see major tech companies remain inactive while young lives are at risk,” Mr. Russell said.

“As recent weeks have shown, much more ambitious regulation is urgently needed. It’s time for the new government to complete the work and commit to a stronger Online Safety Act.

“Parents across the country will be rightly outraged that platforms like Instagram and Facebook offer empty promises while continuing to expose children to preventable harm. Assertive action is clearly required.”

The report also highlighted that social media sites frequently fail to detect harmful content in the most vulnerable areas of their services. For example, only one in 50 suicide and self-harm posts detected by Instagram was a video, despite Reels now accounting for half of all time spent on the app.

The study further accused platforms of not enforcing their own rules, noting that while TikTok detected nearly three million pieces of suicide and self-harm content, it suspended only two accounts.

The research drew on content moderation decisions made in the EU, which platforms are required to make publicly accessible, and prompted a response from Meta. A spokesperson said: "Content that encourages suicide and self-injury violates our rules. We do not believe the statistics in this report accurately reflect our efforts. In the past year alone, we removed 50.6 million pieces of such content on Facebook and Instagram globally, with 99% of it actioned before being reported to us. However, in the EU, we are currently unable to deploy all of the measures that are active in the UK and the rest of the world."
