Instagram's Failed Self-Harm Content Moderation Puts Teen Safety at Risk

A troubling new study reveals that Instagram is inadvertently facilitating the spread of self-harm content among teenagers while falling short of its content moderation promises.

Danish research organization Digitalt Ansvar ran a month-long experiment in which it created a private self-harm network on Instagram, with profiles of users as young as 13. Over the month, the researchers shared 85 pieces of increasingly explicit self-harm content, including images of blood and razor blades.

Despite Meta's claim that it removes 99% of harmful content before it is reported, none of the test images were taken down during the study period. A basic AI tool the researchers built themselves identified 38% of the self-harm images and 88% of the most severe content, suggesting that effective detection technology exists but is not being properly deployed by Instagram.
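The article does not describe how the researchers' tool works, but as a purely illustrative sketch, a "basic" image flagger of this kind could be as simple as a pretrained ResNet-18 fine-tuned for binary classification with PyTorch and torchvision. The data/train folder layout, the "flagged"/"benign" labels, and the training settings below are hypothetical assumptions for illustration, not details from the study.

```python
# Illustrative sketch only: a generic binary image classifier of the kind a
# "basic AI tool" for flagging self-harm imagery might resemble. Paths,
# labels, and hyperparameters are hypothetical, not the researchers' setup.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing expected by the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/benign/*.jpg and data/train/flagged/*.jpg
# ImageFolder assigns labels alphabetically, so benign=0 and flagged=1.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Swap the final layer for a single-logit head used for binary flagging.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):  # short fine-tuning run, purely for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        logits = model(images).squeeze(1)
        loss = criterion(logits, labels.float())
        loss.backward()
        optimizer.step()

# At inference time, images whose sigmoid score exceeds a chosen threshold
# would typically be queued for human review rather than removed outright.
```

The point of the sketch is that the building blocks are off the shelf; the study's finding is that even a modest classifier along these lines caught a substantial share of the test images that Instagram's own systems left up.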

The study also found that Instagram's algorithm helped the self-harm network expand by recommending connections between users engaging with the content. After a 13-year-old test account connected with one member of the group, it was prompted to befriend all the other members.

Ask Hesby Holm, CEO of Digitalt Ansvar, expressed shock at the findings: "We thought we would hit the threshold where AI would recognize these images. But big surprise – they didn't."

Psychologist Lotte Rubæk, a former member of Meta's expert group, noted that the platform's failure to remove harmful content is actively triggering vulnerable young users, particularly women and girls, to harm themselves. She described the situation as "a matter of life and death for young children and teenagers."

Meta responded that content encouraging self-injury violates its policies, stating that it removed over 12 million pieces of suicide and self-injury content on Instagram in early 2024. The company also highlighted new Teen Account settings designed to limit exposure to sensitive content.

However, the study suggests these measures may be insufficient, particularly in small private groups where moderation appears notably lacking. The findings raise serious concerns about Instagram's compliance with the EU's Digital Services Act, which requires digital services to identify and mitigate risks to users' physical and mental wellbeing.