Instagram's Failed Self-Harm Content Moderation Puts Teen Safety at Risk
• 1 min read
A Danish study reveals Instagram's failure to detect and remove self-harm content: none of 85 test images were taken down, despite Meta's claimed 99% removal rate. The platform's algorithm also actively connects users who engage with harmful content, raising serious concerns about teen safety and compliance with EU regulations.
AI-Generated Influencers: The Dark Side of Instagram's Digital Deception
• 1 min read
A disturbing wave of AI-generated accounts is plaguing Instagram, stealing and manipulating content from real creators. The investigation reveals hundreds of artificial accounts using deepfake technology to profit from stolen material while threatening legitimate creators' livelihoods.