The recent murder of UnitedHealthcare CEO Brian Thompson has triggered a wave of online misinformation and violent threats against other healthcare executives, revealing major gaps in social media content moderation.
Following Thompson's shooting in New York on December 4, social media platforms have been flooded with unfounded conspiracy theories and explicit calls for violence. Disinformation security firm Cyabra discovered hundreds of accounts on X (formerly Twitter) and Facebook spreading baseless claims, including false allegations about Thompson's wife and former House Speaker Nancy Pelosi's involvement in the murder.
High-profile social media influencers amplified these false narratives, with some posts receiving hundreds of millions of views. In one instance, an old video featuring a different Brian Thompson was misrepresented as showing the UnitedHealthcare CEO making admissions about working with Pelosi.
The incident has unleashed widespread anger toward health insurance companies, with many online users making direct threats against industry leaders. Threatening hashtags emerged, and multiple posts targeted CEOs of other major healthcare companies including Blue Cross Blue Shield and Humana.
"The danger here is clear: unchecked hate and disinformation online have the potential to spill over into real-world violence," warns Dan Brahmy, Cyabra's CEO.
In response to the heightened risk, US corporations are bolstering security for senior executives and encouraging them to minimize their digital footprints. Social media platforms, meanwhile, have moved in the opposite direction: X in particular has cut its content moderation teams, creating what researchers describe as an environment conducive to the spread of misinformation and hate speech.
The episode underscores the ongoing challenge of social media content moderation, which remains politically contentious in the United States: some see it as a necessary safeguard against harmful content, while others view it as censorship.