AI-Generated Deepfakes Target Women in Congress, Exposing Digital Harassment Crisis

A disturbing new study reveals that approximately one in six women serving in Congress has been the victim of AI-generated sexually explicit deepfakes. The research, conducted by The American Sunlight Project (ASP), identified over 35,000 instances of nonconsensual intimate imagery targeting 26 members of Congress, 25 of them women.

The findings highlight an alarming gender disparity: female members of Congress were 70 times more likely than their male counterparts to be targeted by such attacks. The study examined 11 prominent deepfake websites, using a custom search engine to detect images of current members of Congress.

Neither political party affiliation nor geographic location influenced the likelihood of being targeted, though younger lawmakers faced increased risk. To protect privacy, the ASP did not disclose the names of the affected legislators but notified their offices and provided resources for handling online harassment.

The research raises serious concerns about women's participation in politics and public life. Studies show that 41% of young women aged 18 to 29 already self-censor to avoid online harassment and deception. Experts warn this technology could deter women from seeking public office or engaging in civic discourse.

Currently, no federal law establishes criminal or civil penalties for creating and distributing AI-generated nonconsensual intimate imagery. While roughly a dozen states have enacted related legislation, most impose only civil penalties.

The issue extends beyond politics, affecting high school students and private citizens. The FBI has warned that sharing such imagery of minors is illegal. The implications reach into national security, as these deepfakes could potentially be used for blackmail or geopolitical manipulation.

Two bills addressing the issue, the DEFIANCE Act and the Take It Down Act, have passed the Senate with bipartisan support but await House action. The DEFIANCE Act would allow victims to sue the creators and distributors of such content, while the Take It Down Act would criminalize the activity and require tech companies to remove deepfakes.

As technology advances, experts emphasize the urgent need for comprehensive legislation to protect individuals from this form of digital harassment. The study's findings serve as a stark reminder of the challenges women face in public service and the pressing need to address this growing threat.