Popular messaging platform Telegram announced it has blocked 15.4 million groups and channels in 2024 as part of its expanded efforts to combat harmful content on its platform.
According to details shared on Telegram's new moderation page, the company is increasingly using artificial intelligence tools to identify and remove content that violates its Terms of Service. The platform now blocks tens of thousands of groups and channels daily.
In its fight against child sexual abuse material (CSAM), Telegram blocked 705,688 groups and channels in 2024. The company has used hash-matching databases to detect such content since 2018 and has strengthened its partnerships with organizations like the Internet Watch Foundation to improve detection.
The platform also removed 129,986 terrorist-related communities in 2024. Through its collaboration with ETIDAL, the Global Center for Combating Extremist Ideology, Telegram reports removing over 100 million pieces of terrorist content since 2022.
This announcement comes amid increased scrutiny of the platform, particularly in Europe. In August 2024, Telegram's founder Pavel Durov was arrested in France on charges alleging that the platform's insufficient moderation made it complicit in criminal activity. He was released on €5 million bail but must remain in France and report to police twice weekly.
Durov defended Telegram's moderation efforts while acknowledging the challenges of monitoring a platform with over 900 million active users. He highlighted the company's appointment of an EU compliance officer and questioned the approach of holding tech founders personally responsible for platform misuse.
Industry experts suggest these enhanced moderation efforts reflect growing regulatory pressure on social platforms. While acknowledging Telegram's progress in using AI tools, cybersecurity specialists note that the volume of harmful content indicates ongoing challenges in content moderation.
The platform's latest transparency measures underscore the mounting pressure on tech companies to balance user privacy with effective content moderation as regulators worldwide demand greater accountability.