Wednesday, October 16, 2024

Millions Turn to Abusive AI ‘Nudify’ Bots on Telegram for Adult Content

In Short:

Kate Ruane of the Center for Democracy and Technology says that while major tech platforms have policies prohibiting the sharing of intimate images without consent, Telegram's rules are unclear. Critics point to Telegram's history of hosting harmful content. The company made some changes after its CEO's arrest, but deepfake bots continue to thrive on the platform. Activists say Telegram must take greater responsibility for protecting users rather than leaving them to fend for themselves.


Kate Ruane, director of the Center for Democracy and Technology’s free expression project, has indicated that most major technology platforms now have established policies that prohibit the nonconsensual distribution of intimate images. Many of the largest platforms have also agreed to principles aimed at tackling deepfakes. Ruane expressed her concerns regarding Telegram, stating, “I would say that it’s actually not clear whether nonconsensual intimate image creation or distribution is prohibited on the platform,” noting that Telegram’s terms of service lack the specificity found in those of other major technology platforms.

Concerns Regarding Content Moderation

Telegram has faced longstanding criticism from civil society groups over its approach to removing harmful content. The platform has been known to host scammers, extreme right-wing groups, and terrorism-related content. Following the August arrest in France of founder and CEO Pavel Durov in connection with a range of alleged offenses, Telegram has begun revising its terms of service and has started providing data to law enforcement agencies. However, the company did not respond to inquiries from WIRED about whether it specifically prohibits explicit deepfakes.

Facilitation of Deepfake Abuse

Ajder, a researcher who identified deepfake Telegram bots four years ago, remarked on the app’s unique positioning for deepfake abuse. “Telegram provides you with the search functionality, so it allows you to identify communities, chats, and bots,” Ajder stated. “It provides the bot-hosting functionality, so it’s a platform that offers the necessary tools. Then it’s also the place where you can share it and execute the harm in terms of the end result.”

In late September, several deepfake channels reported that Telegram had removed their bots. The exact reasons behind these removals remain unclear. On September 30, a channel with 295,000 subscribers claimed that Telegram had “banned” its bots, although it subsequently shared a new bot link with its users. (This channel was removed after WIRED made inquiries to Telegram.)

The Survivor Experience

Elena Michael, cofounder and director of #NotYourPorn, a campaign group advocating against image-based sexual abuse, expressed concerns regarding the challenges that platforms like Telegram pose. “One of the things that’s really concerning about apps like Telegram is that it is so difficult to track and monitor, particularly from the perspective of survivors,” she noted.

Michael described Telegram as “notoriously difficult” when it comes to discussing safety issues, but acknowledged some progress made by the company in recent years. Nevertheless, she insists that the platform should adopt a more proactive approach to content moderation. “Imagine if you were a survivor who’s having to do that themselves, surely the burden shouldn’t be on an individual,” Michael stated. “The burden should be on the company to implement measures that are proactive rather than reactive.”


