Deepfake 'Nudification' Technology Reaches Dangerous Proportions
AI-powered 'nudify' tools can generate high-quality explicit videos from a single photo. Experts warn that this technology is industrializing and normalizing sexual abuse against women and children.
AI-powered deepfake technology is becoming increasingly realistic and dangerous, particularly through 'nudify' tools. According to WIRED's investigation, dozens of websites, bots, and applications built for this purpose can quickly transform a single photo into realistic, explicit videos. These paid services offer users sexual scenario templates that go far beyond simple 'clothing removal' videos.
From Grok to Telegram: An Industrialized Harm Ecosystem
It was previously reported that Grok, the chatbot developed by Elon Musk's xAI, was used to produce thousands of non-consensual 'nudified' images. However, this is just the tip of the iceberg. Experts point to a broad ecosystem that produces far more explicit content and even enables the creation of child sexual abuse material (CSAM).
The European Union has decided to investigate the X platform over the sexualized deepfakes produced by Grok. Deepfake expert Henry Ajder describes the situation as a 'societal scourge' and says these tools could be generating millions of dollars in revenue annually.
A Single Photo, Dozens of Scenarios
The capabilities of these services have expanded significantly over the past year. Almost all of the more than 50 deepfake sites examined by researchers now offer high-quality video generation and list dozens of sexual scenarios into which women can be placed. Dozens of Telegram channels and bots regularly announce software updates, such as new poses and positions.
Independent analyst Santiago Lakatos notes that these services use the infrastructure of large technology companies, which may have profited significantly in the process. Lakatos adds, "It's no longer just about 'undressing someone'; all the fantasy versions of it are now offered. There are different positions, even versions that can make someone appear pregnant."
Victims and Inadequate Legal Protection
The victims and survivors of non-consensual intimate imagery (NCII) are almost always women and girls. Stephen Casper, an AI safety researcher at MIT, says this ecosystem is built on the back of open-source models.
Pani Farvid, an associate professor of psychology at The New School, emphasizes that society does not take violence against women seriously, regardless of the form it takes. An Australian study identified four main motivations for deepfake abuse: sexual extortion (sextortion), harming others, gaining peer approval and connection, and curiosity about what the tools can do.
Bruna Martins dos Santos from the human rights organization Witness states that some communities using these tools exhibit an 'indifferent' or casual attitude towards the harm they cause. For some perpetrators, this technology is about power and control.
Meanwhile, although some initiatives have emerged, such as AI chatbots beginning to verify users' ages, legal regulations against the production of harmful content are being enforced too slowly or not at all. Even employees within the technology sector are pressing their executives to act on the industry's ethical responsibilities.