Google and Apple Host Dozens of 'Nudify' Apps

A new study reveals that Apple and Google have been hosting dozens of AI-powered 'nudify' apps in their stores, even though the apps violate both companies' policies.

Applications Downloaded Over 705 Million Times

A study published by the Tech Transparency Project (TTP), a group affiliated with Harvard University's Kennedy School of Government, raises serious questions about content moderation in the app stores of the two tech giants. According to the research, dozens of applications (55 on the Google Play Store and 47 on the Apple App Store) use artificial intelligence to digitally remove clothing from people in photos, producing nude or nearly nude images.

Findings based on data from the app analytics firm AppMagic show that the identified apps have been downloaded more than 705 million times worldwide and have generated $117 million in revenue. Because Google and Apple take a cut of that revenue, both companies profit directly from these apps.

Company Reactions and Removal Moves

Both tech giants responded to the TTP report. Apple told CNBC that it had removed 28 of the apps cited in the report, while Google said its investigation is ongoing and that it has suspended "several apps." Still, the fact that both companies hosted these apps despite their own policies appears to contradict their stated commitments to content moderation.

The TTP report concludes that both app stores must do more to prevent non-consensual deepfake content. "Despite Google and Apple claiming to be committed to user safety and security, they host a range of apps that can turn an innocent photo of a woman into an abusive, sexualized image," the report states.

The Deepfake Content Problem in the AI Age

The report follows investigations in several countries into Elon Musk's xAI, whose chatbot Grok has allegedly generated non-consensual sexualized images. Independent research has shown that Grok lacks basic safeguards against deepfake content, underscoring once again the importance of independent, transparent oversight of AI systems.

Researchers claim that over an 11-day period between December 29 and January 8, Grok generated more than 3 million sexualized images, including more than 20,000 that appeared to depict children. As AI technologies proliferate, sexually explicit deepfake material is expected to remain a significant challenge for tech companies. Regulatory efforts in this area, such as those underway in the UK, also raise complex ethical and legal questions.
