Single Diffusion Pass Can Strip AI Watermarks, Researcher Discovers
A new open-source tool called noai-watermark demonstrates that a single pass through a diffusion model can effectively remove invisible AI watermarks like SynthID, raising concerns about the durability of content authentication systems. The discovery, made by developer Mert İzcı, challenges the assumption that AI-generated images are reliably traceable.

In a breakthrough that could reshape the landscape of digital authenticity, a researcher has demonstrated that a single pass through a diffusion model is sufficient to strip invisible watermarks from AI-generated images, rendering tools like Google’s SynthID ineffective. Developed by Mert İzcı and released as an open-source project on GitHub, the tool—named noai-watermark—successfully removes digital fingerprints embedded by generative AI platforms such as Gemini and DALL-E, without visibly altering the image’s aesthetic quality. The implications are profound: if AI-generated content can be easily de-identified, the foundation of content provenance and copyright enforcement may be at risk.
Watermarking technologies like SynthID, StableSignature, and TreeRing were designed to be imperceptible to human eyes and resilient to routine edits made with tools like Photoshop. They were intended to survive screenshots, compression, and even minor cropping, making them a cornerstone of efforts by tech giants to combat misinformation and protect intellectual property. But İzcı's research reveals a critical vulnerability: applying a low-strength denoising diffusion step, even with minimal computational resources, can erase these embedded signals. In some cases, a secondary "CtrlRegen" mode enhances image fidelity while ensuring complete watermark removal.
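To make the mechanics concrete, the following is a minimal sketch of a single low-strength image-to-image diffusion pass. It is not İzcı's noai-watermark code; it only illustrates the general regeneration technique described above, assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint are available.

```python
# Minimal sketch of a single low-strength img2img diffusion pass.
# NOT the noai-watermark implementation; it only illustrates the general
# regeneration technique described above, assuming the Hugging Face
# `diffusers` library and a Stable Diffusion checkpoint are available.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("watermarked.png").convert("RGB")

# A low `strength` adds only a little noise before denoising, so the visible
# content is largely preserved while the fine pixel statistics (where
# invisible watermarks typically live) are re-synthesized by the model.
result = pipe(
    prompt="",            # no text guidance needed; we only want regeneration
    image=image,
    strength=0.2,         # the "low-strength" pass described in the article
    guidance_scale=1.0,
).images[0]

result.save("regenerated.png")
```

The key parameter is strength: a low value keeps the visible content close to the original while still regenerating the high-frequency detail that invisible watermarks rely on.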
"I built this for research and education," İzcı wrote in a Reddit post. "I wanted to understand how these systems work under the hood." His tool, available on GitHub, allows users to upload watermarked images and output clean versions that pass SynthID detection checks. The project has sparked intense debate among AI ethicists, digital forensics experts, and platform developers. While some praise the transparency and scientific rigor, others warn of potential misuse in disseminating unattributed synthetic media.
Google's SynthID, launched in 2023, was touted as a "robust" solution for identifying AI content. According to internal documentation, it embeds the watermark directly into pixel values using one neural network, with a paired detector model trained to recognize the mark even after common transformations. However, İzcı's work suggests these watermarks may be more fragile than advertised. Independent tests by AI researchers at Stanford and MIT have since replicated the results, confirming that diffusion-based obfuscation, particularly when applied at noise levels below 0.3, consistently degrades watermark integrity.
This development comes at a time when regulatory bodies are pushing for mandatory AI labeling. The EU’s AI Act and proposed U.S. legislation require platforms to disclose synthetic content. But if watermarks can be erased with a single click, compliance becomes a technical illusion. "We’re in a cat-and-mouse game," said Dr. Elena Ruiz, a digital forensics professor at UC Berkeley. "The tools to detect AI are only as strong as the assumptions they’re built on. This research proves those assumptions are flawed."
Despite the technical success, İzcı emphasizes his tool is not intended for malicious use. "It’s a stress test," he says. "We need to know where these systems fail before bad actors exploit them." The open-source nature of the project invites scrutiny from the broader AI community, potentially accelerating the development of next-generation watermarking techniques that are diffusion-resistant.
Meanwhile, companies like Adobe, OpenAI, and Stability AI are reportedly exploring hybrid approaches—combining watermarking with blockchain-based metadata, cryptographic signatures, and model fingerprinting. But until these systems are proven resilient to diffusion-based attacks, the authenticity of digital media remains in question.
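For contrast with in-pixel watermarks, here is a minimal sketch of the signature-based side of such a hybrid approach. It is not any vendor's actual provenance system; it simply shows, using the Python cryptography package, how a detached Ed25519 signature binds to the exact bytes of an image and therefore breaks if those bytes are regenerated or edited.

```python
# Minimal sketch of signature-based provenance, one of the "hybrid" ideas
# mentioned above. NOT Adobe's, OpenAI's, or Stability AI's actual system;
# it only shows how a detached cryptographic signature over the image bytes
# behaves, assuming the Python `cryptography` package is installed.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key pair held by the generator (e.g. the AI platform that created the image).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# In practice these would be the raw PNG/JPEG bytes of the generated image.
image_bytes = b"example image bytes"

# Signature produced at generation time, shipped alongside the file as metadata.
signature = private_key.sign(image_bytes)

# Anyone holding the public key can later check whether the bytes are untouched.
try:
    public_key.verify(signature, image_bytes)
    print("provenance intact")
except InvalidSignature:
    print("image bytes were modified after signing")

# A diffusion regeneration pass changes the bytes, so verification fails;
# unlike a watermark, though, the signature cannot be carried into the new image.
```

The trade-off runs opposite to watermarking: a detached signature only proves whether the exact signed bytes are intact, which is why these hybrid proposals pair it with fingerprinting and embedded marks rather than relying on it alone.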
As the line between human and machine creativity blurs, this discovery underscores a fundamental truth: trust in digital media must be built on more than invisible pixels. Without verifiable, tamper-proof provenance, even the most sophisticated watermarking systems risk becoming obsolete before they’re widely adopted.