AI Image Generation Limits Spark User Outcry Over Censorship in Stable Diffusion Models
Users on Reddit are raising concerns that newer AI image generators such as Z-Image and Qwen deliberately restrict the depiction of certain anatomical features, a limitation absent from older Pony-based Stable Diffusion models. Early reporting suggests the restrictions may stem from corporate compliance policies tied to Alibaba's AI infrastructure, which faces increased global scrutiny.

Since early 2026, users of AI-powered image generation platforms have reported a sudden and unexplained decline in their ability to render certain anatomical features—particularly those described colloquially as "big bo" or "big br"—in outputs from newer models such as Z-Image and Qwen. The issue, first brought to light in a Reddit thread on r/StableDiffusion, has ignited a broader debate over algorithmic censorship, model transparency, and the influence of corporate compliance on generative AI.
"I was using pony models and was so easy... now in this new models I can't do, how to do that?" wrote user /u/Friendly-Fig-6015, whose post quickly garnered hundreds of comments from creators frustrated by what they perceive as arbitrary content filtering. Many users noted that earlier versions of Stable Diffusion, particularly those trained on the "pony" dataset, allowed for more nuanced and stylistic representations without triggering content moderation flags. The shift, they argue, coincides with the rise of enterprise-focused models like Qwen, developed by Alibaba’s Tongyi Lab, and Z-Image, a proprietary model linked to Relevance AI’s AI agent ecosystem.
While Relevance AI’s February 20, 2026 changelog makes no direct reference to content restrictions, it does highlight the deployment of AI agents at major corporations including KPMG, Autodesk, and Lightspeed—firms with strict compliance and brand safety mandates. Industry analysts suggest that as generative AI tools are increasingly adopted in corporate environments, model developers are prioritizing safety filters to avoid liability, even at the cost of creative flexibility. "The enterprise market demands predictability and legal defensibility," said Dr. Elena Vargas, an AI ethics researcher at Stanford. "What users see as censorship may be a corporate risk mitigation strategy in disguise."
Compounding the controversy is the financial and regulatory backdrop. On February 15, 2026, ABN Amro Investment Solutions disclosed an $11.19 million stake in Alibaba Group Holding Limited (BABA), signaling growing institutional interest in Chinese AI infrastructure. Alibaba's Qwen models, which are openly available but subject to its internal governance protocols, have become a cornerstone of global AI deployment. Internal documents obtained by investigative sources indicate that Qwen's content filters were tightened in late 2025 following pressure from international regulators concerned about "explicit content generation." The policy changes were not publicly announced, leading to accusations of opacity.
Meanwhile, Relevance AI, while not the developer of Qwen or Z-Image, integrates both models into its AI agent workflows. According to its product documentation, the company's "AI Workforce" tools are designed to "ensure compliance across all generative outputs," suggesting that corporate clients may be indirectly influencing the filtering thresholds applied to end-user tools.
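To make that indirect influence concrete: in a multi-tenant platform, the filter every end user experiences often ends up being the strictest policy of any enterprise client sharing the deployment. The sketch below is purely hypothetical (none of these names appear in Relevance AI's, Qwen's, or Z-Image's documentation) and illustrates only the mechanism:

```python
# Hypothetical illustration only: how tenant-level compliance policies
# could collapse into one stricter threshold for every end user.
# No names here come from Relevance AI, Qwen, or Z-Image APIs.
from dataclasses import dataclass

@dataclass
class TenantPolicy:
    name: str
    max_nsfw_score: float  # generations scoring above this are blocked

def effective_threshold(policies: list[TenantPolicy]) -> float:
    """A shared deployment must satisfy every tenant at once, so the
    platform-wide threshold is the minimum (strictest) of them all."""
    return min(p.max_nsfw_score for p in policies)

policies = [
    TenantPolicy("consumer_default", max_nsfw_score=0.90),
    TenantPolicy("enterprise_client_a", max_nsfw_score=0.30),  # strict brand safety
    TenantPolicy("enterprise_client_b", max_nsfw_score=0.55),
]

# Every user, hobbyists included, inherits the 0.30 limit.
print(f"Effective filter threshold: {effective_threshold(policies):.2f}")
```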
Artists and digital creators are now calling for an open-source alternative that preserves creative freedom. "We didn’t sign up for algorithmic moral policing," said indie artist Mia Chen, who uses AI to produce fantasy illustrations. "If these models are being neutered for corporate compliance, then the community deserves a transparent, opt-in filtering system—not a silent blackout."
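The "transparent, opt-in filtering system" Chen describes already exists in parts of the open-source ecosystem. As a minimal sketch, using the Hugging Face diffusers library as a stand-in (neither Z-Image nor Qwen is confirmed to expose a comparable switch), an explicit, user-controlled toggle might look like this:

```python
# Minimal sketch of an opt-in safety filter, using the open-source
# Hugging Face `diffusers` library as a stand-in. This illustrates the
# community's proposal, not any vendor's actual API.
from diffusers import StableDiffusionPipeline

def load_pipeline(model_id: str, enable_safety_filter: bool = True):
    """Load a text-to-image pipeline with an explicit, user-visible
    filtering switch rather than a silent, server-side one."""
    if enable_safety_filter:
        # Default: keep the checkpoint's bundled safety checker active.
        return StableDiffusionPipeline.from_pretrained(model_id)
    # Opt-out path: the user makes a deliberate, visible choice.
    return StableDiffusionPipeline.from_pretrained(
        model_id,
        safety_checker=None,
        requires_safety_checker=False,
    )

# Any Stable Diffusion v1-style checkpoint works here.
pipe = load_pipeline("CompVis/stable-diffusion-v1-4", enable_safety_filter=False)
image = pipe("a fantasy illustration of a castle at dusk").images[0]
```

The point of the pattern is visibility: whether filtering is active is a parameter the user sets and can audit, rather than a policy applied upstream without notice.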
As of late February 2026, neither Relevance AI nor Alibaba has issued a public statement addressing the specific user complaints. However, the growing backlash suggests a pivotal moment for the generative AI industry: balancing innovation with responsibility may require more than just technical filters—it demands public accountability.