TensorArt and TensorHub Quietly Disable User-Uploaded LoRAs, Sparking Outrage
Users report being locked out of their own LoRA models on TensorArt and TensorHub, with no warning or explanation. The unannounced disabling of user-uploaded fine-tuned models has ignited concerns over platform governance and intellectual property rights in the generative AI community.

Since early 2024, a growing number of Stable Diffusion users have reported being abruptly locked out of LoRA models they personally uploaded to TensorArt and its affiliated platform, TensorHub. The issue, first brought to light in a Reddit thread on r/StableDiffusion, reveals that while users can still see their uploaded LoRAs listed in their profiles, clicking on them leads to error messages or blank pages, effectively rendering the models inaccessible. The affected models span a wide spectrum, from character-specific and artistic-style LoRAs to celebrity-inspired weights, all created and shared by individual artists and developers.
According to the original report by user /u/Trapdaar, the problem is not isolated to a single account or model type. Multiple users have since corroborated the issue in the comments section, describing similar experiences across different upload dates and model categories. None received prior notification, nor were they provided with explanations for the sudden deactivation. This lack of transparency has raised alarms about the ethical and legal boundaries of platform control over user-generated AI content.
LoRAs (Low-Rank Adaptation weights) are compact add-ons used to fine-tune large generative AI systems like Stable Diffusion. Rather than retraining the full network, a LoRA stores low-rank updates to a subset of the base model's weight matrices, so it needs minimal storage and computational power, making it ideal for community sharing. Many artists rely on platforms like TensorArt and TensorHub to distribute their creations, often for free, to foster innovation and collaboration. For some, these LoRAs represent months of iterative training and creative labor, and in some cases the sole source of recognition or income within the AI art ecosystem.
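To illustrate why LoRA files are so small relative to the models they modify, here is a minimal PyTorch sketch of the low-rank update; the class name, rank, and layer sizes are illustrative assumptions, not details of any particular platform's implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    W' = W + (alpha / r) * B @ A, where B is (out x r) and A is (r x in).
    Only A and B are trained and shared, which is why LoRA files are small."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the base model's weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# A 4096x4096 layer holds ~16.8M weights; a rank-8 LoRA adds only
# 2 * 8 * 4096 = 65,536 trainable parameters on top of it.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 65536
```

The rank `r` is the dial that trades expressiveness against file size, which is why a LoRA capturing months of training can still fit in a few tens of megabytes.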
The silence from TensorArt’s official channels has only deepened distrust. As of this report, neither TensorArt nor TensorHub has issued a public statement addressing the issue. This absence of communication stands in stark contrast to the platforms’ earlier marketing, which emphasized user ownership and creative freedom. Industry observers note that while platforms retain terms-of-service rights to moderate content, unilaterally disabling user-uploaded models, especially ones that violate neither copyright nor community guidelines, amounts to a breach of implicit trust.
Legal experts suggest that users may have copyright-based claims to their trained LoRAs, particularly where the models embody original creative input, though the legal status of model weights remains unsettled. Notably, the Digital Millennium Copyright Act (DMCA) governs removals for alleged infringement through a notice-and-takedown process, and no infringement notices appear to be involved here; disabling content with no notice, stated reason, or avenue of appeal sits outside any such framework. Some affected users have begun archiving their LoRA files locally and migrating to alternative hosts such as Hugging Face and Civitai, whose published policies at least make moderation decisions more transparent.
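For creators taking the archiving route, the sketch below shows one way to push a local copy of a LoRA to a private Hugging Face repository using the huggingface_hub client; the repository name and file path are hypothetical placeholders, not details from the report.

```python
# A minimal backup sketch using the huggingface_hub client library.
from huggingface_hub import HfApi

api = HfApi()  # reads the token saved by `huggingface-cli login` or HF_TOKEN

# Create a private model repo to hold the archived weights (no-op if it exists).
api.create_repo(repo_id="your-username/lora-backup", private=True, exist_ok=True)

# Upload the local LoRA file so at least one copy lives outside the platform.
api.upload_file(
    path_or_fileobj="my_style_lora.safetensors",
    path_in_repo="my_style_lora.safetensors",
    repo_id="your-username/lora-backup",
)
```

Keeping the repository private preserves the creator's control over distribution while still guarding against a single host disabling access.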
The incident highlights a broader tension in the generative AI space: the power imbalance between centralized platforms and the decentralized creators who fuel them. As AI tools become more accessible, the infrastructure that hosts user content must evolve to respect contributor rights — not just monetize them. Without clear policies, notice mechanisms, and appeal processes, platforms risk alienating the very communities that make them valuable.
For now, affected creators are left in limbo: their work visible but unusable, their contributions disabled without recourse. The silence from TensorArt speaks volumes. In an industry built on collaboration, the most dangerous algorithm may be the one that operates without accountability.