Can You Train LoRAs on Unsupported Stable Diffusion Models Like Pony or Illustrious?
A growing community of AI artists is pushing the boundaries of Stable Diffusion by training LoRAs on unsupported base models such as Pony and Illustrious. While official toolkits like AI Toolkit don't natively support these models, technical workarounds and community-driven methods are making it possible.

As the generative AI landscape evolves, enthusiasts and independent developers are increasingly experimenting with custom training techniques for Stable Diffusion, particularly Low-Rank Adaptation (LoRA), on base models not officially supported by mainstream toolkits. One pressing question in the AI art community is whether platforms like AI Toolkit can be adapted to train LoRAs on models such as Pony, Illustrious, or other custom checkpoints outside their curated lists. While the official documentation does not currently list these models as compatible, technical analysis and community experimentation suggest a viable, albeit unofficial, pathway.
LoRAs are lightweight neural network adapters designed to fine-tune large pre-trained models like Stable Diffusion without retraining the entire architecture. They let users customize outputs, such as specific artistic styles, character designs, or visual aesthetics, with minimal computational resources. In practice, training a LoRA requires access to the base model's architecture, weights, and a compatible tokenizer; nothing in the method itself ties it to a particular checkpoint. The limitation in tools like AI Toolkit is therefore not a fundamental technical barrier, but a software constraint imposed by user interface design and model compatibility filters.
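To make that prerequisite concrete, here is a minimal sketch using the diffusers and peft libraries, assuming a checkpoint already converted to the Diffusers directory layout; the path, rank, and target modules are illustrative, not a prescription from AI Toolkit:

```python
from diffusers import UNet2DConditionModel
from peft import LoraConfig

BASE_DIR = "converted/pony-v6-diffusers"  # hypothetical: a Diffusers-format checkpoint

# Training a LoRA presupposes the base model's architecture and weights load cleanly.
unet = UNet2DConditionModel.from_pretrained(BASE_DIR, subfolder="unet")
unet.requires_grad_(False)  # the base weights stay frozen during LoRA training

# The LoRA itself is a set of small low-rank matrices injected into the
# attention projections; only these adapter weights are trained.
lora_config = LoraConfig(
    r=16,                     # adapter rank (illustrative value)
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
unet.add_adapter(lora_config)  # attach trainable LoRA layers via PEFT
```

If the base checkpoint loads and accepts adapters this way, the remaining obstacles are tooling conventions rather than anything inherent to the model.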
Users seeking to train LoRAs on unsupported models have begun employing a multi-step workaround. First, they manually download the base model (e.g., Pony or Illustrious) from repositories such as CivitAI or Hugging Face. Next, they convert the checkpoint into a format compatible with the underlying training engine (typically Diffusers or raw PyTorch) used by AI Toolkit. Because both Pony and Illustrious derive from SDXL, this usually means converting the single-file .safetensors checkpoint into the Diffusers directory layout and validating the model's configuration files, as sketched below. Once the backend recognizes the model, users can bypass the UI restrictions by pointing a configuration file at the converted checkpoint or by launching training through the command-line interfaces built into the toolkit's underlying codebase.
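A hedged sketch of the conversion step, assuming the downloaded file is a single SDXL-derived .safetensors checkpoint (as Pony and Illustrious are); the filename and output directory are placeholders:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# from_single_file reads a monolithic checkpoint and rebuilds the component layout.
pipe = StableDiffusionXLPipeline.from_single_file(
    "ponyDiffusionV6XL.safetensors",   # placeholder filename downloaded from CivitAI
    torch_dtype=torch.float16,
)

# Writes unet/, vae/, text_encoder/, text_encoder_2/, tokenizer/, ... to disk,
# which the trainer's config file or CLI arguments can then point at.
pipe.save_pretrained("converted/pony-v6-diffusers")
```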
Community forums, particularly on Reddit and Discord, have documented successful cases where users trained LoRAs on Illustrious-style models with high fidelity, achieving results comparable to those on officially supported bases. These users emphasize the importance of dataset quality—ensuring images are properly tagged, uniformly sized, and free of watermarks or metadata interference. Additionally, they recommend using a validation set to monitor overfitting, a common issue when training on small or niche datasets.
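As a concrete illustration of that dataset-hygiene advice, the script below flags missing captions and normalizes image size and metadata. It assumes one plain-text caption file per image, a convention most LoRA trainers follow; the folder name and target resolution are placeholders:

```python
from pathlib import Path
from PIL import Image

DATASET = Path("datasets/my_style")  # placeholder dataset folder
TARGET = 1024                        # SDXL-class models are usually trained near 1024px

for img_path in sorted(DATASET.glob("*.png")):
    caption = img_path.with_suffix(".txt")
    if not caption.exists():
        print(f"missing caption: {img_path.name}")
        continue
    with Image.open(img_path) as im:
        im = im.convert("RGB")           # normalize mode, drop alpha
        im.thumbnail((TARGET, TARGET))   # cap the longest side, keep aspect ratio
        im.save(img_path)                # re-saving discards embedded metadata chunks
```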
While AI Toolkit’s developers have not officially endorsed this practice, the open-source nature of many Stable Diffusion components allows such modifications under licenses like CreativeML OpenRAIL-M, which permit fine-tuning and redistribution subject to use restrictions. This aligns with broader trends in AI democratization, where user innovation often outpaces vendor support. Flexibility in model adaptation matters for domain-specific applications, especially in creative fields where stylistic diversity is paramount.
However, caution is advised. Training on unsupported models may lead to instability, silent failures, or compatibility issues during inference. Users should back up all configurations and test outputs rigorously. Furthermore, ethical considerations around copyright and artist attribution remain paramount, especially when training on styles derived from living artists’ work.
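A minimal post-training smoke test can catch many of these silent failures early. The sketch assumes the LoRA was exported as a .safetensors file and reuses the converted base model from earlier; paths and prompt are hypothetical:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "converted/pony-v6-diffusers", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("output/my_style_lora.safetensors")  # placeholder output path

# A fixed seed makes regressions between training runs directly comparable.
image = pipe(
    "a character portrait, my_style",  # include whatever trigger token was trained
    num_inference_steps=30,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
image.save("lora_smoke_test.png")
```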
In summary, while AI Toolkit does not officially support LoRA training on Pony or Illustrious models, technical ingenuity and community knowledge have created a functional workaround. This reflects a larger pattern in AI development: user-driven innovation filling gaps left by commercial tooling. As demand grows, it is likely that future versions of AI Toolkit and similar platforms will expand their model support—either through official updates or community-contributed plugins.


