AI Video Artifact Mystery: Solving the Halo Effect in Wan2.2 Animate + CausVid v2
A growing number of AI video creators are reporting a persistent glowing halo around characters when combining Wan2.2 Animate with CausVid v2 LoRAs. Investigative analysis reveals this is likely a masking and conditioning conflict, not a hardware issue.

AI Video Creators Battle Glowing Halos in Wan2.2 Animate + CausVid v2 Workflow
A troubling visual artifact has emerged in the burgeoning field of AI-generated video, as users of the open-source Wan2GP framework report persistent bright halos surrounding animated characters when combining Wan2.2 Animate with the CausVid v2 LoRA. The issue, first detailed in a Reddit post by user Puzzleheaded-Emu8744, has sparked widespread discussion among AI video artists and developers, with many struggling to preserve character fidelity while eliminating the unnatural glow.
According to the original post on r/StableDiffusion, the user was achieving excellent motion and likeness using Wan2.2 Animate 14B with a custom character LoRA, but a luminous outline, particularly visible against dark backgrounds, remained stubbornly present. Lowering the CausVid v2.0 strength reduced the halo but degraded motion stability and facial accuracy. The user noted that while FusionX eliminated the artifact, it compromised character identity, suggesting a fundamental tension between conditioning fidelity and output cleanliness.
While the GitHub repository for Wan2GP (github.com/deepbeepmeep/Wan2GP) confirms the framework's support for Wan 2.1/2.2 and integration with CausVid v2, it offers no explicit documentation on artifact mitigation. This lack of official guidance has left users to reverse-engineer solutions through trial and error. Practitioners familiar with diffusion models suggest the halo is not a hardware limitation (users report the issue even on high-end machines pairing a Ryzen 7 7800X3D with an NVIDIA RTX 4070 Super) but rather the result of conflicting conditioning signals between the character LoRA and CausVid's motion conditioning network.
One leading hypothesis is that CausVid v2’s flow estimation and temporal consistency mechanisms are overcompensating for edge ambiguities in the character mask. When the character LoRA enhances fine details—such as hair strands, clothing texture, or skin gradients—it inadvertently creates high-frequency contrast boundaries. CausVid, designed to smooth motion transitions, may interpret these as noise and apply excessive diffusion or blur correction, resulting in a luminous rim around the subject. This is exacerbated by low CFG values (1.0) and minimal denoising steps (7–10), which reduce the model’s ability to refine boundaries during generation.
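The boundary effect at the heart of this hypothesis is easy to visualize. The sketch below is purely illustrative (it is not Wan2GP or CausVid code): it computes an unsharp-mask residual, a frame minus its Gaussian-blurred copy, whose large values mark exactly the kind of high-frequency contrast boundaries a temporal-consistency pass could mistake for noise and over-smooth into a rim.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_frequency_residual(frame: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Unsharp-mask residual: the frame minus its Gaussian-blurred copy.

    Large absolute values concentrate along sharp edges (hair strands,
    clothing texture against a dark background), i.e. the regions where
    a motion-smoothing pass would have the most to "correct".
    """
    return frame - gaussian_filter(frame, sigma=sigma)

# A synthetic frame with a hard character/background edge at column 8.
frame = np.zeros((16, 16), dtype=np.float32)
frame[:, 8:] = 1.0
residual = high_frequency_residual(frame)
# The residual is near zero in flat regions and peaks at the edge.
```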
Community members have proposed several workarounds. Some suggest increasing the mask expansion value slightly (from default 5 to 10–15 pixels) to give the model more context for edge blending. Others recommend using a slightly higher CFG (1.5–2.0) to sharpen the model’s focus on the character’s true contours. A few users have experimented with post-processing Gaussian blur masks applied only to the halo region, though this is labor-intensive for long sequences.
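The mask-expansion and edge-feathering workarounds can be sketched in a few lines of SciPy. This is a minimal illustration, not a Wan2GP API; the function names and defaults are assumptions chosen to match the community-reported values (expansion from 5 to roughly 10 to 15 pixels, plus a soft falloff in place of a hard mask edge).

```python
import numpy as np
from scipy.ndimage import binary_dilation, gaussian_filter

def expand_mask(mask: np.ndarray, pixels: int = 10) -> np.ndarray:
    """Dilate a binary character mask by roughly `pixels` pixels so the
    model sees more background context when blending edges."""
    return binary_dilation(mask, iterations=pixels)

def feather_mask(mask: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Soften the mask boundary into a Gaussian falloff, approximating
    the post-processing blur some users apply to the halo region."""
    return gaussian_filter(mask.astype(np.float32), sigma=sigma)

# Example: a small square mask, expanded then feathered.
mask = np.zeros((32, 32), dtype=bool)
mask[12:20, 12:20] = True
soft = feather_mask(expand_mask(mask, pixels=5))
```

Applying the feathered mask per frame is still labor-intensive for long sequences, which is why most users prefer fixing the conditioning balance upstream.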
Perhaps the most promising avenue is LoRA weighting optimization. Reducing the CausVid strength to 0.7 and increasing the character LoRA to 0.7–0.8 appears to rebalance the conditioning, reducing halo intensity without sacrificing likeness. Additionally, switching from DPM++ to Euler a or DDIM samplers has shown marginal improvements in edge clarity in early tests.
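Because the reported sweet spot is approximate, a small grid search around it is a practical way to find a working balance. The helper below is a hypothetical sketch (the strength values and constraint are assumptions drawn from the community reports above, not a documented Wan2GP interface): it enumerates candidate pairs while keeping the character LoRA at least as strong as CausVid, matching the rebalancing direction users describe.

```python
from itertools import product

# Hypothetical grid around the community-reported sweet spot:
# CausVid strength near 0.7, character LoRA at 0.7-0.8.
CAUSVID_STRENGTHS = [0.6, 0.7, 0.8]
CHAR_LORA_STRENGTHS = [0.7, 0.75, 0.8]

def lora_weight_grid():
    """Enumerate (causvid_strength, char_lora_strength) pairs to test,
    keeping only combinations where the character LoRA is at least as
    strong as CausVid, per the rebalancing users report helps."""
    return [(c, k)
            for c, k in product(CAUSVID_STRENGTHS, CHAR_LORA_STRENGTHS)
            if k >= c]

# Each pair would be rendered on a short clip and inspected for halo
# intensity versus likeness before committing to a full sequence.
```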
As AI video tools become more accessible, artifacts like this halo underscore the need for better documentation, standardized evaluation metrics, and community-driven troubleshooting repositories. Developers at Wan2GP have not yet responded to inquiries, but the issue has been flagged in the project’s issue tracker. Until then, creators must navigate this delicate balance between realism, motion, and visual fidelity—a hallmark challenge in the evolving era of generative video.


