
ByteDance Adjusts AI Safeguards After Hollywood Copyright Outcry

ByteDance is revising its Seedance 2.0 AI video generator following intense backlash from Disney, Paramount, and major Hollywood unions over deepfake-like depictions of actors. The company has acknowledged the concerns and pledged enhanced ethical safeguards as regulators worldwide weigh rules for AI-generated content.

Following a wave of public and industry outcry, ByteDance has announced it will implement significant revisions to its newly launched Seedance 2.0 AI video generation model, after hyperrealistic deepfakes of Hollywood actors—including Tom Hanks and Meryl Streep—went viral across social media platforms. The generated videos, which convincingly replicated the likenesses of celebrities in fictional scenarios, triggered immediate condemnation from major studios and entertainment unions, who accused the technology of violating copyright and personality rights.

Disney, Paramount Pictures, and the Motion Picture Association (MPA) issued a joint statement last week, calling the use of Seedance 2.0 to replicate actors’ faces and voices without consent a “clear infringement of intellectual property and ethical boundaries.” The MPA warned that unchecked AI-generated content could destabilize the creative economy, erode trust in media, and threaten the livelihoods of performers and production crews.

In response, ByteDance confirmed it is pausing public access to Seedance 2.0 and assembling a multidisciplinary team of legal experts, AI ethicists, and Hollywood consultants to overhaul its content moderation and consent protocols. The company emphasized its commitment to responsible innovation, stating, “We recognize the profound responsibility that comes with powerful generative tools. Our goal is not to replace human creativity, but to empower it—within clear ethical guardrails.”

While the initial viral clips were created by independent users and not endorsed by ByteDance, the company faces mounting pressure to implement pre-generation filters that block known celebrity likenesses unless explicit authorization is provided. Industry insiders suggest the revisions may include embedding digital watermarks, requiring consent databases linked to talent agencies, and integrating real-time detection algorithms trained on studio-owned archival footage.

Legal experts note that current U.S. copyright law does not explicitly cover synthetic likenesses, leaving studios to rely on right-of-publicity lawsuits—which vary by state. California, home to most major studios, has some of the strongest protections, but international enforcement remains fragmented. The European Union’s upcoming AI Act, set to take effect in 2025, may offer a more uniform framework, mandating transparency for AI-generated media and requiring opt-in consent for biometric data use.

Meanwhile, the broader tech community is divided. Some AI researchers applaud ByteDance’s responsiveness as a model for industry self-regulation, while others argue that voluntary measures are insufficient without federal legislation. “We can’t rely on companies to police themselves when the financial incentives to bypass safeguards are so high,” said Dr. Elena Ruiz, a digital rights scholar at Stanford. “This is a watershed moment for generative AI—either we establish enforceable norms now, or we risk a future where reality is indistinguishable from fabrication.”

As regulatory scrutiny intensifies, ByteDance is also engaging with the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), which has been vocal in demanding compensation and control over digital replicas. Preliminary talks suggest a potential licensing framework where studios could pay royalties for AI-generated performances using licensed likenesses—a model akin to music sampling rights.

For now, Seedance 2.0 remains offline for public use while ByteDance finalizes its revised safeguards. The company expects to relaunch a restricted version in the coming months, limited to verified enterprise clients with compliance certifications. The outcome of this reckoning could set a precedent for how AI-generated content is governed—not just in entertainment, but across journalism, advertising, and political discourse.
