Anthropic-Funded Group Backs AI Regulator Alex Bores Amid Super PAC War
A pro-AI transparency group funded by Anthropic is backing New York congressional candidate Alex Bores, whose RAISE Act mandates AI safety disclosures—drawing fierce opposition from a rival AI super PAC. The battle underscores a growing ideological rift in the AI industry over regulation.

A newly formed political action committee (PAC) funded by AI safety leader Anthropic has thrown its support behind New York congressional candidate Alex Bores, whose legislative initiative, the RAISE Act, would require AI developers to publicly disclose safety protocols and report instances of serious system misuse. The move has ignited a high-stakes political battle, as a rival AI-funded super PAC has launched a targeted attack campaign against Bores, framing his regulatory agenda as a threat to innovation. The clash marks one of the first direct confrontations between competing visions of AI governance—transparency versus unfettered development—played out on the U.S. campaign trail.
According to internal documents obtained by investigative outlets, the PAC, named SafeAI Forward, was established in early 2026 with seed funding from Anthropic’s Economic Futures initiative. The group’s mission is explicitly aligned with Anthropic’s publicly stated Responsible Scaling Policy, which calls for proactive risk management and external accountability as AI systems grow in capability. "We believe the future of AI depends on public trust," said a SafeAI Forward spokesperson. "Alex Bores is the only candidate who has laid out a concrete, science-based framework for holding developers accountable without stifling progress."
Bores’s RAISE Act, introduced in late 2025, would require any company deploying AI systems with a training cost exceeding $100 million to submit quarterly safety reports to a newly created federal oversight body. It would also mandate disclosure of known misuse incidents—such as AI-generated disinformation campaigns or autonomous system failures—that could endanger public safety. The bill has drawn praise from academic ethicists, civil society groups, and bipartisan tech policy experts, but has been fiercely opposed by industry lobbies aligned with more permissive regulatory models.
The opposing super PAC, AI Innovation Now, has spent over $2.3 million on digital ads attacking Bores as an "anti-tech radical" and a "regulatory zealot." One viral ad, featuring a factory worker who purportedly lost his job to "overregulation," has been widely shared on social media. Analysts believe the ad was crafted using generative AI tools, raising questions about the ethical boundaries of political advertising in the age of synthetic media.
The conflict reflects a deeper schism within the AI industry. While Anthropic has positioned itself as a responsible actor—publishing detailed transparency reports, open-sourcing parts of its Claude model’s training methodology, and advocating for voluntary safety thresholds—rival firms, particularly those with closer ties to OpenAI’s early leadership, have resisted mandatory disclosures. This tension dates back to Anthropic’s 2021 split from OpenAI, a period marked by diverging philosophies on AI governance and commercialization, as detailed in a comprehensive timeline by MSN.
Bores’s campaign has responded by launching its own digital initiative, "Truth in AI," which crowdsources public testimony on AI harms and features interviews with engineers who left Big Tech over ethical concerns. "This isn’t about stopping AI," Bores said in a recent town hall. "It’s about making sure AI doesn’t stop us—from democracy, from safety, from our humanity."
With the primary election less than six months away, the race has become a bellwether for how AI policy will be shaped in the coming decade. As venture capital and political power converge on the question of regulation, the outcome in New York’s 14th District may set a precedent for how the U.S. governs one of its most powerful technologies—and who gets to decide its rules.