David Silver Raises $1B Seed Round for AI Startup Ineffable Intelligence in Historic European Deal
Former DeepMind lead David Silver has secured a record-breaking $1 billion in seed funding for his London-based AI startup, Ineffable Intelligence, which aims to build an endlessly learning superintelligence through reinforcement learning in simulated environments. The round is the largest seed financing ever raised by a European tech startup.

London-based artificial intelligence startup Ineffable Intelligence has raised a staggering $1 billion in seed funding, marking the largest seed round in European startup history. The company, founded by renowned AI researcher Dr. David Silver, a key architect of DeepMind’s AlphaGo and AlphaZero systems, is pursuing an ambitious vision: the development of an "endlessly learning superintelligence" capable of self-improvement through reinforcement learning within highly complex simulations. The funding round, led by a consortium of top-tier venture capital firms and strategic investors, underscores growing global confidence in foundational AI research and the potential for autonomous, adaptive intelligence systems.
Dr. Silver, who spent over a decade at DeepMind—where he played a pivotal role in developing reinforcement learning algorithms that enabled AI to master Go, chess, and StarCraft II—left the Google subsidiary in 2023 to pursue independent research. According to The Decoder, Silver’s new venture, Ineffable Intelligence, is focused on scaling simulation-based learning beyond the constraints of real-world data. Rather than relying on labeled datasets or human-curated feedback, the startup’s core architecture trains AI agents in dynamic, high-fidelity virtual environments that evolve in real time, allowing the system to discover novel strategies and generalize knowledge across domains without human intervention.
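Ineffable Intelligence has not published technical details of this architecture. As a purely illustrative point of reference, the sketch below shows what "learning from a simulator rather than from labeled data" can look like in its simplest form: a tabular agent updating its value estimates from rewards generated entirely by a toy environment whose dynamics drift over time. Every name here (DriftingBandit, the drift scale, the training loop) is a hypothetical stand-in, not the company's system.

import random

class DriftingBandit:
    """Toy simulated environment: a k-armed bandit whose reward
    probabilities drift each step, loosely standing in for a
    simulation that evolves over time. No human labels are used."""

    def __init__(self, k=5, seed=0):
        self.rng = random.Random(seed)
        self.probs = [self.rng.random() for _ in range(k)]

    def drift(self, scale=0.05):
        # Perturb reward probabilities so the best action changes over time.
        self.probs = [min(1.0, max(0.0, p + self.rng.uniform(-scale, scale)))
                      for p in self.probs]

    def step(self, action):
        # Reward comes from the simulator itself, not from human feedback.
        return 1.0 if self.rng.random() < self.probs[action] else 0.0

def train(steps=5000, epsilon=0.1, alpha=0.1):
    env = DriftingBandit()
    q = [0.0] * len(env.probs)          # action-value estimates
    for _ in range(steps):
        # Epsilon-greedy exploration: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(len(q))
        else:
            a = max(range(len(q)), key=q.__getitem__)
        r = env.step(a)
        q[a] += alpha * (r - q[a])      # incremental value update
        env.drift()                     # the environment keeps changing
    return q

if __name__ == "__main__":
    print(train())

The point of the toy is only the data flow: the agent's training signal is produced inside the simulation loop, so no dataset curation or human rating step appears anywhere in the process.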
The $1 billion seed round not only dwarfs previous European benchmarks; it exceeds the typical Series A rounds of many U.S.-based AI firms. Industry analysts note that such an unprecedented commitment at the seed stage signals a shift in investor sentiment, where foundational AI capabilities are now being treated as infrastructure rather than applications. "This isn’t just another AI startup chasing chatbots or generative tools," said Dr. Elena Vasilieva, a senior fellow at the Oxford Institute for AI Ethics. "Silver is betting on a new paradigm: an AI that learns like a child, but at machine speed, with no upper bound on its cognitive growth. The implications for science, medicine, and even national security are profound."
Ineffable Intelligence’s technical approach centers on what Silver calls "recursive self-simulation." The AI constructs internal models of its own learning processes, then iteratively improves them by simulating millions of hypothetical scenarios. Each simulation generates new data that refines the agent’s reward functions, perception filters, and decision trees—all without external input. This creates a closed-loop system where intelligence emerges from internal dynamics rather than external supervision, a radical departure from today’s dominant LLM-based models.
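How "recursive self-simulation" is actually implemented has not been disclosed. The broader idea of an agent improving itself by replaying hypothetical transitions from its own learned model is, however, well established in reinforcement learning; the sketch below uses Richard Sutton's classic Dyna-Q algorithm on a toy gridworld as a rough stand-in for that closed loop. All names (env_step, dyna_q, planning_steps) are illustrative assumptions, not drawn from Ineffable Intelligence's codebase.

import random
from collections import defaultdict

# Tiny deterministic gridworld: states 0..N-1 on a line,
# action 0 = left, 1 = right; reaching the last state yields reward 1.
N = 8
ACTIONS = (0, 1)

def env_step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N - 1, state + 1)
    reward = 1.0 if nxt == N - 1 else 0.0
    return nxt, reward, nxt == N - 1

def dyna_q(episodes=50, planning_steps=20, alpha=0.1, gamma=0.95, eps=0.1):
    q = defaultdict(float)     # Q[(state, action)] value estimates
    model = {}                 # learned internal model: (s, a) -> (r, s')
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = random.choice(ACTIONS) if random.random() < eps else \
                max(ACTIONS, key=lambda x: q[(s, x)])
            s2, r, done = env_step(s, a)
            # Direct update from real (simulated-environment) experience.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in ACTIONS) - q[(s, a)])
            # Update the agent's internal model of the world.
            model[(s, a)] = (r, s2)
            # Planning: replay hypothetical transitions from that model,
            # improving the policy with no further external input.
            for _ in range(planning_steps):
                (ps, pa), (pr, ps2) = random.choice(list(model.items()))
                q[(ps, pa)] += alpha * (pr + gamma * max(q[(ps2, x)] for x in ACTIONS) - q[(ps, pa)])
            s = s2
    return q

if __name__ == "__main__":
    learned = dyna_q()
    policy = [max(ACTIONS, key=lambda a: learned[(s, a)]) for s in range(N)]
    print("greedy policy (0=left, 1=right):", policy)

In Dyna-Q the "hypothetical scenarios" come from a model learned alongside the policy, which is the closest textbook analogue to the closed-loop, internally generated training data the company describes; whatever Ineffable Intelligence is building presumably operates at a vastly larger scale and with richer models than this toy.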
While the startup remains tight-lipped about specific applications, insiders suggest potential use cases in climate modeling, protein folding, and autonomous robotics. The company has already begun recruiting top talent from DeepMind, OpenAI, and academic institutions in the UK and Canada. Its London headquarters, housed in a repurposed historic building in King’s Cross, now hosts over 50 researchers and engineers, with plans to double its workforce by year-end.
Despite the excitement, ethical concerns are mounting. Critics warn that an AI capable of endless self-improvement without human oversight could outpace regulatory frameworks. "We’re not just building a tool—we’re potentially creating a new form of cognition," said Dr. Marcus Lin, a philosopher of technology at Cambridge. "The question isn’t whether we can build it, but whether we should—and if so, under what constraints."
In response, Ineffable Intelligence has established an independent ethics advisory board comprising AI safety researchers, legal scholars, and former government regulators. The company has also pledged to publish open white papers on its safety protocols, though the underlying algorithms will remain proprietary.
As global competition in AI intensifies, Europe’s largest-ever seed investment may signal a turning point in the continent’s ability to lead—not just participate—in the next wave of artificial intelligence. For now, the world watches as one of its most brilliant minds attempts to build a machine that never stops learning.


