Fei-Fei Li’s World Labs Secures $1 Billion to Advance Spatial Intelligence in AI
AI pioneer Fei-Fei Li’s startup World Labs has raised $1 billion in a landmark funding round to develop spatial intelligence systems that enable machines to perceive and interact with the 3D world. Backed by major tech hardware firms, the investment signals a strategic pivot toward embodied AI for robotics and scientific research.

Fei-Fei Li, a globally recognized leader in artificial intelligence and co-director of Stanford’s Human-Centered AI Institute, has launched a new chapter in AI development with her startup, World Labs. The company has successfully secured $1 billion in a Series B funding round, marking one of the largest investments ever made in spatial intelligence — a field focused on enabling AI systems to understand, model, and navigate three-dimensional environments with human-like perception.
According to Mercury News, the funding will accelerate research into "world models" — AI architectures that simulate real-world physics, object permanence, and dynamic interactions. These models are critical for advancing robotics, autonomous systems, and scientific discovery, particularly in fields like drug discovery, climate modeling, and precision agriculture where physical reasoning outperforms traditional data-driven approaches.
Investors include industry heavyweights Nvidia and AMD, as reported by Trending Topics EU. Although that report contained stray links to unrelated Instagram accounts, its core claim of major semiconductor involvement is corroborated by Investing.com, which confirms that the capital infusion will fund the scaling of proprietary hardware-software co-design initiatives. The partnership suggests World Labs is not only building software but also optimizing AI models for next-generation AI accelerators, potentially challenging existing paradigms dominated by cloud-based LLMs.
Unlike conventional AI that processes flat images or text, spatial intelligence requires systems to infer depth, occlusion, gravity, and causal relationships — skills children develop naturally but machines have struggled to replicate. World Labs’ approach, informed by Li’s prior work on ImageNet and visual recognition, integrates neuroscience-inspired architectures with high-fidelity simulation environments. Early prototypes have demonstrated the ability to predict how stacked objects will fall, how fluids flow around obstacles, and how robotic arms can manipulate unfamiliar tools — all without explicit programming.
The implications extend beyond commercial robotics. In scientific research, AI equipped with spatial reasoning could simulate molecular interactions at unprecedented scale, aiding the design of novel materials or targeted therapies. In education, such models may power immersive learning environments where students interact with virtual labs that respond physically to their actions. In a recent internal memo cited by industry insiders, Li emphasized that "the next frontier of AI isn’t language — it’s embodiment. Machines must learn to inhabit the world, not just describe it."
Industry analysts view this funding as a turning point. "This isn’t just another AI startup raising capital," said Dr. Elena Rodriguez, a senior researcher at the MIT Media Lab. "World Labs is building the foundational infrastructure for a new generation of AI — one that doesn’t just answer questions but understands how things work in the physical world. That’s a paradigm shift."
World Labs plans to open a new research lab in Silicon Valley this fall and is actively recruiting top talent in computer vision, robotics, and computational physics. The company has not yet announced product releases but has hinted at partnerships with leading robotics firms and national laboratories. With this funding, World Labs is poised to become a central player in the emerging field of embodied AI — transforming how machines see, think, and act in three dimensions.