Simile Launches AI Society Simulator Amid Microsoft’s Copilot Expansion
Simile, a newly funded AI startup, has unveiled what it bills as the first societal simulation platform powered by generative agents that mimic human behavior at scale. As Microsoft rolls out Copilot Memory and business-focused AI tools, Simile represents a parallel leap into predictive social modeling, raising profound ethical and operational questions.

In a landmark development at the intersection of artificial intelligence and social science, Simile, a startup backed by $100 million from Index Ventures and led by AI luminaries including Fei-Fei Li and Andrej Karpathy, has unveiled what it claims is the first AI-powered simulation of human society. Unlike traditional predictive models, Simile’s platform generates thousands of synthetic agents based on real-world behavioral data, enabling organizations to rehearse high-stakes decisions before deployment. From corporate earnings calls to public policy reforms, the company asserts its technology can forecast outcomes with unprecedented accuracy, replacing intuition-driven decision-making with evidence-based simulation.
The announcement arrives as Microsoft accelerates its own AI integration across enterprise workflows. On July 14, 2025, Microsoft introduced Copilot Memory, a feature that lets the assistant recall user-specific context across documents, meetings, and communications to personalize its responses. Just months later, on November 18, 2025, Microsoft launched Microsoft 365 Copilot Business, designed to give small and medium-sized enterprises task automation and decision support. On the same day, Microsoft also revealed Project Opal, a system aimed at streamlining task-based workflows through AI orchestration. While these tools enhance individual and organizational productivity, Simile takes a radically different approach: instead of assisting humans with their tasks, it simulates entire populations of them to predict collective outcomes.
Simile’s foundation model is built on generative agents: AI entities trained on vast datasets of human behavior, from economic decisions to social interactions. These agents are not mere chatbots; they are designed to exhibit nuanced responses shaped by psychological profiles, cultural contexts, and historical patterns. According to the company’s public pitch, the agents have been validated against real-world outcomes in litigation modeling, market-response simulations, and policy impact assessments. Simile says leading corporations are already using the platform to test how a new pricing strategy might affect consumer sentiment across demographic segments, or how a regulatory change could ripple through supply chains and employee morale, a pattern sketched in the example below.
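Simile has not published its architecture, but the pattern it describes resembles classic agent-based simulation: instantiate a synthetic population with behavioral profiles, expose each agent to a scenario, and aggregate the responses. The Python sketch below illustrates that general pattern only; the Agent class, its fields, and the heuristic reaction function are illustrative assumptions, not Simile’s API. A real generative-agent system would replace the heuristic with a language-model call conditioned on a much richer persona.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    """A synthetic agent with a toy behavioral profile.

    Hypothetical stand-in: production generative agents condition a
    language model on memories, culture, and history, not three numbers.
    """
    age: int
    income: float
    price_sensitivity: float  # 0 = indifferent, 1 = highly sensitive

    def react_to_price_change(self, pct_increase: float) -> float:
        """Return a sentiment score in [-1, 1] for a price increase.

        A simple heuristic plays the role an LLM prompt would fill
        in a real system.
        """
        strain = pct_increase * self.price_sensitivity
        noise = random.gauss(0, 0.05)  # idiosyncratic variation
        return max(-1.0, min(1.0, -strain + noise))

def simulate(n_agents: int, pct_increase: float, seed: int = 0) -> float:
    """Average sentiment across a synthetic population for one scenario."""
    random.seed(seed)
    population = [
        Agent(
            age=random.randint(18, 80),
            income=random.lognormvariate(10.8, 0.5),
            price_sensitivity=random.betavariate(2, 2),
        )
        for _ in range(n_agents)
    ]
    return sum(a.react_to_price_change(pct_increase) for a in population) / n_agents

if __name__ == "__main__":
    # Rehearse a decision before deployment: compare two candidate price increases.
    for pct in (0.05, 0.15):
        print(f"{pct:.0%} increase -> mean sentiment {simulate(5000, pct):+.3f}")
```

The rehearsal value comes from the sweep at the bottom: a decision-maker runs many candidate scenarios (and, in practice, many random seeds) against the same synthetic population and compares aggregate outcomes before committing to any of them.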
The implications extend far beyond business. Simile envisions simulating entire nations — modeling how misinformation spreads during elections, how climate policies influence migration patterns, or how universal basic income might alter labor participation. Such capabilities could revolutionize governance, but they also raise urgent ethical concerns. Who owns the behavioral data used to train these agents? Can simulations be weaponized to manipulate public opinion? And what accountability exists when a policy fails after being "tested" in simulation?
While Microsoft’s Copilot suite enhances human agency, Simile seeks to replace that agency, at least in the decision-making phase. This divergence highlights a fundamental tension in AI’s evolution: augmentation versus automation. As Simile prepares to scale its simulations to trillions of interactions, regulators, ethicists, and technologists must collaborate to establish guardrails. The future, as Simile declares, is too important to be left to chance. But if that future is simulated, who gets to write the script?


