Is AI Chat Over? Now It's Time to Build a Truly 'Intelligent' Team Member

A bold new venture called Humans& believes that the next giant leap for AI is not individual intelligence but social coordination. And it is building a brand-new 'brain' for it.


Let's be honest. At last month's team meeting, deciding who would take notes, who would write the summary afterward, and which idea to highlight was more exhausting than completing the actual project. Coordinating people has always felt harder than solving the complex problems we try to teach machines. Now, a startup called Humans&, whose founders come from the corridors of giants like Anthropic, OpenAI, and Google DeepMind, is diving headfirst into exactly this battle: transforming AI from an answer machine into a coordination machine.

A $480 Million Bet: The 'Central Nervous System'

This isn't just an idea; it's a serious wager. The company, just three months old, has raised a $480 million seed round on the strength of this philosophy and the founding team's background alone. Let that number sink in. It's proof that a team setting out to build a new foundation model architecture from scratch convinced investors with a single declaration: the era of intelligent assistants that answer individual questions is over.

As co-founder Andi Peng told TechCrunch: "We're reaching the end of the first paradigm of scaling. The era of question-answering models that are very smart in specific domains is over. Now, we're entering the second wave, where the average user is trying to figure out what to do with all these AIs."

The Goal Isn't to Train a Librarian, But a Team Captain

Today's large language models (LLMs) are like hyperactive librarians who have memorized the entire internet. You ask, they answer. But managing the different priorities of 10 people in a Slack channel, an ongoing debate in a Google Doc, or endless negotiations over a startup's logo is not their job. This space is messier, more complex, and more human.

Humans& CEO and former xAI researcher Eric Zelikman says they are targeting exactly this gap: "We are building a product and a model to help people work together and communicate more effectively. Both with each other and with AI tools." He laughs at the irony, citing the debate his own team had over its logo. AI, he suggests, could become a 'social fabric' that accelerates precisely this kind of collective decision-making.

Parallels with LeCun's Vision

This 'coordination-focused AI' idea is part of a larger vision already in the air. Pioneers like Yann LeCun argue that AI's ultimate goal is to build a 'world model': an intelligence that has internalized how the physical and social world works and can operate on that common ground. Humans&'s quest for an 'architecture for social intelligence' looks like an effort to bring that vision to the business world's most chaotic realm, human relationships. The ambitions behind LeCun's AMI Labs and this startup's claim are two branches of the same river.

No Product, But Big Dreams

There is no concrete product yet. The idea could become a new communication and collaboration platform where AI is a natural participant, potentially replacing multi-user environments like Slack, Google Docs, or Notion; the target is both corporate and individual users. However, an investment of this size at such an early stage invites both excitement and skepticism. Is this the most inflated corner of the AI bubble, or a genuinely overlooked giant opportunity?

My opinion? A technology that solves the challenge of bringing people together would certainly be transformative. But the problem is, this isn't just a technology problem; it's a problem of psychology, sociology, and power dynamics. How will an AI bridge the gap between a boss's demands and an employee's concerns? Whose word will carry more weight? This isn't just a matter of a better algorithm. The Humans& team, perhaps unknowingly, is racing toward one of AI's thorniest and most ethical dilemmas.

Frequently Asked Questions

How will Humans&'s model differ from other AI models (ChatGPT, Claude)?

The fundamental difference lies in the focus. Current models are optimized to take a single user's command and generate a response. Humans&'s goal is to build a 'social intelligence' core designed to understand, guide, and facilitate a group of people moving toward a common goal. That is, to manage the intentions, conflicts, and progress of all parties involved in a conversation.

Does it deserve such a large seed investment? Is this a sign of a bubble?

This is one of the biggest questions in the AI ecosystem right now. $480 million is certainly unusual for a startup with no product yet. However, investors are betting not just on the idea, but on the 'pedigree' of a founding team drawn from places like Anthropic, Meta, OpenAI, and DeepMind, and on their belief that this is AI's next logical step. Whether it's a bubble is something only time, and the first prototype to emerge, will tell.

Will this model take our jobs? What will happen to human managers?

The startup's own rhetoric pushes back against the 'AI will take our jobs' fear story. The aim is not to replace people but to empower them for a more effective, less exhausting collaboration process. In practice, this could transform some mid-level coordination and reporting roles. But ultimate decisions, and leadership roles requiring emotional intelligence and strategic vision, will likely remain with humans for a long time. AI might instead become an 'assistant captain' running those weekly progress meetings we all love to hate.
