Microsoft’s AI Chief Warns AI Could Replace White-Collar Jobs in 18 Months, Urges Human Control

Mustafa Suleyman, Microsoft’s head of AI, warns that artificial intelligence could perform most professional tasks within 18 months, while simultaneously rejecting the notion that superintelligence is inevitable or desirable. He calls for strict human oversight and ethical boundaries in AI development.

Mustafa Suleyman, Microsoft’s head of AI and co-founder of DeepMind, has issued a stark warning about the near-term impact of artificial intelligence on the global workforce, while simultaneously challenging the prevailing narrative among tech elites that artificial general intelligence (AGI) is both inevitable and beneficial. According to Tom’s Hardware and MSNBC, Suleyman stated that AI systems are on track to achieve human-level performance on most, if not all, professional tasks within 12 to 18 months — a timeline that could render entire sectors of white-collar employment obsolete.

"We’re going to have a human-level performance on most, if not all, professional tasks," Suleyman reportedly said, highlighting the accelerating pace of AI advancement in areas such as legal analysis, financial forecasting, medical diagnostics, and content creation. The implications are profound: lawyers, accountants, writers, analysts, and even junior executives could see their roles automated within the next year and a half. This projection is not speculative fantasy but the result of rapid progress in large language models, multimodal reasoning, and autonomous agent systems — technologies Microsoft has heavily invested in through its partnership with OpenAI and internal research initiatives.

Yet Suleyman’s message goes beyond economic disruption. In a rare public rebuke of Silicon Valley’s techno-utopianism, he explicitly rejected the belief that "superintelligence is inevitable and desirable." "It’s unclear why it would preserve us as a species," he argued, emphasizing that AI systems must remain subordinate to human values, oversight, and accountability. His position stands in direct contrast to prominent figures in the AI community who view AGI as a natural evolutionary step — even if it requires relinquishing control.

"We should only build systems we can control," Suleyman insisted, calling for a paradigm shift in AI development from optimization for performance alone to optimization for safety, transparency, and human alignment. He advocated for regulatory frameworks that mandate explainability, real-time monitoring, and fail-safes in all AI systems deployed in professional environments. "If we can’t guarantee that an AI will act in our interest — not just efficiently, but ethically — then we shouldn’t deploy it," he added.

Industry analysts note that Suleyman’s position reflects a growing internal tension within Microsoft and the broader AI ecosystem. While the company continues to aggressively commercialize AI tools like Copilot, it also faces mounting pressure from employees, ethicists, and policymakers concerned about unchecked automation and loss of human agency. Suleyman’s dual message — rapid obsolescence of jobs paired with a call for restraint — underscores a critical juncture in AI governance.

For workers, the timeline is alarming but not unexpected. A 2023 McKinsey report estimated that up to 60% of occupational activities could be automated with current AI technologies. Suleyman’s projection compresses that timeline dramatically, suggesting the transition may be far more abrupt than previously assumed. Governments and educational institutions must urgently rethink workforce reskilling, universal basic income models, and the future of meaningful employment.

For policymakers, Suleyman’s remarks provide a rare opportunity: a major tech leader urging caution rather than acceleration. His call for human-centric AI design could catalyze bipartisan legislation, particularly in the U.S. and EU, where AI regulation is still in its infancy. The question is no longer whether AI will transform work — it already has. The critical question, as Suleyman frames it, is whether we will design it to serve humanity — or whether we will be outpaced by systems we no longer understand, let alone control.
