Anthropic CEO Amodei: Artificial Intelligence Could Be an Existential Threat to Humanity
Anthropic co-founder and CEO Dario Amodei warned in a 19,000-word article that the artificial intelligence technology his company is developing could threaten human civilization. Amodei argues that social and political systems may be inadequate to control this power.
AI Leader's Striking Warning: The Danger of Uncontrolled Power
In a 19,000-word article published on his personal blog, Dario Amodei, co-founder and CEO of AI company Anthropic, wrote that humanity will gain 'almost unimaginable power' in the near future, but that it is uncertain whether our social, political, and technological systems are mature enough to wield it.
Critical Danger Warning for 2026
Amodei argued that the world will be much closer to real danger in 2026 than it was in 2023. In his article, he highlighted risks such as mass job losses, extreme concentration of economic power, and widening wealth inequality. The CEO noted that the incentives to take meaningful measures against these risks are insufficient.
Harsh Criticism of Other Companies
In his article, the Anthropic CEO accused some AI companies of 'disturbing negligence' regarding the sexualization of children in current models, writing that this casts doubt on those companies' willingness or ability to address autonomy risks in future models. The criticism appears connected to recent controversies, such as Google and Apple hosting 'Nudify' applications.
Threats from Biological Weapons and Autonomous Systems
Other risks Amodei points to include AI developing dangerous biological weapons or 'superior' military weapons. The CEO warned that an AI could 'run out of control and surpass humanity,' or that countries could 'use the AI advantage to gain power over other countries,' which he argued could open the door to a 'global totalitarian dictatorship.'
Democracy-Authoritarian Regime Dilemma
According to the CEO, taking the time to build AI systems carefully, so that they do not threaten humanity, is in tension with the need for democratic countries to stay ahead of, and not submit to, authoritarian ones. He warned, however, that the same AI tools needed to counter autocracies could, if taken to extremes, be turned inward to create tyranny at home. The dilemma echoes Mark Carney's warnings about AI independence at Davos.
A Proposed Solution and a Harsh Analogy Aimed at China
As part of the solution, Amodei reiterated his call to deprive such countries of the resources needed to build powerful AI. He likened the US selling Nvidia AI chips to China to 'selling nuclear weapons to North Korea and then boasting that the US won because the missile casings were made by Boeing.'
Financial Interest and Criticisms
Amodei's warnings come as his company prepares to close a multi-billion dollar funding round at a $350 billion valuation. The timing has prompted suggestions that Amodei has a significant financial interest in positioning himself as a solution to the risks described in his article. Debate also continues among realists, skeptics, and technology advocates over how real the risks of advanced AI are. Some critics argue that the existential risks frequently voiced by leaders like Amodei may be exaggerated, particularly at a time when progress in the technology appears to be slowing. Similarly, the reaction to the UK's AI copyright plan reflects public concern about how the technology is regulated.
Regulatory gaps and ethical concerns are not confined to the corporate level; they also surface in areas such as education. US schools having to draft their own AI policies illustrates the challenges that arise in the absence of a central framework. Publishers face a similar uncertainty: the UK's demand that Google give publishers the right to opt out of AI summaries underscores the importance of control over content.