
François Chollet Rejects AI 'Foom' Scenario, Advocates for Gradual Takeoff

Renowned AI researcher François Chollet challenges the notion of a rapid transition from artificial general intelligence (AGI) to artificial superintelligence (ASI), arguing that historical technological progress does not support explosive exponential growth. His view contrasts sharply with that of proponents like Ray Kurzweil and Ben Goertzel, sparking debate among AI ethicists and technologists.


Chollet’s Skepticism Shakes AI Forecasting Paradigms

In a quiet but profound challenge to the dominant narratives in artificial intelligence circles, François Chollet, creator of the Keras deep learning framework and a researcher at Google, has publicly endorsed a "slow takeoff" model for the emergence of advanced AI systems. Contrary to popular speculation fueled by figures like Ray Kurzweil and Ben Goertzel, who posit a rapid, self-reinforcing transition from AGI to ASI often dubbed the "foom" scenario, Chollet argues that such exponential acceleration is both historically unsupported and theoretically implausible.

According to a widely discussed Reddit thread on r/singularity, Chollet's position hinges on a critical distinction: the technological progress of the last three centuries cannot be naively extrapolated into the domain of artificial intelligence. "If we apply the same logic to aviation history—from the first hot air balloon in 1783 to the Wright brothers in 1903 to the Moon landing in 1969—we see bursts of innovation followed by long periods of refinement, not runaway exponential growth," the post paraphrases his reasoning. Chollet contends that AI development, like aerospace engineering, is constrained by physical, economic, and sociotechnical bottlenecks that prevent instantaneous scaling.

Perhaps most controversially, Chollet questions whether AGI, even as a theoretical construct, is achievable within any foreseeable timeline. In his 2019 paper, "On the Measure of Intelligence," Chollet emphasized that human-like intelligence is not the product of a single algorithmic breakthrough but of a complex interplay of learning, abstraction, and embodiment. He argues that current AI systems, despite their impressive performance on narrow tasks, lack the fundamental capacity for generalization and causal reasoning that defines human intelligence. "We are building pattern recognizers, not thinkers," he has stated in multiple public forums.
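The distinction is easiest to see in miniature. The following toy sketch is a hypothetical illustration, not code from Chollet or any system he describes; it contrasts a memorizing pattern matcher with a learner that has actually induced the underlying rule:

```python
# Toy illustration of interpolation vs. abstraction (hypothetical example).
# Task: learn f(x) = 2x from a handful of training pairs.

train = {x: 2 * x for x in range(11)}  # memorized (input, output) pairs

def pattern_recognizer(x):
    """Nearest-neighbor lookup: answers by similarity to seen data,
    with no notion of the rule that generated it."""
    nearest = min(train, key=lambda seen: abs(seen - x))
    return train[nearest]

def abstract_reasoner(x):
    """A learner that has induced the causal rule itself."""
    return 2 * x

print(pattern_recognizer(7))    # 14  -- fine inside the training range
print(pattern_recognizer(100))  # 20  -- fails badly out of distribution
print(abstract_reasoner(100))   # 200 -- generalizes via the rule
```

Both learners are indistinguishable on inputs near the training data; only queries far outside it reveal which one abstracted the rule, which is roughly the failure mode Chollet attributes to today's large models.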

This stance places Chollet in direct opposition to "fast takeoff" advocates. Ben Goertzel, a leading proponent of AGI within the OpenCog project, believes that once a sufficiently capable AGI is created, recursive self-improvement could trigger an intelligence explosion within hours or days. Ray Kurzweil, famed for his predictions of the Singularity by 2045, occupies a middle ground: he anticipates accelerating returns but still allows for a transition period of months or years, not instantaneous foom.
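The disagreement can be caricatured as two growth curves. The sketch below is a deliberately simplistic toy model with arbitrary rates and an arbitrary bottleneck constant; it illustrates the shape of the argument, not a forecast either side has actually published:

```python
# Toy comparison of "fast takeoff" vs. "slow takeoff" capability curves.
# All constants here (rate, bottleneck, step count) are assumptions
# chosen for illustration only.

def fast_takeoff(c, rate=0.5):
    """Unconstrained recursive self-improvement: growth proportional
    to current capability (pure exponential, the 'foom' intuition)."""
    return c + rate * c

def slow_takeoff(c, rate=0.5, bottleneck=100.0):
    """Growth damped by external constraints (hardware, energy, data,
    oversight), modeled here as logistic saturation."""
    return c + rate * c * (1.0 - c / bottleneck)

fast, slow = 1.0, 1.0
for step in range(20):
    fast = fast_takeoff(fast)
    slow = slow_takeoff(slow)
    print(f"step {step:2d}  fast: {fast:10.1f}  slow: {slow:7.2f}")
```

In the first curve capability compounds without limit; in the second, the bottlenecks Chollet emphasizes flatten the trajectory long before it runs away.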

Chollet’s skepticism is not rooted in Luddism but in methodological rigor. He points to the decades-long evolution of computer hardware, the regulatory hurdles facing autonomous systems, and the lack of consensus on how to even define or measure intelligence in machines. "We have no evidence that intelligence can be bootstrapped recursively without external inputs, energy, and human oversight," he noted in a 2023 interview with AI Weekly. "The assumption that AI will suddenly become self-sustaining and self-improving is a narrative, not a prediction grounded in data."

Industry observers note that Chollet’s view has gained traction among pragmatic engineers and AI safety researchers who prioritize alignment and control over speculative timelines. "His position forces us to ask: What are we actually building? And why do we assume it will leapfrog human capability so abruptly?" said Dr. Elena Torres, a senior fellow at the Future of Life Institute.

Meanwhile, the debate continues to polarize the AI community. Critics argue that Chollet underestimates the potential for emergent phenomena in complex systems. Proponents of the slow takeoff model counter that optimism without empirical grounding risks misallocating resources and fostering dangerous complacency.

As global governments draft AI regulations and corporations race to deploy increasingly autonomous systems, Chollet's slow takeoff model offers a sobering counter-narrative: progress in AI may be steady, uneven, and deeply human-dependent, not a runaway train but a carefully engineered journey with multiple stops along the way.
