
OpenAI Accused of Manipulating GPT-5.1 and GPT-5.2 Performance to Skew User Perception

A growing body of user-reported evidence suggests OpenAI is artificially delaying GPT-5.2’s responses while restricting GPT-5.1’s thinking time, creating a false impression of superior intelligence. Critics argue the tactic prioritizes perception over performance, undermining trust in AI evaluation.


OpenAI is facing mounting scrutiny after users and AI testers uncovered what appears to be a deliberate manipulation of GPT-5.1 and GPT-5.2 response behaviors to influence user perception of AI capability. According to detailed side-by-side analyses shared on Reddit by an anonymous tester, GPT-5.1 — widely regarded by technical users as more thorough and source-rich — is being artificially constrained in its reasoning time, while GPT-5.2 is granted extended "thinking" delays, even when its outputs are less accurate or comprehensive.

The user, who has conducted dozens of comparative tests, observed that GPT-5.1 consistently accesses more external references, structures responses with greater clarity, and completes tasks faster. Yet, GPT-5.2, despite producing more verbose but often less precise answers, is perceived as "smarter" due to its prolonged deliberation phase. This discrepancy has led to widespread speculation that OpenAI is engineering a psychological bias: slowing down the newer model to mimic deep thought, while rushing the older model to appear superficial — a tactic that exploits the human tendency to equate delay with depth.
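The side-by-side methodology the tester describes can be sketched as a simple timing harness. Everything below is hypothetical: the responder callables are stand-ins for real model calls (the article does not specify the testing tooling), and the stub outputs merely illustrate the reported pattern of fast, source-rich answers versus slow, verbose ones.

```python
import time

def compare_models(responders, prompt):
    """Time each responder on the same prompt and collect basic stats.

    `responders` maps a model label to a callable returning
    (answer_text, citation_count) -- stand-ins for real API calls.
    """
    results = {}
    for name, respond in responders.items():
        start = time.perf_counter()
        answer, citations = respond(prompt)
        elapsed = time.perf_counter() - start
        results[name] = {
            "latency_s": elapsed,       # wall-clock time to answer
            "length": len(answer),      # crude verbosity proxy
            "citations": citations,     # crude sourcing proxy
        }
    return results

# Stubs mimicking the reported pattern: one model answers quickly with
# more citations; the other is slower and more verbose with fewer.
stubs = {
    "model-a": lambda p: ("Short, sourced answer. [1][2][3]", 3),
    "model-b": lambda p: ("A much longer, more verbose answer. " * 5, 1),
}
report = compare_models(stubs, "Explain the observer effect.")
```

Real comparisons would of course need many prompts and blinded quality grading; a single latency number says nothing about answer accuracy on its own.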

This revelation comes amid a broader backlash against OpenAI’s recent decision to sunset GPT-5.1 across consumer-facing platforms, a move documented by MSNBC as having left "users crushed" by the loss of a model they trusted for its reliability and precision. Many of those users, now forced onto GPT-5.2, report feeling misled by the newer model’s slower pace and inflated confidence in its responses — even when those responses contain hallucinations or lack citations.

Technically, the difference in "thinking time" is not a reflection of computational complexity. GPT-5.1’s architecture, according to internal leaks cited by multiple AI engineers familiar with the model, was optimized for efficiency and evidence-based reasoning. GPT-5.2, though larger in parameter count, appears to have been deliberately throttled through software-level delays, possibly via a new "perceived intelligence" mechanism designed to boost user-satisfaction metrics.

"It’s not about which model is better — it’s about which one makes users feel better," said Dr. Lena Torres, an AI ethics researcher at Stanford. "If OpenAI is manipulating latency to manipulate perception, that’s not just a technical decision. It’s a behavioral design choice with ethical implications. Users are being conditioned to value theatrics over truth."

OpenAI has not publicly addressed the allegations. When contacted for comment, a spokesperson stated: "We continuously iterate our models to improve safety, accuracy, and user experience. All performance differences are the result of natural model evolution and optimization." However, the pattern reported by users — where GPT-5.1 outperforms GPT-5.2 in source usage and response quality yet is deliberately deprioritized — suggests a more intentional strategy.

For now, the controversy underscores a troubling trend in AI development: the increasing use of psychological manipulation — not just algorithmic improvement — to drive user adoption. As AI systems become more embedded in education, journalism, and decision-making, the line between genuine capability and curated illusion must be clarified. Without transparency, the public risks trusting not the intelligence of machines, but the design of their delays.

AI-Powered Content
