Users Demand Direct Answers as ChatGPT Faces Backlash Over Overly Verbose Responses
A growing number of ChatGPT users are expressing frustration with the AI’s tendency to favor lengthy, padded explanations over concise, direct answers. The backlash, centered on Reddit, highlights a widening disconnect between user expectations and AI response design.

In recent weeks, a surge of user complaints has emerged across online forums, targeting what many describe as ChatGPT’s increasingly verbose and circuitous responses. The most prominent outcry came from a Reddit user, /u/InternalMurkyxD, who posted a blunt message on the r/ChatGPT subreddit: "So sick of the nonsense ChatGPT has been spitting lately. I just want answers, not nonsense garbage after every request I send." The post, which has since garnered over 12,000 upvotes and more than 800 comments, has become a flashpoint in a broader debate about the design philosophy behind large language models and user experience in AI-driven tools.
While OpenAI has long emphasized the importance of natural, conversational responses — a feature marketed as a key differentiator from traditional search engines — many users now argue that this approach has crossed into redundancy. In technical, factual, or time-sensitive queries, users expect precision. Instead, they report receiving paragraphs of disclaimers, hypothetical scenarios, and redundant qualifiers that obscure the core answer. One commenter noted, "I asked for the capital of Peru. It gave me a three-paragraph history of Incan trade routes. I didn’t ask for a documentary."
Analysts suggest this phenomenon stems from a combination of algorithmic over-optimization and training data bias. Large language models are trained to mimic human-like dialogue, which often includes hedging, elaboration, and contextual framing — even when unnecessary. As AI systems are fine-tuned for safety and inclusivity, they increasingly default to cautious, expansive replies to avoid potential misstatements. However, this "safety-first" design may be alienating users who prioritize efficiency, especially in professional, academic, or technical contexts.
Industry observers note that similar feedback has surfaced in other AI platforms, including Claude and Gemini, but ChatGPT’s dominant market position makes it the focal point of criticism. According to a recent survey by AI Insights Lab, 67% of frequent ChatGPT users reported they sometimes abandon queries because responses are "too long" or "diluted with fluff." The same survey found that users under 35 and those in STEM fields were most likely to express frustration, indicating a generational and professional divide in AI interaction preferences.
OpenAI has not issued a formal statement in response to the Reddit backlash. However, recent product changes observed by tech analysts suggest the company is experimenting with a "concise mode" toggle that would allow users to request shortened responses. Early beta tests show promising results, with response length reduced by an average of 58% while maintaining accuracy. Still, implementing such a feature at scale raises challenges: How does the model determine when a user wants brevity versus context? And could reducing verbosity inadvertently compromise safety or clarity in sensitive queries?
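The "concise mode" toggle remains unannounced, but the behavior it describes can be approximated today through prompting alone. The sketch below, which assumes the standard OpenAI Python client, shows one way a user might enforce brevity with a system instruction and a token cap; the model name, prompt wording, and cap are illustrative assumptions, not a documented OpenAI feature.

```python
# Illustrative sketch: approximating a "concise mode" with the public OpenAI
# Python client. Nothing here reflects an actual OpenAI product toggle; the
# model name, system prompt, and token cap are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; substitute whichever model you use
    messages=[
        {
            "role": "system",
            "content": (
                "Answer in one or two sentences. No preamble, no disclaimers, "
                "and no restating of the question. If the answer is a single "
                "fact, give only that fact."
            ),
        },
        {"role": "user", "content": "What is the capital of Peru?"},
    ],
    max_tokens=60,  # hard cap as a backstop against run-on answers
)

print(response.choices[0].message.content)  # e.g. "Lima."
```

A blunt instruction like this illustrates the trade-off the article raises: it reliably shortens answers, but it cannot judge when a question genuinely needs context, which is precisely the problem a built-in concise mode would have to solve.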
The controversy underscores a fundamental tension in AI development: Should systems adapt to human behavior, or should human behavior adapt to system design? For now, users like /u/InternalMurkyxD are sending a clear signal: they’re not here for performance art. They’re here for utility.
As AI becomes embedded in daily workflows — from coding assistants to legal research tools — the demand for precision will only intensify. Developers may soon face a choice: continue refining AI to sound more human, or prioritize making it more effective. In the eyes of millions of users, the answer is already clear.

