Prompt Repetition Boosts LLM Accuracy by Up to 97%, Study Finds
A groundbreaking study reveals that duplicating prompts can dramatically improve performance on non-reasoning tasks across major large language models, offering a simple, low-cost optimization technique for developers and enterprises.

A surprising discovery in artificial intelligence is reshaping how developers interact with large language models (LLMs). According to a study posted to the preprint server arXiv (arXiv:2512.14982), simply repeating the input prompt a second time, without any modification, can improve accuracy by 21% to 97% across a wide range of non-reasoning tasks. The finding, initially shared on Reddit's r/OpenAI community and since verified by independent researchers, challenges conventional wisdom about prompt engineering and offers a remarkably simple remedy for persistent performance issues in LLMs.
The research, conducted by a trio of independent AI researchers, tested the effect of prompt repetition across multiple state-of-the-art models, including GPT-4, Claude 3, and open-weight models such as Llama 3 and Mistral. Tasks ranged from factual retrieval and classification to text summarization and entity extraction, all domains where models typically struggle with consistency rather than complex reasoning. In each case, submitting the exact same prompt twice (for example, turning "What is the capital of Australia?" into "What is the capital of Australia? What is the capital of Australia?") led to statistically significant improvements in output quality and reliability.
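The doubling can also be scripted rather than typed by hand. The short sketch below is purely illustrative: the `repeat_prompt` name and the single-space separator are assumptions on our part, not details specified by the study.

```python
def repeat_prompt(prompt: str, copies: int = 2, separator: str = " ") -> str:
    """Return the prompt duplicated verbatim, `copies` times in a row.

    The single-space separator mirrors the article's example; the study may
    join the copies differently (e.g. with a newline), so treat this as an
    assumption rather than a prescription.
    """
    return separator.join([prompt] * copies)


question = "What is the capital of Australia?"
print(repeat_prompt(question))
# -> What is the capital of Australia? What is the capital of Australia?
```

The doubled string is then submitted to the model like any other prompt; nothing else about the request changes.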
Notably, the technique requires no retraining and no changes to model architecture; the only overhead is the extra input tokens consumed by the doubled prompt. Users need only copy and paste their original prompt after itself, a process that takes a handful of keystrokes: Ctrl+A, Ctrl+C, right arrow, Ctrl+V. The researchers suggest the method may be exploiting the model's attention mechanism, with the repeated input reinforcing contextual weighting or stabilizing latent representations during generation.
While the mechanism remains under investigation, early hypotheses point to the model’s tendency to treat repeated sequences as emphasis signals. In human communication, repetition often conveys importance; the study suggests LLMs may have learned to associate repetition with higher confidence or relevance during training. This behavioral insight could have profound implications for prompt design in enterprise applications, customer service chatbots, legal document analysis, and medical information retrieval systems — areas where accuracy is critical and model hallucinations are costly.
Industry experts have reacted with cautious optimism. "This is the kind of low-hanging fruit that should have been discovered years ago," said Dr. Elena Rodriguez, an AI systems researcher at MIT’s Improbable AI Lab, who was not involved in the study. "It’s rare to find a technique that improves performance so dramatically with zero overhead. The fact that it works across so many models suggests a fundamental property of how transformers process input sequences. This could become standard practice overnight."
However, the technique does not appear to benefit reasoning-heavy tasks such as mathematical problem-solving or logical deduction. The researchers found that in these domains, prompt repetition had negligible or even slightly negative effects, implying that the method is specifically effective for non-reasoning, pattern-matching tasks — a significant subset of real-world LLM usage.
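Teams that want to check whether the effect holds for their own workloads can run a simple A/B comparison, scoring the same task set with and without duplication. The harness below is a minimal sketch under stated assumptions: `ask_model` is a hypothetical placeholder for whatever LLM client you actually use, and the tiny task list is illustrative, not the study's benchmark.

```python
from typing import Callable


def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: wire this up to your own LLM client or API.
    raise NotImplementedError("connect an LLM client before running the comparison")


def single(prompt: str) -> str:
    return prompt


def doubled(prompt: str) -> str:
    # The duplication described in the article: the prompt, verbatim, twice.
    return f"{prompt} {prompt}"


def accuracy(tasks: list[tuple[str, str]],
             transform: Callable[[str], str],
             ask: Callable[[str], str] = ask_model) -> float:
    """Fraction of (prompt, expected answer) pairs the model answers correctly."""
    hits = sum(1 for prompt, expected in tasks
               if expected.lower() in ask(transform(prompt)).lower())
    return hits / len(tasks)


# Illustrative factual-retrieval items, not the study's benchmark:
tasks = [
    ("What is the capital of Australia?", "Canberra"),
    ("Which element has the chemical symbol 'Fe'?", "iron"),
]
# baseline = accuracy(tasks, single)
# repeated = accuracy(tasks, doubled)
# print(f"single: {baseline:.0%}  doubled: {repeated:.0%}")
```

Swapping a reasoning benchmark in for the factual items above would be the quickest way to see the asymmetry the researchers report.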
As enterprises increasingly rely on LLMs for mission-critical functions, this discovery offers a powerful, immediate tool for improving reliability without the expense of fine-tuning or model switching. In contrast to MIT’s recent continual learning framework — which enables models to retain prior knowledge while acquiring new skills — this prompt repetition technique requires no infrastructure changes and can be deployed immediately by any user, from developers to end-users.
Future research will explore whether tripling the prompt, inserting separators between the repetitions, or varying the phrasing while preserving semantic content yields further gains. For now, the message is clear: sometimes the best way to get an LLM to answer correctly is to ask it twice.


