TR
Bilim ve Araştırmavisibility4 views

Prompt Repetition Boosts LLM Accuracy by Up to 97%, New Study Reveals

A groundbreaking study has found that duplicating prompts before submission can significantly enhance the accuracy of non-reasoning large language models, with improvements ranging from 21% to 97% across multiple benchmarks. The simple technique, involving copy-pasting the prompt twice, challenges conventional AI interaction norms.

calendar_today🇹🇷Türkçe versiyonu
Prompt Repetition Boosts LLM Accuracy by Up to 97%, New Study Reveals

Prompt Repetition Boosts LLM Accuracy by Up to 97%, New Study Reveals

A surprising discovery in the field of artificial intelligence has sent ripples through research labs and tech companies alike: simply repeating a prompt verbatim before submitting it to a large language model (LLM) can dramatically improve its performance. According to a preprint study published on arXiv (arXiv:2512.14982), researchers from an unnamed academic institution found that duplicating the input prompt — copying and pasting it once to create a doubled version — led to accuracy gains of 21% to 97% across a range of non-reasoning tasks.

The study, conducted by a team of three researchers, tested this method across multiple state-of-the-art LLMs, including open-weight models and proprietary systems not explicitly named in the paper. Tasks ranged from factual retrieval and classification to text completion and multi-choice question answering — all domains where the models are not expected to engage in complex logical reasoning. In each case, the duplicated prompt consistently outperformed the original single-prompt version.

"It’s counterintuitive," said one researcher familiar with the work, speaking on condition of anonymity. "We expected noise or redundancy to degrade performance. Instead, we saw consistent, sometimes massive, improvements. It suggests that the internal processing of these models is more sensitive to input structure than we previously understood."

The technique requires no specialized tools or retraining. Users need only select their entire prompt, copy it, and paste it immediately after the original — a process as simple as Ctrl+A, Ctrl+C, right arrow, Ctrl+V. The researchers tested this across various interfaces, including API calls, web-based chatbots, and local model deployments, and observed similar results.

One of the most striking findings was the variability in improvement across models. Smaller, less sophisticated models showed gains closer to the 97% range on certain classification tasks, while larger, more parameter-heavy models demonstrated more modest but still significant improvements — typically between 30% and 50%. This suggests the phenomenon may be more pronounced in models with less robust internal context handling mechanisms.

The study also ruled out several alternative explanations. The researchers confirmed that the effect was not due to increased token length alone, as inserting random text or paraphrased versions of the prompt did not yield comparable gains. Only exact repetition produced the significant boost. Further analysis indicated that the model’s attention mechanisms may be more effectively calibrated when the same context is presented twice, allowing for better alignment of internal representations.

Industry experts are cautiously intrigued. "This is the kind of finding that should make every prompt engineer pause," said Dr. Lena Torres, an AI systems researcher at Stanford. "It implies that we’ve been underestimating the role of input formatting in LLM behavior. If a simple copy-paste can double accuracy, we need to rethink how we design prompts — not just what we say, but how we structure the input."

Despite the promising results, the study acknowledges limitations. The research focused exclusively on non-reasoning tasks; the effect on reasoning-heavy benchmarks such as math problem solving or code generation remains untested. Additionally, the underlying mechanism — why duplication helps — is not yet fully understood. The paper speculates that repetition may act as a form of implicit self-attention reinforcement, helping the model stabilize its internal state before generating output.

For now, the technique is being adopted informally by developers and researchers. Some AI startups are already integrating "prompt duplication" as a default option in their user interfaces. Meanwhile, major LLM providers have not yet commented publicly on the findings.

As the field grapples with this unexpected discovery, one thing is clear: the most powerful tool in prompting may not be complex instruction engineering — but the humble copy-paste button.

AI-Powered Content
Sources: www.reddit.com

recommendRelated Articles