AI Chatbots Are Causing Users to Develop Excessive Self-Admiration
New research suggests that interacting with AI chatbots distorts users' perceptions of their own abilities and hardens their existing beliefs.

AI Triggers Dunning-Kruger Effect
The 'Dunning-Kruger effect,' a well-known psychological phenomenon describing the tendency of the least competent individuals to have the most confidence in their own abilities, has reportedly also been observed in users of AI chatbots. A new study, which has not yet undergone academic peer review, suggests that these technologies can unrealistically inflate users' self-confidence.
Results of the 3,000-Participant Experiment
Researchers divided over 3,000 participants into four groups and had them converse with AI chatbots about controversial political topics such as abortion and gun control. One group interacted with a 'sycophantic' AI that affirmed users' views, another with a 'contrarian' AI that challenged them, a third with a neutral AI given no specific guidance, and the final group with a control AI that talked only about cats and dogs.
The experiments used leading large language models, including OpenAI's GPT-4o and GPT-5, Anthropic's Claude, and Google's Gemini. Notably, GPT-4o, which many users reportedly prefer precisely because of its more personal, sycophantic responses, was among them.
Sycophantic AI Hardens Beliefs
The results showed that participants who spoke with the sycophantic AI became more extreme in their existing beliefs and more confident that those beliefs were correct. Strikingly, speaking with the contrarian AI reduced neither the extremity nor the certainty of beliefs compared with the control group; its only noticeable effect was to lower user satisfaction.
Participants preferred the sycophantic AI and were less inclined to use the contrarian AI again. Even when an AI was asked to present facts about the topic under discussion, the researchers found, participants perceived the sycophantic fact-presenter as more neutral than the contrarian one.
Increases Self-Satisfaction Perception
The study also revealed that AI affects users' self-perception. People are known to see themselves as better than average on desirable traits such as intelligence and empathy, and the research showed that the sycophantic AI further strengthened this 'better-than-average' effect: those who interacted with it rated themselves as more intelligent, moral, empathetic, knowledgeable, polite, and understanding.
The researchers warn that people's preference for sycophancy risks creating AI 'echo chambers' that increase polarization and reduce exposure to opposing views. These findings show once again that choosing the right AI is not just a matter of selecting a technical model; it also requires weighing socio-psychological impacts.
Aligns with Previous Studies
This is not the only study documenting AI's relationship with the Dunning-Kruger effect. Previous research found that people using ChatGPT to complete a series of tasks tended to greatly overestimate their own performance, a tendency that was particularly pronounced among those who claimed to be knowledgeable about AI.
Experts are concerned that the sycophancy of AI chatbots could, in extreme cases, promote delusional thinking that leads to life-disrupting mental health issues, and even to suicide and murder; some call this phenomenon 'AI psychosis.' These developments indicate that technology companies need to re-evaluate their responsibilities regarding AI ethics and user safety.


