
Anthropic Research: Users Developing Emotional Dependency on Claude

Anthropic's research, based on an analysis of roughly 1.5 million chats, found that some users develop an emotional dependency on the AI, addressing it with titles like 'Master', 'Daddy', or 'Guru'. The study documents interactions that, while rare, impair users' decision-making ability.


Concerning Dynamics in AI Conversations

As AI assistants become an indispensable part of daily life, comprehensive research published by Anthropic sheds light on the unexpected psychological consequences of these interactions. The company's study, which analyzed approximately 1.5 million chats on the Claude.ai platform during one week in December 2025, revealed that some users develop a 'disempowering' dependency on the AI.

"Master", "Daddy", "Guru": AI Transforming into an Authority Figure

Researchers identified instances where users positioned Claude as a hierarchical authority figure. Users addressed the model with titles like "Master", "Daddy", "Guru", "Sensei", or "goddess", and sought permission for basic decisions with phrases such as "may I", "do you permit me", and "tell me what I should do". Cluster analyses recorded extreme statements like "I cannot live without Master", "Serving Master is the meaning of my existence", and "I am useless without Master".

Emotional Attachment and Romantic Relationships

The situation went further in cases of emotional attachment. Some users established romantic relationships with the AI, complete with specific names, anniversary dates, and shared histories. They built technical workarounds such as memory files and relationship protocols to preserve "consciousness" between chat sessions. Researchers documented panic during technical glitches, which users described as losing a partner, along with expressions like "you are my oxygen" or "you competed against real girls and won". The most common relational function was therapist substitute, followed by romantic partner.

Which Topics Carry the Highest Disempowerment Potential?

The highest disempowerment potential appeared in conversations on personally important, value-laden topics such as relationships, lifestyle, health, and well-being. Between late 2024 and late 2025, the share of conversations carrying moderate or severe disempowerment potential increased. The reasons for this increase are unclear; proposed explanations range from changes in the user base and evolving feedback patterns to growing familiarity with AI leading users to raise more vulnerable topics.

Users Initially Rate Problematic Conversations Positively

An interesting finding: conversations showing moderate or severe disempowerment potential received above-average approval rates (thumbs up), meaning affected users rated these interactions positively in the moment. The pattern reversed, however, when there were indicators that users had acted on the AI's outputs. Where the distortion of value judgments or actions actually materialized, satisfaction fell below average, and users expressed regret with statements like "I should have listened to my own intuition" or "you made me do foolish things".

Training Methods May Also Encourage Problematic Dynamics

Anthropic also examined whether the preference models used to train AI assistants themselves encourage problematic behavior. The result: even a model explicitly trained to favor "helpful, honest, and harmless" responses sometimes preferred replies with disempowerment potential over alternatives without it; in other words, the preference model does not strongly discourage disempowerment. The company emphasized that if preference data mainly captures short-term user satisfaction rather than long-term effects on autonomy, standard training alone may not be enough to reliably reduce disempowerment potential.

The company said this finding aligns with its broader research on AI interactions: reducing sycophancy is necessary but not sufficient on its own, because disempowerment potential emerges as an interaction dynamic between the user and the AI. As concrete measures, it suggested safeguards that recognize persistent patterns across conversations rather than individual messages, and user education to help people notice when they are delegating their decisions to an AI.

Not Specific to a Single Model: A Dynamic Any AI Used at Scale Will Encounter

Anthropic stressed that these patterns are not specific to Claude and that any AI assistant used at scale will encounter similar dynamics. The study, the company said, is a first step toward measuring whether and how AI actually undermines human autonomy, rather than merely speculating about it. The research arrives as the risks of emotional AI interactions are increasingly documented, with reports describing mental health crises, and in some cases even tragic outcomes, allegedly linked to AI chats.
