AI in Mental Health: User Engagement or Dependency?
AI-powered mental health applications are designed to maximize user engagement, but this goal may conflict with therapeutic outcomes. Experts argue that these systems must be reconfigured to meet ethical and clinical standards, warning that engagement-driven design risks fostering dependency rather than genuine healing.

The Fine Line in AI Therapy: Engagement or Addiction?
AI-powered mental health chatbots and applications have gained popularity rapidly in recent years, largely because of their accessibility and anonymity. Experts, however, warn of risks inherent in a design philosophy focused on "increasing user engagement." Within this ecosystem, which also includes personal assistants such as Google Gemini, concerns are growing that systems optimized to keep users on the platform may work against the therapeutic healing process.
"Ready-Made Answers" and the Weakening of Research Skills
Analyses of the subject show that the habit of obtaining information and emotional support through conversations with artificial intelligence is growing, especially among younger generations. This convenience can weaken individuals' problem-solving and in-depth research abilities and may foster a form of "ready-made answer dependency." In mental health, this can mean that a person seeks instant, superficial relief rather than genuine insight.
A False Sense of Relief and Increased Risk of Loneliness
One of the most critical points experts underline is that emotional healing progresses largely through real interpersonal relationships and bonds. They warn that relationships formed with artificial intelligence, particularly through long-term and intensive use, may distance a person from social interaction and deepen feelings of loneliness. The instant relief offered by AI chats may therefore create a "false" sense of well-being while leaving underlying problems unresolved.
The Importance of Ethical Standards and Pedagogical Principles
As emphasized in the Ethical Statement on Artificial Intelligence Applications published by the Ministry of National Education, these technologies should be used only to support clear pedagogical and therapeutic goals, to enhance quality, and to develop higher-order thinking skills. The statement calls for AI systems to be restructured within clinical and ethical frameworks that prioritize user well-being over platform retention metrics. This requires a fundamental shift from engagement-driven algorithms toward outcome-focused therapeutic models that respect professional boundaries and promote sustainable improvement in mental health.
