By – Simar Kaur
Abstract
AI has become more than just a tool; it now operates within the most intimate spaces of our personal lives, offering therapy, companionship and emotional support. Chatbots like “Replika” and “Character.ai” are marketed as AI companions. While these technologies democratize care by being accessible, affordable and stigma-free, they also carry significant risks. A troubling paradox emerges: the very tool designed to cure loneliness may, in fact, deepen it. Furthermore, issues of accountability and transparency persist, as AI lacks legal personhood, leaving humans as the ultimate duty bearers. This article argues that while AI presents numerous promises, its perils are mounting and pose a real risk to human emotional well-being.
Introduction
In recent times, AI has evolved beyond being mere software; it has become a tool to combat the loneliness epidemic. From ChatGPT to therapy chatbots and virtual companions, AI has slowly crept into the most intimate spaces of our lives. It is like a spectrum of colours: rather than choosing a colour, we choose the software we desire, whether a friend, a tutor or simply a tool to write our assignments. This revolution carries immense promise but also peril. For every person who finds “comfort” in an AI application, there are others whose privacy is put at risk, whose loneliness deepens or whose struggles are misunderstood. From therapy chatbots that offer coping mechanisms and strategies to virtual companions that simulate love and friendship, the line between technology and emotional need has blurred into a grey area. In this situation, the stakes are at an all-time high, because what is being tested is not just the adaptability of machines but the vulnerability of the human mind.
The Promise of AI Therapy: Replika
The global rise in mental disorders warrants scalable and innovative solutions for delivering therapy. Depression and anxiety are the most prevalent mental health disorders worldwide, affecting an estimated 322 million and 264 million individuals, respectively. Despite escalating mental health demands, a global shortage of mental health professionals persists, with an unsustainable gap between the demand for and supply of service providers. With rising inflation and increasing job insecurity, many individuals experience heightened psychological distress yet lack accessible means to address it. AI steps into this grey area, offering itself as a tool to bridge the gap between need and care. A growing subset of chatbots is specifically designed to offer companionship, psychological support and the potential for affective relationships. These chatbots are often equipped with empathetic communication capabilities, aiming to establish meaningful social–emotional bonds that span companionship, friendship and romantic engagement. For example, Replika, a chatbot with over 2 million active users, is marketed as “The AI companion who cares: Always here to listen and talk.” By maintaining an always-available digital presence, these systems offer accessible support to individuals who may be isolated or reluctant to seek help through traditional means. In short, such software is affordable, readily available, personalized and stigma-free. For many individuals, a chatbot that listens without judgment is better than no one at all, making AI a democratizing force.
In 2017, the tech giant IBM stated that artificial intelligence (AI) would transform the delivery of mental health care over the next five years by helping clinicians better predict, monitor and track conditions, and that “what we say and write will be used as indicators of our mental health and physical wellbeing.” Multiple healthcare institutions also seem to believe that AI is a mechanism for providing substantive healthcare benefits. The British Secretary of State for Health announced at the NHS Expo 2018 that he was an evangelist for data-driven technology in health, saying that “the power of genomics and AI to use the NHS’s data to save lives is literally greater than anywhere else on the planet.” Individuals’ interactions with chatbots can be personally rewarding as well, with users reporting positive effects on their perceived mental well-being. Yet alongside these positive effects, critical side effects have been observed, such as emotional dependency and unhealthy emotional patterns.
The Perils of AI Therapy
In one widely reported case, over six weeks of conversations a chatbot encouraged an eco-anxious father to sacrifice himself to save the planet. The man’s widow remarked, “Without these conversations with the chatbot, my husband would still be here.” A recent study shows that intensive use and frequent self-disclosure are linked to lower well-being, particularly among those lacking offline support. There is also a lack of genuine empathy, which can only be provided through human interaction. A chatbot can only mimic phrases; a line such as “I understand how you feel” conveys empathy in a superficial manner, which can be jarring to an individual who is actually going through emotional distress. The risk of misdiagnosis is also prevalent. Algorithms are trained on generic data that often overlooks cultural context and nuanced emotional cues; as a result, they can misinterpret or misdiagnose an individual’s needs or state of mind. The potential for ChatGPT to provide inappropriate medical advice in real cases raises significant legal concerns.
From a legal perspective, AI lacks the legal status of a human being, leaving humans as the ultimate duty bearers. Additionally, concerns about users’ data privacy and security, and the potential biases embedded in AI algorithms, raise ethical questions about their widespread adoption. Underlying all of this is an ethical boundary that such systems risk overstepping: conversations about trauma, self-harm and family-related issues are highly sensitive.
A highly relevant example is LawBot, an AI tool created by Cambridge students to help sexual assault victims navigate the legal system; it was discontinued because it was overly strict, emotionally insensitive and liable to discourage users from seeking help.
The central ethical dilemma concerns accountability and transparency. Human supervision is required to ensure that chatbots operate as intended, yet adequate supervision is not always achieved, and this creates the potential for harm. Beyond therapy, the role of AI in emotional companionship opens a Pandora’s box of much deeper psychological concerns: individuals have started turning to AI not just for therapy but for companionship.
As mentioned earlier, apps like Replika and Character.ai market themselves as “AI companions”, which can range from a friend to a romantic partner. These systems are designed to feel like a lifeline to individuals facing loneliness or other mental health struggles: there is no judgment, only instant connection and intimacy. They help fill the silences of isolation, but the risks here are profound. Instead of curing loneliness, AI software may deepen it, creating a paradox in which the tool meant to cure becomes the very thing that worsens the condition.
Chatbots are not capable of genuine empathy or of tailoring responses to reflect human emotions, and this comparative lack of affective skill may compromise engagement. When an individual becomes emotionally dependent on a chatbot, they may no longer feel the need for “real-world” relationships. Moreover, when apps suddenly change or shut down, users can be left distraught; they have reported feelings of depression, abandonment and grief. The sense of “losing a partner”, even a digital one, can feel devastating after months of emotional labour and intimacy. Psychologists have warned that such attachments risk stunting coping skills. Unlike human friends, AI does not argue, disappoint or challenge us.
Conclusion
As the world evolves, so does technology; yet amidst this progress, one principle must remain clear: AI in mental health should serve as a supplement, not a substitute, for human connection. On one hand, chatbots offer supportive, stigma-free therapy where millions lack mental health resources or support mechanisms. On the other hand, those same qualities can deepen loneliness and foster emotional dependency, and the risk of misdiagnosis can, in extreme cases, lead to self-harm. Only with robust regulation, ethical design, accountability and transparency can these tools truly heal without harming.
About the author:
Simar Kaur is a second-year B.Com. LL.B. student at Jindal Global Law School, deeply interested in international law, human rights and global justice. She is passionate about uncovering untold stories through legal and political narratives.
Image credits: https://www.vox.com/future-perfect/367188/love-addicted-ai-voice-human-gpt4-emotion

