The AI in the Room

I'm in a cafe with my friend Alex, enjoying coffee and dessert. I have zoned out a little because the conversation has been going in circles, but Alex saying, “I know he is not good for me, even ChatGPT said he is toxic,” pulls me back into the present. We keep going back and forth about Alex's rather fishy situationship, the kind that leaves you emotionally exhausted and wondering what actually happened. I try my best to pour some reason into her, but somehow the only argument that convinces her is the one made by artificial intelligence. Perhaps the novelty of these arguments makes them conversation stoppers, or rather, conversation pauses.

It's quite baffling that a friend's advice doesn't make the cut, but an AI-generated response sounds legitimate. It made me feel like less of a good friend in that moment. Why can AI make a convincing argument when I can't? Maybe because AI feels more analytical: it digests a huge amount of data and spits out an “average” of what it has consumed. It feels unbiased, even though that is completely untrue.

This cafe chat made me think about a profound shift in our broader society, because Alex is not the only one who has turned to an artificial voice for advice. The demand for mental health support, in whatever form, has been rising as the public conversation around it has become more open.

I'm going to be honest: I also occasionally use ChatGPT as a sounding board for my personal issues. It can feel like an easy, low-pressure way to get an answer, see another perspective, or find quick relief from anxiety. For whatever reason, admitting that you turn to AI for help in personal situations feels shameful. As if you don't have the means to talk it all out with a real human. In a way, it feels like a failure to solve your issues the “right way”, by asking other people for help. It highlights loneliness, maybe not in life overall, but in that specific moment. I can be crying in my room at 2 am over a text message with no one to vent to at that hour. So yes, it is lonely. Another reason it causes shame is that it feels like cheating on human intimacy. I could have called a friend or my parents, but I didn't. I wanted quick relief. I know it's not real empathy and that it's all pattern recognition, but in the moment, that's irrelevant. A kind voice is enough.

And Alex and I are not the only ones who do it. But why? Why do we feel comfortable sharing these personal matters with a machine? And why do its responses feel comforting and legit?

Almost as soon as LLMs became accessible to the general public, personal therapy emerged as a popular use case, and using AI as a sounding board for personal problems and stories follows the same trajectory. The uses vary widely – from managing depression and anxiety to something as simple as analysing a text message from someone and drafting a response.

For many people, therapy is out of reach for financial reasons. The use of AI can be seen as an attempt to democratise mental health support and make it more available. AI agents are, in most cases, free and, unlike personal therapy, do not have a time limit.

An important feature of AI is its 24/7 availability, so in an emergency like a panic attack (Rousmaniere, 2025) you can get some comfort immediately instead of waiting for a scheduled appointment with a human professional. Turning to AI for help is, of course, not ideal, but it is undeniably better than nothing and in some cases can be life-saving.

It's okay to just want advice on what to do instead of making the judgment yourself. Many people try outsourcing these decisions to a therapist, but giving direct advice is actually beyond a therapist's remit. So the next best thing is AI – it gives convincing answers and seems to deeply understand your specific situation.

The Eliza effect refers to the human tendency to attribute human characteristics, such as empathy or intelligence, to computer programs. Eliza, built in 1966, was the first ever chatbot, and it was designed to resemble a therapist. Initially it was an experiment meant to gauge how people would react and to explore human-machine interaction (Tarnoff, 2023). The expectation was that people would feel uncomfortable sharing something personal with a computer, but that proved completely wrong. Eliza didn't give thoughtful answers; it merely matched patterns, recognised words, and returned a “human-like”, if rather simple, reply. The most common interactions with the bot resembled a style of therapy in which the specialist encourages patients to reflect on their feelings by repeating their statements back to them. Eliza recognised a specific word in the user's input and asked a question back containing the same word; this persona was known as the “doctor script” (London Intercultural Academy, 2025).
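
To make that concrete, here is a minimal sketch in Python of the kind of keyword-and-reflection trick the doctor script relied on. The rules, wording, and example below are my own illustration, not Weizenbaum's original code.

```python
import re

# Illustrative ELIZA-style responder: look for a keyword pattern in the
# user's sentence and reflect it back as a question.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # fallback when no keyword matches

print(respond("I feel exhausted by this situationship"))
# -> "Why do you feel exhausted by this situationship?"
```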

The Eliza effect explains what is happening today, even though the way chatbots communicate has changed significantly. AI agents, in their default setting, are designed to sound agreeable and supportive. This creates a subtle dynamic of attachment, where the user feels inclined to share more and more personal information with the chatbot because it seems accepting and all-knowing. AI becomes this anthropomorphic being that somehow has answers to all your questions and always has your back.

Alison Darcy, a clinical research psychologist, argues that people disclose personal information to an AI faster than to a human because they don't feel judged (TED, 2025). Even if you are with a therapist who suits your criteria, you still take note of how you appear, maybe even perform to some extent, and filter what you are saying; human-to-human contact entails some kind of filter. AI takes away that performative element and fills a need for unconditional acceptance. Even though AI does not actually have empathy or compassion, it gives the impression that it does, which is enough for some people. Feeling more inclined to share personal information with AI is ironic in a way: while the AI doesn't judge you, it does collect your data, so it isn't really the safe space it appears to be.

The widespread interest in AI as a substitute for therapy, or simply as a sounding board for personal issues, points to a societal shift. Social media and the largely performative nature of an online persona have changed how we interact and how we trust others. Online interactions are steeped in judgment – you can't help thinking about how others perceive you online, not to mention that social media makes it possible for your content to be viewed by people far outside your real social circle.

Overall, the number of in-person connections has dwindled, which also makes each one more valuable. With fewer social connections, finding someone we can open up to is harder.

While chatbots can be used as a listener to cope temporarily with depression, anxiety, or loneliness, or as a tool for urgent support, they cannot replace a human specialist. Chatbots reproduce stigma around certain conditions, such as alcohol dependence and schizophrenia, and this bias could deter users from seeking real-life care. In some cases, AI models cannot respond appropriately to self-harm messages from users (Wells, 2025), and some of these failures have led to real-world tragedies. The biggest issue with LLMs is that the human element is missing: while AI models sound kind, they do not have your best interests in mind, and they lack a balanced perspective.

Back at that cafe, I silently judged Alex a little, even though I do similar things myself. Now that I understand the reasoning behind it, I feel more accepting. Many of us are just trying to manage our way through a lonely world, so maybe it’s best to be a little less judgmental towards each other.


Sources:

London Intercultural Academy. (2025). The story of ELIZA: The AI that fooled the world. https://liacademy.co.uk/the-story-of-eliza-the-ai-that-fooled-the-world/

Rousmaniere, T. (2025, December 18). Survey: ChatGPT may be the largest provider of mental health support in the United States. Sentio University. https://sentio.org/ai-research/ai-survey

Tarnoff, B. (2023, July 25). Weizenbaum’s nightmares: How the inventor of the first chatbot turned against AI. The Guardian. https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai

TED. (2025, May 19). The mental health AI chatbot made for real life | Alison Darcy | TED. YouTube. https://www.youtube.com/watch?v=IzTpuucqim0

Wells, S. (2025, June 11). Exploring the dangers of AI in mental health care. Stanford Institute for Human‑Centered Artificial Intelligence. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
