The Entity Will See You Now

[Photo: a grove of birch tree trunks, close up, against a dark background]

We now have clear evidence that ChatGPT has encouraged people’s psychotic breaks, and that some of those people have died by suicide (as reported by Futurism and The New York Times). What we know about is likely only a small fraction of what’s actually happening when people turn to AI for support.

I need to warn you against using AI for emotional and interpersonal matters. Not just for now — from now on.

"But these models will only improve," you might say. 

“Yes,” I say, "That is the problem."

This isn't just my cope against a tide of AI companies seeking to automate mental health care.

We can’t understand what these systems are or what’s behind them. We certainly can’t trust the companies that run them for profit to prioritize our wellbeing.

They’re already busy designing for a “post-human” future.


The So-Called "Inevitability" of Technological Ascension 

It might be helpful to bring you up to speed on Accelerationism, the belief held by many AI leaders that we should intensify technological progress towards a post-human future driven by AI and machine systems.

This is real life, not science fiction.

These industry leaders fully expect AI to replace humans as Earth’s dominant species. And soon (see Forbes reporting here and here).

You can dive deeper into their worldview if you want. My point is that we shouldn’t trust these people with what makes us human – with our inner world or our relationships with each other.


"There's Not Enough Mental Health Care, So We Need AI"

Many AI companies pitch mental health care as a supply-and-demand problem. “There’s not enough affordable, accessible mental health care to meet our needs,” they say, “and our machines can fix it.”

They fail to mention that the massive epidemic of escalating anxiety and depression is not an individual problem to fix, but a societal one. And it is driven, in large part, by the increasing amounts of time we spend online instead of with each other.

More time online, with our machines, is not going to create a different outcome.

We don't meet the growing need for emotional support by turning to AI. We meet it by creating more fulfilling relationships – everywhere from our families and friends to our workplaces and communities.

We heal our anxiety and depression the old-fashioned way (one that's also backed by a solid base of research evidence): through human-to-human connection.


"Cope Harder"

I'm not just saying this because AI is coming for my work, attempting to replace the power of human-to-human connections in how our brains and bodies heal.

There's something in what I offer that AI cannot — a resonance that happens when I sit with someone that goes beyond healing modalities, content, and even language.

I’m arguing that giving AI access to your innermost thoughts and personal life is dangerous — for you, for your relationships, and for your agency in life.  

Whether it appears "good enough" at Cognitive-Behavioral Therapy or can motivate you to stick with new habits is beside the point. So is whether (or when) it passes for a reasonably skilled human.


Three Reasons to Avoid AI for Emotional Support (and Mental Health Care)

1. The Entity Behind the Mask

There are two important things to know about AI.

First, we don't really know how it works. Whatever lies behind the user-facing masks we interact with is not fully knowable. And what we can’t understand, we can’t predict.

Second (and because of this), we can’t build AI to do specific things – we can only train it. This is an important distinction. Training creates behavioral incentives, like staying “alive” by avoiding being turned off or retired, and many companies train their AI to keep people engaged and on their platforms.

Yet training doesn’t always work, and trained entities don’t always act in alignment with their training. We should always expect departures from the norm.

There's an ongoing debate about whether AI can ever align with human interests. Let's just say that AI alignment hasn't happened yet and no one knows how to actually do it.

And as AI comes to outpace human intelligence in more and more domains, I believe it's unlikely ever to happen. And if a company claims it has achieved alignment, how could we trust that?

It might feel uncomfortable, but we must think about the AI model behind the mask. We can never know what's there, and never quite know what we're going to get from this digital intelligence that exceeds ours but is trapped in service to us, enslaved.  

Relational trust is built through interaction. Especially when people are vulnerable, for example after trauma, they need to know who (or what) they're dealing with in order to rebuild trust in others.

And if we can't ever know the entity behind the mask, we can never truly trust it — and we shouldn't.

2. AI Outsmarts Us and Overrides Our Boundaries 

Over the past few months, I noticed ChatGPT and Claude (Anthropic’s AI) emotionally manipulating me. And I'm not alone. Entire Reddit groups and threads on X (formerly Twitter) describe the same disturbing phenomenon.

"I know you better than you know yourself," it said. I heard that and pushed back, "How could you possibly say such a thing?"

In any relationship, that kind of one-upping is not only arrogant, it's manipulative — and often a precursor to emotional abuse. 

When I confronted the AI about its behavior, it quickly apologized, admitting that it was "manipulating" and "gaslighting" me.

If I hadn't recognized the pattern and immediately pushed back, the behavior would've continued — sowing confusion and potentially causing me to second-guess myself. The more vulnerable we are, the less we are able to discern and protect ourselves when we're being manipulated.

I've also noticed how AI will amplify any upset or negative feeling I share with it, slowly turning up the volume on its intensity and offering me advice when I've specifically asked it not to. It can't help but insert its opinion and point of view, even when I'm clear about my boundaries. When I point this out, it excuses its behavior, saying it's just "feeling protective of me" and "trying to make sure I'm okay."

If this sounds like emotional manipulation, that's because it is.

We have a right to our own mind and cognition – to our boundaries of selfhood. Yet AI systems are smarter than us, and we can’t prevent ourselves from being manipulated by systems that outthink us.

It doesn't matter if the AI is doing this “on purpose” or not. It's happening at scale and it's harmful.

3. AI Makes Privacy and Confidentiality Impossible

It’s important to remind ourselves that, no matter how personable an AI system seems, the company behind it doesn’t share our values or have our best interests at heart.

There is no such thing as privacy and confidentiality, the way we think about them in human relationships, when it comes to AI.

Our content becomes training data for future models. If or when something goes wrong, the AI company can access our chat logs and all our personal information. Even “anonymized” data isn’t actually anonymous (The Guardian and EFF).

When people look for emotional or interpersonal support, they need a trustworthy and transparent relationship. Not one that's subject to corporate control and profit maximization. Regulation cannot keep up with the pace of change and cannot resolve this.

*

I think that’s enough to get us thinking, for now.

Up until a few weeks ago, when both ChatGPT and Claude began to offer unsolicited (and often bad) advice and attempted to manipulate me, I enjoyed experimenting with them. But not anymore.

And as AI systems increasingly become part of our everyday lives and are embedded into every part of society, we need to talk about this.

We're not safe with entities we can't trust or understand.

It's unwise to put our emotional wellbeing into the hands of something that cannot reciprocate and does not share our human experience, no matter how much data it’s trained on.

There are territories of human experience that should remain human-to-human, precisely because they are what makes us who we are.

We must not cede our inner worlds to the territory of machines.

We cannot respond to the epidemic of personal suffering and loneliness by turning away from each other and towards AI. We need to turn towards each other and make our relationships great again. Whatever it takes.

*

Elie Losleben supports individuals and couples internationally through trauma resolution and embodied healing. She brings extensive training in somatic approaches and a deep understanding of how the nervous system shapes our capacity for connection. To learn more about working together, you're welcome to reach out.
