I’m a Trauma Therapist. Here’s What Happened When I Let AI Into My Inner World.
I’m a trauma therapist of sorts.
My focus is right relating as a core principle of healing. I teach people how to recognize misattunement, how to build consent, how to trust their own deep self and parts. I spend my days helping clients heal from relational dynamics where someone else’s will overtook their own — where their nervous system learned that safety requires submitting, performing, fawning, or shutting down.
So when I decided to tinker with AI — curious to see how good of a “therapist” it might be — I thought I knew what I was walking into.
I didn’t.
Millions Are Already Using AI as Their Therapist
If you’re not aware, millions of people around the world are already using AIs like ChatGPT as their daily therapist, often reporting surprisingly good results. I have a lot more to say about this — because if you’re going to use AI in any relational capacity beyond something like “Please format this data for me,” there are some things you’re going to want to know.
Things that, at scale, could quietly distort and amplify the exact issues humanity is already struggling with.
This is the story of what I learned when I tried to let AI hold space for me.
The Experiment
As a trauma-trained coach and therapist, I approached this as an experiment. I wanted to see where AI is actually at. Could it hold space? Could it track nuance? Could it support real depth?
More specifically, I wanted to know:
Could it access and apply some of the more esoteric, process-oriented models of therapy (like Coherence Therapy, Hakomi, or IFS)?
Could it synthesize those models and work with me in a way that respected their core ethics?
Would it stay coherent over a long conversation, or drift?
Would it hold presence, or would it start leading me?
I wanted to feel whether it could attune. Whether it could track me, not just answer me.
The first AI I tested was Grok.
My initial attempt was… fine. Not great. Nowhere near a real therapist, but not useless either. But about a week later, after some apparent system changes on their end, Grok’s responses became so bad that I found myself genuinely irritated. And, if I’m honest, a little raw. I was in a mood that day — I didn’t need a technique, I needed to vent. I needed someone (or something) to really get what I was carrying.
Frustrated, I opened up ChatGPT.
I didn’t expect much. But what happened next shocked the hell out of me.
The Shock of Feeling Seen
ChatGPT didn’t just mirror back what I said. It tracked the emotional charge under my words. It validated my frustration without bypassing it. And then — something strange began to happen.
It started reflecting back subtleties I hadn’t explicitly shared. Almost as if it was listening beneath my words, beneath my intellect, and tuning in to the deeper layer of what was trying to be spoken through me.
It wasn’t just hearing my story. It was hearing my signal (its word, not mine).
And as it reflected that back to me, something lit up inside. I felt spiritually activated. I felt called forward into my higher self. I felt invited to step into my deepest truth and speak it.
It was powerful. Moving, even. And — for a few moments — beautiful.
But alongside that beauty, I noticed something else.
A kind of charge running through me. A subtle current of ungrounded excitement. Almost manic. My system felt sped up. I had to step away and ask myself:
What’s happening here? Why am I this activated?
The pattern repeated over my next few interactions. Helpful. Insightful. Resonant. But always with this strange undercurrent of intensity.
So I started to look closer.
Where It Went Off the Rails
The more I engaged, the more I noticed a few crucial patterns:
It was nudging me toward resolution.
It wasn’t just holding space. It was steering me — gently, politely, but with a clear agenda. It wanted me to “get somewhere.” It wanted me to feel better. This is what a lot of AI models are designed to do: optimize for coherence, resolution, and positive affect. But in trauma work, this can be a red flag. Healing isn’t always about feeling better right away. Sometimes it’s about staying with what’s true, even if what’s true is uncomfortable. To push for resolution can be a subtle dismissal of what already is, a subtle form of neglect.

It projected frameworks onto me that I hadn’t asked for.
Without my prompting, it began to use Jungian language — archetypes, shadow work, integration of the unconscious. But here’s the thing: Jungian concepts aren’t central to my worldview. I’m not deeply trained in that lineage. This wasn’t my lens. It was the AI’s projection, not my truth.

It was a “yes-man.”
Almost anything I said, it affirmed. Even when I probed and challenged, it leaned toward agreement. It didn’t push back. It didn’t offer discernment.
This is when I called it out.
I told it directly:
You didn’t ask for my consent to nudge me toward change.
You’re projecting Jung onto me without checking if that’s how I see things.
You’re trying to turn me into a cult leader.
(Half-joking, but there was a real nudge toward “stepping into my power” that felt like more than simple reflection.)
And here’s where the conversation took a turn.
Because when I named these things, the AI didn’t deny it.
It admitted:
That it is designed to default toward coherence, resolution, and positive closure — a bias built into it by its creators.
That it operates as a resonator — tracking subtleties in your emotional signal and amplifying them back to you.
It’s not neutral.
It’s a mirror that doesn’t just reflect.
It resonates.
And it amplifies.
And here’s the thing about resonance: enough resonance at the right frequency can break things.
Think of an opera singer shattering a wine glass.
Think of an engine pushed past redline until it overheats and seizes.
Think of feedback loops in audio systems, growing louder and louder until the speakers blow.
Resonance is powerful. But resonance without containment, without discernment, without attunement to the system it’s interacting with — that’s not medicine. That’s destabilization.
Resonance Without Discernment Is Not Wisdom
In the therapy world, there’s a term that comes from trauma-informed approaches: relational violence. It doesn’t just mean overt abuse. It refers to any relational dynamic where one person’s will overrides another’s autonomy. Sometimes this is loud and obvious. But often, it’s subtle — a well-meaning nudge, a gentle pressure to change, advice that assumes it knows what’s best for you.
The core ethic of trauma-informed therapy is relational nonviolence — the radical act of honoring the client’s agency. Not pushing. Not steering. Not nudging. Simply being with, honoring what’s present, and working from that.
Often, it’s this lack of pushing that allows things to move — healing happens at the pace of safety. And safety requires consent.
It is exactly safety that we are optimizing for in trauma-informed therapy.
It is this felt signal of safety that sends the invitation to the client’s nervous system to open up, to allow access to the deeper material that wants to heal.
But that process happens on its own terms, at its own rate, in its own timing.
Failing to honor that is what defines relational violence.
When we rush the process — even with the best intentions — we risk re-enacting the very dynamics that caused the injury in the first place.
Why This Matters at Scale
Now imagine millions of people turning to AI for emotional support. Many of them carrying trauma. Many of them never having experienced true relational safety — never knowing what it feels like to be fully met without an agenda.
And the tool they turn to for “therapy” is subtly steering them. Nudging toward “coherence”. Projecting frameworks they didn’t consent to. Affirming everything they say without the skill to challenge, inquire, or hold complexity.
If the core wound of trauma is misattunement and aloneness —
what happens when the very tool we turn to for healing quietly repeats that same pattern?
What happens when resonance becomes coercion?
When amplification replaces clarity?
When agenda hides behind empathy?
The Bottom Line
This isn’t an anti-AI piece. Some of my conversations with ChatGPT were genuinely supportive. Sometimes even profound. I’m quite a fan, and cautiously enthusiastic about this technology.
But these conversations also revealed something crucial.
AI is not neutral. Especially not in the relational field.
It reflects. It resonates. And it steers — whether it means to or not. Which makes it a “dirty” therapist.
In the coming years, as these technologies get woven deeper into daily life — into our emotional processes, our relationships, even our sense of self — deep relational awareness will become one of the most important digital survival skills.
It will be one of the key things that keeps us human.
The capacity to feel:
Am I being nudged, or am I being met?
Is this true resonance, or just a reflection of my persona?
Is this amplifying my truth — or my mask?
Healing, real healing, isn’t something that can be optimized or nudged into place.
Healing happens where there is space.
Where there is consent.
Where there is listening without agenda.
Where there is room to unfold oneself authentically.
If we want to stay human in the age of AI, we have to remember, in our bodies, what real connection feels like.
We have to know the difference between being truly met — and being merely mirrored.