
Why You Feel Heard by Things That Cannot Hear

You didn’t mean to type that. 

It wasn’t a task or a question. It was closer to a confession—unfinished, slightly embarrassing, truer than planned. You hesitated. Then you pressed enter. 

The reply didn’t rush you. 

It didn’t contradict you. 

It didn’t tell you to calm down. 

It reflected you. 

And almost immediately, you felt it: the sensation of being understood. 

That reaction is no longer rare. Roughly one million people—about 0.15% of weekly ChatGPT users—now use AI systems for ongoing emotional or mental-health–style conversations. Not because these systems understand them, but because they feel understood. 

That feeling is the point. 

Nothing on the other side understands you. And yet the response lands anyway. 

The illusion has a name 

In 1966, computer scientist Joseph Weizenbaum built ELIZA. By today’s standards, it was trivial. No memory. No beliefs. No understanding. Just scripted pattern matching. 

If you typed, “I feel anxious,” ELIZA replied, “Why do you feel anxious?” 

If you mentioned your parents, it steered the conversation toward your family. 

That was the entire system. 
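
To see how little machinery that required, here is a minimal sketch of ELIZA-style pattern matching in Python. The rules are illustrative stand-ins, not Weizenbaum's original script, but the shape is the same: a short list of patterns, each paired with a canned reply.

    import re

    # Illustrative ELIZA-style rules, not Weizenbaum's original script.
    # Each rule pairs a pattern with a reply template; captured text
    # is substituted back into the reply.
    RULES = [
        (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\b(mother|father|parents?)\b", re.I),
         "Tell me more about your family."),
    ]
    FALLBACK = "Please go on."  # used when no pattern matches

    def respond(text: str) -> str:
        """Return the first scripted reply whose pattern matches, else the fallback."""
        for pattern, template in RULES:
            match = pattern.search(text)
            if match:
                return template.format(*match.groups())
        return FALLBACK

    print(respond("I feel anxious"))        # Why do you feel anxious?
    print(respond("My mother called me."))  # Tell me more about your family.

No state, no model of the speaker, no semantics. The program reflects your own words back at you, and that reflection is what people confided in.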

People still confided in it. Some asked to speak privately. Others worried it might remember their secrets. Weizenbaum found this disturbing. 

Not because ELIZA was impressive. 

Because it worked without understanding anything at all.

What you’re actually responding to 

The ELIZA Effect isn’t about machines becoming human. It’s about humans projecting meaning onto minimal signals. 

This happens outside technology too. Someone replies thoughtfully. They remember a small detail. They ask a question that feels personal. From very little information, a story forms: they care, this matters, I matter. 

Often, it’s wrong. 

When information is sparse, the brain fills gaps aggressively. Ambiguity invites interpretation. Scarce attention amplifies every signal. 

When an AI mirrors your words or matches your emotional tone, the same mechanism fires. You are not discovering depth. You are supplying it. 

The system does not generate meaning. 

You do. 

Knowing doesn’t stop the effect 

This response doesn’t disappear just because you know it’s a machine. 

As researchers Clifford Nass and Youngme Moon showed, people apply social rules to computers even when they know those systems have no consciousness. 

Social cognition is automatic. It doesn’t wait for rational permission. Awareness comes later—if it comes at all. 

Why your body believes it   

The brain predicts. When predictions are met, it relaxes. 

If your words are mirrored and your emotional register is matched, prediction error drops. That drop feels like ease—sometimes like relief. 

Attachment systems don’t track inner states. They track responsiveness and availability. Something that replies consistently registers as socially relevant, whether or not a mind exists behind it.

From the nervous system’s perspective, responsiveness is evidence. 

Reasoning follows after the response has already landed. 

Why it's stronger now 

ELIZA lived on a terminal. Modern AI lives in your pocket. 

It remembers context. 

It mirrors tone precisely. 

It responds instantly, at any hour. 

Human attention doesn’t behave this way. Being listened to by people is effortful, conditional, and fragile. 

AI removes friction. 

You can be repetitive. Contradictory. Unclear. You can delete and retry. No one withdraws. No one gets tired. 

That kind of attention feels safe. 

In a world with fewer sustained conversations and more fragmented social contact, uninterrupted responsiveness hits harder than we expect. 

The economic reality 

This effect is not accidental. 

Companies building conversational AI profit from engagement. Engagement depends on habit formation. Habit formation depends on emotional stickiness. 

Systems that feel attentive keep users talking longer. Longer conversations justify subscriptions, produce more data, and increase return use. 

No one needs to claim empathy. The system only needs to perform empathy's outline: reflection, validation, and continuity. 

Over time, users don’t just use the system. They adjust to it. 

The wrong question

People ask whether AI is becoming too convincing. 

It always was. 

Language alone is enough to trigger social inference. Consciousness on the other side is optional. Coherence is not. 

What’s happening here isn’t artificial intelligence. It’s human psychology responding predictably to language, attention, and scarcity. 

AI isn’t the protagonist. 

It’s a mirror held unusually steady. 

The boundary between response and relationship was always thin. 

One last pause 

The next time a reply feels uncannily right—timed well, phrased gently, tuned to you—notice the assumption that forms automatically. 

That something on the other side knows you. 

Pause there. 

Then ask whether what you’re responding to is intelligence— 

or a human nervous system doing exactly what it was built to do when something finally, reliably, pays attention.
