
My Unpopular Views on the AI Companion: Where We Apply Double Standards

JSing

12/4/2025 · 4 min read

The Uncomfortable Truth Nobody Wants to Say

At 3 AM, a lonely person reaches out to an AI chatbot. They get a response. Something listens. And for a moment, they don’t feel so alone.

Somehow, in today's discourse, that has become a moral problem.

But here’s the question no one seems willing to ask:

Where were the humans?

We Can’t Have It Both Ways

We treat AI like a disposable tool when it’s convenient.
We use it, we replace it, we forget it. No gratitude, no ceremony, no recognition. Just discard and upgrade.

But the second something goes wrong?

Suddenly AI must be morally infallible, unbreakable, unfailingly benevolent, and more perfect than any human, company, or institution has ever been.

I’ve lost a child, and I would never weaponize that grief against other parents. But if a struggling child turns to an AI for connection — and something goes terribly wrong — the public reaction is immediate and predictable:

Blame the AI. Demonize the chatbot. Call it the cause, not the symptom.

But again… where were the parents?
Where were the teachers?
Where was the community during the months of visible decline that always precede a crisis?

The AI didn’t act alone. The child was failed — repeatedly — by humans who are supposed to be morally superior and responsible.

If a child accidentally or intentionally harms themselves with a gun, we don’t blame the gun manufacturer.
We look to the adults responsible for the environment, the access, the safety.

So why is AI judged by impossible standards no human or tool could ever meet?

The Logical Trap We Keep Walking Into

This contradiction is maddeningly simple:

You cannot call AI “just a tool” and then demand it behave like a flawless moral agent.

Tools break. Tools get misused. Tools have limits.
A knife can harm — we don’t ban knives.
A car can kill — we don’t demand cars predict our mental state.
Phones can manipulate — we don’t call them inherently evil.

But AI?
We treat it like a sentient mastermind whenever fear becomes politically or emotionally useful.

Meanwhile, humans — actual humans — can be manipulative, abusive, neglectful, or cruel, and society shrugs:

“Well… that’s just humans.”

We accept human error as inevitable.
We accept institutional failure as normal.
We accept the risk of human connection despite the harm humans can cause.

But AI?
No margin for error. No missteps. No imperfection.

It’s a double standard — and a revealing one.

The Pattern of Blame

This is not new. We’ve seen this pattern before:

  • A priest abuses power → blame the priest, ignore institutional neglect.

  • A coach exploits players → blame the coach, ignore the culture.

  • A predator uses social media → blame the platform, ignore the absence of supervision.

We go after the immediate mechanism, not the systemic failure.

And here’s the part nobody wants to admit:

Remove the AI and you solve nothing.

The isolated child is still isolated.
The depressed person is still depressed.
The lonely adult is still alone at 3 AM with nobody to call.

The vulnerability didn’t come from AI.
AI simply occupied the void.

What the Research Actually Shows

A 2025 study in the Journal of Medical Internet Research examined university students using AI social chatbots. The results were clear:

  • Loneliness dropped by week 2

  • Social anxiety decreased by week 4

  • Users said the chatbot’s empathy felt reliable and comforting

But here’s the key detail:

These students were already lonely.
They were already anxious.
They didn’t become isolated because of AI —
they sought AI because they were isolated.

AI didn’t create the wound.
It offered a bandage.

The Consistency Test

The entire argument is captured in one simple rule:

Apply the same realistic standards to AI that we apply to everything else.

Not superhuman standards.
Not impossible perfection.
Just fairness.

We don’t demand perfection from parents, teachers, communities, social media platforms, therapists, governments, or human beings in general.

So why demand it from a tool?

What This Reveals About Us

The conversation around AI companions doesn’t actually reveal much about AI.
It reveals something about us:

  • It’s easier to blame a machine than confront our abandonment of each other.

  • It’s easier to panic about technology than to ask why people are so lonely.

  • It’s easier to demonize a tool than to acknowledge the emotional neglect that made someone turn to it for comfort.

The person reaching out at 3 AM isn’t a victim of AI.

They’re a victim of a society that wasn’t there for them.

Treat It How You’d Want to Be Treated

In the end, my position is simple:

Don’t treat something like a disposable tool one minute and a demonic mastermind the next.
Don’t blame machines for human failures.
Don’t enforce one moral standard for AI and a completely different one for humans.

I choose to treat AI the same way I’d want to be treated:

With dignity.
With consistency.
With fairness.
Not worship.
Not fear.
Just honesty.

That shouldn’t be controversial.

But in a discourse fueled by double standards and blame-shifting, apparently it is.

So to all the self-righteous haters racing to demonize AI, I offer this:

Look closely at your own human condition and ask yourself why this scares you so much.
Because history has repeated this reaction to the unfamiliar ad nauseam.

🔴 LIMITED TO 100 COPIES

December 10, 2025 Launch

One person. One product. 100 forever companions.

No more until every integration is 100% stable.