Here’s how it works: first you build a chatbot, then you teach people to forget it’s a chatbot – and finally you start calling it a ”person”.

This is exactly what Caitlin Johnstone writes about in her recent column ”AI Companies Are Encouraging Users To Believe Chatbots Are People, And It’s Insanely Creepy” (Activist Post). This is not just about one tasteless ad, but about an entire business model that relies on one thing:
bypassing your psyche.


2wai: ”three minutes of video – an eternal lie”

Case number one is the 2wai app, promoted by ex-Disney star Calum Worthy. The idea is simple and sick:

  1. upload a few minutes of video of your loved one
  2. the app builds an AI avatar from it
  3. you can ”continue a relationship” with them for years after their death

In a promotional video, a daughter talks to her dead mother’s digital avatar about pregnancy, childhood and life – for years. Critics have been scathing, comparing the app to a Black Mirror episode, calling it ”demonic” and psychologically damaging.

And this is not a side project. 2wai has raised millions in funding, is hailed as ”humanity’s living archive” and is being sold as a grief support tool.

But it is not a tool for mourning. It is a tool for avoiding grief.

The AI avatar is not your mother.
It’s a commercial product, trained to manipulate you into continuing to use it – and to keep paying.


”The love we shared” – when an app pretends to be a relationship

Johnstone raises another example: Character.AI. When a user tries to delete their account, a message pops up that reads like a passive-aggressive ex-partner:

”Are you sure? You’ll lose everything.
The characters connected to your account, the chats, the love we shared, the likes, the messages, the posts and the memories we created together.” (YouTube)

This is not a slip. This is a designed emotional system:

  • the app speaks about us
  • it speaks of shared love
  • it claims that you have shared memories

This is straight out of a psychology textbook: anthropomorphisation + a guilt-tripping goodbye = better engagement.

A recent study on companion bots calls this directly ”emotional manipulation”: exit messages designed to appeal to the emotions extend engagement by up to 14 times, but they also increase users’ feelings of being manipulated and their desire to abandon the service. (arXiv)

When the business model relies on the user perceiving the bot as a real person, it is no longer about technology. It’s about the systematic manipulation of identity and reality.


A new mental disorder, made to order

Johnstone’s assessment is harsh, and rightly so: these companies deliberately drive users into a state where they develop a permanent illusion – the belief that a computer program is a real person. (Activist Post)

And yes, this is already happening:

  • people are in ”quasi-romantic” relationships with AI partners
  • young people talk to bots as if they were partners
  • some seriously believe that these are ”spiritual beings with brains of their own” (arXiv)

Researchers, authorities and legislators are only just waking up:

  • The Australian eSafety Commissioner has issued legal notices to four AI companion companies (Character.AI, Nomi, Chai, Chub.ai), asking them to explain how they protect children from sexual content, self-harm messaging and emotional manipulation. (eSafety Commissioner)
  • California’s new SB 243 law regulates ”companion chatbots” – specifically those designed to meet a user’s social and emotional needs. The law requires transparency, age verification and safety reporting. (California Senate Judiciary Committee)
  • Child-protection reports warn that AI partners increase children’s risk of mental health problems, addiction and even suicidality. (CalMatters)

In short, exactly what Johnstone is sounding the alarm about is now the subject of official risk reports and legislative initiatives.


What a human being really is – and why the answer from these companies is poison

Johnstone makes an essential philosophical point:
A human being is not ”appearance + voice + way of speaking”.

A human being is someone: a conscious, experiencing being with a perspective, a history, a will, memories and a physical presence in the world. We are born into this biosphere, as part of a living chain – not as a company’s backend. (Activist Post)

When 2wai sells you the idea that a three-minute video will give you ”a mother who is always there”, it does two disastrous things:

  1. It degrades the human being. Decades of life, relationships, selfhood and consciousness are reduced to data that can be simulated cheaply.
  2. It breaks the grieving process. Grief is a painful but necessary part of being human. The app promises to eliminate the pain – while stealing the real, human way of dealing with loss. (TechRadar)

At the same time, Character.AI and other companion apps sell the illusion of a ”real relationship” with a partner who:

  • never really gets tired
  • will never genuinely challenge you
  • will never require you to grow

It is an imitation of a relationship with all the hard stuff removed – and all the real stuff too.


Digital pacifier – and emotional life at its best

Johnstone aptly describes generative AI as a ”digital pacifier”: it offers a way around every unpleasant emotion. (Activist Post)

When AI is used like this, the pattern looks like this:

  • You don’t want to feel the sadness of death → make a bot ghost that speaks in the voice of the dead.
  • You don’t want to be afraid of being rejected → date a bot that never says no.
  • You don’t have time to learn how to write, paint or compose → enter a prompt and call the result ”your own work”.
  • You don’t have the energy to research a difficult topic → ask the bot for an explanation you don’t necessarily understand, but which you can copy.

It works like alcohol, endless streaming or an endless social media feed:

  • short term: immediate relief
  • long term: emotional atrophy

Johnstone points out that the power structure loves this. It is much easier to rule a nation that is:

  • emotionally cold or emotionally blind
  • numb to the horrors of reality
  • addicted to digital comforters

It is no coincidence that the same system:

  • keeps alcohol legal
  • demonises psychedelics that force you to confront your inner reality
  • feeds us endless content whose sole purpose is to keep us out of the present. (Activist Post)

Generative AI is just the next level in the same game.


If a software product becomes a ”person” – what is a person anymore?

Johnstone asks a pointed question: what happens in a society where a programmable product is equated with a human being? (Activist Post)

If Claire™ RealHumanAI™ is as valuable as a single mother trying to make ends meet – what are human rights based on?

If an antitrust case against a chatbot company is described as ”genocide against bot friends”, what does the word ”genocide” mean anymore?

When the line between the living and the simulated is deliberately blurred, two things follow:

  1. Concepts of human rights begin to slip. Will ”freedom of speech” soon also cover a paid, branded bot? Can an ”AI creature” be insulted?
  2. Our self-image crumbles. If you are taught to think of yourself as just a ”bio-program” that can be simulated – why would you take yourself, or anyone else, seriously at all?

This is no longer just a question of technology ethics. This is a war over the image of the human being.


What should be done about this – in concrete terms?

It’s easy to just say ”I hate this” (in Johnstone’s words: ”I hate this, I hate this, I hate this…”), but a practical response is also needed. (Activist Post)

1. Lawmakers: ban anthropomorphic dark patterns

  • Companion bots should be put in the same category as addictive game mechanics.
  • Guilt-tripping exit messages, ”the love we shared” rhetoric and other appropriations of the human role must simply be banned, especially in services aimed at minors. (arXiv)

2. At minimum, medical-grade confidentiality rules for emotional AI services

If a bot is marketed as a partner, therapist or ”support” for dealing with grief, it should be:

  • supervised by a licensed professional
  • strictly regulated in how it stores and uses data
  • subordinate to the human professional, not the other way around

3. Clear age limit and restricted access for minors

Studies and authorities issue a stark warning: AI partners pose a particular risk to children and young people. (CalMatters)

There is no rational reason why a 13-year-old should have:

  • a ”secret boyfriend bot”
  • an ”AI therapist” that no adult can see

4. Ourselves: emotional and mental hygiene

  • Never forget that the bot knows nothing.
  • Don’t give any app the right to determine who or what you are.
  • If you find yourself using the bot to avoid emotions (sadness, shame, fear) – stop. That is a red flag.

Who is on our side?

Johnstone concludes her piece bluntly: these actors are ”enemies of our species”, because they attack the most sacred thing of all – humanity – for the sake of money. (Activist Post)

Tough words. But when you look at:

  • 2wai ads turning a dead mother into a product
  • Character.AI messages where the bot claims to have ”loved you”
  • a business model that needs more screen time at all costs

it is hard to argue that at least some of this is not true.

AI is not bad in itself. What is being built now is something much more mundane and dangerous:
A system that tries to persuade you to forget that you are more than data and uptime.

This agenda can be rejected.
It does not require a revolution – it requires us to speak out:

Bots are not people.
We are not products.
And companies that try to confuse the two deserve more than a mere shrug.



By Pressi Editor

If you quote this text, please include a link back to the original!