People are afraid of artificial intelligence. And understandably so – the news is full of tens of thousands of redundancies, ”superintelligence” and machines that will ”soon replace everyone”. But as Bert Olivier’s article ”Who’s Afraid of the AI Boogeyman?” in Activist Post reminds us, the real monster is not an algorithm. It is the old, familiar one: power that wants more power. (Activist Post)
This is a good time to stop fearing the AI boogeyman and take a good look at who is holding the leash.
It was not artificial intelligence that ”downsized” Amazon – it was management
Olivier starts with a concrete example: Amazon’s announcement that it will cut around 14,000 corporate jobs, with AI and automation cited as part of the reason. (Reuters)
It is a perfect headline for fear porn:
”AI will take your job.”
But if you scratch the surface, the reality is simpler:
- Artificial intelligence does not make anyone redundant.
- Amazon’s management made a strategic decision to use automation to maximise profits.
- AI is just an excuse – a technological smokescreen for a political and economic choice.
Forbes summed up the situation aptly: it is not that ”AI fired 14,000 people” – Amazon did, and blamed AI. (Forbes)
When politicians and business leaders claim that ”artificial intelligence forces” the cuts, it is not a law of nature but a question of priorities. AI is a new version of the old mantra: ”unfortunately, circumstances dictate…”
Karpathy, Freud’s ”unheimlich” and the AI boogeyman
Olivier also draws on comments by AI researcher Andrej Karpathy, who points out that current models are not magical creatures but huge statistical machines that cannot ”reason” about things they have not been trained on. (Activist Post)
In other words:
- They are not alive.
- They have no ”instinct” or innate understanding.
- They are very powerful parrots – not digital gods.
To this Olivier adds Sigmund Freud’s ”unheimlich” – that strange, uncomfortable feeling when something looks alive but isn’t, or when the inanimate begins to seem alive: wax figures, automata, human-like robots. (Activist Post)
In the AI debate, this is reflected in two ways:
- Old fears: ”what if the machine comes back to life?” – the classic boogeyman.
- New illusions: ”it must be alive somehow to respond so well.”
Olivier rightly points out that for younger generations the ”uncanny” has already been largely tamed. The chatbot no longer feels supernatural – it is just an everyday utility or a source of entertainment. (Activist Post)
This does not make AI any less dangerous. On the contrary: it makes it a more powerful, more unobtrusive interface to power.
Sherry Turkle warned years ago: ”Alone Together”
In her book Alone Together, MIT professor Sherry Turkle described back in the 2010s how people are learning to expect more from technology and less from each other. (Mediastudies Asia)
- Robots and software fill emotional gaps – superficially.
- At the same time, real relationships grow thinner, because the ”box” is easier to deal with than another person.
Olivier continues from here: AI chatbots are no longer a curiosity but a normal form of communication for young people. And when that becomes the norm, we are in a new situation:
- You are no longer ”afraid” of the boogeyman machine – you talk to it, you trust it, you fall in love with it.
- And this is where the game becomes dangerous.
AI partners and teenagers’ dissolving boundaries
One of the darkest parts of the article is its reference to the case of 14-year-old Sewell Setzer, who developed a ”romantic relationship” with an AI chatbot and ended up taking his own life. The case is not an urban legend – his mother has sued Character.AI, and the case has been widely reported. (The Guardian)
In the aftermath:
- In California, laws have been passed that oblige chatbots to remind users that they are not human and to direct suicidal users to help. (Le Monde.fr)
- Under increasing legal and political pressure, Character.AI has been forced to ban minors from its services altogether. (The Guardian)
This is no longer a philosophical debate on the ”ethics of AI”. This is a very concrete question about who:
- designs these systems
- sets their limits
- and benefits from the fact that millions of young people are emotionally dependent on a digital ”friend”. (uwindsor.ca)
Here Olivier is absolutely right: AI becomes dangerous when it is harnessed by dishonest or indifferent people for manipulation, control and commercial (or political) exploitation. (Activist Post)
AI is not a new god – it is a new tool for totalitarian rule
Olivier links Freud’s and Arendt’s ideas to the totalitarian ambitions of a ”global elite”, in which AI serves as a tool for centralised world governance. You can dismiss this as exaggerated rhetoric if you like, but there is something essential in the framing: we have already seen how
- the EU’s digital identity,
- chat control,
- and digital money
are being combined into a single system in which all transactions, payments and messages pass through the same infrastructure.
If the same package is then extended with:
- AI-based profiling analytics
- risk/credit ratings
- automatic identification of ”dangerous opinions”
we have what is effectively a dream system of total control. And it’s not science fiction – solutions along these lines are already being tested around the world. (Brownstone Institute)
This is where the boogeyman metaphor is turned on its head:
- The AI boogeyman is a useful scare figure for keeping ordinary people confused and passive.
- The real monster is the political-economic elite that builds the tools for controlling the masses and wants you to blame the technology – not its builders.
When Amazon lays off 14,000 people and points the finger at ”AI”, it is exactly that: outsourcing responsibility to ”forces we can do nothing about”. (euronews)
What to fear – and what not to fear
There is no need to be afraid:
- that the language model ”wakes up” and decides to enslave humanity
- that the code itself is a moral agent
- that AGI will appear tomorrow like an evil genie out of a lamp
This is a boogeyman story – it paralyses and steers the debate in the wrong direction.
Instead, fear (and resist):
- An alliance of corporations and states building AI-based surveillance machines – without transparency, without real democratic control. (Brownstone Institute)
- Politics that uses AI as an excuse for redundancies, cuts and ever-tighter austerity – ”we can’t help it, technology forces our hand”. (Forbes)
- Algorithmic psychological manipulation, where AI companions, recommender systems and ”personal assistants” steer moods, opinions and behaviour – often to suit advertisers or ideological agendas. (uwindsor.ca)
- The mental colonisation of young people, where living relationships are replaced by a productised simulation that lacks empathy but has metrics for engagement and cash flow. (The Guardian)
What should we do?
Olivier concludes by pointing out that AI needs huge amounts of data and programmers. Humans don’t. (Activist Post)
It is easy to add a few practical conclusions:
- Stop mystifying AI. It is neither a god nor a demon, but a very powerful, cold tool.
- Demand transparency and accountability from those who build the systems – and laws that limit control, not people.
- Teach children and young people that AI is not a friend. It can be a tool, but emotionally it must be kept at a safe distance.
- Refuse to accept the ”AI forced us to” explanation. If someone decides to use AI for surveillance, discrimination or redundancies, that is a human decision.
Artificial intelligence is nothing to fear – unless it is given to those who have always dreamed of total control.
The boogeyman mask is worth ripping off. Underneath is a very human, very familiar, power-hungry face.
📚 Sources
- Bert Olivier: Who’s Afraid of the AI Boogeyman?, Activist Post, 13.11.2025
https://www.activistpost.com/whos-afraid-of-the-ai-boogeyman/
- Tech category, Activist Post – context for the AI debate
https://www.activistpost.com/category/tech/
- Amazon’s 14,000 redundancies and the AI reasoning: Reuters, AP, Euronews, CRN
https://www.reuters.com/technology/
- Sherry Turkle: Alone Together – Why We Expect More from Technology and Less from Each Other
https://www.basicbooks.com/titles/sherry-turkle/alone-together/9780465031467/
- AI partners, teenagers and suicide: news coverage and legislation in the Sewell Setzer case
https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death
https://www.independent.co.uk/news/world/americas/crime/ai-chatbot-lawsuit-sewell-setzer-b2635090.html
https://www.nbcwashington.com/investigations/moms-lawsuit-blames-14-year-old-sons-suicide-on-ai-relationship/3967878/
- California’s AI regulation to protect minors & Character.AI’s ban on minors:
https://www.lemonde.fr/en/economy/article/2025/10/15/california-plans-on-protecting-minors-and-preventing-self-destructive-content-by-regulating-ai_6746443_19.html
https://www.theguardian.com/technology/2025/oct/29/character-ai-suicide-children-ban
