Hollywood loves the story of a rebellious AI that takes over, launches missiles and wipes humanity off the map. Skynet, Terminator, War of the Machines.
It’s a spectacular threat – but also the perfect smokescreen.
In Charles Hugh Smith’s recent article ”The Risk of AI Isn’t Skynet”, the point is simple and uncomfortable: the real risk is not that AI will become a visible tyrant, but that it will become an imperceptible infrastructure that replaces reality with an ultra-processed version – and eats society from the inside.(Activist Post)
The formula for innovation: gains for the fast, losses for everyone else
Smith points out that technological revolutions always follow the same pattern:
- innovators and speculators push new technologies forward as fast as possible
- no one stops to think about the long-term consequences, because that would mean losing market share to competitors
- the market prices in only the present – production, logistics, advertising, materials – not the future side effects.(Activist Post)
Railways, cars, TV, the internet – it is always the same story:
the first rounds bring allure and quick riches; the real costs arrive years or decades later, in the environment, health, the labour market and politics.
The difference is that now it is no longer just about new technology, but about who owns the filters of reality.
Why AI is really different from previous disruptions
One of Smith’s key observations is brutally simple:
AI does not create more human labour – it is built to replace it.(Activist Post)
Previous technological breakthroughs (railways, electricity, internet) created new industries that absorbed huge labour forces, even if they destroyed old occupations.
The core logic of AI is the opposite:
- automate whatever can be automated
- concentrate knowledge and decision-making in an ever-smaller elite
- shift costs and risks outside the system
This is not a ”neutral tool”. It is an economic weapon tuned to maximise the return to the owners of capital – not to society.
Ultra-processed information: junk food for the mind
Another of Smith’s powerful analogies hits a nerve: AI produces a ”fast-food” version of reality in the same way that ultra-processed food imitates real food.(Activist Post)
- It looks real: the language is fluent, the answers are convincing, the arguments sound logical.
- But in the background, everything goes through black boxes – templates, weights, filters and agendas – that the average user cannot see or control.
- The end result is a ”safety net for consciousness”: easily digestible, pleasant but nutrient-poor information that leaves out the parts that matter most.
If you outsource your daily information diet to AI, the following happens all too easily:
- You can no longer see what has been left out.
- You no longer notice which aspects have been cleaned up as ”noise”.
- You no longer remember what the real source feels like: raw data, original text, contradiction, friction.
And then – in Smith’s words – the ability to make a realistic assessment begins to erode unnoticed.(Activist Post)
Future Shock 2.0: accelerating change without the handbrake
Alvin Toffler warned back in the 1970s of ”future shock”: people lose their grip when change outpaces their ability to cope with it. In the 2000s, Douglas Rushkoff updated the theme to ”present shock”: a constant now in which nothing has time to settle into context.(Activist Post)
AI makes this a double problem:
- change is no longer just technical, but cognitive – it is about the way we think and perceive.
- the same system that accelerates change also provides ”solutions”: personalised content, filtered results, pre-chewed explanations
When people get exhausted, they start to outsource their thinking – and then control inevitably drifts to those who own the algorithms.
”Children with a petrol can in a dry forest”
Smith sums up the current situation in a picture that is hard to forget:
we are like children playing with matches and gasoline in a dry forest.(Activist Post)
At the same time:
- giant corporations and governments are driving ”AI supremacy” – with budgets that exceed the defence budgets of many countries
- political debate revolves around either fantastic promises (”AI will save the economy”) or Hollywood threats (”Skynet and robots”)
- the basic question itself is not asked:
should we allow these systems everywhere at all – and if so, under what conditions?
It is entirely possible that those who limit their exposure to this ”ultra-processed” reality will ultimately be the winners – not those who throw all the knowledge, infrastructure and decision-making into the AI pipeline.(Activist Post)
What should we do – if we don’t want to slip unnoticed into the surveillance infrastructure?
A few concrete lines that Smith’s analysis suggests – and that we in Finland and Europe should be addressing now, not five years from now:
- AI must be locked into the role of servant, not master.
Meaning: strict limits on what it is allowed to touch (infrastructure, surveillance, social benefits, justice).
- Mandatory transparency for black boxes.
Model training data, weights and steering constraints belong in democratic debate, not behind corporate ”trade secrets”.
- Analogue safety bands: real information, real world.
Physical books, local communities, offline learning, teaching source criticism – and media that does not base everything on AI production but keeps doing the raw work.
- Timeframes back into politics.
If the technical and economic logic is ”everything now”, politics must be the counterforce that asks ”what will this do to society in 10–20 years?”
- Exposure controls for children.
The same logic as with ultra-processed food: the younger you get hooked, the less you know what the real thing tastes like.
Finally: the biggest risk is not that AI will attack – but that we will stop noticing when it is already guiding us
It is tempting to say: ”this is just one technological wave among others, it will die down.”
Smith’s reply is pointed: there is no law of nature guaranteeing that every technological revolution ends as a net positive.(Activist Post)
The question is:
- do we let marketing departments and politicians sell us AI as the new generation of smartphones
- or do we see it as a tool that, in the wrong hands, can turn all of reality into an ultra-processed space, monitored and controlled by algorithms?
Skynet is not the real threat.
A much more likely scenario is a quieter one:
we are less and less sure what is true, what is worth defending – and whose language we speak when we think we speak our own.
It’s a risk that doesn’t look like a science fiction movie.
It looks like everyday life in which we no longer notice what was taken from us.
📚 Source
- Charles Hugh Smith: ”The Risk of AI Isn’t Skynet”, Activist Post, 11.11.2025.
https://www.activistpost.com/the-risk-of-ai-isnt-skynet/
