The AI futures that scare me the most aren't violent, they're comfortable
On Wall-E, enfeeblement, and the danger of frictionless futures
We rarely think of comfort, ease and convenience as threats. In fact, we’re taught to see them as progress. Smoother systems, fewer decisions, no friction. A life that asks less of us. But when I think about the imagined AI futures that scare me the most, they aren’t the violent or destructive ones. They’re comfortable.
I’ve been thinking a lot recently about how AI is imagined in sci-fi. Not just what the tech in these fictional futures can do, but the kinds of lives those futures normalise.
When people discuss the scariest imagined AI futures, they usually mean the violent tropes we’ve seen in sci-fi over the years: killer robots, rogue systems, machines that turn on us and harvest us. But the AI future that unsettles me the most isn’t violent at all. It’s incredibly helpful.
The true cost of frictionless futures
I’ve been researching how AI shows up in sci-fi for an article I’m writing, and I keep coming back to Wall-E. Compared to The Terminator or The Matrix, nothing overtly terrifying happens. There’s no war between humans and machines, no extinction event, no malevolent intelligence plotting our downfall.
And yet, Wall-E feels more disturbing than most AI dystopias, at least to me.
Because in Wall-E’s imagined future, humans aren’t enslaved by machines – at least not in the Matrix-y sort of way we usually imagine. But they’re gradually enfeebled by them.
Enfeeblement is a really useful word here. It doesn’t mean oppression or domination. It means becoming debilitated and weakened by lack of use. Muscles atrophy, skills fade and agency dulls.
It’s not quite the same as the idea of learned helplessness, but it’s hard not to think of it: those experiments where animals stop trying to escape a threat, not because they’re restrained, but because they’ve learned that effort no longer matters.
That’s exactly what happens in Wall-E. Systems move for humans, think for them, decide for them. Until people barely use their bodies, their attention and their capacity to choose at all. Life becomes effortless, deeply comfortable, completely frictionless and smooth.
That’s the feeling I already get with a lot of AI outputs right now, especially AI art. Everything feels both literally and figuratively smoothed out. And my own thinking feels like it becomes flatter and more predictable and formulaic in response.
And I think that’s what makes this vision so unsettling. How plausible it already feels.
The slow erosion of your agency
Because the danger here isn’t some big, dramatic takeover. It’s a gradual handover. Convenience and then more convenience and then more convenience. Until effort feels optional and then just unnecessary.
This isn’t a new worry either. I’ve become really fascinated by thinkers who raised these concerns long before our current AI moment.
Philosopher and social critic Ivan Illich warned that tools can cross a threshold where they stop empowering people and start making them dependent. Where systems designed to serve us can reshape our behaviour and narrow our autonomy.
Cultural critic Neil Postman argued that technology doesn’t need to dominate us by force. It succeeds by making its benefits feel irresistible, until we stop asking what we’re trading away.
Hannah Arendt, a political theorist writing long before AI, warned about thoughtlessness: the danger of people no longer actively thinking, judging, or taking responsibility for their actions. Her idea of the “banality of evil” wasn’t about monstrous intent, but about the erosion of agency.
None of these warnings require evil machines, just systems that work incredibly well.
And I think all of these concerns are unfolding in a context that really matters too. People are tired, overworked and burned out. Offered a smoother and more automated life, it makes sense that many of us would accept it. The problem is that the relief you get in the moment may come at a long-term cost.
Progress vs surrender
We can already see hints of this dynamic: in how social media provides an infinite stream of distraction and entertainment, reshaping our attention and motivation in the process. And, increasingly, in how we use AI.
Research into automation and AI-assisted work suggests that when tools take on too much for us, people can slip into a sort of cognitive passivity. Engaging less deeply, relying less on memory and independent judgement. Thinking, like a muscle, weakens when it isn’t exercised.
So with tools like ChatGPT always within reach, it’s worth asking what we’re choosing not to practise ourselves anymore.
This is why helpful AI concerns me. Not because all assistance is bad, but because over-assistance can change us. We practise fewer skills, make fewer decisions, give away attention, agency and motivation so gradually that it barely registers.
So if you were to ask me what the scariest AI in movies is, I don’t think of the violent ones. I think of the systems that make life so easy you slowly stop participating in it.
Of course, all of this depends on what you think a life well lived actually is. Maybe I’m wrong and the future imagined in Wall-E doesn’t feel like a warning to you at all, but a kind of progress. A world where effort is optional and any friction is finally engineered away.
And if that’s the case, then maybe the more uncomfortable question for me isn’t whether AI is going too far. It’s whether holding on to agency, effort, choice and, ultimately, power is just me clinging to an outdated idea of what being human should look like.