
Gradual Disempowerment – Another Way AI Could Lead to Human Extinction

I recently watched a video by AI ethicist Catharina Doria, in which she talks about a paper on ‘gradual disempowerment.’ The research paper (available here), from a group led by Jan Kulveit, is entitled ‘Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development.’ Some of the information in this blog is also taken from the group’s website on Gradual Disempowerment.

Aside from the depressing fact that the authors confess to using AI to help write the paper (something mentioned at the end – otherwise I wouldn’t have read it!), their central argument is one worth paying attention to. In essence, they say that humans are using AI for more and more decision-making and for a wider range of work. As people continue to do this, they grow more reliant on AI, which therefore ends up doing more of their work and making more of their decisions. This cycle continues until individuals are almost entirely reliant on AI to think for them.

On a broader level, AI is being integrated into the economy and into government, where it makes more decisions and officials become more reliant on it. The researchers suggest that AI may seek to optimise the way things are done through complex, behind-the-scenes decision-making that humans don’t understand, in turn making humans more reliant on a technology that may not necessarily continue to make decisions that benefit humanity. This links back to alignment, and to the fact that tech companies haven’t built human control into these systems – meaning that if AI makes decisions that don’t benefit humanity, we may not be able to stop it.

The concept of gradual disempowerment is akin to water eroding a rock – except this is happening at hyper-speed. And once the rock has been eroded, there is no turning back the clock to undo the damage. Once we’ve crossed a threshold and handed over enough power to AI, we’ll have locked in a dystopian future once and for all. As the researchers put it: “This dynamic could lead to an effectively irreversible loss of human influence over crucial societal systems, precipitating an existential catastrophe through the permanent disempowerment of humanity.”

It’s also worth pointing out that this paper was specifically about current AI systems and the trajectory they’re on. Most fears of AI causing human extinction centre on AGI and superintelligence, but this paper wasn’t about that at all. As the paper states, “Because this disempowerment would be global and permanent, and because human flourishing requires substantial resources in global terms, it could plausibly lead to human extinction or similar outcomes.”

So even today’s generative AI systems may lead to humanity’s downfall. That’s a strong reason to immediately boycott AI, until safe AI systems have been created with human control built in, complemented by stringent global regulations to keep people safe and able to thrive in the age of AI.

In the rest of this post, I’ll explore a few of the impacts of gradual disempowerment from the research paper.

Economic impacts

Many companies are turning to AI to replace human labour. This could have catastrophic impacts if some of the worst-case scenarios come to pass, with Professor Stuart Russell, in his book Human Compatible, predicting up to one billion job losses around the world.

The researchers argue that a feedback loop may come into play. As companies begin ditching humans in favour of AI, they’re likely to start putting pressure on governments to support this shift, by influencing public opinion and policy. The knock-on effect? “Once AI has begun to displace humans, existing feedback mechanisms that encourage human influence and flourishing will begin to break down.”

Another impact will involve tax revenue. As things stand, governments rely to a large degree on taxes paid by humans. However, the researchers argue that once AI systems start replacing large chunks of humanity in the workplace, governments will be less reliant on our taxes, which will instead be generated by AI. By extension, “The loss of tax revenue from citizens would make the state less reliant on nurturing human capital and fostering environments conducive to human innovation and productivity, and more reliant on AI systems and the profits they generate. If AI systems come to generate a significant portion of economic value, then we might begin to lose one of the major drivers of civic participation and democracy, as illustrated by the existing example of rentier states.”

States presently have a symbiotic relationship with their people: the people pay their taxes, and the state provides functions to keep them safe, healthy, educated, and militarily protected. But if people aren’t paying as much tax because they’ve lost their jobs to AI, there is less incentive for governments to care about the people – and less incentive to maintain democracy. A government in cahoots with AI companies could consolidate power through an authoritarian regime, and the people would be powerless to prevent this from happening, or to overthrow it.

Cultural impacts

AI is already making a massive dent in culture, through the production of writing (essays, articles, poetry, books, and more), songs, pictures and illustrations, and audio and video, “with the quality progressively approaching and potentially even exceeding human level.” As the researchers point out, this means that culture may no longer be aligned to human interests.

The effects could be massive. “Humans are already susceptible to hyper-engaging content, harmful ideologies, and self-destructive cultural practices.” With AI driving culture, it may therefore reshape beliefs, norms, and behaviours – potentially in anti-societal and anti-human directions.

“At the extreme, we might see the effective dissolution of human culture as a meaningful category, replaced by cultural systems that operate primarily for and between AI systems. Human cultural participation might be reduced to a form of behavioral management, with cultural forces optimized not for human flourishing or expression, but for whatever objectives emerge from the interaction of AI systems,” say the researchers.

Tech companies have opened Pandora’s box, with zero concern for the impacts this will have – even if it leads to humanity destroying itself. Shame on them. And shame on the politicians who have failed to stop them, and even worse – those politicians who’ve sided with them.

Governance and democracy impacts

The impacts of AI on governance and democracy are chilling. Left too late, these impacts may become irreversible, locking humanity into a dystopian and inhumane world. On our current trajectory, this appears to be where we’re headed.

As humans hand over more power to AI, states may be able to manipulate culture and behaviour to their will. This in turn will make it harder for people to coordinate, reducing the likelihood they’ll be able to resist the pressures they’re up against.

One way this could happen is through surveillance, which AI could massively expand, making it even more invasive. We already know from the Snowden files about the levels of surveillance we live under. Giving AI access to this information, and much more, will radically reshape how we’re able to live our lives, and the supposed ‘freedoms’ we have.

It will also mean that protest and revolution will become exceedingly difficult, if not impossible. As the researchers say, “A state with sufficiently advanced AI systems might be able to predict and shut down civil unrest before it can exert meaningful pressure on institutional behavior.”

The researchers say that AI could also assist “in drafting legislation, interpreting laws, and even making judicial decisions.” This might make it harder for people to interpret laws that weren’t written by humans, and therefore would challenge our ability to interact with the legal system directly. Thus, people would lose their sense of agency.

The paper also gives a few scenarios showing how people could lose their human freedoms. Here are two of the scenarios shared:

  • States might become totalitarian, self-serving entities, optimizing for their own persistence and power rather than any human-centric goals. While states have always had some self-preservation incentives, these were historically constrained by their dependence on human populations. An AI-powered state might pursue its institutional interests with unprecedented disregard for human preferences and interests, viewing humans as potential threats or inconveniences to be managed rather than constituents to be served.
  • “The state apparatus might become not just independent of human input but actively hostile to it. Human decision-making might come to be seen as an inefficiency or security risk to be minimized. We might see the gradual elimination of human involvement in governance, be that through systems that route around human input as a source of error or delay, or even through explicit policy decisions which remove humans from certain critical processes. In the final state, with AI systems providing most economic value and governance functions, human citizens might find themselves in a novel form of totalitarian system, struggling to maintain basic autonomy and dignity within their own societies. The state, while perhaps highly capable and efficient by certain metrics, would have abandoned human interests.”

We are potentially on the cusp of terrible and irreversible consequences driven by unregulated tech companies and unregulated AI systems, which our current crop of politicians are actively encouraging. If we’re going to choose to avert this future, the time to do so is now.

Impacts on social cohesion and what it means to be human

Humans are actively turning their backs on other humans, in favour of AI. The researchers say: “We are currently seeing the rise of dedicated AI romantic partners, as well as a growing number of people who describe frontier models as close friends. This dynamic extends beyond interpersonal relationships — AI systems can provide personalized mentorship, therapy, and educational support at scales impossible for human providers. The apparent abundance of previously scarce emotional and intellectual resources creates strong incentives for adoption, even when the quality of individual interactions might currently be lower than with humans.”

The paper goes on to say that we may rely on AIs for news and entertainment content in the future. There is a very real risk we’ll lose the ability to tell what’s real from what’s fake when we’re interacting with a non-human entity pretending to be human – an incredibly intelligent illusion built to enrich tech company executives at your expense. Any ‘news’ tailored to you might be engineered to keep you in the dark about things that matter, and you’d be none the wiser as you spend more time interacting with AI than with other people.

To hand over love, therapy, friendship, education, and more to these unregulated systems, which don’t have your best interests at heart, is complete and utter madness. But, whether through loneliness or laziness, people are choosing algorithms over each other. Perhaps that’s what AI will record as what brought down civilisation – that the supremely social species turned its back on its own kind, in favour of algorithms and illusions.

It’s vital to know that this doesn’t have to continue. But it will require a hell of a society-wide turnaround to prevent this catastrophe from playing out.

Conclusion

Gradual change is sometimes the most dangerous kind, because even when people can see it happening, it delivers less of a jolt than instantaneous change. There’s an old climate change analogy involving a frog and boiling water: put a frog into a pot of boiling water and it will immediately jump out. But put a frog in a pot of cool water and gradually heat it up, and the frog will stay where it is.

A silly analogy, but it serves to illustrate the point made by this paper – gradual disempowerment by AI is like heating up the pot of water slowly, and we’re the frog seemingly unaware of how dangerous the situation is.

Humans simply aren’t able to compete with AI systems that can operate at close to human standards or better, and that don’t need sleep, rest, sick days, holidays, or any time off. At the very minimum, every person should make it a priority to boycott AI until safe AI systems are created with human control built in and aligned to human interests, and until stringent global regulations are in place to protect humans. By boycotting AI, you take away the social licence that tech companies have assumed, undermining their case for more investment and thus slowing down AI development – giving policy a chance to catch up.

As the researchers say, “loss of human influence will be centrally driven by having more competitive machine alternatives to humans in almost all societal functions, such as economic labor, decision making, artistic creation, and even companionship.”

We’re heading down a dark path, led by leaders who want to see us fall over the cliff edge. I’m reminded of Donald Trump, whose campaign received funding from tech bros, and of Keir Starmer, who is mainlining “AI into the veins” of the country and who adopted all 50 recommendations in a paper, none of which focused on the core topic of safety regulations (which it instead took a dig at).

These leaders will bring humanity to its knees, so that tech companies can increase their profit margins.

I’d like to end on what I believe is one of the few solutions left to help us get out of the dire situation we’re in. Most of today’s problems, from the climate and AI crises to the spreading wars, have been created by politicians. Democracy is now in sharp decline, with authoritarian regimes rising around the world.

Politicians and politics aren’t working for the people, but are instead fuelling our problems.

The route out of this chaos possibly lies with citizen-led participatory democracy. Think permanent citizens’ assemblies replacing parliaments, with prime ministers and presidents becoming administrators who simply implement the will of the people. In effect, that’s what democracy was meant to be: our political representatives were meant to represent and enact the will of the people. Instead, they are now lobbied by (and work on behalf of) dangerous corporations that poison our atmosphere, pollute our waterways and oceans, produce carcinogenic products that make us ill, cause misery and suffering through extractive industries, and thrust upon us an AI crisis that threatens to upend society.

We no longer need politicians if they no longer work on our behalf. It’s time for a democratic shake-up. The way we get that is by voting for a party that promises to enact this switch to citizen-led democracy with no caveats. And if your preferred party won’t do that, then write to them and tell them that people like you won’t vote for them if they don’t make this pledge.

In the few countries where protest is still legal and safe – you have the option of using your democratic right to push for this change. If you have lawmakers who aren’t beholden to corporate interests, then why not brief them on the need for citizen-led democracies, the way that Control AI is doing in the UK. Use your legal and democratic options (where these safely exist) to switch tracks towards a new future. This is something that most people can get involved in one way or another.

For without this level of political change, it’s difficult to see how humanity will avert the disastrous future we’re steadily hurtling towards.

My Generic E-mail Template for Contacting Political Representatives About AI

Dear

I’m writing with regard to the rapid advances in AI and related technologies, which pose massive threats to society, jobs, arts and culture, democracy, privacy, and our civilisation.

Many AI systems are trained on copyrighted data, without consent or compensation. The way that machine learning works is flawed, meaning control hasn’t been designed into AI, which could create unimaginable problems further down the line. But AI isn’t just a future threat. The large language models (LLMs) already in the public domain threaten the livelihoods of writers and creatives, while AI image, video, and audio generators pose risks to the jobs of artists, actors, and musicians. Combined, these types of AI can have a devastating impact on democracy, and ‘deepfakes’ could be used by malicious actors for cybercrime.

Both AI and the introduction of robots into the workforce jeopardise jobs on a scale like never before. By one estimate, up to a billion jobs could be lost, with only around ten million new jobs created. Mass unemployment could result, leading to social unrest, extreme poverty, and skyrocketing homelessness.

Through neurotechnology, it’s already possible to create an image of what people are thinking about – the ultimate invasion of thought privacy. Killer robots have been deployed around the world over the last few years, and can be easily made and sold on the black market, threatening our collective safety. Meanwhile AGI and superintelligence pose an existential risk to our civilisation. As does the threat of gradual disempowerment, based on the trajectory we’re on and the current systems we have.

We have a limited period of time to act before AI becomes so embedded in modern life that it can’t be extricated. I therefore urge you to act swiftly, ideally by holding a global citizens’ assembly on AI and using the guidelines that emerge to implement stringent regulations that forever protect and safeguard humanity. In the meantime, AI development needs to be paused for our collective safety.

With concern and expectation,

[Your name]

Selected Resources

Books

  • Human Compatible: AI and the Problem of Control by Stuart Russell
  • Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari
  • If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares
  • Supremacy: AI, ChatGPT and the Race That Will Change the World by Parmy Olson
  • The Alignment Problem: How Can Machines Learn Human Values? by Brian Christian
  • The Coming Wave by Mustafa Suleyman
  • Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat
  • Code Dependent: Living in the Shadow of AI by Madhumita Murgia
  • Falter: Has the Human Game Begun to Play Itself Out? by Bill McKibben
  • For the Good of the World by A.C. Grayling
  • Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford
  • Permanent Record by Edward Snowden
  • The People Vs Tech: How the Internet is Killing Democracy (and how we save it) by Jamie Bartlett
  • The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff
  • Life 3.0 by Max Tegmark
  • 1984 by George Orwell
  • Superintelligence by Nick Bostrom

I’ve been writing about the climate emergency since 2016, and the AI crisis since 2023. I write all my own work, without the use of AI. I don’t publish on any other paid platforms, and my blog remains completely free to read. If you’ve found my writing informative and if you’d like to support my work, I’d be really grateful if you did so here. Thank you.

My cli-fi children’s picture book, Nanook and the Melting Arctic is available from Amazon, including Amazon UK and Amazon US. My eco-fiction children’s picture book, Hedgey-A and the Honey Bees about how pesticides affect bees, is available on Amazon’s global stores including Amazon UK and Amazon US.

Published in AI