
Terminology for the AI Crisis


The AI crisis is different in shape and scope from the climate crisis. But I believe the climate movement is best placed to advise the AI movement, by sharing the lessons we’ve learnt (and helping it avoid making the same mistakes and losing time in the process). Why do I believe the climate movement can help tackle the AI crisis?

Since Dr James Hansen’s Senate testimony in 1988, climate and environmental activists have learnt the tricks the fossil fuel industry uses to hamper climate action (read Merchants of Doubt by Naomi Oreskes and Erik M. Conway). We’ve witnessed them use denial, doubt, and delay to keep operations going at our collective expense. We know first-hand how they influence and manipulate politics and the media. The tech companies are just as wealthy and powerful as the fossil fuel companies, and they appear to share the same playbook: it’s already been revealed that the tech industry spent $957m on lobbying against regulation in 2023, a 185% increase on its 2022 spending. No doubt things will only get worse from here on out.

Climate change and AI are different, but the culpable corporations are following a very similar path and operating in a similar manner. Logically then, climate and environmental activists have much to share that could be beneficial in this fight for a safe future.

To tackle the AI crisis, we must first be able to talk about it in terms that can be widely understood. In this post, I’ve shared terminology ideas for the AI crisis, which are derived in many cases from climate terminology. Some are pre-existing AI terms, which I’ve co-opted here.

This list isn’t exhaustive, and it will no doubt grow and change over time. It’s worth pointing out that this isn’t a glossary for standard AI terms (e.g. big data, deep learning, killer robots, machine learning, large language models, narrow AI, neural networks, etc.). For definitions of standard AI terms, please visit The Alan Turing Institute and the UK Parliament website.

Proposed AI Crisis Definitions

Abrupt technological change – sudden and widespread change caused by tech companies and the rapid release of unsafe technology. The change is so quick that it leaves society with no time to debate whether we wanted the change in the first place, or to properly understand the potential harms. Negative consequences may include mass redundancies, increased digital surveillance, reduced human interactions, and associated mental health issues, amongst a plethora of other problems. This is modelled on the term ‘abrupt climate change’.

Algorithmic extinction (AE) – extreme civilisational chaos, or collapse, brought about through human-developed technology, such as AI, AGI, or superintelligence. Many AI experts now warn of the threat of human extinction from AI, meaning algorithmic extinction (AE) could potentially happen this century. This is an existing term, which has only six results on Google at the time of writing. I’ve applied my own definition here.

AE action – all actions, typically taken by governments (perhaps at the behest of AE activists), to stringently regulate the tech industry and prevent algorithmic extinction.

AE activist/AE activism – an individual who is committed to preventing algorithmic extinction. Their goal may be to see world leaders enact stringent regulation on all tech products, ensuring they’re developed with robust safety built in, thoroughly tested before release, and overseen by a combination of independent experts, government bodies, and researchers, all under the framework of international agreements (preferably based on the outcome of one or more international citizens’ assemblies on AI).

AE denier/AE denialism – an individual who doesn’t believe the expert view that AI could cause algorithmic extinction. In his book Human Compatible, Stuart Russell attributes this stance to “tribalism in defense of technological progress.” In the field of climate change, a denier is often a person with little knowledge of, or involvement in, the subject.

AE feedback – a vicious cycle whereby one set of harmful technological developments sows the seeds of further harmful technological developments. For example, drone technology being developed into slaughterbots/killer robots/lethal autonomous weapons systems (LAWS). This is modelled on the term ‘climate feedback’.

AE lag – the delay between the tech companies’ rapid release of dangerous new technology and society’s understanding of the implications and risks. According to experts, this lag has the potential to result in humanity’s downfall.

AE lobbying – the tech industry’s attempts to prevent regulation of its increasingly dangerous and harmful products by spending significant sums of money lobbying politicians to make laws in the industry’s favour.

AE sceptic/AE scepticism – an individual who doubts the potential for AI to be a cause of civilisational ills, much less algorithmic extinction. In the field of climate change, a sceptic is sometimes a person who is involved in the field but feels the risks are overhyped.

AE vulnerability – the degree to which humanity is exposed to, or able to avoid, the threat of algorithmic extinction.

AI adaptation – techwashing from the tech industry which suggests that AI is part of human evolution, and that it’s inevitable, so we should all just adapt accordingly. This defeatist talking point is extremely harmful and has no basis in fact.

AI anxiety/AI-anxiety – any anxiety related to the risks posed by AI (see this post for examples). It encompasses the chronic fear of civilisational collapse caused by AI (algorithmic extinction). AI anxiety falls under the umbrella term ‘tech anxiety’. This is an existing term. My definition is modelled on the terms ‘climate anxiety’ and ‘eco-anxiety’.

AI chaos – major disruption at a societal level caused by AI (modelled on the term ‘climate chaos’). This could include the loss of entire professions worldwide, the upending of democracy, unstoppable cybercrime, the use of neurotechnology for nefarious purposes, the uptake and deployment of killer robots and lethal autonomous weapons systems (LAWS), and any other negative scenario brought about by AI. For detailed information, see this post.

AI crisis – the situation caused by tech companies thrusting unsafe AI upon society without its consent or full understanding of the risks, all to enrich said companies (modelled on the term ‘climate crisis’).

AI emergency – the knowledge that the unsafe AI systems being released (and developed) by tech companies threaten the very fabric of society, and that there’s limited time to put stringent regulations in place to avoid civilisational chaos and potential algorithmic extinction (modelled on the term ‘climate emergency’).

AI grief – related to AI anxiety, but incorporating a deep sense of loss and sadness associated with the harms the tech industry is imposing upon society through AI. This term is modelled on ‘climate grief’.

AI melancholia – the feeling of crushing powerlessness in the age of superior and dystopian AI technology. Modelled on the term ‘environmental melancholia’.

Arin-fi or ARtificial INtelligence Fiction – a suggested sub-genre of fiction within the “ty-fi” category (see definition for “ty-fi” below), that deals specifically with AI related issues. This could also be shortened to “Ari-fi” or “AI-Fi”. I proposed this term in October 2023.

Homo algors – a proposed name to describe humanity’s move away from being a social species to one that develops, interacts with, and spends the majority of its time on technology (to our collective detriment). This is a new term, modelled on the scientific name ‘Homo sapiens’. However, a similar term, ‘homo algorithmus’, already exists.

Homo eradicans – a collective term for humankind’s destructive impact, which may lead to civilisational collapse and the annihilation of the natural world. Examples include the climate and ecological emergencies, the development of AI and pursuit of AGI, and the creation of nuclear weapons. For an update on our current trajectory, see the Bulletin of the Atomic Scientists’ Doomsday Clock, which sits at 90 seconds to midnight at the time of writing. This is an existing term, modelled on the scientific name ‘Homo sapiens’.

Homo techans – a proposed name for our species’ adoption of technology, including AI, without first discussing what we want (and don’t want) from it, allowing technological drift to carry us towards dangerous and potentially irreversible harm. Modelled on the scientific name ‘Homo sapiens’.

IPAI – an abbreviation for the Intergovernmental Panel on Artificial Intelligence, a proposed new UN body to deal with the threats posed by AI and technology, much as the IPCC focuses on the climate emergency. This is an existing term.

Move fast and break things – a common refrain within the tech industry, implying innovation at speed. But the phrase has taken on a different meaning given the proliferation of hate, bullying, and malicious state interference on social media platforms, which the social media companies have failed to rein in. The mindset of moving fast and breaking things instead reveals a recklessness and disregard for societal wellbeing within the tech industry, and could lead to our collective demise.

Tech anxiety/tech-anxiety – any anxiety arising from the rapid release of technological products that carry risks to individuals and society. This is an existing term, which I’ve modelled on ‘eco-anxiety’. AI anxiety falls under this umbrella term.

Technogenic – something produced by, or derived from, human-developed technology. This is an existing term, modelled on ‘anthropogenic’.

Technological grief – grief related to the expected or experienced harms brought about by technology. This term is modelled on ‘ecological grief’.

Techopalypse – an apocalypse caused by any form of human-developed technology. A more detailed overview can be read here.

Techopocene – a proposed name for the era of rapid technological development, including AI, which has significantly changed the way people interact, behave, and live. Its effects reach nearly every aspect of society, from healthcare treatments and how people carry out their professional tasks to the way we find partners and receive information and knowledge. This term is modelled on ‘Anthropocene’.

Techopocene horror – the intense fear of technological change, stemming from the harms and risks posed by technology. Modelled on the term ‘Anthropocene horror’.

Techwashing (or tech-washing) – the tech industry’s attempts to mislead the public about their products, by downplaying the risks and using PR and marketing tactics to convince the public that their products are future-friendly and will improve society’s lot. Techwashing, much like greenwashing by the fossil fuel industry, could sow enough denial, doubt, and delay to turn the AI crisis into one that outpaces climate breakdown. Meanwhile the tech companies, like the fossil fuel companies, will continue raking in staggering profits through their civilisation-threatening technology. This is an existing term.

Ty-fi or TechnologY FIction – a genre of fiction that deals with all issues arising from technology, encompassing AI, cybercrime, slaughterbots, drones, social media, robotics, and more. I proposed this term in October 2023.

Summary

Our future is imperilled like never before as humanity faces the twin risks of climate catastrophe and algorithmic extinction. Together, we have the ability to overcome both. Humanity must have a say, through global citizens’ assemblies, about our preferences, and these must be enacted through regulations by politicians.

We have the solutions to tackle both crises. We don’t have to let the fossil fuel industry and the tech industry take us over a cliff edge. All we lack is the political will for meaningful action, and if we’re not careful, the window for action will slam shut and civilisation could end on our watch. We can’t let this happen. We’re better than that. We can’t afford to be the weak link in the chain connecting us to the ancestors of the past and the descendants of the future. So, let’s fight for the future, because everything rests on what we all choose to do with the precious little time we have left.

Selected Resources

Books

  • Human Compatible: AI and the Problem of Control by Stuart Russell
  • The Alignment Problem: How Can Machines Learn Human Values? by Brian Christian
  • Falter: Has the Human Game Begun to Play Itself Out? by Bill McKibben
  • Permanent Record by Edward Snowden
  • The People Vs Tech: How the Internet is Killing Democracy (and how we save it) by Jamie Bartlett
  • The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff
  • Life 3.0 by Max Tegmark
  • 1984 by George Orwell
  • Superintelligence by Nick Bostrom


Template for Contacting Political Representatives about AI

Dear

I’m writing in regard to the rapid advances in AI and related technologies, which pose massive threats to society, jobs, arts and culture, democracy, privacy, and our collective civilisation.

Many AI systems have been trained on copyrighted data, without consent or compensation. Moreover, control hasn’t been designed into the way machine learning systems work, which could create unimaginable problems further down the line. But AI isn’t just a future threat. The large language models (LLMs) already in the public domain threaten the livelihoods of writers and authors. AI image, video, and audio generators pose risks to the jobs of artists, actors, and musicians. Combined, these types of AI can have a devastating impact on democracy, and ‘deepfakes’ could be used by malicious actors for cybercrime.

Both AI and the introduction of robots into the workforce jeopardise jobs on an unprecedented scale. By one estimate, up to a billion jobs could be lost, with only around ten million new jobs created. Mass unemployment could result, leading to social unrest, extreme poverty, and skyrocketing homelessness.

Through neurotechnology, it’s already possible to create an image of what someone is thinking about – the ultimate invasion of thought privacy. Killer robots have been deployed around the world over the last few years and can easily be made and sold on the black market, threatening our collective safety. Meanwhile, AGI poses an existential risk to our civilisation.

We have a limited period of time to act before AI becomes so embedded in modern life that it can’t be extricated. I therefore urge you to act swiftly: either ban the technology outright, or hold a global citizens’ assembly on AI and use the guidelines that emerge to implement stringent regulations that forever protect and safeguard humanity.

With concern and expectation,

