
On Monday 13th January 2025, the UK’s Labour government announced plans to mainline “AI into the veins” of the country. The announcement came less than a month after Geoffrey Hinton, one of the three Godfathers of AI, raised his estimate of the chance that AI leads to human extinction within the next 30 years to between 10% and 20%.
Unsurprisingly, Keir Starmer and his government haven’t touched upon the countless risks AI poses, instead focusing on letting it be “unleashed”, to our collective detriment.
This extremely reckless plan has been praised by the tech industry, which spent $957m on its lobbying efforts in 2023. I leave it to others to decide whether those efforts were worth it.
Keir Starmer’s plan to “turbocharge AI”
The Labour government’s plan has three main strands.
- The first involves “laying the foundations” for AI across the country. New ‘AI Growth Zones’ will be established, which will be home to new data centres, with Culham in Oxfordshire identified as the first site. The aim is a twenty-fold increase in compute capacity over the next five years. As part of this, they’ll build a new supercomputer powerful enough to play half a million games of chess against itself every second.
- The second is a drive to increase adoption within both the private and public sectors. The government wants every sector of the UK to take up AI, and government departments are being told to make AI adoption a major priority.
- The third is to ensure the UK remains a world leader in AI. Tech corporations may be guaranteed energy and data access under these plans.
The plan is largely centred on the 50 recommendations put forward by Matt Clifford in the AI Opportunities Action Plan. All 50 of these recommendations have been taken forward by the government (the list of 50 recommendations and the government’s response to them can be read here).
The government has tried to put a positive spin on their plans by saying that AI will help spot potholes, speed up planning applications, and that it will give teachers more time to teach.
As part of the plan, three companies (Kyndryl, Nscale, Vantage Data Centres) have promised an investment package of £14 billion, which they claim will result in 13,250 new jobs in the UK.
Analysis of the government’s plan to mainline “AI into the veins” of the country
I’m deeply disturbed by the complete disregard for society in the government’s plan, whilst being equally concerned about the government choosing to back tech corporations to the hilt – all at massive risk and expense to each and every person. Below, I’ll explore a number of risks which haven’t been addressed by the government, which experts warn could have dire consequences.
1. Jobs
Jobs provide us with an income, a sense of purpose, and no shortage of mental health issues. In short, they’re incredibly important and have a central bearing on our lives. As mentioned above, the government claims that at least 13,250 new jobs will be created. It talks this up further by citing an IMF estimate suggesting that AI could add 1.5 percentage points to productivity and result in £47 billion of gains over 10 years. So far, so good.
However, as a Guardian article by Robert Booth points out, there is major concern that AI could lead to mass job losses, with vulnerable professions including business management, law, and finance. Why would this happen? As Dan Milmo explains in a separate Guardian article, with AI entering the workforce, “you don’t need so many workers to do a certain job.” More tasks and roles will be completed by AI, and fewer humans will be needed in the workplace, threatening entire professions and careers. Milmo quotes an estimate by the Tony Blair Institute (TBI), which predicts that over 40% of the public sector’s tasks could be completed by AI. Such a massive change would drastically increase levels of redundancy. Things only get worse from there according to the TBI, with 1m-3m jobs in the private sector at risk of being stolen by AI. This, sadly, is a fairly conservative estimate.
In 2024, the IPPR thinktank predicted that up to 8m jobs in the UK could be lost forever to AI. More broadly, Professor Stuart Russell warned in his excellent book Human Compatible, that up to one billion jobs around the world are at risk from AI, while only up to ten million new jobs might be created. Russell’s apocalyptic forecast would upend society as we know it, and leave 990 million people unemployed. For context, that figure equates to the combined populations of the European Union, the UK, the US, Canada, Australia, South Africa, and Costa Rica. The fact that the UK government is actively trying to make this hellish future a reality is beyond the comprehension of most individuals capable of basic logic and reasoning.
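The 990 million figure is simply the difference between Russell’s two bounds. A trivial sketch, using only the numbers cited above (Python here purely to make the arithmetic explicit):

```python
# Back-of-envelope check of Stuart Russell's scenario, using the
# upper-bound figures cited above from Human Compatible (these are
# his estimates, not forecasts of my own).
jobs_at_risk = 1_000_000_000   # up to one billion jobs worldwide
new_jobs_created = 10_000_000  # only up to ten million new jobs

net_job_losses = jobs_at_risk - new_jobs_created
print(f"Net jobs lost: {net_job_losses:,}")  # Net jobs lost: 990,000,000
```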
2. On the tech industry’s side, instead of society’s
This plan feels like it’s been written by the tech industry, for the sole benefit of the tech industry. In their AI plan, the government writes that, “The AI industry needs a government that is on their side, one that won’t sit back and let opportunities slip through its fingers.” This kind of language couldn’t be any more pro-industry – the same industry that has plagued society with social media (the list of social media’s harms is simply too long to include in this post). So instead of regulating an industry which has proven itself to be on the wrong side of history, the government has embraced it with open arms. The Labour government has made it abundantly clear that they’re not on the side of society, but rather on the side of the corporations which are threatening to bring civilisation to its knees.
3. Where’s the regulation?
Related to the second issue, the government has made it clear that they’re not motivated to regulate the tech industry which poses monumental safety risks to our species. Instead, in their plan, they write that, “For too long we have allowed blockers to control the public discourse and get in the way of growth in this sector.” Those “blockers” they refer to are the regulators who are meant to keep us safe from the plethora of harms posed by this dystopian technology. As Robert Booth writes in the Guardian, “Regulators will be told to ‘actively support innovation’, setting up a potential clash with people who believe regulators’ primary role should be to protect the public from harm.”
For context concerning the urgent need for regulations, it’s worth revisiting a 2023 paper entitled, ‘Managing AI Risks in an Era of Rapid Progress’, written by AI experts including Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, amongst others. They stated that, “We urgently need national institutions and international governance to enforce standards to prevent recklessness and misuse. Many areas of technology, from pharmaceuticals to financial systems and nuclear energy, show that society requires and effectively uses governance to reduce risks. However, no comparable governance frameworks are currently in place for AI.” The Labour government has abandoned regulations much to the glee of tech corporations, who now hold humanity over a barrel. Why would they choose to do this to us? And why are we letting the government get away with this?
4. Data protection issues
In their response to the AI Opportunities Action Plan, the government said, “To make new advances in AI that benefit our society we need to ensure researchers and innovators have access to new data. We will responsibly, securely and ethically unlock the value of public sector data assets to support AI research and innovation through the creation of the National Data Library and the government’s wider data access policy.”
This is a controversial move, given that anonymised data from the NHS will be included in this data library, which companies and researchers will be able to use. But who’s given permission for their NHS data to be used in this manner? When were we consulted about these hurried plans? Moreover, can we trust tech companies with this data? What reasons do we have to trust them? If anything, their behaviour in recent months and years has shown them to be amongst the least trustworthy businesses on the planet. Do we really want them to have access to our data, even if it is anonymised? As Dan Milmo writes in the Guardian, this plan “will have to jump hurdles related to privacy, ethics and data protection.” The fact that the government hasn’t mentioned how it intends to clear these hurdles, or obtain our consent, is grave cause for concern.
5. A monumental waste of taxpayers’ money
To fund this unsound plan, the Guardian notes that it will cost taxpayers billions of pounds up to 2030. This at a time when the government is struggling to balance the books, and has cut winter fuel allowance for some pensioners. So, is the message here, that pensioners must suffer, so that tech corporations can profit? What kind of heartless individuals would prioritise expanding the wealth of the richest corporations on the planet, at the expense of the most vulnerable? Eternal shame will be the legacy of this government if they proceed with these ill-founded plans.
6. Safety has gone out the window
Writing in response to the AI Opportunities Action Plan, the government says, “In the coming years, there is barely an aspect of our society that will remain untouched by this force of change. But this government will not sit back passively and wait for change to come.” In other words, they’ve made it clear that they’ll prioritise AI rollout, regardless of the fact that safety hasn’t been addressed in any way, shape, or form. Why does this matter?
In February 2024, the House of Lords published a report on large language models (LLMs) and generative AI. Speaking to the inquiry, one of the leading experts in the field, Professor Stuart Russell, said, “The security methods that exist are ineffective and they come from an approach that is basically trying to make AI systems safe as opposed to trying to make safe AI systems. It just does not work to do it after the fact.” In other words, AI systems haven’t been built with safety factored in; instead, developers are scrambling to patch their systems with after-the-fact fixes. As Russell rightly argues, this is ineffective.
On a wider scale, Stuart Russell has mentioned in his book Human Compatible that the foundations of AI systems haven’t been built with human control in mind. As such, Russell says, “We need to do a substantial amount of work… including reshaping and rebuilding the foundations of AI.” To do this, tech companies would effectively need to rebuild their AI systems from scratch. However, no profit driven corporation will undertake this task willingly. Instead, regulation has to be in place to protect society, but as shown above, the government has abandoned their responsibility to regulate this untrustworthy industry.
The AI Opportunities Action Plan recommended that the government’s AI Safety Institute (AISI) should “maintain and expand its research on model evaluations, foundational safety and societal resilience research.” The government’s response was that they intend to make AISI a statutory body. Can the AISI be trusted to prevent algorithmic extinction? I worry that the answer is a resounding ‘no’.
On the AISI’s website, they state that they “aim to conduct rigorous, trustworthy assessments of advanced AI systems before and after they are launched.” But at no point do they say that they will prevent the launch of dangerous AI systems. Nowhere do they say they will hold the tech industry accountable. Nowhere does it state that they’ll push for meaningful regulation on a national or international level to prevent the release of these dangerous AI systems. They simply look into the systems and write assessments for the government to review. But the government has made it clear that they don’t want “blockers” in the way of tech corporations. So, what good is the AISI then? It feels like they’re simply there to be a tickbox – did the government evaluate this AI system before release? Yes! Tick! Done! But where does that leave safety? Where does that leave us?
To answer the latter question, it would appear that we’ve been left up a certain creek without a paddle. The government agreed to the 49th recommendation (as they agreed to all the other recommendations) in the AI Opportunities Action Plan, which was to “Drive AI adoption across the whole country.” Terrifyingly, the government brazenly boasts that, “there is barely an aspect of our society that will remain untouched by this force of change.” They’ve put us in the most vulnerable position possible, and they appear proud of their behaviour. This is unconscionable.
Yes, AI can be useful for medical purposes. This is something the government correctly acknowledged, when they wrote, “It is being used in hospitals up and down the country to deliver better, faster, and smarter care: spotting pain levels for people who can’t speak, diagnosing breast cancer quicker, and getting people discharged quicker.” But this is no excuse to abandon safety or scrutiny. Tech corporations must be held accountable at every level, for every step of the way.
As Robert Booth points out in the Guardian, this plan marks a complete reversal of the government’s priorities. Prior to this, they’d “focused on tackling the most serious ‘frontier’ risks from AI, relating to dangers involving cybersecurity, disinformation and bioweapons.” And now? They’ve removed “blockers” and sided with the very companies who threaten algorithmic extinction. In fact, they’ve gone as far as agreeing with the 50th recommendation in the AI Opportunities Action Plan, which proposes partnering “with the private sector to deliver the clear mandate of maximising the UK’s stake in frontier AI.” Frontier AI is cutting-edge, general-purpose AI – in other words, the extremely dangerous kind, which an astounding number of the world’s leading AI experts have warned about, signing a petition calling for a minimum six-month pause (or as long as necessary) in the development of these incredibly risky models. The government has flagrantly ignored these warnings and, with this position, has actively chosen to be part of the problem.
AI is very much an issue where the Precautionary Principle should be applied. The Precautionary Principle was devised to help policymakers act on the hole in the ozone layer, ensuring we took action to prevent the issue worsening, even as the scientific case was in its infancy and continuing to build. The World Commission on the Ethics of Scientific Knowledge and Technology defines the Precautionary Principle as, “an approach to guiding decisions when there is a plausible risk of irreversible consequences that would be unacceptable.” But even though AI is a prime example of where it should be applied, the government appears not to have even considered it.
To truly understand the safety risks associated with AI, I’ve spent months researching and writing blogs about them. I’d recommend this one and this one to get started.
7. AI causes severe environmental degradation
AI datacentres require vast amounts of energy and water. This puts completely unnecessary strain on both the water and energy grids.
Helena Horton writes in the Guardian that:
- Microsoft’s datacentres use anywhere between 1.8 litres and 12 litres of water per kilowatt hour.
- By 2027, one study estimates that 6.6 billion cubic metres of water will be consumed by AI datacentres around the world.
- The IEA (International Energy Agency) forecasts that next year (2026), datacentres will be using 1,000 terawatt hours of electricity. For context, that’s equivalent to the electricity used by the entire nation of Japan.
- In five years’ time, 4.5% of all energy generated will be used by datacentres, according to estimates by SemiAnalysis.
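These figures can be loosely cross-checked against one another. The sketch below applies Microsoft’s reported water-intensity range to the IEA’s electricity forecast; note the mismatch in scope (one company’s water intensity versus sector-wide electricity use), so this is a rough plausibility check of the 6.6 billion cubic metre estimate, not a derivation of it:

```python
# Rough cross-check of the Guardian figures listed above.
# Assumption: Microsoft's 1.8-12 litres-per-kWh range is applied
# sector-wide, which is only indicative.
electricity_kwh = 1_000 * 1e9        # IEA forecast: 1,000 TWh, in kWh
water_range_l_per_kwh = (1.8, 12.0)  # Microsoft's reported range

for litres_per_kwh in water_range_l_per_kwh:
    water_m3 = electricity_kwh * litres_per_kwh / 1_000  # 1 m^3 = 1,000 L
    print(f"{litres_per_kwh} L/kWh -> {water_m3 / 1e9:.1f} billion m^3")
# The 6.6 billion m^3 estimate for 2027 sits inside the resulting
# 1.8-12 billion m^3 range.
```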
As the government’s AI plan involves looking at the use of ‘small modular nuclear reactors’, the Guardian writes that there is a fear that this could lead to increased volumes of radioactive waste.
The more you look into this completely unnecessary technology, the more you realise how immoral and unethical it is to be pushing ahead as vehemently as the government intends, with no prior thought to the incalculable risks and harms that it presents. We are being let down in the biggest possible way, at the worst possible moment.
8. They want AI to interact with citizens – meaning more speaking with machines, and fewer human jobs
The 41st recommendation in the AI Opportunities Action Plan, states that “specialist narrow and large language models” could be used “for tens or hundreds of millions of citizen interactions across the UK.”
Anyone who’s ever interacted with an automated service will know how frustrating and time-wasting it can be. Now, the government wants more of it. But do we want this? I’d hazard a guess that the majority of people would side with ‘no’. We want competent people on the other end, not incompetent machines.
9. Government backing for AI gives tech companies the licence they need for developing more of these dangerous systems
Industries often look at what governments are planning. This government has given their full-throated support to one of the two industries (the other being the fossil fuel industry) that experts warn could bring civilisation to its knees. Despite that being their decision, it’s one for which we will all pay. This must change, which brings me to my final point.
10. Society hasn’t been given a say on the direction of AI
The government commissioned Matt Clifford to produce the AI Opportunities Action Plan. They then accepted all 50 of his recommendations. But where was our say on one of the biggest issues of our age? How was society consulted for our views? We weren’t. Why?
AI is being rapidly developed and foisted upon society, without us having the opportunity to discuss what we do or don’t want from it. Society deserves, and must have, a say in what happens with AI. What we desperately need is a citizens’ assembly on AI. Ideally, this would be conducted on an international level, with participants from every country in attendance. They’d be representative of the socio-demographic make-up of each country, and they’d hear from experts about the risks and benefits of AI. They’d discuss what they’ve learnt with each other in more depth and arrive at a set of recommendations for how AI could be safely developed, tested, released, and used, and for what specific purposes. These recommendations could form the basis for international regulations on AI.
Failing that, as a minimum, we must have a national citizens’ assembly on AI, for exactly the same purpose. Give society an opportunity to decide what happens, instead of handing the reins over to the tech industry with wanton abandon. While these assemblies are taking place, AI development and rollout should be halted indefinitely.
Given what’s at stake, we must be given a say. Otherwise tech companies and governments will continue pursuing their own agendas for as long as they’re allowed to get away with it.
Summary
The government’s AI plans are reckless and dangerous. They have no right to “unleash” AI upon us without consulting us on this civilisation-altering technology.
In short, the government is on the side of tech corporations. Regulations won’t be there to protect us. And safety, well… what safety? It feels like this government is rushing head first towards catastrophe, having put little thought into these plans.
Every single person in the UK needs to care about this, because as the government says, “there is barely an aspect of our society that will remain untouched.” It’s now possible that entire professions will be wiped out in a few years. People will lose their entire careers, their sources of income, maybe their homes, and heaven forbid, maybe even their lives. This is the direction the government is forcing us towards by siding with tech corporations, and refusing to meaningfully regulate them.
When industries aren’t regulated, terrible things happen. The pesticides industry got away with poisoning everything, until Rachel Carson released Silent Spring and people began making the connection between toxic pesticides and a multitude of health issues including cancer. The tobacco industry muddied the water for decades by denying the link between smoking and cancer, and countless lives were lost as a result. The CFC industry continued producing CFCs and expanding the hole in the ozone layer, despite growing scientific warnings. The fossil fuel industry has ignored and lobbied hard against 37 years of scientific warnings about the climate emergency and has driven us towards climate chaos. Now experts are urgently warning about the dangers of AI, but those warnings have fallen on deaf ears and we face the real risk of algorithmic extinction (up to a 20% chance of human extinction in 30 years according to Geoffrey Hinton). The Labour government’s response? To go balls to the wall in favour of the latest industry to harm society.
A Guardian Editorial on the government’s plans concluded that, “Governments must cut through industry noise and prioritise responsible AI governance.” This would involve recalling the current plan, and developing a new plan, based on sound input from society, which places a priority on regulations and safeguarding. That is the logical route forward. But logic and Labour aren’t natural bedfellows.
So, sadly, it falls on us to call on our elected representatives to make them do their job (which in case they’ve forgotten involves representing the interests of their constituents, not the interests of wealthy corporations). I’ve included a template below for contacting the PM’s office, and another generic e-mail for contacting elected representatives about the dangers of AI. I implore you to engage, and do something whilst we have a chance to stop this juggernaut of a catastrophe from happening. For it may be impossible to reverse the carnage in a few years’ time.
And what a hellish world it would be if we let this accelerating disaster play out, in sync with climate collapse.
Template 1 – Short Email for the PM
Email: contact form available here – https://contact.no10.gov.uk/
Subject: Mainlining AI into the nation’s veins is mainlining disaster. Please recall your disastrous AI plan
Dear Prime Minister,
I’m writing with regard to the AI Plan you announced on Monday 13th January 2025. I believe it’s a dangerous plan, which lacks any meaningful safeguards to protect society or jobs. The fact that you refer to regulators as “blockers”, and that you want to “unleash” AI by mainlining it into the nation’s veins, gives great cause for concern. A fuller argument listing numerous unanswered issues can be found here: https://www.ryanmizzen.com/mainlining-ai-more-like-mainlining-disaster-an-analysis-of-the-labour-governments-ai-plans/
I urge you to recall the plan with immediate effect, and instead create a new one based on input from society, with an underpinning focus on safeguarding and regulations. Both the tech industry and the fossil fuel industry are driving us towards a hellish future. Even if you choose not to stop them, I urge you at the very least not to encourage them.
Yours sincerely,
Template 2 – A Lengthier Email for the PM
Subject: Mainlining AI into the nation’s veins is mainlining disaster. Please recall your disastrous AI plan
Dear Prime Minister,
I’m writing to you with regard to the AI Plan you announced on Monday 13th January 2025.
I believe the plan you released is dangerous for a number of reasons, some of which I’ll list below. I don’t believe any elected representatives have the right to “unleash” this dystopian technology upon society, without society’s consent. To obtain such consent, it would be necessary to run a national citizens’ assembly on AI to present the full set of risks and benefits to a group of citizens, from which they can create a set of recommendations, which the government should implement. Only with that level of societal consent could the government rightfully choose to proceed. But this hasn’t happened.
Instead, the government has made it clear that they’ve sided with the tech corporations, that they see regulators as “blockers”, and that safety isn’t a concern, despite the limitless risks posed by AI.
It’s now possible that entire professions will be wiped out in a few years. The IPPR thinktank estimates that up to 8m jobs could be lost in the UK as a result of AI. Is this level of economic catastrophe something that the Labour government has modelled and planned for? Can you show what steps you’ve taken to protect these 8m people who will have no source of income, and ensure that they’ll still be able to pay their bills? If not, please could I ask why you haven’t taken an interest in this major issue? For even the Tony Blair Institute estimates that up to 3m jobs could be lost.
Reading through your plan, it felt like you weren’t thinking about society, when you said that, “The AI industry needs a government that is on their side, one that won’t sit back and let opportunities slip through its fingers.” This comes across as being on the side of the AI companies – the side that AI experts believe poses a threat to our very existence. Please could you explain why you’ve taken their side, instead of society’s?
We still lack any kind of meaningful national or international regulation on AI, Mr Prime Minister. That’s something that wasn’t talked about in any detail in your plan. Please could you lay out the protections you’ll put in place for society, given that said regulations are urgent. Regulators aren’t “blockers” as the plan called them. They’re essential. Why? Because when industries aren’t regulated, terrible things happen.
The pesticides industry got away with poisoning everything, until Rachel Carson released Silent Spring and people began making the connection between toxic pesticides and a multitude of health issues including cancer. The tobacco industry muddied the water for decades by denying the link between smoking and cancer, and countless lives were lost as a result. The CFC industry continued producing CFCs and expanding the hole in the ozone layer, despite growing scientific warnings. The fossil fuel industry has ignored and lobbied hard against 37 years of scientific warnings about the climate emergency and has driven us towards climate chaos. Now experts are urgently warning about the dangers of AI, but those warnings have fallen on deaf ears and we face the real risk of algorithmic extinction (up to a 20% chance of human extinction in 30 years according to Geoffrey Hinton, one of the three Godfathers of AI). So why has the Labour government sided with a civilisation-threatening industry, Mr Prime Minister?
I don’t recall giving permission for my NHS data to be given to tech corporations (regardless of whether it’s anonymised). Please could you let us know why we haven’t been consulted about this, or when you intend to do so?
There will be a massive cost tagged onto your plan. Please could you explain why spending billions in favour of tech corporations takes priority over spending that money on the many areas of society which desperately require it? I’m thinking of pensioners and their fuel allowance for some reason, Mr Prime Minister.
Datacentres for AI require vast volumes of water and energy. Given that this technology is completely unnecessary (humans have managed fine without it for the last 300,000 years of our evolution), please could you explain the full set of steps taken to prevent AI from causing widespread environmental harm, and how you’ll ensure that the datacentres aren’t using up scarce water supplies that we desperately need?
I urge you to recall your current plan, and develop a new one. This new plan needs to be based on sound input from society, with an underpinning focus on safeguarding and regulations – two areas that are bizarrely lacking in your current plan. Both the tech industry and the fossil fuel industry are driving us towards a hellish future. Even if you choose not to stop them, I urge you at the very least not to support and encourage them.
Yours sincerely,
Template 3 – General Email for Contacting Political Representatives About AI
Dear
I’m writing with regard to the rapid advances in AI and related technologies, which pose massive threats to society, jobs, arts and culture, democracy, privacy, and our collective civilisation.
Many AI systems are trained on copyrighted data and this has been done without consent or compensation. The way that machine learning works is flawed and this means that control hasn’t been designed into AI, which could create unimaginable problems further down the line. But AI isn’t just a future threat. The large language models (LLMs) already in the public domain threaten the livelihoods of writers and authors. AI image, video and audio generators pose risks to the jobs of artists, actors, and musicians. When combined together, these types of AI can have a devastating impact on democracy, and ‘deepfakes’ could be used by malicious actors for cybercrime purposes.
Both AI and the introduction of robots into the workforce jeopardise jobs on a scale like never before. By one estimate, up to a billion jobs could be lost, with only around ten million new jobs created. Mass unemployment could result, leading to social unrest, extreme poverty, and skyrocketing homelessness.
Through neurotechnology, it’s already possible to create an image of what people are thinking about – the ultimate invasion of thought privacy. Killer robots have been deployed around the world over the last few years, and can be easily made and sold on the black market, threatening our collective safety. Meanwhile AGI poses an existential risk to our civilisation.
We have a limited period of time to act before AI becomes so embedded in modern life that it can’t be extricated. I therefore urge you to act swiftly in outright banning the technology, or in holding a global citizens’ assembly on AI and using the guidelines that emerge to implement stringent regulations that forever protect and safeguard humanity.
With concern and expectation,
Selected Resources
Books
- Human Compatible: AI and the Problem of Control by Stuart Russell
- Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari
- The Alignment Problem: How Can Machines Learn Human Values? by Brian Christian
- The Coming Wave by Mustafa Suleyman
- Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat
- Code Dependent: Living in the Shadow of AI by Madhumita Murgia
- Falter: Has the Human Game Begun to Play Itself Out? by Bill McKibben
- For the Good of the World by A.C. Grayling
- Permanent Record by Edward Snowden
- The People Vs Tech: How the Internet is Killing Democracy (and how we save it) by Jamie Bartlett
- The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff
- Life 3.0 by Max Tegmark
- 1984 by George Orwell
- Superintelligence by Nick Bostrom
Articles
- Stuart Russell – AI has much to offer humanity. It could also wreak terrible harm. It must be controlled
- Dan Milmo – ‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years
- Yuval Harari, Tristan Harris and Aza Raskin – You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills
- Jeremy Lent – To Counter AI Risk, We Must Develop an Integrated Intelligence
- Alex Hern – Interview – ‘We’ve discovered the secret of immortality. The bad news is it’s not for us’: why the godfather of AI fears for humanity
- Yuval Noah Harari – ‘Never summon a power you can’t control’: Yuval Noah Harari on how AI could threaten democracy and divide the world
- Naomi Klein – AI machines aren’t ‘hallucinating’. But their makers are
- Yuval Noah Harari – Yuval Noah Harari argues that AI has hacked the operating system of human civilisation
- Eliezer Yudkowsky – Pausing AI Developments Isn’t Enough. We Need to Shut it All Down
- Jonathan Freedland – The future of AI is chilling – humans have to act together to overcome this threat to civilisation
- Daniel Kehlmann – Not yet panicking about AI? You should be – there’s little time left to rein it in
- Ian Hogarth – We must slow down the race to God-like AI
- Harry de Quetteville – Yuval Noah Harari: ‘I don’t know if humans can survive AI’
- Lucas Mearian – Q&A: Google’s Geoffrey Hinton — humanity just a ‘passing phase’ in the evolution of intelligence
- Sigal Samuel – AI companies are trying to build god. Shouldn’t they get our permission first?
- James Bradley – AI isn’t about unleashing our imaginations, it’s about outsourcing them. The real purpose is profit
- Alex Hern and Dan Milmo – Man v machine: everything you need to know about AI
- Society of Authors – Publishers demand that tech companies seek consent before using copyright-protected works to develop AI systems
- Alex Clark and Melissa Mahtani – Google AI chatbot responds with a threatening message: “Human … Please die.”
- The Guardian (Editorial) – The Guardian view on AI’s power, limits, and risks: it may require rethinking the technology
- Nick Robins-Early – OpenAI and Google DeepMind workers warn of AI industry risks in open letter
- Ryan Mizzen – AI and the Techopalypse
- Ryan Mizzen – 31 Reasons to Boycott AI
- Ryan Mizzen – Boycott Generative AI Before AI Makes Your Career Boycott You
- Ryan Mizzen – Terminology for the AI Crisis
- Ryan Mizzen – Mainlining AI? More Like Mainlining Disaster. An Analysis of the Labour Government’s AI Plans
- Ryan Mizzen – Results from the Society of Authors’ AI Survey 2024
- Ryan Mizzen – The Creator – Review
Video
- Ozzy Man Reviews – Who is Real Anymore!? AI
Other
- Pause AI campaign
- Pause Giant AI Experiments: An Open Letter
- A Right to Warn about Advanced Artificial Intelligence
- Restrict AI Illustration from Publishing: An Open Letter
- Statement on AI training
- Statement on AI Risk
- Open Letter to Generative AI Leaders
- Call to Lead
- Autonomous Weapons Open Letter: AI & Robotics Researchers
- Lethal Autonomous Weapons Pledge
- Stop Killer Robots
- Amnesty International – Stop Killer Robots
- autonomousweapons.org
My cli-fi children’s picture book, Nanook and the Melting Arctic is available from Amazon’s global stores including Amazon UK and Amazon US. My eco-fiction children’s picture book, Hedgey-A and the Honey Bees about how pesticides affect bees, is available on Amazon’s global stores including Amazon UK and Amazon US.