
31 Reasons to Boycott AI


AI developments are taking place at breakneck speed and will likely only accelerate from here. Despite this, society hasn’t had an opportunity to discuss what we want from this dystopian technology, and more importantly, what we don’t want from it. Nor are international regulations in place for the safe development of AI. As such, humanity is left completely exposed to a plethora of risks created by the profit-driven tech industry.

Generally speaking, giving companies free rein with a dangerous product only results in bad outcomes; just look at how the fossil fuel industry has knowingly driven us to the brink of climate chaos. We can’t let tech companies engulf society in an AI techopalypse as well. One civilisation-threatening crisis is enough; two such crises happening simultaneously is likely a stretch too far for our polarised society to cope with.

To give an analogy of our predicament, it feels like all of humanity is flying onboard a twin-engine aircraft. One of those engines has been blown out by the fossil fuel industry’s climate crisis. The good news is that a plane can still fly on a single engine – so there is still a chance of civilisation surviving. The bad news is that the tech industry has been eyeing up the last functioning engine and has decided to set it on fire with the release of its civilisation-altering AI technology. Typically, pilots would cut the fuel supply and shut down an engine when it catches fire. But if that’s done now, we’ll have no functioning engines left. In other words, we’d no longer be flying; we’d be falling to our demise. Such is the predicament we face, which our politicians have not only let happen but have in some cases facilitated, through their subsidies for the fossil fuel industry and their failure to regulate both emissions and AI technology.

Through their actions, the tech industry and the fossil fuel industry appear to be hellbent on bringing civilisation to its knees. But it doesn’t have to be this way. If enough people coalesce and demand change, politicians will be forced to put in place the regulations needed to safeguard humanity at this critical juncture in history. If we are to survive this century, this might just be one of the only routes that gets us there. Another way forward may be a global citizens’ assembly on AI, bringing together people from every country on Earth to deliberate on what we want from the technology and how it should be developed in a safe and highly regulated manner.

There’s no denying that AI could have many benefits in the field of medicine, as well as potentially enabling communication between us and other sentient creatures, which would radically reshape our understanding, and treatment, of the flora and fauna of our world. But focusing purely on potential benefits runs roughshod over the incalculable risks that the technology poses to everything we hold dear.

One of the simplest actions each of us can take is to boycott AI. This removes the social licence that tech companies have assumed exists for their products. If no one uses AI, there is little incentive to pump large sums of investor money into developing it. Below are some reasons to consider boycotting AI until society has been given a meaningful say on this technology, and stringent international regulations and codes of ethics are in place for the safe design, testing, release, and use of AI.

31 Reasons to Boycott AI

1. Experts warn that AI is dangerous

Experts have warned us again, and again, and again, and again, about the major risks that AI poses to society and to the future of civilisation. In a survey published in January 2024 of 2,778 researchers who’d published in top-tier AI outlets, 38% of respondents said there was at least a 10% chance of human extinction from AI.

In his book Human Compatible, Stuart Russell warns that AI models don’t have safety built in. On top of that, governments haven’t put in place any meaningful regulations or safeguards for the development of AI. Humanity is thus very exposed to AI being used for nefarious purposes by individuals, organisations, or governments with malicious agendas. Democracy, jobs, mental health, and societal wellbeing all face being eviscerated by the technology already in the public domain. With each new AI release, the risks only increase.

AI experts continue to sound the alarm as this technology rapidly develops. But, much like climate scientists, they’re being ignored by politicians who are responsible for putting in place meaningful laws and regulations to keep us safe.

2. Safety hasn’t been built into AI systems

AI systems have the potential to be dangerous. They have the potential to pursue their own goals, which might differ from those of humanity. Despite this, tech companies haven’t built safety into their AI systems. In February 2024, the House of Lords in the UK published a report on large language models (LLMs) and generative AI. Speaking to the inquiry, Stuart Russell said, “The security methods that exist are ineffective and they come from an approach that is basically trying to make AI systems safe as opposed to trying to make safe AI systems. It just does not work to do it after the fact.” As such, the foundations of AI may need to be rebuilt if we’re to have safe AI. Tech companies (motivated by profit, not social conscience) are unlikely to do this unless they’re forced to through regulations.

3. No meaningful regulatory frameworks are in place

We lack international regulatory frameworks for the safe development, testing, release, and use of AI systems. In a 2023 paper entitled ‘Managing AI Risks in an Era of Rapid Progress’, some of the world’s leading AI experts came together to warn about the risks and propose a route forward. These experts included Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, amongst others. They stated that, “We urgently need national institutions and international governance to enforce standards to prevent recklessness and misuse. Many areas of technology, from pharmaceuticals to financial systems and nuclear energy, show that society requires and effectively uses governance to reduce risks. However, no comparable governance frameworks are currently in place for AI.” Without meaningful protections in place, we’re all at risk from the harms that AI could unleash.

4. AI regulation is being prevented because of tech industry lobbying

CNBC reported that lobbying by tech companies on AI increased by 185% in 2023 compared to 2022. The report notes that 450 companies lobbied on AI and spent $957m on their lobbying efforts (an amount that includes lobbying on AI as well as other matters concerning those companies). This is a massive amount of money, and it puts pressure on politicians to govern in favour of the tech companies. If governments are bought by tech companies, where will regulation come from, and who will stand up for the public interest?

5. We may lose control of AI

As safety hasn’t been built into AI, there is a risk of losing control of AI systems. Imagine a scenario where an AI transfers itself across networks (to prevent it from being contained and switched off) and pursues an agenda at odds with our collective wellbeing. What measures are in place to prevent such a scenario from playing out, and if it does, what measures are in place to rein in the AI system? If it can’t be contained or turned off, what would that mean for humanity?

It’s worth pointing out that AI isn’t sentient (and doesn’t need to be in order to be dangerous). It doesn’t understand that humans don’t wish to be harmed. Thus, we’re collectively at risk if we lose control of an AI system.

6. Risk of global domination by tech companies

Tech companies are being allowed to do as they please, as regulations fail to keep up with the frantic pace of AI developments. It’s worth remembering that Edward Snowden exposed how governments keep track of everything we do online. Is it such a stretch to imagine those same governments collaborating with tech companies to control our behaviour? This might sound like a sci-fi scenario, but it already happens in one of the most populous countries in the world, as the documentary Total Trust: Surveillance State shows.

If other governments replicate this approach, we’d be living under even more repressive conditions. Once such a repressive regime takes over, it becomes near impossible to eradicate, given that people are constantly monitored and those who fight back are imprisoned (or worse). Imagine a more extreme version of Orwell’s 1984 – that’s what we could face if tech companies and governments joined up.

7. AGI threatens human extinction

“I explained the significance of superintelligent AI as follows: ‘Success would be the biggest event in human history … and perhaps the last event in human history.’” – Stuart Russell, Human Compatible: AI and the Problem of Control

The biggest AI risk, which Stuart Russell, Geoffrey Hinton, and other tech leaders are warning about, is the development of AGI: a form of intelligence that can learn anything a human can. It’s seen as perhaps the last stepping stone to superintelligence (which would far surpass humanity’s intelligence) and the singularity, which has no comparison and would upend civilisation as we know it. When you hear people talking about AI that could end the world, they’re probably referring to AGI.

Some experts believe that AGI would control everything, and would likely conclude that humanity has done more harm than good on this planet. The logical outcome would then be to exterminate our species. For more info, see section 1.7 in this blog.

8. AGI may treat us the way we treat other sentient beings

Humans aren’t the kindest creatures on the planet. We’ve harmed each other in world wars. We’ve caused the climate crisis, which will have consequences for all living things. We’ve driven a 69% average decline in monitored wildlife populations in just 50 years. Our destructive impact has been so large that some scientists have proposed naming our era the Anthropocene.

Imagine if AGI is developed, and it looks at what we’ve done to the world. Imagine it looks at how we treat other animals through factory farming, where hundreds of millions of animals are raised in horrific conditions, pumped with steroids to grow unnaturally quickly, and then slaughtered for consumption. If that is the standard we’ve set for the ‘humane treatment of sentient creatures’, could AGI use it as the basis for how it treats us? Are we happy letting tech companies open that can of worms?

If nothing else, this should create a moment of introspection for humanity. We don’t know when AGI will arrive, but many experts believe it’s likely within years or decades. So sometime this century, we’ll probably become the second smartest entity on the planet. This new alien superintelligence will see us for what we truly are: dominant and supremely destructive mammals. What will our comeuppance be?

9. AI threatens massive job losses

In Human Compatible, Stuart Russell chillingly notes that a billion jobs are at risk from AI, while only “five to ten million” data scientist or robot engineer jobs may emerge. If that forecast comes to pass, it would leave around 990 million people unemployed. What those people are meant to do for survival is anyone’s guess. For context (at the time of writing), 990 million people is roughly the combined population of the European Union, the UK, the US, Canada, Australia, South Africa, and Costa Rica, with a few million to spare.
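As a quick sanity check of the scale involved, here is a minimal back-of-the-envelope sketch in Python. The population figures are my own rough mid-2020s estimates, added purely for illustration; they are not taken from Russell’s book or any source cited here.

    # Back-of-the-envelope check of the job-loss figures quoted above.
    jobs_at_risk = 1_000_000_000  # Russell's ~1 billion jobs at risk
    new_jobs = 10_000_000         # upper end of "five to ten million" new roles
    print(f"Net jobs lost: {jobs_at_risk - new_jobs:,}")  # 990,000,000

    # Rough mid-2020s populations in millions (illustrative estimates only).
    populations_m = {
        "European Union": 448, "UK": 67, "US": 335, "Canada": 39,
        "Australia": 26, "South Africa": 60, "Costa Rica": 5,
    }
    print(f"Combined population: ~{sum(populations_m.values())} million")  # ~980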

A Goldman Sachs report was more optimistic, saying that ‘only’ around 300 million jobs would be impacted by AI. Whichever way you cut it, many people stand to lose their careers and professions. Such a massive and irreversible upheaval over such a short timespan has never been seen in human history.

It’s worth pointing out that AI is also being used to conduct job interviews, and this has had negative effects given that bias has been built into some AI systems. There’s also the fact that… it’s not human. So, it probably shouldn’t be used for interviewing humans who will work in human roles in human companies, with other human colleagues.

As if all this wasn’t enough, in a Guardian Live event in 2023, Stuart Russell warned that the ‘robotics dam’ was likely to break soon, meaning that robots could become more widespread and steal more human jobs. Between AI and robots, humans may be hard-pressed to find any ‘safe’ career path.

10. AI threatens democracy

Generative AI can produce deepfakes, as well as campaign text for political parties. These tools could be used to deceive voters and tarnish opposition parties. The chapter assessing risks in the House of Lords report on LLMs states that, “A reasonable worst case scenario might involve state and non-state interference undermining confidence in the integrity of a national election, and long-term disagreement about the validity of the result.”

If democracy is hijacked and dictatorships become prevalent, our future would take a turn for the worse, given the monumental task of removing authoritarian leaders (especially those receiving support from tech companies). Do we want to run this risk, and if so, why?

11. AI could turbo-boost cybercrime

Rates of cybercrime could skyrocket as a result of AI. One way this might happen is through the rise of ‘deepfakes’: videos, pictures, and audio created from scratch using a person’s likeness. For example, existing video footage or voice recordings of a person can be fed through an AI generator to create new clips of them doing or saying anything.

Fraudsters are using this to trick people into thinking they’re someone they’re not. If you get a call from a family member asking for money, and it sounds like them, how can you be sure it isn’t them? This happened to a mother in the US, who received a call from scammers claiming to have kidnapped her daughter for ransom; they used AI to mimic her daughter’s voice, having obtained audio of it from social media. The AI generator was so accurate that the mother was completely taken in and extremely distressed. These kinds of heinous activities may increase the longer AI remains in the public domain with no regulation or oversight.

The threat of AI deepfakes was again on show in a shocking video featuring the money saving expert Martin Lewis. Scammers made an AI-generated video of Lewis backing an app he had never seen or heard of, and he had to warn people about the scam on his Twitter (X) account. The Financial Conduct Authority (FCA) in the UK has also warned investors, banks, and insurers of the potential for scammers to use AI for fraudulent purposes.

12. AI could be weaponised for cyberwarfare

The House of Lords report on LLMs states that, “A reasonable worst case scenario might involve malicious actors using LLMs to produce attacks achieving higher cyber infection rates in critical public services or national infrastructure.” Nations are thus exposed to being targeted by those with nefarious agendas. Given the current global instability and talk of World War 3, the risk of AI-fuelled cyberwarfare should be taken seriously.

13. AI could aid terrorism

The House of Lords report on LLMs notes that AI enables easier creation, translation, and dissemination of terrorist propaganda, including hate speech. It gives the example of Meta’s AI model LLaMA, which was leaked onto 4chan, where “users reportedly customised it within two weeks to produce hate speech chatbots, and evaded take-down notices.” This shows how reckless tech companies have been, and how little thought they’ve given to the consequences of their AI systems.

14. AI makes chemical and biological warfare easier

The House of Lords report on LLMs warns of potential biological and chemical release, saying that, “there is evidence that LLMs can already identify pandemic-class pathogens, explain how to engineer them, and even suggest suppliers who are unlikely to raise security alerts.” In a 2022 experiment, researchers repurposed a drug-discovery AI and found it could generate 40,000 potentially lethal molecules in just six hours. What safety protocols have tech companies and politicians put in place to ensure that such AI systems don’t fall into the hands of people trying to harm society? Are they 100% certain that society is safe with these systems in the public domain? Given their potential for disastrous misuse, why were these systems made at all?

15. AI could negatively impact mental health on a global scale

A multitude of mental health issues will arise if AI permanently steals entire careers away, if it upends democracy, if it leads to repressive surveillance states, if social connections are replaced by AI connections, and if we face the real risk of extinction through the development of AGI.

With climate and ecological breakdown, eco-anxiety and climate anxiety are becoming more prevalent. Will we now experience AI-anxiety on top of everything else?

16. AI could negatively impact our memory

There is a risk that memory will suffer as AI becomes more ingrained in our lives. The Guardian explored this in an article asking, “Will AI make us stupid?” Will we need to remember anything any more if a personal AI assistant reminds us of everything? What incentive will people have to learn a new language when AI can translate in real time between speakers of different languages? The tech industry claims that AI is the next step in human evolution. But if AI makes us stupid because we use our brains less, it will be taking humanity backwards. Beware the misleading claims of the tech industry, which has profit and power in its sights.

17. AI threatens to make us more reliant on tech

We’re already glued to our phones. One study from 2023 suggested that people spend an average of 3 hours and 46 minutes on their phones each day. With the release of AI personal assistants which can organise people’s lives for them, people may find themselves more and more attached to non-living entities. Some people are also turning to AI for their relationships, and to replicate communication with people who’ve passed away. Technology like this may rip apart the fabric of what makes us human.

For context, consider The Good Life by Robert J. Waldinger and Marc Schulz. The book is about the Harvard Study of Adult Development, which spanned 84 years and is “the longest in-depth longitudinal study of human life ever done.” The authors said that if they had to condense the study’s results into a single message, it would be that “Good relationships keep us healthier and happier.” For clarity, they are referring to human relationships, such as those with family, colleagues, friends, and life partners.

By turning away from humans, and towards technology (which enriches the tech companies), we are moving away from what makes us human and away from what we need to thrive in this life.

18. AI threatens to upend our ability to tell real from fake

With the release of systems like OpenAI’s Sora, it will become near impossible to distinguish what’s real from what’s fake. We live in an age of fake news, where social media is flooded with polarised hate and disinformation. AI makes it easier to generate fake messages and fake stories. As humans are creatures of story, it’s not outside the realms of possibility that people will come to believe the trash generated by AI, given that it will be just as convincing as (if not more so than) human-produced content. What kind of world would we be living in if we couldn’t tell whether anything was real or fake any more? Would it be worth bringing children into such a dystopian future? What do we, as reasonable people, stand to gain by living in such a world? Do we consent to that kind of future?

19. AI has inbuilt bias

AI trained on data containing bias will reflect that bias in the way it behaves. The House of Lords report on LLMs says that AI may “entrench discrimination (for example in recruitment practices, credit scoring or predictive policing); sway political opinion (if using a system to identify and rank news stories); or lead to casualties (if AI systematically misdiagnoses healthcare patients from minority groups).” Thus, AI may make discrimination worse.

20. AI may have data protection issues

Generative AI is trained on large data sets. If any of that data isn’t anonymised, there is a risk of the AI regurgitating personal information in its outputs.
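To illustrate what anonymisation can mean in practice, here is a minimal sketch of a pre-training redaction pass in Python. The regex patterns and the sample record are illustrative assumptions on my part, not anything drawn from the report or from how any particular company actually processes its data.

    import re

    # A minimal sketch of a pre-training anonymisation pass:
    # strip obvious personal identifiers before text enters a training set.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

    def redact(text: str) -> str:
        """Replace obvious personal identifiers with placeholder tokens."""
        return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

    record = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
    print(redact(record))  # Contact Jane at [EMAIL] or [PHONE].

Real systems face a far harder version of this problem – names, addresses, and context that no simple pattern can catch – which is precisely why regurgitation remains a risk.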

21. AI training sets have breached copyright rules

Generative AI is trained using data. This data has been hoovered up online (often without permission). Creatives such as authors have had their copyrighted work used, without consent or compensation, to train AI models – the same AI models that have the potential to eviscerate their profession. In a previous post, I explained that two of my blogs had been used in an AI training set without my permission or compensation. This issue remains unresolved for most creatives.

22. AI threatens the entire creative industry

From writing to illustration, every type of creativity is under attack and at risk from AI-generated content. Art is the bedrock upon which civilisation is built, and stories are the most powerful form of communication we have. When AI takes over this space and produces our art, and influences people through its stories – where will that lead humanity? If an AI system develops its own goals, will it influence us in a negative way? And if AI remains controlled by tech companies, who are motivated only by profit, can we trust that they’ll influence us in a non-biased manner? What will happen to writers, artists, illustrators, actors, and voiceover artists? The people who’ve dedicated their careers to entertaining and inspiring the world may be swept aside. Who will protect them?

23. AI is a revolutionary technology with no comparison

Some people argue that technology always moves forward and always brings change – look at the industrial revolution, or at how cars replaced horses as a mode of transportation. But a 2023 KPMG report makes it clear that AI represents “a radical shift from past trends in automation.” In other words, we have nothing to compare AI against. What other technology, for example, has threatened up to a billion jobs? We’re in uncharted territory, and it’s worth remembering that being an early adopter of AI gives companies a social licence to continue developing these unsafe and unregulated systems. Is that really something you want on your conscience?

24. AI won’t benefit everyone as the tech companies claim

Tech companies claim that AI will benefit the world. But there is good reason to believe this won’t be the case. An Observer editorial on AI notes that, “A recent seminal study by two eminent economists, Daron Acemoglu and Simon Johnson, of 1,000 years of technological progress shows that although some benefits have usually trickled down to the masses, the rewards have – with one exception – invariably gone to those who own and control the technology.” In other words, the people who stand to benefit most from AI are the shareholders of the companies developing it. Profit trumps everything and everyone else.

25. Neurotechnology threatens to invade the privacy of our thoughts

Neurotechnology can now reconstruct an image of what we’re thinking about. Neuroscientists are hoping to refine the technology to “intercept imagined thoughts and dreams.”

In a Guardian interview, Prof Nita Farahany says she is most worried about “Applications around workplace brain surveillance and use of the technology by authoritarian governments including as an interrogation tool.” Neurotechnology thus has the potential to degrade human rights, expose activists, and make this a world where people are afraid to think for themselves – a dictator’s dream, for all intents and purposes.

A separate Guardian article warns that, “there are clear threats around political indoctrination and interference, workplace or police surveillance, brain fingerprinting, the right to have thoughts, good or bad, the implications for the role of “intent” in the justice system, and so on.”

This is the ultimate invasion of privacy.

26. AI and biotechnology might create a new hybrid class of humans

The tech industry seems to be quite confused about the idea of human evolution – AI isn’t human, and therefore isn’t part of our evolution as the industry claims. On that note, an Australian team has been awarded funding to merge AI with brain cells, which they intend to use to build better AI machines. But is there a risk that this kind of research could lead us to the point where humans and machines merge?

Such an event would radically alter human society and create new classes of human/machine hybrids. The technology would be expensive, meaning only the wealthiest individuals could afford it. In that scenario, the rest of humanity would be far weaker in both strength and intelligence, and we would have a permanent new ruling class. This raises the question: what would happen to the rest of us? Instead of technology abolishing suffering, it would likely exacerbate it.

In Falter, Bill McKibben quotes Yuval Noah Harari, who says “Once technology enables us to re-engineer human minds, Homo sapiens will disappear, human history will come to an end, and a completely new process will begin, which people like you and me cannot comprehend.” As a global family of eight billion people, do we consent to this happening?

McKibben also quotes Ray Kurzweil, a director of engineering at Google, who says that, “We’ll have a synthetic neocortex in the cloud. We’ll connect our brains to the cloud just the way your smartphone is connected now.” This crazy future is one that people in the tech sector are trying to create. If humanity doesn’t like what the tech industry is doing, or the future it’s dragging us towards, then the industry must be forced to stop through regulations and internationally binding legislation.

27. AI-powered killer robots and lethal autonomous weapons systems (AWS) may proliferate

It’s believed that the first documented use of a killer robot was in spring 2020 in Libya. The first documented real-life use of a swarm of AI-guided drones came in May 2021, when they were used by Israel in Gaza. The Autonomous Weapons website says there have been “numerous reports” of killer robots being used around the world since those first two incidents. Killer robots aren’t a future threat; they’re an existing risk that will only become more pronounced and deadly as time goes by.

Not only could these types of weapons be used by authoritarian regimes to kill large numbers of dissidents, activists, rebels, or protestors, but they could also be manipulated by AI if there were ever a war against humanity. According to the i newspaper, a contributor giving testimony to the House of Lords committee on killer robots warned that the use of AI in defence “presents significant dangers to the future of human security and is fundamentally immoral”. The article also says that mixing AI with weapons poses an “unfathomable risk” to our species, and that such systems could turn against their human operators and kill civilians indiscriminately.

28. Tech companies are mirroring the fossil fuel industry’s tactics

There is a motto within the tech industry of “move fast and break things.” This is something we should keep in mind, given that their products threaten to break civilisation. But now the tech industry is mirroring the tactics of the fossil fuel industry, to protect their dangerous products.

The parallels between the two industries are startling. Some of the wealthiest companies in the world are ignoring the dangers of their product and investing billions to become market leaders, all while lobbying politicians to prevent regulation of their industry. Meanwhile, a growing number of experts are sounding the alarm and calling for urgent political intervention to prevent catastrophe. Those calls are falling on deaf ears as governments woo the companies at the heart of the problem for political or selfish gain. That’s the climate crisis, and the AI crisis, in a simplistic nutshell.

29. AI causes climate and environmental harm

AI requires massive servers, which use significant amounts of energy and water. Unless that energy comes from renewables, AI is actively driving the climate crisis. Water abstraction for use in server farms is another contentious issue, especially as climate change depletes freshwater resources and the remaining water is needed for human consumption.

30. Society hasn’t been consulted on AI

AI is being developed so rapidly that it’s been foisted upon society without people having the opportunity to discuss what we do or don’t want from it. All the while, politicians are being lobbied to prevent regulation, and tech companies are competing with each other to bring out new versions of their AI systems and to develop AGI – the release of which would likely be the final event in humanity’s troubled history.

Thus, we need the opportunity to publicly discuss this technology before more harm is done. AI development must be halted indefinitely until this happens. In an ideal scenario, a global citizens’ assembly would be convened with participants from every country in attendance. They’d be representative of the socio-demographic make-up of each country, and they’d hear from experts about the risks and benefits of AI. They’d discuss what they’ve learnt with each other in more depth and arrive at a set of recommendations for how AI could be safely developed, tested, released, and used, and for what specific purposes. These recommendations could form the basis for international regulations on AI.

The people must have their say, given what’s at stake. But I fear the tech companies and politicians don’t care what the people think and will continue pursuing their own agendas for as long as they’re allowed to get away with it.

31. No AI benefits without mitigating the risks

“If the risks are not successfully mitigated, there will be no benefits.” – Stuart Russell, Human Compatible: AI and the Problem of Control

It’s safe to say that AI poses a plethora of risks. Given what they could mean for humanity, it’s essential that each and every one of them is addressed. Society must have a say on what we want from this technology. Safeguards and international regulations must be put in place. Tech companies must become transparent, work cooperatively on any technology that is allowed to be developed, and be held accountable for the harms caused by their dangerous products. All of this should be self-evident. But tech companies will keep talking up the benefits, to distract from the growing list of risks and the fact that they haven’t built safety into their systems.

Summary

We face the real risk of the AI crisis outpacing climate breakdown. As massive as both of these burdens are, we must tackle them simultaneously. If we don’t, experts believe that either crisis could bring about the end of civilisation; these words aren’t intended to scaremonger, but rather to drive home the magnitude of the challenges we face.

Tech companies won’t voluntarily do the right thing. Politicians, being heavily lobbied by the tech industry, probably won’t do the right thing either. So it’s sadly left to us, as responsible citizens, to lead the way.

Boycotting AI is something each of us can easily do. On top of this, we need to make our feelings clear to politicians and push for the regulatory safeguards that experts say we desperately need. The future is imperilled in a way never before experienced by humanity. But if we unify and work together across the world, I believe we can tackle both issues. Time isn’t on our side, though, and the forces we’re up against are enormously powerful. This really will be the fight of our lives because, ultimately, it will determine what future our civilisation has. That’s what’s at stake, and that’s why we must fight for our collective future.

Selected Resources

Books

  • Human Compatible: AI and the Problem of Control by Stuart Russell
  • Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari
  • The Alignment Problem: How Can Machines Learn Human Values? by Brian Christian
  • The Coming Wave by Mustafa Suleyman
  • Falter: Has the Human Game Begun to Play Itself Out? by Bill McKibben
  • For the Good of the World by A.C. Grayling
  • Permanent Record by Edward Snowden
  • The People Vs Tech: How the Internet is Killing Democracy (and how we save it) by Jamie Bartlett
  • The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff
  • Life 3.0 by Max Tegmark
  • 1984 by George Orwell
  • Superintelligence by Nick Bostrom


Template for Contacting Political Representatives about AI

Dear

I’m writing regarding the rapid advances in AI and related technologies, which pose massive threats to society, jobs, arts and culture, democracy, privacy, and our collective civilisation.

Many AI systems are trained on copyrighted data, without consent or compensation. The way that machine learning works is flawed, meaning control hasn’t been designed into AI, which could create unimaginable problems further down the line. But AI isn’t just a future threat. The large language models (LLMs) already in the public domain threaten the livelihoods of writers and authors. AI image, video, and audio generators pose risks to the jobs of artists, actors, and musicians. Combined, these types of AI could have a devastating impact on democracy, and ‘deepfakes’ could be used by malicious actors for cybercrime.

Both AI and the introduction of robots into the workforce jeopardise jobs on a scale never seen before. By one estimate, up to a billion jobs could be lost, with only around ten million new jobs created. Mass unemployment could result, leading to social unrest, extreme poverty, and skyrocketing homelessness.

Through neurotechnology, it’s already possible to create an image of what people are thinking about – the ultimate invasion of thought privacy. Killer robots have been deployed around the world over the last few years and can be easily made and sold on the black market, threatening our collective safety. Meanwhile, AGI poses an existential risk to our civilisation.

We have a limited period of time to act before AI becomes so embedded in modern life that it can’t be extricated. I therefore urge you to act swiftly, either by outright banning the technology or by holding a global citizens’ assembly on AI and using the guidelines that emerge to implement stringent regulations that forever protect and safeguard humanity.

With concern and expectation,

My new cli-fi children’s picture book, Nanook and the Melting Arctic, is available from Amazon’s global stores, including Amazon UK and Amazon US. My eco-fiction children’s picture book, Hedgey-A and the Honey Bees, about how pesticides affect bees, is available on Amazon’s global stores, including Amazon UK and Amazon US.
