

30 Common Myths About AI

Industry proponents repeatedly tell us that AI will bring a wealth of benefits. Indeed, it should be acknowledged that AI has the potential to be beneficial in areas such as health and medicine, and perhaps even in interpreting animal communication. However, a problem arises when AI advocates talk only about the benefits and refuse to acknowledge the plethora of risks posed by this technology.

Indeed, experts warn that AI could eviscerate the jobs market, lead to entrenched autocracies, and upend our culture and our sense of what makes us human. In a worst-case scenario, it could even lead to human extinction. That is just a small selection of the potential harms I covered in a previous post.

In this post, I debunk common myths about AI, in an attempt to counter the growing problem of techwashing.

Debunking 30 common myths about AI

Myth #1 – AI will benefit everyone.

An Observer editorial on AI notes that "A recent seminal study by two eminent economists, Daron Acemoglu and Simon Johnson, of 1,000 years of technological progress shows that although some benefits have usually trickled down to the masses, the rewards have – with one exception – invariably gone to those who own and control the technology."

History is clear: the rewards of technological progress go to those who own and control the technology. Ordinary citizens like us will not see the benefits, and will instead be lumbered with the disastrous harms of releasing this unsafe technology into the world.

Myth #2 – AI is a natural part of human evolution.

Firstly, AI isn’t natural and is in no way part of our evolution. Secondly, modern humans have been around for over 300,000 years without needing AI, so one could make a strong argument that AI is not essential to us in any way, shape, or form. There is no solid case for developing this unsafe and largely unregulated technology without the explicit consent of all of humanity – something that can only be achieved through global citizens’ assemblies on AI.

True human evolution would involve humans becoming better people – helping each other, fostering global unity, and ending all forms of war and conflict. Tech companies promote the myth of AI being part of our evolution to mask their true motives: promoting their products and enriching themselves.

Myth #3 – AI will only bring benefits; there are no downsides.

This is a pure fallacy. I, for one, managed to come up with 31 downsides to AI, and that barely scratches the surface. Experts have warned us again, and again, and again, and again, about the major risks that AI poses to society and to the future of civilisation.

In a survey published in January 2024 of 2,778 researchers who’d published in top tier AI outlets, 38% of the respondents said there was at least a 10% chance of human extinction from AI. If that isn’t a downside, I don’t know what is.

In his book Human Compatible, Stuart Russell warns that AI models don’t have safety built in. On top of that, no meaningful global regulations or safeguards have been put in place for the development of AI. Humanity is very exposed to AI being used for nefarious purposes by individuals, organisations, or governments with malicious agendas. Democracy, jobs, mental health, and societal wellbeing all face being eviscerated by the technology already in the public domain. With each new AI release, the risks only increase.

This myth is a dangerous example of techwashing, which must be addressed whenever it crops up.

Myth #4 – AI threats are overhyped.

Wrong again. In February 2024, the House of Lords in the UK published a report on Large language models (LLMs) and generative AI. The fifth chapter of the report deals with risks, and reading through them it soon becomes apparent that many of the risks are based on existing technology in the public domain.

To take just one example, the report states that “there is evidence that LLMs can already identify pandemic-class pathogens, explain how to engineer them, and even suggest suppliers who are unlikely to raise security alerts.” In an experiment a few years back, AI identified up to 40,000 lethal molecules in just six hours. Thus, existing AI systems could make chemical warfare easier and more deadly.

So why does the myth live on that AI threats are overhyped? Why do many people in the tech industry refuse to acknowledge these risks? Professor Stuart Russell also finds this baffling. In his book Human Compatible, he attributes this mindset to “tribalism in defense of technological progress.”

Myth #5 – People are making a big fuss over AI risks, without good reason.

Geoffrey Hinton, one of the godfathers of AI, said that the reason we’ve avoided disasters like nuclear war is that people “made a big fuss”, which resulted in overreactions that led to safeguards being put in place to avert disaster. Hinton said it’s better to overreact than underreact in these situations. That’s why raising the alarm over AI risks at this stage is absolutely critical.

Myth #6 – Even if there are risks, AI stands to bring so many benefits that we simply have to accept the harms.

This myth was succinctly shot down by Professor Stuart Russell in Human Compatible. He writes that “if the risks are not successfully mitigated, there will be no benefits.”

Myth #7 – AI is safe.

Stuart Russell states in Human Compatible that the foundations of AI systems haven’t been built with human control in mind. As such, Russell says, “We need to do a substantial amount of work… including reshaping and rebuilding the foundations of AI.” This should be a priority, even if it means tech companies starting from scratch. But they won’t do so voluntarily, even if that’s the only sensible solution on the table. Therefore, regulations must be put in place for these companies to abide by.

It’s also worth pointing out that in a Guardian interview, Geoffrey Hinton said that the odds of a disaster caused by AI might not be so different from a toss of a coin. That’s a fifty-fifty chance of catastrophe and potentially the end of humanity.

If you were told that getting on a plane had a fifty-fifty chance of ending in disaster, I doubt you’d call that ‘safe.’

Myth #8 – Algorithmic extinction (AE) couldn’t happen in real life.

I define AE as “extreme civilisational chaos, or collapse, brought about through human-developed technology, such as AI, AGI, or superintelligence.”

Earlier this year, a survey of 2,778 researchers who’d published in top tier AI outlets examined this question. The result? 38% of the respondents said there was at least a 10% chance of human extinction from AI.

The biggest AI risk that the likes of Stuart Russell, Geoffrey Hinton, and other tech leaders are warning about is the development of AGI. This is a form of intelligence that can learn anything a human can. It’s seen as perhaps the last stepping stone to superintelligence (which would far surpass humanity’s intelligence) and the singularity, which has no comparison and would upend civilisation as we know it.

Some experts warn that AGI could end up controlling everything, and might conclude that humanity has done more harm than good on this planet – in which case, the logical outcome would be to exterminate our species. In Human Compatible, Stuart Russell sums it up succinctly: “Success would be the biggest event in human history … and perhaps the last event in human history.” For more info, see section 1.7 here.

Myth #9 – Experts aren’t concerned about the risks posed by AI.

This is blatantly false. Here are just a few examples of expert warnings:

  • In a survey published in January 2024 of 2,778 researchers who’d published in top tier AI outlets, 38% of the respondents said there was at least a 10% chance of human extinction from AI.
  • Over 33,000 people signed an open letter calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The signatories included hundreds of AI experts, such as Professor Stuart Russell and one of the godfathers of AI, Yoshua Bengio.
  • Two of the three godfathers of AI, Geoffrey Hinton and Yoshua Bengio, along with a large number of AI experts, assigned their names to the following statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
  • Yuval Noah Harari, one of the great thinkers of our age, warns in an extract of his new AI book published in the Guardian that “Humanity is closer than ever to annihilating itself.”
  • Professor Stuart Russell has warned about the risks of AI in his book Human Compatible. He also writes in a Guardian article that “Humanity has much to gain from AI, but also everything to lose.”
  • In a Guardian interview, Geoffrey Hinton said that the odds of a disaster caused by AI might not be so different from a toss of a coin. That’s potentially a fifty-fifty chance of catastrophe.
  • In a 2023 paper entitled ‘Managing AI Risks in an Era of Rapid Progress’, some of the world’s leading AI experts came together to warn about the risks and propose a route forward. These experts included two of the three godfathers of AI, Yoshua Bengio and Geoffrey Hinton, as well as Professor Stuart Russell, amongst others. They stated that “We urgently need national institutions and international governance to enforce standards to prevent recklessness and misuse. Many areas of technology, from pharmaceuticals to financial systems and nuclear energy, show that society requires and effectively uses governance to reduce risks. However, no comparable governance frameworks are currently in place for AI.”

Myth #10 – AI is just another type of technology, and we’ve always had technological change.

AI is unlike any other kind of technology, and the risks are gargantuan in comparison. To underline this point, a KPMG report makes it clear that AI represents “a radical shift from past trends in automation.” We’re in completely uncharted and dangerous territory.

Myth #11 – AI needs to be sentient to create harm.

This is a myth that’s been dispelled by numerous experts, including Professor Stuart Russell in Human Compatible. To put it simply, Russell states that “All those Hollywood plots about machines mysteriously becoming conscious and hating humans are really missing the point: it’s competence, not consciousness, that matters.”

A machine just needs to have a purpose, and by having a purpose it may naturally seek to prevent anything from interfering with the achievement of its goal. This may include a human’s attempt to turn it off, which creates an additional problem: nearly every goal, apart from self-destruction, requires the machine to stay on.

Thus, it stands to reason that any basic goal given to a machine will involve a strong resistance to humans turning it off. As Brian Christian writes in The Alignment Problem, “A system given a mundane task like “fetch the coffee” might still fight tooth and nail against anyone trying to unplug it, because, in the words of Stuart Russell, “you can’t fetch the coffee if you’re dead.””
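To make that logic concrete, here is a minimal toy sketch in Python – my own construction for illustration, not a model from Russell’s or Christian’s books – showing why, under almost any objective, resisting shutdown yields higher expected utility than allowing it:

```python
# Toy illustration of the off-switch problem: a goal only pays off if the
# agent is still running, so the expected utility of any positive goal is
# higher when the agent resists being switched off.

def expected_goal_utility(p_shutdown: float, goal_value: float = 1.0) -> float:
    """Expected utility of a goal that pays off only if the agent
    stays powered on (probability 1 - p_shutdown)."""
    return (1 - p_shutdown) * goal_value

# Hypothetical numbers: an agent that allows shutdown faces some chance
# of being turned off; one that resists drives that chance toward zero.
allow = expected_goal_utility(p_shutdown=0.5)   # human may flip the switch
resist = expected_goal_utility(p_shutdown=0.0)  # agent blocks the switch

print(f"allow shutdown: {allow}, resist shutdown: {resist}")
# resist > allow for ANY positive goal_value:
# "you can't fetch the coffee if you're dead."
```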

Myth #12 – AI will create a jobs boom.

This inaccurate talking point is another example of techwashing. In Human Compatible, Professor Stuart Russell chillingly notes that a billion jobs are at risk from AI, while only “five to ten million” data scientist or robot engineer jobs may emerge. If that forecast comes to pass, it would leave 990 million people unemployed. What those people are meant to do for survival is anyone’s guess. For context, 990 million people is equivalent to the combined population of the European Union, the UK, the US, Canada, Australia, South Africa, and Costa Rica, with a few million to spare.
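As a rough sanity check on that comparison, here is a back-of-envelope sketch. The population figures are approximate (circa 2023) and are my own assumptions for illustration, not numbers from the sources cited above:

```python
# Back-of-envelope check: does ~990 million roughly match the combined
# population of the EU, UK, US, Canada, Australia, South Africa, and
# Costa Rica? Populations in millions, approximate and assumed.
populations_m = {
    "European Union": 448,
    "United Kingdom": 67,
    "United States": 335,
    "Canada": 39,
    "Australia": 26,
    "South Africa": 60,
    "Costa Rica": 5,
}

jobs_at_risk_m = 1_000   # Russell's ~1 billion jobs at risk
new_jobs_m = 10          # upper end of "five to ten million" new jobs
net_unemployed_m = jobs_at_risk_m - new_jobs_m  # 990 million

combined_m = sum(populations_m.values())        # ~980 million
print(f"Net unemployed: {net_unemployed_m}m vs combined population: ~{combined_m}m")
# 990m exceeds the ~980m sum, consistent with "a few million to spare".
```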

A Goldman Sachs report was more optimistic, saying that ‘only’ around 300 million jobs would be impacted by AI. The OECD (Organisation for Economic Co-operation and Development) released forecasts saying that highly skilled jobs were most at risk of being lost to AI. According to the estimates, those professions account for 27% of jobs in the OECD’s 38 member states, and span sectors including law, medicine, and finance.

In the UK, the IPPR thinktank estimates that 8 million jobs could be stolen by AI. In the US, research by the Oxford Martin School looked at how vulnerable certain professions were to computerisation, and estimated that around 47% of employment was at risk.

A KPMG report on Generative AI and the UK Labour Market lists the percentage of tasks that might be automated in different professions as a result of AI:

  • Authors, writers and translators – 43%
  • Programmers and software development – 26%
  • PR and communication directors – 25%
  • IT user support technicians – 23%
  • Graphic designers – 15%
  • Personal assistants – 11%
  • Legal professionals – 11%
  • Business and related research professionals – 10%
  • Marketers – 7%
  • Auditors – 7%
  • Biological and biomedical scientists – 6%
  • Teachers in higher education – 6%

Meanwhile, according to a Gallup survey, 72% of the 135 Fortune 500 CHROs interviewed see AI replacing jobs in their workplaces over the next three years. As Gallup puts it, “leaders believe the future is automated”. Whichever way you cut it, many people stand to lose their careers and their professions. Such a massive and irreversible upheaval over such a short time span has never been seen in human history.

Examples of jobs already being lost to AI include:

  • According to the Society of Authors’ AI Survey in 2024, 26% of illustrators and 36% of translators have already lost work to generative AI. As a result, 37% of illustrators and 43% of translators have seen their income fall.
  • Tech companies including Microsoft, PayPal, Snap, and eBay have laid off 34,000 staff since January 2024, as they pivot towards AI.
  • BT is set to replace around 10,000 workers with AI.
  • IBM has said around 7,800 jobs could be replaced by AI, over a five year timeframe.
  • Dropbox laid off 500 workers to make way for those who could help it develop its AI capabilities.
  • Stack Overflow is laying off 100 workers as their product is undercut by AI.
  • Duolingo has cut 10% of their contractors, and replaced them with AI to generate content.
  • Chegg Inc., which offers homework-help services, is laying off around 80 people (4% of its workforce) as it embraces AI.
  • Derby City Council is going to replace four full time equivalent jobs with AI, which they believe will save £200,000 a year.
  • In a report based on responses from 750 business leaders, 37% of them said that AI was used to replace human workers in 2023. 44% of those business leaders said they would use AI to replace more human workers in 2024.
  • The situation in journalism doesn’t look great either. Many media outlets are using AI to write articles, and that will no doubt eventually lead to job losses in the sector. Examples of these outlets include Buzzfeed, the Daily Mirror, the Daily Express, Cnet, Men’s Journal, and Bild. Google is testing a new AI tool called Genesis, which writes news articles, and has pitched it to large media outlets. Google claims it’s intended not to replace journalists, but to aid them with their story writing. In Australia, News Corp has gone hell for leather with AI, using it to produce around 3,000 local news stories each week.

As if all this wasn’t enough, in a Guardian Live event in 2023, Stuart Russell warned that the ‘robotics dam’ was likely to break soon, meaning that robots could become more widespread and steal more human jobs.

Between AI and robotics, jobs at all levels in many fields are at risk of being lost. Campaigning to protect jobs after they’ve been lost will be too late; campaigning needs to be preventative rather than reactive. Given the pace of AI developments, urgent campaigning is needed to protect entire professions from being eviscerated.

Myth #13 – AI doesn’t pose a risk to democracy.

AI tools in the public domain already pose a massive threat to democracy worldwide. Deepfake audio and video will make it harder than ever to distinguish reality from fiction. As John Naughton writes in the Observer, “Generative AI – tools such as ChatGPT, Midjourney, Stable Diffusion et al – are absolutely terrific at generating plausible misinformation at scale. And social media is great at making it go viral. Put the two together and you have a different world.” How this is harnessed by political parties could forever change who gets elected. Writing in the New York Times, Yuval Noah Harari, Tristan Harris and Aza Raskin explain that “By 2028, the U.S. presidential race might no longer be run by humans.”

We’ve been given a glimpse of how fake news has fuelled the rise of right-wing sentiment in recent years. If AI is employed for nefarious purposes, might we find ourselves on the verge of permanent right-wing leadership? Could that usher in an era of extreme surveillance where our thoughts and actions are monitored around the clock? Such a scenario would make Orwell’s 1984 seem tame.

Myth #14 – AI will bring about Universal Basic Income (UBI), because many people will be unemployed.

Right… and where would the money come from for UBI if the economy is stagnant because one billion people have lost their jobs and can’t afford to buy things? Do you believe that the profit-oriented tech corporations will fund UBI? If so, you misunderstand how corporations work and the type of people who run them.

This myth is particularly popular with socialist- and communist-leaning individuals, but there is no evidence that AI would lead to UBI.

The truth of the matter is that the concept of UBI is fantastic, and trials have shown it to be successful. But we don’t need AI or any other technology to ‘force’ UBI to happen. Rather, this should be a democratic decision made by the public through citizens’ assemblies around the world.

Myth #15 – People won’t replace human contact with AI.

Some people are already using AI chatbots to ‘speak’ to close family members who’ve passed away. In a Guardian article, ethicists expressed their concern about this. No studies have been conducted on whether this practice brings solace or causes people more grief and anguish.

But if people become attached to this AI, might they find themselves more connected with the technology and less with real-life friends and family? When those real-life friends and family pass away, will they once again rely on the technology to say all the things they weren’t able to say while those people were alive? This could ultimately create a depressing cycle, whereby people only ever communicate with their AIs.

In The Age of AI by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher, the authors say that children of the future might be educated by an AI on a device. This would remove the need for schools and interaction with other children.

People are also turning to AI girlfriends, replacing real-life connections and handing over their privacy to AI companies. These dystopian threats will profoundly change how humans interact and reduce our connections with reality.

For context, we need to read The Good Life by Robert J. Waldinger and Marc Schulz. This book is about The Harvard Study of Adult Development, which spanned 84 years and is “the longest in-depth longitudinal study of human life ever done.” The authors said if they had to condense the results from the study into a single message, it would be that, “Good relationships keep us healthier and happier.” For clarity, they are referring to human relationships, such as with family, colleagues, friends, and life partners.

By turning away from humans, and towards technology, we are moving away from what makes us human and away from what we need to thrive in this life.

Myth #16 – AI couldn’t ever read our thoughts.

This may change in the future. Neurotechnology – technology that connects and interacts with the brain – can now produce an image of what we’re thinking. Neuroscientists are hoping to refine it to “intercept imagined thoughts and dreams.”

In a Guardian interview, Prof Nita Farahany says that she is most worried about “Applications around workplace brain surveillance and use of the technology by authoritarian governments including as an interrogation tool.” Thus, neurotechnology has the potential to degrade human rights, expose activists, and create a world where people are afraid to think anything other than what they are told is ‘safe’ to think. A dictator’s dream, for all intents and purposes.

A separate Guardian article warns that, “there are clear threats around political indoctrination and interference, workplace or police surveillance, brain fingerprinting, the right to have thoughts, good or bad, the implications for the role of “intent” in the justice system, and so on.”

Myth #17 – AI doesn’t cause environmental issues.

AI requires massive server farms, which use significant amounts of energy and water. Unless that energy comes from renewables, AI is actively driving the climate crisis. Water abstraction for server farms is another contentious issue, especially as climate change depletes freshwater resources and the remaining water is needed for human consumption.

Myth #18 – It doesn’t matter if AI replaces human art and creativity, because art is just art at the end of the day.

Before dispelling this myth, I’d just like to point out that, as a writer, I find this a very shallow thing to say. It shows no respect for people who’ve spent their careers honing their craft, often with little or no pay, struggling to make ends meet – all for the world’s benefit.

Just because AI has the capability to produce art in all its forms, it doesn’t necessarily mean that this capability should be exploited. As Bill McKibben says in Falter, “The point of art is not “better”; the point is to reflect on the experience of being human—which is precisely the thing that’s disappearing.”

Myth #19 – AI isn’t going anywhere, so we should just embrace it.

With all those unmitigated risks, using AI seems reckless at best and irresponsible at worst. It’s worth noting that using AI systems gives the tech companies the social licence to continue developing their products. If no one used AI systems, the social licence that tech companies have assumed exists for their products would disappear. Funding would also disappear as profits failed to materialise.

Thus, there is a very strong case to be made for boycotting AI, until such time as stringent international regulations are put in place, and until a global citizens’ assembly has been held on AI to determine what we want from this technology, who should be allowed to develop it, how it will be regulated, and the safeguards that will be put in place to protect society, jobs, democracy, and mental wellbeing.

Myth #20 – It doesn’t matter if I use AI as a writer/artist/musician/creative.

Using AI as a creative is effectively a way of cheating. Cheating yourself. And cheating people who consume art. I compare it to a person who decides to ‘run a marathon’ by ordering a taxi at the starting line and getting dropped off at the finish line. It completely defeats the purpose of running a marathon. If a machine creates your story idea, or writes your story, or makes your music, or produces a film for you, then you’ve effectively made yourself redundant and cheated your audience out of a human-made experience.

If your goal is to be a creative, you know full well that the path involves honing your craft over many years. There are no ethical shortcuts. If you intend to use AI to create your work, don’t bother entering the profession at all – you’re adding nothing and fooling yourself if you believe otherwise.

As mentioned above, it’s also worth pointing out that by using AI, you give tech companies the social licence they crave to continue developing their dangerous products. Why would anyone want to enrich the tech companies, and hasten our collective demise?

Creatives are first in the firing line with the widespread release of generative AI, which threatens our professions like never before. It’s up to us to take a stand by boycotting AI until protections are put in place for our professions and for the essence of our culture – for that’s what human-produced art truly is.

Myth #21 – AI hallucinates and it’ll never replace writers.

AI may hallucinate. But with each new system release, more problems get ironed out and the threat from AI increases. Don’t just think about the next system release, but the one after that, and the one after that – it doesn’t take much to imagine that sooner or later, these systems will exceed human capabilities.

Every time you use AI, you’re feeding the beast and improving it by giving it more data to learn from. As the KPMG report says, 43% of writing tasks may be automated, including “text creation” – which the rest of us call “writing”. If that prediction comes to pass, then writing careers might be eviscerated. Is that risk really worth taking? Surely that’s something society should have a say about? And one might hope that humans will stick up for human writers.

Myth #22 – AI will never steal work away from creatives.

According to the Society of Authors’ AI Survey in 2024, 26% of illustrators and 36% of translators have already lost work to generative AI. This has led to 37% of illustrators and 43% of translators experiencing a decrease in income as a result.

Myth #23 – Creatives don’t want regulation on AI.

As members of some of the most vulnerable professions out there, creatives are desperate for AI regulation to protect their careers and livelihoods. According to the Society of Authors’ AI Survey in 2024, 95% of respondents urge the Government to regulate AI and put in place safeguards, particularly with regard to compensation, consent, and transparency.

Myth #24 – Fiction can’t help us tackle the AI crisis.

We are creatures of story. We think in stories as opposed to cold, hard facts. Stories therefore have the power to change how we understand issues. This is something I’ve written about in the context of climate change, and you can read more here.

I’ve proposed a new genre and a new sub-genre of fiction specifically to deal with AI:

  • Arin-fi or ARtificial INtelligence Fiction – a suggested sub-genre of fiction within the “ty-fi” category (see definition for “ty-fi” below), that deals specifically with AI related issues. This could also be shortened to “Ari-fi” or “AI-Fi”. I proposed this term in October 2023.
  • Ty-fi or TechnologY FIction – a genre of fiction that deals with all issues arising from technology, encompassing AI, cybercrime, slaughterbots, drones, social media, robotics, and more. I proposed this term in October 2023.

Myth #25 – It’s easy to tell deepfake AI images and video apart from real ones.

Perhaps that used to be the case. But unfortunately, with the release of hyper-realistic AI systems like OpenAI’s Sora, this is all changing. We’re entering an age in which it will be impossible to tell AI-generated content from human-generated content. And in a world where we can’t tell what’s real anymore, one begins to wonder how quickly that will lead to societal disintegration.

Myth #26 – Artificial general intelligence (AGI) is just a pipe dream.

Currently, vast sums of money are being pumped into the development of AGI. Some people doubt whether AGI is possible. But there is less doubt amongst AI developers, who instead disagree on when it will be achieved. Much of the architecture needed has already been built for other AI systems, according to Stuart Russell. Indeed, writing in the Guardian, Russell notes that ‘sparks’ of artificial general intelligence appear to have been observed by a team of researchers in early experiments with GPT-4. If so, that would be a significant breakthrough, and may herald the arrival of AGI sooner than anyone expected – and far sooner than society is ready for it.

To learn more about what AGI is and how it will upend society, take a look at section 1.7 in this post.

Myth #27 – Even if AGI is developed, it probably won’t happen for decades.

Some people say that AGI is decades away from being realised. But Professor Stuart Russell says we should never underestimate human ingenuity. He gives the example in Human Compatible of how “liberating nuclear energy went from impossible to essentially solved in less than twenty-four hours.” That story can also be read here. The same thing could happen with AGI. Therefore, international safeguards should be put in place to prevent such a scenario from occurring, by regulating what the tech industry can and can’t develop. Ideally, this would be informed by the results of an international citizens’ assembly on AI.

Myth #28 – The risks presented by killer robots/slaughterbots/lethal autonomous weapons systems (LAWS) are overstated.

Given that killer robots can operate independently of human operators, the potential for disastrous consequences is massive. They could also be manipulated by AI in a war against humanity. In Human Compatible, Stuart Russell writes: “superintelligent machines in conflict with humanity could certainly arm themselves this way, by turning relatively stupid killer robots into physical extensions of a global control system.”

In testimony to the House of Lords committee on killer robots, a contributor warned that the use of AI in defence “presents significant dangers to the future of human security and is fundamentally immoral”, according to the i newspaper. The article also says that mixing AI with weapons “poses an ‘unfathomable risk’ to our species”, and that such weapons could turn against their human operators and kill civilians indiscriminately.

Myth #29 – Politicians will regulate AI properly.

We lack international regulatory frameworks for the safe development, testing, release, and use of AI systems. In a 2023 paper entitled ‘Managing AI Risks in an Era of Rapid Progress’, some of the world’s leading AI experts came together to warn about the risks and propose a route forward. These experts included Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, amongst others. They stated that “We urgently need national institutions and international governance to enforce standards to prevent recklessness and misuse. Many areas of technology, from pharmaceuticals to financial systems and nuclear energy, show that society requires and effectively uses governance to reduce risks. However, no comparable governance frameworks are currently in place for AI.”

And if you look at climate change, in the 36 years since James Hansen warned the world, politicians still haven’t regulated fossil fuels. The reason? Massive lobbying from fossil fuel companies.

Now the tech companies are following suit. CNBC reported that lobbying by tech companies on AI, increased by 185% in 2023 compared to 2022. The report notes that 450 companies lobbied on AI and spent $957m on their lobbying efforts (an amount that includes lobbying on AI as well as other matters concerning the tech companies). This is a massive amount of money, and puts pressure on politicians to govern in favour of the tech companies. If governments are bought by tech companies, don’t expect them to regulate in favour of our collective interests.

Myth #30 – Tech companies can regulate themselves.

Realising that regulation might be on the horizon, some tech companies have come together and proposed creating their own body to regulate the industry. But this isn’t anywhere near sufficient, nor could it ever be trusted.

Imagine if a Premier League football team asked one of its players to put on a referee’s shirt and referee its matches. You rightly wouldn’t trust the referee, nor would you expect a fair result. Any regulatory body should be entirely independent of the tech companies, and should have the power to do whatever is necessary to keep society safe.

Conclusion

Techwashing will only increase as tech companies foist this dangerous and unregulated technology upon society without our collective consent. Thus, we need to be able to call out their misinformation and disinformation, to stop it rapidly spreading, and to prevent the AI crisis from spiralling out of control. I hope this post acts as a starting point for countering falsehoods, and helping people to better understand the magnitude of the challenge we face.

As I’ve said many times before, both the climate and AI crises can only be tackled if humanity comes together to address them. We urgently need this to happen, for if it doesn’t, the future may be one of unmitigated catastrophe.

Selected Resources

Books

  • Human Compatible: AI and the Problem of Control by Stuart Russell
  • Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari
  • The Alignment Problem: How Can Machines Learn Human Values? by Brian Christian
  • The Coming Wave by Mustafa Suleyman
  • Falter: Has the Human Game Begun to Play Itself Out? by Bill McKibben
  • For the Good of the World by A.C. Grayling
  • Permanent Record by Edward Snowden
  • The People Vs Tech: How the Internet is Killing Democracy (and how we save it) by Jamie Bartlett
  • The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff
  • Life 3.0 by Max Tegmark
  • 1984 by George Orwell
  • Superintelligence by Nick Bostrom


Template for Contacting Political Representatives about AI

Dear

I’m writing with regard to the rapid advances in AI and related technologies, which pose massive threats to society, jobs, arts and culture, democracy, privacy, and our collective civilisation.

Many AI systems are trained on copyrighted data, and this has been done without consent or compensation. The way that machine learning works is flawed, meaning control hasn’t been designed into AI, which could create unimaginable problems further down the line. But AI isn’t just a future threat. The large language models (LLMs) already in the public domain threaten the livelihoods of writers and authors. AI image, video, and audio generators pose risks to the jobs of artists, actors, and musicians. When combined, these types of AI can have a devastating impact on democracy, and ‘deepfakes’ could be used by malicious actors for cybercrime purposes.

Both AI and the introduction of robots into the workforce jeopardise jobs on a scale never seen before. By one estimate, up to a billion jobs could be lost, with only around ten million new jobs created. Mass unemployment could result, leading to social unrest, extreme poverty, and skyrocketing homelessness.

Through neurotechnology, it’s already possible to create an image of what people are thinking about – the ultimate invasion of thought privacy. Killer robots have been deployed around the world over the last few years, and can be easily made and sold on the black market, threatening our collective safety. Meanwhile AGI poses an existential risk to our civilisation.

We have a limited period of time to act before AI becomes so embedded in modern life that it can’t be extricated. I therefore urge you to act swiftly, either by outright banning the technology or by holding a global citizens’ assembly on AI and using the guidelines that emerge to implement stringent regulations that forever protect and safeguard humanity.

With concern and expectation,

My new cli-fi children’s picture book, Nanook and the Melting Arctic is available from Amazon’s global stores including Amazon UK and Amazon US. My eco-fiction children’s picture book, Hedgey-A and the Honey Bees about how pesticides affect bees, is available on Amazon’s global stores including Amazon UK and Amazon US.
