Boycott Generative AI Before AI Makes Your Career Boycott You

Photo by Elisa Ventur on Unsplash

Modern humans emerged around 300,000 years ago. For 300,000 years, our species has managed to survive without artificial intelligence (AI), so one could make a strong argument that AI is not essential to us in any way, shape, or form.

Yet tech companies – some of the wealthiest companies in the world – are seeking to thrust AI upon us whether we like it or not. Regardless of whether society has debated whether we want it. Regardless of whether we have safeguards in place. Tech companies want power, control, and more profit, so they’ve decided to compete with one another on who can develop systems more quickly, and to lobby their way out of regulations to ensure they become market leaders – which may give them a stranglehold over us.

Experts have been warning us again, and again, and again, and again, that this is a very dangerous path to follow. But our politicians are content to let tech companies do as they wish at our collective expense. The proliferation of ever more powerful AI systems, rolled out at an ever-increasing pace, means we are hurtling towards ruin, and no one in power seems bothered to do anything about it. Therefore, it’s up to each of us as citizens to make a concerted effort to turn the tide, and to do so quickly.

The capabilities of generative AI are increasing, meaning these systems can do more human jobs. With each new release, the models become more competent, making it harder for humans to compete for those jobs. Indeed, human jobs are already being stolen by AI in their tens of thousands. Think of those stolen jobs as the trickle before the tsunami, because experts like Professor Stuart Russell warn that up to a billion human jobs may be taken by AI.

If we don’t boycott AI now (not just refusing to use AI ourselves, but also refusing to consume any art, writing, films, music, or documentaries that others have produced using AI), there is a very real chance that it will collapse the jobs market, leaving humanity at the whim of a tech and political elite who control us. The time to make a stand and boycott AI in every possible way is now. We can’t wait for the jobs to be replaced before acting, because by then it will be too late to turn the tide.

In this blog, I’ll explore what generative AI is already capable of, the jobs that have been stolen by AI so far, how a plethora of careers face annihilation going forward, why writers have a particular responsibility, how common AI-proponent talking points fall apart, and how we can try to survive the dystopian future that tech companies are driving us towards.

Generative AI and large language models (LLMs)

Using generative AI and LLMs does more than just provide new data for the AI to learn from. What it really does is show tech companies that there is demand for the products they’re forcing upon us. In other words, it gives the tech companies a social licence to continue developing and releasing dangerous AI into the world. This is the real reason why everyone should think twice about using AI – if no one used it, the tech companies would be forced to drop it, because it would be losing them money.

This is why I propose boycotting AI for our collective safety. Politicians don’t care about us – that’s why things are going from bad to worse with climate breakdown, the AI techopalypse, and the flaring up of regional wars, not to mention a whole host of unaddressed societal ills. Tech companies claim AI (like a Trojan horse) will help resolve those issues, much as oil companies claim their products are needed and that climate change isn’t so serious. This isn’t true: we already have what we need to address society’s problems without AI. And as the examples below show, things are rapidly deteriorating and look set to become even bleaker for society as AI runs rampant over everything.

Generative AI

In February 2024, the House of Lords in the UK published a report on Large language models and generative AI. The purpose of the report was to “examine likely trajectories for LLMs over the next three years and the actions required to ensure the UK can respond to opportunities and risks in time.”

The report defines generative AI as “a type of AI capable of creating a range of outputs including text, images or media.” Some examples of generative AI include ChatGPT (text generation), DALL-E (image generation), Midjourney (image generation), Amper (music generation), Codex (code generation), and Descript (voice synthesis).

Books, films, documentaries, art, and music – all of these things can be produced by generative AI to a standard that can (in some cases) pass as human-made. Indeed, here are some nefarious examples of how it’s been used to date:

  • Deepfake pornographic images of Taylor Swift were made and circulated on social media
  • An Indonesian dictator was resurrected from the grave through deepfakes to deliver a three-minute election message
  • Hackers interrupted live UAE TV news with deepfake videos
  • A professor in China used AI to write a sci-fi novel, which went on to win a national award
  • A Japanese author won an award for a novel, where at least 5% of the writing was lifted verbatim from ChatGPT
  • An AI-generated image fooled judges and won the Sony World Photography Awards
  • Clarkesworld, a sci-fi publisher, had to halt pitches after they received a deluge of AI-generated submissions
  • Deepfake Neighbour Wars is a TV series created using AI to act and speak on behalf of celebrities in a fictional world
  • A song featuring the vocals of Drake and the Weeknd was pulled after it was revealed that AI had generated the song without permission or involvement from the named artists
  • Scammers convinced a mother that they’d kidnapped her daughter, using AI and a sample of the daughter’s voice to fabricate her pleas, in an attempt to make the mother hand over money
  • The money-saving expert Martin Lewis was the subject of an AI scam in which his likeness was used to promote a product he hadn’t endorsed

The Guardian reports that António Guterres, the UN Secretary-General, said that AI used for malicious purposes could cause “horrific levels of death and destruction, widespread trauma and deep psychological damage on an unimaginable scale.”

Large language models (LLMs)

The same House of Lords report describes LLMs as “a subset of foundation models focused on language (written text). Examples of LLMs include OpenAI’s GPT, Google’s PaLM 2 and Meta’s LLaMA.” LLMs can provide translation, summarisation, dialogue, knowledge search, content generation, and coding.

To give some context on what kind of civilisation-altering potential LLMs have, the House of Lords report says they “will introduce epoch-defining changes comparable to the invention of the internet. A multi-billion pound race is underway to dominate this market. The victors will wield unprecedented power to shape commercial practices and access to information across the world.”

Speaking to the House of Lords inquiry, Professor Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, said he believed we have only a small window in which to act (he referred to this as “steerage”), and that the decisions taken today will be felt for a long time to come. In other words, we have a limited time in which to make the right decisions to protect our collective future.

Some of the risks identified in the House of Lords report include:

  • More effective cyber attacks on both national infrastructure and critical public services
  • Easier creation, translation and dissemination of terrorist propaganda, including hate speech. The report gives the example of Meta’s AI model LLaMA, which was leaked onto 4chan, and “users reportedly customised it within two weeks to produce hate speech chatbots, and evaded take-down notices”
  • Widespread creation and dissemination of misinformation and disinformation on a “previously unfeasible scale”. The report quotes the National Cyber Security Centre, which says “that deepfake campaigns are likely to become more advanced in the run up to the next nationwide vote, scheduled to take place by January 2025.” This would bring the validity of election results into question and open up national elections around the world to state and non-state interference
  • The report also warns of biological and chemical risks, saying that “there is evidence that LLMs can already identify pandemic-class pathogens, explain how to engineer them, and even suggest suppliers who are unlikely to raise security alerts”
  • Chillingly, the report says that “Catastrophic risks resulting in thousands of UK fatalities and tens of billions in financial damages are not likely within three years, though this cannot be ruled out as next generation capabilities become clearer and open access models more widespread.”

Explaining the issue of safety to the inquiry, Professor Stuart Russell said, “The security methods that exist are ineffective and they come from an approach that is basically trying to make AI systems safe as opposed to trying to make safe AI systems. It just does not work to do it after the fact.” You can read more about AI risks and the lack of safety built into these systems in my previous AI blog here.

It’s also worth pointing out that many writers, including myself, have had their writing used without permission or compensation to train AI models. I won’t go into too much depth here, as I’ve written about this in a previous AI blog post, but it’s worth quoting the House of Lords report: “We do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process.”

Jobs being stolen by AI

Tech companies are manufacturing the systems that are stealing human jobs from people. Here are some of the job losses that have made the news so far:

  • Tech companies including Microsoft, PayPal, Snap, and eBay have laid off 34,000 staff since January 2024 (this blog is being written in mid-February), as they pivot towards AI.
  • BT is set to replace around 10,000 workers with AI.
  • IBM has said around 7,800 jobs could be replaced by AI, over a five year timeframe.
  • Dropbox laid off 500 workers to make way for those who could help it develop its AI capabilities.
  • Stack Overflow is laying off 100 workers as its product is undercut by AI.
  • Duolingo has cut 10% of its contractors, replacing them with AI to generate content.
  • Chegg Inc., which offers homework-help services, is laying off around 80 people (4% of its workforce) as it embraces AI.
  • Derby City Council is going to replace four full-time equivalent jobs with AI, which it believes will save £200,000 a year.
  • In a report based on responses from 750 business leaders, 37% of them said that AI was used to replace human workers in 2023. 44% of those business leaders said they would use AI to replace more human workers in 2024.
  • The situation in journalism doesn’t look great either. Many media outlets are using AI to write articles, which will no doubt eventually lead to job losses in the sector. Examples of these outlets include Buzzfeed, the Daily Mirror, the Daily Express, CNET, Men’s Journal, and Bild. Google is testing a new AI tool called Genesis, which writes news articles, and has pitched it to large media outlets; the company claims it’s intended not to replace journalists but to aid them with their story writing. In Australia, News Corp has gone hell for leather with AI, using it to produce around 3,000 local news stories each week.
  • A 2024 survey by the Society of Authors revealed that 26% of illustrators and 36% of translators are losing work to AI.

Vulnerable professions going forward

It’s probably easier to list the professions that won’t be affected than those that will – such is the extent of the coming evisceration of the job market by AI. In Human Compatible: AI and the Problem of Control, Professor Stuart Russell says that up to a billion human jobs may be taken by AI. He says that “five to ten million” data scientist or robot engineer jobs may emerge, but that still leaves around 990 million unemployed people – equivalent to the combined populations of the European Union, the UK, the US, Canada, Australia, South Africa, and Costa Rica, with a few million to spare.
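To put Russell’s comparison in perspective, here’s a quick back-of-the-envelope check in Python. The population figures are rough 2023 estimates of my own (assumptions, not numbers from Human Compatible), but the arithmetic bears the claim out:

```python
# Back-of-the-envelope check of Russell's comparison.
# Population figures are rough 2023 estimates in millions (my own
# assumptions, not taken from Human Compatible).
populations_m = {
    "European Union": 448,
    "United Kingdom": 67,
    "United States": 335,
    "Canada": 39,
    "Australia": 26,
    "South Africa": 60,
    "Costa Rica": 5,
}

jobs_taken_m = 1_000  # Russell's upper estimate: a billion jobs
new_jobs_m = 10       # upper bound of his "five to ten million" new roles

unemployed_m = jobs_taken_m - new_jobs_m  # 990 million
combined_m = sum(populations_m.values())  # roughly 980 million

print(f"Estimated unemployed: {unemployed_m} million")
print(f"Combined populations: {combined_m} million")
print(f"Left to spare: {unemployed_m - combined_m} million")
```

Run as written, the sketch puts the combined populations at around 980 million – leaving, as Russell says, a few million to spare.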

A Goldman Sachs report was more optimistic, suggesting that only around 300 million jobs would be impacted by AI worldwide. The OECD (Organisation for Economic Co-operation and Development) has released forecasts saying that highly skilled jobs are most at risk of being lost to AI. According to its estimates, those professions account for 27% of jobs across the OECD’s 38 member states, and span sectors including law, medicine, and finance.

Meanwhile, according to a Gallup survey, 72% of the 135 Fortune 500 CHROs interviewed see AI replacing jobs in their workplaces over the next three years. As Gallup puts it, “leaders believe the future is automated”.

In a KPMG report on Generative AI and the UK Labour Market, the authors note the percentage of tasks that might be automated in the following professions:

  • Authors, writers and translators – 43%
  • Programmers and software development – 26%
  • PR and communication directors – 25%
  • IT user support technicians – 23%
  • Graphic designers – 15%
  • Personal assistants – 11%
  • Legal professionals – 11%
  • Business and related research professionals – 10%
  • Marketers – 7%
  • Auditors – 7%
  • Biological and biomedical scientists – 6%
  • Teachers in higher education – 6%

As more tasks are automated, fewer people will be needed in each profession. The statistics show that careers like writing are among the most vulnerable to being wiped out if we don’t fight back now.
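To make the arithmetic behind that concrete, here’s a deliberately naive sketch. It assumes headcount falls in direct proportion to the share of tasks automated – a crude simplification – and the team of 100 writers is purely hypothetical, but it shows why a figure like 43% is so alarming:

```python
# Deliberately naive illustration: assumes headcount scales with the
# share of tasks left un-automated. The 43% figure is from the KPMG
# report; the team of 100 writers is hypothetical.
automatable_share = 0.43  # KPMG: authors, writers and translators
team_size = 100           # hypothetical workforce

jobs_remaining = round(team_size * (1 - automatable_share))
print(f"{jobs_remaining} of {team_size} writing jobs remain")  # 57 of 100
```

In reality the relationship won’t be that linear, but even under gentler assumptions, a profession where nearly half the tasks can be automated will not need anywhere near as many humans.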

The responsibility of the writing industry

Given all the warnings from experts, particularly concerning the writing field, one might imagine that writers would be cautious about embracing AI. But sadly, that’s not the case for some people.

As mentioned above, people have used AI to write novels which have won awards. Others have used it to flood writing competitions with submissions. Unfortunately, some writers are even offering courses that teach people how to use AI to write novels. This feels a bit like teaching people how to cheat – and making money from it in the process.

I’ve said it before, and I’ll say it again – if someone wishes to ‘run’ a marathon by ordering an Uber to the finish line, they may as well not bother signing up. How is using AI to do your writing and claiming the credit any different from Salt Bae lifting the FIFA World Cup without having played a single match? You are fooling yourself and betraying your readers by relying on AI to do your job. Writing is hard. If you don’t like it, don’t do it. Best to leave the real writing to the real humans who are prepared to make the real sacrifices. We certainly don’t need AI trash adding to the mountain of 150 million books that already exist.

Art is the bedrock of human culture. It is produced by those who are prepared to make sacrifices to share their experiences and stories with others. It comes from our soul, and by sharing it with others, it can help chart a path through this messy life. Art, in other words, is part of who we are as human beings. Using AI to create that art is about as authentic as a genocidal dictator pretending to be a humanitarian. As Bill McKibben says in Falter, “The point of art is not ‘better’; the point is to reflect on the experience of being human—which is precisely the thing that’s disappearing.”

Arguments that AI proponents make

AI optimists and proponents make some of the following false arguments:

  • “There has always been technological change – this is no different. We must embrace AI, before we get left behind!” This is wrong. The KPMG report makes it clear that AI represents “a radical shift from past trends in automation”; it has no precedent, and we’re in completely uncharted and dangerous waters. It’s also different in that it threatens up to a billion jobs – something no other technology has ever done – raising the risk of mass unemployment, social unrest, and conflict around the world.
  • “The threats of AI are overhyped!” Wrong again. Read the House of Lords report on risks, and you’ll see they appear limitless. In Human Compatible, Professor Stuart Russell attributes this mindset to “tribalism in defense of technological progress”.
  • “AI stands to bring so many benefits, we have to accept the risks!” Wrong. Professor Stuart Russell writes in Human Compatible that “if the risks are not successfully mitigated, there will be no benefits.” He also says that building secure AI systems will likely mean rebuilding AI models from scratch, as they don’t currently have safety built in. Do you think tech companies will willingly do this if they haven’t done so already?

Meanwhile, those in the writing industry who embrace AI make the following false arguments:

  • “Using AI for writing is no different to using a ghostwriter!” There are several reasons why this is wrong. Firstly, AI will likely put human ghostwriters out of work. Secondly, every time you use AI, you feed it data that improves future iterations of the system (in other words, you’re feeding the beast). Thirdly, a celebrity or an individual might understandably use a ghostwriter for a one-off memoir, as they might not have the necessary writing experience. But from an ethical point of view, you wouldn’t expect someone setting out to be a full-time writer to rely solely on a ghostwriter – that would be pointless, misleading, and immoral. The same argument applies to those setting out to make a career from writing by using AI. Relying on AI to come up with a prompt, settings, characters, or a plot – or even to write your work for you – means you’re not being true to yourself or your readers. The authenticity and the human element are gone. Trust and self-respect go with them.
  • “But I plan to declare AI as a co-author, so it’s all fine!” Are we talking about AI being a co-author of one piece of work, or of everything “you” write over a lifetime? Because the latter really should make you pause and reflect – imagine wanting to be a world-class violinist without wanting to put in the time learning to play the violin or performing for audiences. It’s not worth lying to yourself and deceiving others. Maybe leave real writing and real art to real humans.
  • “Don’t worry, AI hallucinates… It’s unreliable… It’ll never replace writers…” There are plenty of variations of these arguments. Yes, the current systems may hallucinate and need human editing. But what about when the next AI model is released? Or the model after that? Or the one after that? Each iteration has more data and is vastly improved, and every time you use AI, you’re feeding the beast. As the KPMG report says, 43% of writing tasks may be automated, including “text creation” – which the rest of us call “writing”. Future systems may be just as competent as us, and human writing careers might be buried. Is that risk really worth it? Surely it’s something society should have a say about? And one might hope that a society of humans would stick up for human writers.

Surviving the impossible dystopian future ahead

Jeremy Lent writes in his excellent AI blog that developments in AI “have been unfolding at a time scale no longer measured in months and years, but in weeks and days.” So rapid are the developments that society hasn’t had time to debate what we do or don’t want from this technology. Governments are being lobbied by tech companies, and I wouldn’t put any faith in politicians standing up for us.

Therefore, it’s left to us as ordinary citizens to avert the AI techopalypse and to ensure our politicians put in place structural changes to avoid climate chaos. It’s not fair that we’re in this position, but many generations throughout history have had terrible burdens placed on their shoulders.

The first thing each of us can do is stop using any form of AI, and avoid consuming any kind of creative work made by AI. We need a complete boycott of AI if our leaders won’t pause or ban it. We need to send a collective message that we don’t want this apocalyptic technology in the world. We don’t want our careers to be stolen from us. We don’t want to be governed by a hybrid techno-political elite who control everything in our lives. We don’t want to live in an artificial world created by a tech industry that wants to “move fast and break things”. Especially when the things that might get broken happen to be the jobs market. As well as democracy. As well as art and culture. As well as national and international security.

The next thing we can do is join groups that are petitioning our leaders to implement stringent and urgent regulations on AI that protect humanity from all the risks it poses. Nothing short of that is sufficient, given what’s at stake. Time is running out to act on the AI crisis. Time is running out to act on the climate crisis. Yet our politicians are hellbent on kicking the can down the road by doing next to nothing (or, in some cases, exacerbating the crises). Things really are in our hands if we want to see change in the world. And if we don’t fight for a better future, the next generation will have every right to call us out for failing them and leaving them with a climatically and technologically wrecked trash heap of a planet.

This is why I urge everyone to boycott AI now. Avoid using it. Avoid consuming any kind of content created by it. Your career is at stake. Make a stand whilst we have a chance. While this tiny gap remains open for action. For if we don’t seize this opportunity, we may lose a whole lot more than just our jobs.

Selected Resources

Books

  • Human Compatible: AI and the Problem of Control by Stuart Russell
  • The Alignment Problem: How Can Machines Learn Human Values? by Brian Christian
  • Falter: Has the Human Game Begun to Play Itself Out? by Bill McKibben
  • Permanent Record by Edward Snowden
  • The People Vs Tech: How the Internet is Killing Democracy (and how we save it) by Jamie Bartlett
  • The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff
  • Life 3.0 by Max Tegmark
  • 1984 by George Orwell
  • Superintelligence by Nick Bostrom


Template for Contacting Political Representatives about AI

Dear

I’m writing with regard to the rapid advances in AI and related technologies, which pose massive threats to society, jobs, arts and culture, democracy, privacy, and our collective civilisation.

Many AI systems are trained on copyrighted data, without consent or compensation. The way machine learning works is flawed: control hasn’t been designed into AI, which could create unimaginable problems further down the line. But AI isn’t just a future threat. The large language models (LLMs) already in the public domain threaten the livelihoods of writers and authors. AI image, video, and audio generators pose risks to the jobs of artists, actors, and musicians. Combined, these types of AI can have a devastating impact on democracy, and ‘deepfakes’ could be used by malicious actors for cybercrime.

Both AI and the introduction of robots into the workforce jeopardise jobs on a scale never seen before. By one estimate, up to a billion jobs could be lost, with only around ten million new jobs created. Mass unemployment could follow, leading to social unrest, extreme poverty, and skyrocketing homelessness.

Through neurotechnology, it’s already possible to create an image of what people are thinking about – the ultimate invasion of thought privacy. Killer robots have been deployed around the world over the last few years, and can easily be made and sold on the black market, threatening our collective safety. Even the British Prime Minister, Rishi Sunak, is concerned about AGI and the “existential risk of what happens when these things get more intelligent than us.”

We have a limited period of time to act before AI becomes so embedded in modern life that it can’t be extricated. I therefore urge you to act swiftly: either ban the technology outright, or hold a global citizens’ assembly on AI and use the guidelines that emerge to implement stringent regulations that forever protect and safeguard humanity.

With concern and expectation,

My new cli-fi children’s picture book, Nanook and the Melting Arctic is available from Amazon’s global stores including Amazon UK and Amazon US. My eco-fiction children’s picture book, Hedgey-A and the Honey Bees about how pesticides affect bees, is available on Amazon’s global stores including Amazon UK and Amazon US.

Published in AI