
Results from the Society of Authors’ AI Survey 2024


In January 2024, the Society of Authors (SoA) ran a survey about the impacts of AI on writers. It was open to all 12,500 SoA members, as well as any other writers who wished to express their views.

Only 787 people responded to the survey, myself included. Respondents were fiction writers, non-fiction writers, scriptwriters, journalists, poets, translators, and illustrators, both self-published and traditionally published.

Creatives who use AI

Below is a summary of who is using AI:

  • 22% of respondents admitted to using generative AI for their work. This figure is the average across all respondents; a more detailed breakdown by profession is as follows:
    • 12% of illustrators
    • 20% of fiction writers
    • 25% of non-fiction writers
    • 37% of translators
  • 31% of writers and illustrators admitted to using generative AI for brainstorming their ideas
  • 8% of translators and 5% of illustrators say that publishers and commissioning organisations have specifically asked them to use generative AI for their work

Generative AI stealing work away from creatives

According to the survey results, creatives are already losing work to generative AI and many feel that their work has lost value because of AI:

  • 26% of illustrators and 36% of translators have already lost work to generative AI
  • 37% of illustrators and 43% of translators have experienced a decrease in income as a result of generative AI
  • The majority of creatives (57% of non-fiction writers, 65% of fiction writers, 77% of translators, and 78% of illustrators) believe that generative AI will lead to a decrease in their future income
  • 86% of respondents were concerned that generative AI may mimic their style, likeness, and voice
  • 86% of respondents were concerned that generative AI will devalue creative work produced by humans
  • The SoA said, “Some respondents expressed concerns that generative AI could replace human creators, particularly in areas like copywriting and content creation.”
  • The SoA also said, “Even those respondents who were more optimistic that generative AI systems can be used ethically… reiterated that ethical concerns are a primary reason to avoid the use of generative AI systems at this stage.”

Regulating AI

Respondents held strong views that generative AI needs proper regulation:

  • 94% of respondents want credit and compensation when their work is used to train AI systems
  • 95% of respondents want to be asked for their consent before their work is used to train AI systems
  • 95% of respondents urge the Government to regulate AI and put safeguards in place, particularly with regard to compensation, consent, and transparency
  • More than 9 in 10 respondents believe publishers and organisations should make it absolutely clear where generative AI has been used in the production of audio, video, covers, illustrations, decision-making, editing and translation
  • Ethical concerns were of significant importance for respondents, including AI biases, inaccuracies, copyright infringement, and misuse of personal data
  • 97% of respondents believe that consumers should be made aware of where and how generative AI has been used to produce what they’re viewing, hearing, or reading

Analysis

The SoA’s AI survey results have thrown up some interesting findings. A higher proportion of individuals are using AI than I would have imagined (22% of survey respondents). That writers, more than any other profession, face losing work to or being replaced by AI seems to be of little concern to some of the individuals who have embraced the source of their possible downfall.

I’ve encouraged creatives to boycott AI, because by using it we give tech companies a social licence to continue developing a dangerous new technology that threatens our jobs. Not only that, but safety hasn’t been incorporated into the structure of AI systems. In his book Human Compatible, Stuart Russell argues that these systems would likely need to be rebuilt from scratch to incorporate safety mechanisms. Tech companies won’t willingly do that; societal wellbeing is of little concern to corporations motivated solely by profit.

Thus, we need governments to implement stringent new regulations. But given that governments have failed to effectively regulate the fossil fuel industry, I don’t hold out much hope of them regulating the tech industry. Therefore, every individual has a role to play in safeguarding society. Boycotting the use of AI is the logical course of action, until safety has been built into these systems and until regulations are in place to protect our jobs, democracy, society, and our overall wellbeing. Without that, experts warn, we face the real threat of an AI techopalypse.

Needless to say, I find it completely unacceptable that 8% of translators and 5% of illustrators have been told to use AI by their publishers and commissioning organisations. Those same organisations must be held accountable for this unethical behaviour.

Despite the overwhelming evidence that jobs are being lost to AI, there are still people who choose to ignore this fundamental issue. But now we have a clearer insight from within the creative industries. The results show that 26% of illustrators and 36% of translators have lost out on work, which has instead been completed using generative AI. This extremely worrying trend is only likely to increase. On top of this, 37% of illustrators and 43% of translators have experienced a decrease in income through work being lost to AI. The majority of creatives hold the view that AI will result in less income as the tech industry shifts work away from humans without our collective consent.

Interestingly, the SoA says that “Even those respondents who were more optimistic that generative AI systems can be used ethically… reiterated that ethical concerns are a primary reason to avoid the use of generative AI systems at this stage.” So even those who are pro-AI believe that generative AI should be avoided right now. This is why I’m struggling to reconcile statements like this with the fact that 22% of respondents willingly used AI despite the mountain of ethical issues.

That being said, there were some positives from the survey, with 95% of respondents requesting that tech companies ask for consent before using their work to train AI models, and 94% requesting credit and compensation where their work has been used for that purpose. Similarly, the Publishers Association, which represents 180 publishers, has sent a letter to AI developers stating that they withhold permission for work to be used to train AI unless specific consent has been obtained from the copyright holder. A solid 95% of respondents want the government to put AI regulations and safeguards in place; this is the most encouraging part of the survey. Rightly, 97% of respondents want consumers to be made aware of where and how AI has been used to produce work.

Therefore, one thing that almost everyone seems to agree on is that urgent government intervention is needed to rein in the careless actions of the tech industry. AI is different from any other type of technology or threat that humanity has faced, and taking action now is the only feasible way to prevent an AI techopalypse within our lifetimes.

Conclusion

Creatives seem to be somewhat conflicted over the use of AI, with 22% of respondents already using it in their work. I’d urge those creatives to look into the reasons for boycotting AI; I’ve made my own list of reasons in this blog post. Even AI optimists expressed concerns about ethical issues and said that people should avoid using AI. We’d do well to heed this message. In addition, many creatives are already losing work to AI, and this problem will only ramp up the longer governments wait to regulate tech companies and their dangerous products – regulation that 95% of survey respondents are calling for.

But it’s not just governments who have a role to play – each of us can contact our political representatives to make our views clear. And each of us can boycott the use of AI to remove the social licence tech companies have assumed for developing this unnecessary technology.

Tech companies are pressing ahead with developing dangerous AI without society’s consent, and without a concerted effort by all of us they may end up taking society over a cliff edge. Our responsibility is to stop that happening, and as creatives we still have the influence and power to bring about change. But as more people are replaced, our power vastly diminishes and our fate grows increasingly precarious.

Humanity has the chance to do the right thing at the right moment, and that moment is now.

My new cli-fi children’s picture book, Nanook and the Melting Arctic is available from Amazon’s global stores including Amazon UK and Amazon US. My eco-fiction children’s picture book, Hedgey-A and the Honey Bees about how pesticides affect bees, is available on Amazon’s global stores including Amazon UK and Amazon US.
