
Control AI is a non-profit organisation that aims to reduce AI risks, with a focus on addressing superintelligence.
About Control AI’s Work
Given that Control AI have only 12 team members at the time of writing, they’ve made enormous strides in the UK, especially through their work briefing parliamentarians. Before going into that, it’s worth explaining a bit about what they do.
Their overall imperative is to: “prevent the development of artificial superintelligence and keep humanity in control.”
They’ve developed something called the Direct Institutional Plan (DIP), which guides their efforts. This is made up of two parts:
“1. Design policies that target ASI development and precursor technologies
2. Then, inform every relevant person in the democratic process – not only lawmakers, but also executive branch, civil service, media, civil society, etc –, and convince them to take a stance on these policies.”
Regarding the first objective, they produced some policy measures in ‘A Narrow Path,’ and have since created and presented a draft AI bill to the UK Prime Minister’s Office.
As part of the second objective, they’ve created a statement on superintelligence, which they’ve asked parliamentarians to back. They’ve also engaged the public through press features around the world, as well as through their own content – which has already amassed over 150 million views. A big part of this objective is direct public participation, which they’ve achieved in a few ways, including:
- A tool for members of the public to contact their MP (as well as “AI and Digital Ministers”) in the UK regarding AI risks and superintelligence. This tool uses your postcode to identify your local MP. It then generates an email for you to send, meaning that the entire process takes a minute or two to complete. There is a similar template for people in the US to contact the President, as well as a template for the rest of the world to use for contacting their elected representatives. 150,000 people have made use of these templates to date.
- A tool for contacting newspaper editors in the US and the UK regarding the risks of superintelligence. The idea is that if members of the public contact these papers, the papers will in turn publish more pieces that help the public understand the issue.
- A statement on AI risks for members of the public to sign. This has been signed by over 185,000 people.
- The opportunity to volunteer, or ‘microcommit’, with Control AI. A weekly email lists quick tasks that require only five to ten minutes a week – feasible for most people. It’s a logical approach: easy tasks, for a massive issue, with very little time required – and lots of small actions can add up to big results over time.
- A form for registering interest in local meetups.
UK Campaign Statement and Briefing Parliamentarians
To guide their work with parliamentarians, Control AI developed the following statement:
“Nobel Prize winners, AI scientists, and CEOs of leading AI companies have stated that mitigating the risk of extinction from AI should be a global priority.
Specialised AIs – such as those advancing science and medicine – boost growth, innovation, and public services. Superintelligent AI systems would compromise national and global security.
The UK can secure the benefits and mitigate the risks of AI by delivering on its promise to introduce binding regulation on the most powerful AI systems.”
They pitch parliamentarians for the chance to brief them on the risks of superintelligence, and then ask them to support the above statement. Leticia García Martínez is Control AI’s UK Parliamentary Engagement Lead. Since September 2024, she has delivered over 140 introductory briefings on the risks of superintelligent AI – a topic that was poorly understood prior to these meetings.
Their work with parliamentarians is a case study on effective campaigning. The stats look particularly impressive:
- The 140 briefings have taken place over the past 18 months (September 2024 to February 2026, when this blog was published)
- 126 briefings were given directly to parliamentarians
- The remaining 14 briefings were given to staffers
- The parliamentarians included MPs (42% of briefings), Peers in the House of Lords (35%), and devolved parliamentarians from the Scottish, Welsh, and Northern Irish legislatures (22%)
- Two team members from Control AI attended most meetings, apart from a few which were only attended by Leticia
- 110 parliamentarians now support Control AI’s campaign (at the time of writing)
To have 110 parliamentarians supporting their campaign from 140 briefings is a fantastic conversion rate. It’s worth noting that Control AI had no pre-existing contacts, but had to cold pitch parliamentarians to create the opportunity for a briefing in the first place. As this article in the Guardian shows, the campaign is already having an impact, with these parliamentarians pushing for government regulations.
Their achievements are even more staggering given the uphill battle over perceptions of superintelligence: many people were initially reluctant to back the campaign because of the stigma of an issue that sounds like it belongs in the pages of a sci-fi novel, rather than something many tech companies are actively developing at this moment.
Control AI’s approach appears to be very logical. They’ve developed a simple statement for parliamentarians to support. With this as a focus, they’ve been able to hone their briefing. They’ve focused on parliamentarians because they’re the policymakers – they can regulate this dangerous technology and thereby protect humanity. And they’ve given the public and the media the information and tools they need to supercharge the entire campaign, so that it becomes a self-reinforcing feedback loop.
The climate and environmental movements can learn from this approach
I can think of many climate and environmental non-profit and activist organisations, some of which have been active for decades, but few have made this level of progress in such a short space of time.
Indeed, many environmental groups have met with the government, and others have briefed parliamentarians. But I struggle to think of any who’ve taken such a focused approach with parliamentarians and accrued this level of support so fast.
I did a quick tally of parliamentarians in the All Party Parliamentary Group on Climate Change (79 parliamentarians) and the new Climate and Nature Crisis Caucus (10 parliamentarians), and it appears that both of these climate groups combined have fewer parliamentarians (89) than Control AI’s campaign (110).
Given that the climate crisis has been in the public consciousness for at least 38 years, and that a team of two from Control AI has only been delivering briefings on AI risks for 18 months, this is a staggering achievement.
Without discounting the great and ongoing efforts of the climate and environmental movements, I can’t help but wonder why at least one group hasn’t already tried what Control AI is doing. I understand that superintelligence entered the collective mindset long ago with films like 2001: A Space Odyssey and the Terminator series, which may have made securing support easier. Nonetheless, it feels like there are many lessons that can be learned and passed on.
Ironically, I previously thought it would be the climate movement that had much to teach the AI crisis movement, as climate organisations have been around far longer. It seems I got that wrong. Ideally, the climate and AI movements should learn from and help each other.
Leticia García Martínez published a post on Substack on 12 February 2026, explaining more about her work. This follows on from her previous post in May 2025. I’ve picked out transferable ideas from both posts that could help organisations tackling the climate and AI crises, and shared them below. I highly recommend reading both posts, which go into far more detail.
18 lessons for securing the backing of policymakers
1. Cold pitch
It doesn’t matter whether you have pre-existing contacts in Parliament. Reach out to parliamentarians through cold pitches if you have to. This is exactly how Control AI began their campaign.
As Leticia writes, “We had no insider contacts in Parliament. We had to push the door open ourselves: making ourselves known, reaching out as widely as possible, and building from scratch.”
2. Follow up on unanswered pitches
Parliamentarians are busy people and may only have a few staff working for them. An unanswered pitch doesn’t necessarily mean they aren’t interested; it may simply mean they haven’t got round to reading or responding yet.
Following up puts your name in front of them again. Leticia says, “I have relentlessly followed up with people, and nobody has been angry with me – quite the contrary, some have thanked me for it. It is important to always be kind when following up and never reprimand someone for taking time to respond – they are extremely busy, and doing so would not help anyway. They will appreciate your understanding.”
3. Develop a tool for members of the public to contact their representatives
Control AI has made it easy for people to email their elected representatives with their concerns about the risks of AI and superintelligence. To date, over 150,000 messages have been sent to lawmakers around the world, showing how effective this tool has been.
With so many people emailing their MPs about this issue, it no doubt made it easier for Control AI to secure backing for their campaign. As Leticia says, “More and more constituents used Control AI’s email tool to contact their MPs with concerns. This made the problem more salient, and as MPs saw trusted colleagues getting involved, they found it easier to engage themselves. Interest spread from parliamentarian to parliamentarian.”
I used their tool to email my local MP, who has now backed the campaign statement. Over the last few years, I’ve contacted my MP about AI risks and they seemed less than enthusiastic in their responses. Yet they’ve now backed this campaign. Clearly, Control AI is doing something right, and we should learn from them.
Debates have now been held in the House of Lords, including one this year on an international moratorium on superintelligent AI. A snowball effect is taking place, whereby Control AI’s campaigning has awakened concern about the dangers of this unregulated and potentially catastrophic technology. Parliamentarians are now debating it and pushing the government for regulation. This is how meaningful change can happen.
Leticia writes, “It is hard to overstate how encouraging it is to see lawmakers engage, take a stance, and carry the issue forward themselves, on a topic many were unfamiliar with just a year ago. And to see superintelligence and securing a great future for humanity being discussed in the parliament of one of the most powerful countries in the world! It is both encouraging and clarifying. It shows that change is possible through direct, consistent, and honest engagement.”
4. Craft a pitch carefully for meetings with Parliamentarians
Some tips from Leticia’s post include:
- Avoid buzzwords, acronyms, or complex language. Parliamentarians may have little existing knowledge of the issue, so keep it simple.
- They will also need to speak about this topic publicly, so sharing a message that’s easy to understand and repeat is essential.
- Keep 80% of the pitch consistent for briefings, and innovate with the remaining 20%
- Refine the pitch for future briefings, based on what resonates
- Context may change as governments do certain things and not others – so context-specific arguments might be best avoided.
- Constantly practice your pitch, so that you can deliver it to a high standard every time.
5. Ensure you know the person’s name and how to pronounce it
Research how to pronounce the parliamentarian’s name online if you need to. Otherwise it could be an awkward 45-minute meeting if you keep saying it incorrectly, and you’ll likely not be in their good books either.
6. Put aside which party the parliamentarian supports. Focus instead on who they are as a person and what will resonate with them
From her experience, Leticia says that factors other than party affiliation matter more in meetings. She gives a few examples including, “whether their background includes computer science, whether they have been interested in other challenges involving coordination problems (e.g. environmental issues), and other aspects of their personal background (e.g. they have worked on a related piece of legislation, or have a child who works in tech).”
Find a way of relating to the human before you. Whether it be climate breakdown or the AI crisis, study the parliamentarian’s background carefully to find commonalities and determine what messaging will resonate best.
You can also use Hansard to see a parliamentarian’s contributions on different issues by searching for keywords.
7. Be honest and consistent with messaging
The facts must remain the facts and shouldn’t be watered down or avoided. If you have to lie to get someone onboard, then you’re not going to get very far with your campaign.
As Leticia says, “If you have to change your message to please one party or avoid upsetting a person, that’s someone you won’t be able to work with (you have forfeited your opportunity to convince them of the problem!) and someone whose trust you have forfeited, as it will become obvious that your message is not consistent across audiences. In other words: Don’t make arguments others can’t repeat. You can only lose. Honesty is not just an asset, but an obligation to yourself and others.”
8. The power of two people pitching – warmth and clarity
It’s important to connect with the other person. It’s also important to be clear about the message you’re communicating. Thus, Leticia advises that if there are two of you presenting, then you should lean into your strengths, with one person bringing the warmth to the meeting and the other the clarity.
9. Provide actionable next steps during a meeting
It’s all well and good informing a decision-maker about a problem, but they also need solutions if they’re to effect change. Ensure you provide actionable solutions during your meeting.
10. Determine your non-negotiables and your areas of compromise, especially when drafting policy suggestions
It’s extremely important to be able to separate what’s crucial for driving action from ‘things that would be nice to have.’ Or, to put it more concisely: pick your battles carefully.
Leticia gives the analogy of building a house. Your non-negotiables – the things crucial for delivering change – are the load-bearing structures. The ‘things that would be nice to have’ are more like the decoration, which is open to discussion and compromise.
11. Trust your intuition and adapt in the moment
Similar messaging will hit differently for different people. If you sense you’re losing someone’s attention, adapt to bring them back into the fold.
Reflect on how each meeting went and what worked well and what didn’t. Learn and improve going forward.
12. Build relationships as a priority
Time with decision-makers is usually very limited – only a short slot is set aside, and even this may be eaten into if their previous engagement overruns.
With such limited time, it may seem entirely logical to jump right into the issue. However, we’re human and therefore social creatures. So, building a rapport with the person sitting opposite you is essential.
Leticia gives an example of a meeting with an MP and a staffer. The MP offered to buy the Control AI team a coffee, and Leticia, very aware of their limited time and the very long coffee queue (!), declined. However, her colleague accepted, and the MP duly queued for five minutes, further eating into their remaining meeting time.
Afterwards, her colleague explained that you need to take time to build a relationship with the person you’re briefing. If you rush in and simply demand they listen and take action, without any kind of rapport, you’re not going to get very far. Say yes to the coffee.
Connect with the person, even if that means spending slightly less time on the civilisation-threatening issue and more time building that vital relationship. Contrary to how it may feel in the moment, it will probably stand you in better stead going forward.
13. Take physical examples of media coverage, empirical evidence, and polls
Taking copies of these with you builds credibility and shows there is public concern about the issue. Ensure you have duplicates, so that you can leave copies with the parliamentarian should they wish to keep them.
14. Wear a suit to the meeting and ensure good hygiene
How we appear sends a message about us. Dress like you mean business and deserve to be taken seriously, and you’ve met the basic criteria.
15. Ask for introductions to other parliamentarians
At the end of the meeting ask if the person knows anyone else who may be interested in the issue. Or if you have the name of someone you’d like to speak with, ask if they may be able to introduce you.
16. Never underestimate the value of staffer briefings
Not every briefing will be delivered directly to a parliamentarian; some will be delivered to their staffers. These briefings are particularly useful as staffers tend to be quite open about what they do or don’t understand, and what works with your pitch. This feedback is extremely valuable for refining your briefing.
17. Make notes discreetly
It’s worth remembering what questions the parliamentarian asked, so that you can refine your pitch and better prepare for the next one. It’s also worth remembering that they may not want certain things recorded. So when making notes, be discreet and perhaps stick to just a few keywords to jog your memory.
18. Don’t be too hard on yourself
Briefing parliamentarians on a civilisation-level threat is no easy task. Don’t beat yourself up about mistakes, instead learn from them and try to improve going forward.
Conclusion
Control AI have achieved a great deal in the space of just 18 months, with 110 parliamentarians backing their statement. This in turn has led to new debates amongst policymakers, and a call for the government to regulate this technology and the risks it brings.
One additional piece of advice that comes to mind is for campaign groups to allocate their roles carefully. As mentioned, Control AI have only 12 staff (around the world) at the time of writing. They specifically employed Leticia as their UK Parliamentary Engagement Lead, and she has a solid background for the role. Control AI have hired for positions that directly help them achieve their core goals – everything is focused and aligned in one direction.
I remember going to a climate event almost ten years ago and meeting someone from a climate nonprofit organisation that I was donating to. He was extremely intelligent, but after listening to him speak about his role and after a few follow-up questions, I left absolutely none the wiser about what he actually did. Most nonprofits are strapped for funding; thus, every role needs to be well thought out and strongly aligned with the goals of the organisation.
Ultimate success will be determined by whether the UK and the rest of the world regulate AI before we develop superintelligence, which could signal our immediate demise. But given how quickly Control AI has shifted the needle on this topic in the UK, and how they’re trying to replicate this in the US, I believe there is much that can be learned and applied to wider campaigns on AI risks, as well as lessons that the climate movement could adopt.
I’ve previously written about the intersection of the climate and AI crises. I believe that best practice on one issue can be just as effective on the other. It’s time for campaign groups that are serious about tackling these issues to learn from and support one another – for we’re all at risk, and our fate is currently being determined by the decisions of politicians, the same politicians that Control AI has successfully briefed and secured backing from.
My Generic E-mail Template for Contacting Political Representatives About AI
Dear [Name],
I’m writing with regard to the rapid advances in AI and related technologies, which pose massive threats to society, jobs, arts and culture, democracy, privacy, and our collective civilisation.
Many AI systems are trained on copyrighted data and this has been done without consent or compensation. The way that machine learning works is flawed and this means that control hasn’t been designed into AI, which could create unimaginable problems further down the line. But AI isn’t just a future threat. The large language models (LLMs) already in the public domain threaten the livelihoods of writers and creatives. AI image, video and audio generators pose risks to the jobs of artists, actors, and musicians. When combined together, these types of AI can have a devastating impact on democracy, and ‘deepfakes’ could be used by malicious actors for cybercrime purposes.
Both AI and the introduction of robots into the workforce jeopardise jobs on a scale like never before. By one estimate, up to a billion jobs could be lost, with only around ten million new jobs created. Mass unemployment could result, leading to social unrest, extreme poverty, and skyrocketing homelessness.
Through neurotechnology, it’s already possible to create an image of what people are thinking about – the ultimate invasion of thought privacy. Killer robots have been deployed around the world over the last few years, and can be easily made and sold on the black market, threatening our collective safety. Meanwhile AGI and superintelligence pose an existential risk to our civilisation.
We have a limited period of time to act before AI becomes so embedded in modern life that it can’t be extricated. I therefore urge you to act swiftly: ideally by holding a global citizens’ assembly on AI, and using the guidelines that emerge to implement stringent regulations that forever protect and safeguard humanity.
With concern and expectation,
[Your name]
Selected Resources
Books
- Human Compatible: AI and the Problem of Control by Stuart Russell
- Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari
- If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares
- Supremacy: AI, ChatGPT and the Race That Will Change the World by Parmy Olson
- The Alignment Problem: How Can Machines Learn Human Values? by Brian Christian
- The Coming Wave by Mustafa Suleyman
- Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat
- Code Dependent: Living in the Shadow of AI by Madhumita Murgia
- Falter: Has the Human Game Begun to Play Itself Out? by Bill McKibben
- For the Good of the World by A.C. Grayling
- Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford
- Permanent Record by Edward Snowden
- The People Vs Tech: How the Internet is Killing Democracy (and how we save it) by Jamie Bartlett
- The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff
- Life 3.0 by Max Tegmark
- 1984 by George Orwell
- Superintelligence by Nick Bostrom
Articles
- Stuart Russell – AI has much to offer humanity. It could also wreak terrible harm. It must be controlled
- Dan Milmo – ‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years
- Yuval Harari, Tristan Harris and Aza Raskin – You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills
- Jeremy Lent – To Counter AI Risk, We Must Develop an Integrated Intelligence
- Alex Hern – Interview – ‘We’ve discovered the secret of immortality. The bad news is it’s not for us’: why the godfather of AI fears for humanity
- Yuval Noah Harari – ‘Never summon a power you can’t control’: Yuval Noah Harari on how AI could threaten democracy and divide the world
- Naomi Klein – AI machines aren’t ‘hallucinating’. But their makers are
- Yuval Noah Harari – Yuval Noah Harari argues that AI has hacked the operating system of human civilisation
- Eliezer Yudkowsky – Pausing AI Developments Isn’t Enough. We Need to Shut it All Down
- Jonathan Freedland – The future of AI is chilling – humans have to act together to overcome this threat to civilisation
- Daniel Kehlmann – Not yet panicking about AI? You should be – there’s little time left to rein it in
- Ian Hogarth – We must slow down the race to God-like AI
- Harry de Quetteville – Yuval Noah Harari: ‘I don’t know if humans can survive AI’
- Lucas Mearian – Q&A: Google’s Geoffrey Hinton — humanity just a ‘passing phase’ in the evolution of intelligence
- Sigal Samuel – AI companies are trying to build god. Shouldn’t they get our permission first?
- Dan Milmo – Former OpenAI safety researcher brands pace of AI development ‘terrifying’
- James Bradley – AI isn’t about unleashing our imaginations, it’s about outsourcing them. The real purpose is profit
- Alex Hern and Dan Milmo – Man v machine: everything you need to know about AI
- Society of Authors – Publishers demand that tech companies seek consent before using copyright-protected works to develop AI systems
- Alex Clark and Melissa Mahtani – Google AI chatbot responds with a threatening message: “Human … Please die.”
- The Guardian (Editorial) – The Guardian view on AI’s power, limits, and risks: it may require rethinking the technology
- Alexander Hurst – I met the ‘godfathers of AI’ in Paris – here’s what they told me to really worry about
- Nick Robins-Early – OpenAI and Google DeepMind workers warn of AI industry risks in open letter
- Stuart Russell – DeepSeek, OpenAI, and the Race to Human Extinction
- Nesrine Malik – With ‘AI slop’ distorting our reality, the world is sleepwalking into disaster
- Charis McGowan – The workers who lost their jobs to AI
- Blake Montgomery – Will AI wipe out the first rung of the career ladder?
- Lauren Almeida – Number of new UK entry-level jobs has dived since ChatGPT launch – research
- Rory Carroll – Futurist Adam Dorr on how robots will take our jobs: ‘We don’t have long to get ready – it’s going to be tumultuous’
- Ryan Mizzen – AI and the Techopalypse
- Ryan Mizzen – 31 Reasons to Boycott AI
- Ryan Mizzen – Boycott Generative AI Before AI Makes Your Career Boycott You
- Ryan Mizzen – Struggling to plan for the future in the midst of the climate emergency and the AI crisis? You’re not alone
- Ryan Mizzen – The Intersection of the Climate Emergency and the AI Crisis
- Ryan Mizzen – 30 Common Myths About AI – Debunking Techwashing
- Ryan Mizzen – Terminology for the AI Crisis
- Ryan Mizzen – Mainlining AI? More Like Mainlining Disaster. An Analysis of the Labour Government’s AI Plans
- Ryan Mizzen – Trump’s Return Immediately Brings Us Closer to Climate Disaster and Algorithmic Extinction
- Ryan Mizzen – Pact for the Future – Analysis
- Ryan Mizzen – Results from the Society of Authors’ AI Survey 2024
- Ryan Mizzen – The Creator – Review
Videos
- Senator Bernie Sanders – LIVE with the Godfather of AI
- The Diary of a CEO – Godfather of AI: They Keep Silencing Me But I’m Trying to Warn Them!
- The Diary of a CEO – An AI Expert Warning: 6 People Are (Quietly) Deciding Humanity’s Future!
- Yuval Noah Harari – AI and human evolution
- Ozzy Man Reviews – Who is Real Anymore!? AI
AI Activism and Other Resources
- Pause AI Campaign Group
- Control AI Campaign Group
- Hold an urgent citizens’ assembly on AI (my petition)
- Pause Giant AI Experiments: An Open Letter
- A Right to Warn about Advanced Artificial Intelligence
- Restrict AI Illustration from Publishing: An Open Letter
- Statement on AI training
- Statement on AI Risk
- Open Letter to Generative AI Leaders
- Call to Lead
- Autonomous Weapons Open Letter: AI & Robotics Researchers
- Lethal Autonomous Weapons Pledge
- Stop Killer Robots
- Amnesty International – Stop Killer Robots
- autonomousweapons.org
I’ve been writing about the climate emergency since 2016, and the AI crisis since 2023. I write all my own stuff, without the use of AI (something I’m against doing as a writer). I don’t publish on any other paid platforms, and my blog remains completely free to read. If you’ve found my writing informative and if you’d like to support my work, you can do so here. Contributions are greatly appreciated.
My cli-fi children’s picture book, Nanook and the Melting Arctic is available from Amazon, including Amazon UK and Amazon US. My eco-fiction children’s picture book, Hedgey-A and the Honey Bees about how pesticides affect bees, is available on Amazon’s global stores including Amazon UK and Amazon US.