One of AI 2027’s two endings is a race ending in which humanity loses control of its destiny in 2027 and is extinct by 2030. How can we avoid this?
Below we share what we see as the best ways for you to improve humanity’s chances.
Act with urgency but not certainty
We depicted AGI in 2027 because we think it’s a plausible outcome1 and society isn’t acting with anywhere near the appropriate urgency. We may only have 2 years left before humanity’s fate is sealed!
Despite the urgency, please do not pursue extreme uncooperative actions. If something seems very bad on common-sense ethical views, don’t do it.
If you can’t contribute now, keep in mind that AGI timelines are uncertain.2 Our team’s median timelines range from 2028 to 2032, and AI progress may slow down in the 2030s if we don’t have AGI by then. Consider preparing to contribute if AGI arrives post-2027.
Preparing for the intelligence explosion
Let’s imagine the world was up to the task of handling an intelligence explosion. What might that look like?
Governments and the public understand that AGIs will dictate humanity’s future and might arrive soon. There’s high-quality online discussion about AGI, companies disclose their internal AI capabilities, and governments have invested tens of billions into AGI preparedness. A world in which the public is informed about risks from superintelligence would be a safer world.
As companies automate AI R&D, governments are on high alert and take action. Government agencies and nonprofits conduct regular interviews with top researchers at the companies. Companies report their estimates of AI R&D speedups based on surveys and uplift studies.
Companies publish detailed safety cases justifying why their AIs won’t cause catastrophic harm. These are treated with far more seriousness than safety cases in industries that don’t pose an existential threat, such as the automotive industry. They argue that either (a) their AGIs aren’t adversarially misaligned, or (b) even if they were, they wouldn’t be able to put us on a catastrophic path.3 The government, external organizations, academia, and the public engage deeply with these safety cases. If the safety cases aren’t strong enough, companies refrain from developing or deploying better AIs.
Well-resourced teams inside and outside of AI companies do alignment research to better control AIs’ goals. Alignment research is seen as a top priority with respect to attention and resourcing.
It’s practically impossible for the CEO or POTUS to use aligned AGIs to seize control of humanity’s future. All of their queries to the models are logged and monitored. The model spec and system prompt are public and red-teamed to prevent coups.
The US and China coordinate to reduce competitive pressures, ensuring models aren’t developed without strong safety cases. If necessary for safety, development is slowed. On-chip verification and inspectors allow for trustless enforcement of an international deal.
The above is not an exhaustive list, but it covers some of our top priorities.
If you’re in government or an AGI company
Our next project will have detailed recommendations for governments and AGI companies.4 In the meantime, we encourage focusing on steering toward the world described above.
Learning
You might start by learning more about AGI-relevant topics. Along with AI 2027, we recommend the following regarding AGI forecasting and strategy (more in footnote):5
The AGI-relevant episodes of the Dwarkesh podcast and the 80,000 Hours podcast
Situational Awareness, though we think it underemphasizes international coordination and AGI alignment difficulty
Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
AI-Enabled Coups: How a Small Group Could Use AI to Seize Power
Consider also going through this technical AI alignment or AI governance course with a friend, or registering for the facilitated version, with a focus on the portions relevant to existential risks.
Types of professional work
Many sorts of work can help. Below we list some of the most common ones along with specific opportunities:6
Governance/policy/forecasting research and advocacy. Policy research focuses on determining which AI policies are both impactful and tractable, in the near term and during AI takeoff. Policy advocacy focuses on getting these policies implemented.
Opportunities designed for entering the field include the Horizon Fellowship, IAPS AI Policy Fellowship, Pivotal Fellowship, and ERA Fellowship. We’ll also highlight RAND’s Technology and Security Policy Fellowship, GovAI, and our very own AI Futures Project.
Technical research, evaluations, and demonstrations. Research focuses on developing techniques to align and control increasingly capable AIs. Demonstrations and evaluations of AIs’ capabilities and goals help inform decision-makers and the public.
The MATS Program is for entering the field.7 We’ll also highlight Redwood Research, METR, and Apollo Research. See also this video with technical safety career advice.
Beneficial AI applications. Some applications of AI are especially beneficial for positive AGI outcomes, e.g. AI for decision-making and AI for coordination. This blog post details some promising applications.
Communications and journalism. Help the public understand when AGI might come and the impact it will have.
The Tarbell Fellowship is for entering AI journalism.
Infosecurity. Securing AI model weights and algorithmic secrets is important for nonproliferation.
Operations / other. Like other organizations, AI safety organizations also need a variety of other skillsets, such as generalist operations and management.
80,000 Hours and AISafety.com have more comprehensive job boards, and 80,000 Hours gives career advice.8
Non-professional activities
There are also things you can do without working full-time on AI safety, or in addition to doing so.
Contribute to public discourse. As AI improves, the amount of AI discourse will increase and the stakes will rise. Having reasonable voices on blogs, social media, podcasts, etc. will help improve societal decision-making. Organized public advocacy may also play an important role.
Private discourse and informing others. Having open conversations with friends, family, etc. about AGI may have significant effects. If you’re a college student, consider joining your college’s AI safety club or founding one.
Donate. Many AI safety organizations are funding-constrained. Manifund has information on many projects (ours is here), or you can donate to one of the organizations listed in the previous section. If you’re interested in donating >$200k, email us and we may be able to advise you.
In particular, first author Daniel Kokotajlo thinks 2027 is the most likely year that AGI will arrive, and it is near his median forecast of 2028. My median is roughly 2032, but I see AGI by 2027 as a serious possibility (~15-20%).
But keep in mind that people sometimes contribute despite being in a position where it seems difficult! For example, Encode was founded by a high schooler.
As an example of putting us on a “catastrophic path”: in AI 2027, Agent-4 aligns Agent-5 to itself rather than to humanity. While this didn’t immediately cause a visible catastrophe, it did put humanity in a very precarious position due to Agent-5’s capabilities and level of autonomy.
AI Lab Watch also has a detailed scorecard regarding how well AGI companies are doing on various safety metrics.
Other reading recommendations were left out of the main text for lack of space; pick what looks most interesting: How AI Takeover Might Happen in 2 Years, Preparing for the Intelligence Explosion, Is Power-Seeking AI an Existential Risk?, the Most Important Century series and Implications of the Most Important Century, Why AI alignment could be hard with modern deep learning, Scheming AIs, AGI Ruin: A List of Lethalities and Where I agree and disagree with Eliezer, Will AI R&D Automation Cause a Software Intelligence Explosion?, Yudkowsky and Christiano discuss "Takeoff Speeds", and Clarifying and predicting AGI.
They’re selective, but err on the side of applying!
MATS also has governance and policy tracks despite being mostly technical.
This post also recommends jobs to improve AGI outcomes. 80,000 Hours also has career profiles for AI technical research, governance and policy, China-related paths, information security, and hardware.
One thing I would add to the non-professional activities is activism. Political cooperation on an issue is made significantly easier when there is a signal from the public that they view it as important.
I'm a volunteer with PauseAI, and while keeping my unrelated day job, I've been able to lobby state and federal congressional officials, do in-person outreach, host events, and hold a protest. I run a local PauseAI chapter with multiple members, and I was informed by my representative's office that I have personally had a positive impact on their awareness of AI risk, just through repeated phone calls and emails.
As the public becomes more aware of AI risk, it is important for people to know that they can have an outsized positive impact on the future just by speaking up, demanding reasonable regulations, and spreading further awareness of the risks and how to mitigate them.
TRUMP AND VANCE. TRUMP AND VANCE. TRUMP AND VANCE. We NEED them.
Transparency fails without regulation.
Regulation FAILS without DONALD TRUMP (and probably JD VANCE).
So who CONVINCES TRUMP and VANCE?
Peter Thiel? Here is a June 26 Peter Thiel interview. I won't parse his views, partly because I can't:
https://www.nytimes.com/2025/06/26/opinion/peter-thiel-antichrist-ross-douthat.html
A plurality of tech CEOs? (I think the CEOs are doing a good job of 'voicing their opinions' to Trump and Vance already, given the Canada trade talk situation.)
So, could you convince the tech CEOs themselves to want regulation again? How would you do that?
Otherwise, who outside of SILICON VALLEY ITSELF would Trump and Vance actually listen to?
(And it's not enough to inform them. They need to really rethink their views. Vance already knows about AI 2027, right? And has anything changed? They need to get confident. Solid. In agreement. Because "accelerationist" CEOs will DEFINITELY meet with them and try to persuade them back.)