The AI Futures Project is a small research nonprofit forecasting the future of AI. For more than half a year, we’ve been working together on our first big project: AI 2027.
AI 2027 is a comprehensive and detailed scenario forecast of the future of AI. It starts in 2025 and projects the rise of AI agents by 2026, the complete automation of coding in early 2027, and the intelligence explosion in late 2027. It has two branches, one ending in AI takeover and the other ending in utopia (sort of). Summaries don’t really do it justice, since the whole point was to be detailed and comprehensive, working things out step by step rather than relying on high-level abstractions like doom or utopia.
The website has cool features like the scenario statistics chart on the right that automatically updates as you scroll through the story.
What’s next for us?
Well, first we are going to respond to comments and otherwise engage with whatever conversation, responses, critiques, etc. that AI 2027 sparks. Beyond that, we aren’t sure yet. Our tabletop exercise has been surprisingly popular, so we might double-click there.
In general our mission is to help make AGI go well, and our natural focus will be on forecasting: Producing and publishing research about what we think might happen. We might also write about what we think should be done.
This is a provocative read, but both endgame scenarios assume a limited ecosystem of one or two AGI models. Rather than a Singularity like this, I think we’re headed toward a Multiplicity of models operating in a Cambrian-explosion-like era of hypercompetitive co-evolution.
We could see narrow AI systems collaborating in ways that lead to emergent ASI-like behavior. It seems unlikely the US would declare one AI firm the “winner” and nationalize data centers into one model as happens in these scenarios, especially under the current administration.
These scenarios are pretty clean and assume nation states will have the most control over AI development, but that’s a huge assumption. I think we are far more likely to see multiple differently aligned systems of different scales all competing with each other for compute, with varying levels of human input.
It’s what we are seeing now: one model overtaking another on a regular basis, and a diversification of AI rather than the consolidation these futures seem to anticipate.
What a bracing read!
So, what can I, a simple human, a software developer from a third-world country, do?
Sit down and cry? I don't have enough power or money to influence anything or anyone. I study every day, but I will never compete with an AI. How can I and my family survive this?
Just lie down and die?