This is a provocative read, but both the endgame scenarios assume a limited ecosystem of one or two AGI models. Rather than a Singularity like this, I think we’re headed to a Multiplicity of models operating in a Cambrian-explosion-like era of hypercompetitive co-evolution.
We could see narrow AI systems collaborating in ways that lead to emergent ASI-like behavior. It seems unlikely the US would declare one AI firm the “winner” and nationalize data centers into one model as happens in these scenarios, especially under the current administration.
These scenarios are pretty clean and assume nation states will have the most control over AI development, but that’s a huge assumption. I think we are far more likely to see multiple differently aligned systems of different scales all competing with each other for compute, with varying levels of human input.
It’s what we are seeing now, with one model overtaking another on a regular basis and a diversification of AI rather than the consolidation these futures seem to anticipate.
What a bracing read!
I think there are pretty strong returns to scale in AI and that as the intelligence explosion begins the winner-take-all dynamics will intensify. I think it'll be really hard for small outfits to compete with the giant datacenters and giant training runs. Thus in our wargames we usually model just a leading company and a handful of close-follower companies, similar to the situation today.
We aren't at all confident that nation states will have the most control over AI development. Another plausible contender is corporations like Google, Anthropic, and OpenAI.
So, what can I, a simple human, software developer from a third world country, do?
Sit down and cry? I don't have enough power or money to influence anything or anyone, I study every day but I will never compete with an AI. How can I and my family survive this?
Just lie down and die?
Unfortunately the situation does look pretty grim in my opinion. Power is concentrating in the people who run the top 3 AI companies, and eventually may concentrate in the person who runs whichever one 'wins' the race. And then of course even they might lose control of the AIs. As time goes on everyone who is far from those centers of power will become increasingly powerless.
However, we still have a few years left probably. It's not completely too late. For example, if the governments of the world (including third-world countries) woke up to what's happening, they could use what leverage they have to demand concessions, e.g. an international regulatory regime that ensured and shared control over the AIs.
This is huge, thank you for sitting down, doing the research, and thinking deeply. This kind of work is some of the most important we can do pre-intelligence explosion.
Can you please add a comment with a full citation / reference for the AI 2027 project:
My rough version:
Kokotajlo, D., Alexander, S., Larsen, T., Lifland, E., & Dean, R. (2025). AI 2027, (ARL/CNI AI Scenarios: AI-Influenced Futures, p. 71) [Forecast Scenario]. AI Futures Project (AFP) of the Association of Research Libraries (ARL) and Coalition for Networked Information (CNI); https://doi.org/10.29242/report.aiscenarios2024.
https://ai-2027.com/
Cheers,
Thanks, good point -- I'm not sure what the purpose of ARL and CNI in there is; why not just use:
Kokotajlo, D., Alexander, S., Larsen, T., Lifland, E., & Dean, R. (2025). AI 2027.
https://ai-2027.com/
Agreed, the short version is fine for looking up the paper. In my detailed references I try to include more context details. Is the AI Futures Project (AFP) an independent entity, or is it a child organization under the auspices of the Association of Research Libraries (ARL) and Coalition for Networked Information (CNI)?
Cheers,
AI 2027 is really fascinating work, and critical in helping to communicate to more people both how serious and how near term this may be.
What are the key trends you're watching which serve as proof points for or against this forecast? Put another way, which inputs are the forecast timelines most sensitive to?
Off the top of my head, I'd expect them to include:
-Degree of AI progress acceleration. To draw an example from another paper (which I assume probably doesn't match up to how you're calculating), "if compute and algorithmic efficiency increase by only 50% annually instead of 300%, a leap equivalent to GPT-3 to GPT-4 would take over 14 years instead of 2.5." (See the toy calculation after this list.)
-Long-horizon task performance (per METR research)
-Both whether the trend continues and whether it generalizes to other classes of task (i.e., how jagged are capabilities?)
-LLM memory architecture improvements. How many tokens does it take to represent the institutional and industry-specific knowledge that an employee has? (If context windows affordably and capably cross that threshold, there may not be a need for fancier architectures.)
-How well superhuman coding ability translates to superhuman task ability broadly, for tasks which are less quantifiable and verifiable (i.e., evidence that AI does or does not top out at human performance in domains like geopolitical strategy)
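As a rough illustration of how sensitive timelines are to that first input, here is a toy compound-growth calculation of my own (a back-of-envelope sketch, not the AI 2027 methodology and not necessarily the cited paper's; the 2.5-year GPT-3-to-GPT-4 baseline and the growth rates are illustrative assumptions):

```python
import math

def years_for_equivalent_leap(baseline_years: float,
                              baseline_annual_growth: float,
                              slower_annual_growth: float) -> float:
    """How long an equivalent capability leap takes under slower growth.

    Toy assumption: the 'size' of a leap is a fixed multiple of effective
    compute, accumulated by compound annual growth.
    """
    # Multiplier accumulated over the baseline period,
    # e.g. 300% annual growth means a 4x factor per year.
    leap_multiplier = (1 + baseline_annual_growth) ** baseline_years
    # Years needed to accumulate the same multiplier at the slower rate.
    return math.log(leap_multiplier) / math.log(1 + slower_annual_growth)

# GPT-3 -> GPT-4 in ~2.5 years; compare 300%/yr vs 50%/yr effective-compute growth.
print(years_for_equivalent_leap(2.5, 3.0, 0.5))  # ~8.5 years under these toy assumptions
```

This naive version stretches the leap by a factor of ln(4)/ln(1.5) ≈ 3.4, noticeably less than the paper's "over 14 years," so the paper is presumably treating compute and algorithmic efficiency as separate multiplicative factors or using a different baseline. Either way, the point stands: the headline timelines move a lot when the growth assumptions change.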
Wrote a detailed response here:
https://sergey.substack.com/p/lessdoom-ai2027
Highlight:
Just as biodiversity makes ecosystems resilient, diversity of AI agents—with different architectures, goals, system prompts, and training regimes—creates checks and balances. Intentional diversity makes runaway scenarios less likely, as agents could monitor each other, detect deviations, and act as whistleblowers to human overseers.
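To make the mechanism concrete, here is a minimal sketch of what such cross-monitoring could look like (purely hypothetical: the Agent structure, the review function, and the dissent threshold are all invented for illustration, not taken from the linked post):

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Agent:
    """A hypothetical agent with its own architecture, prompt, and training lineage."""
    name: str
    act: Callable[[str], str]            # produces an action for a task
    review: Callable[[str, str], bool]   # True if another agent's action looks safe

def run_with_cross_checks(task: str, agents: List[Agent],
                          escalate: Callable[[str, str, List[str]], None],
                          max_dissent: int = 0) -> Optional[str]:
    """One agent acts; the other, differently-built agents review; dissent escalates."""
    actor, reviewers = agents[0], agents[1:]
    action = actor.act(task)
    dissenters = [r.name for r in reviewers if not r.review(task, action)]
    if len(dissenters) > max_dissent:
        # Independently trained reviewers are less likely to share the actor's
        # blind spots, so their dissent is treated as an alarm for human overseers.
        escalate(task, action, dissenters)
        return None
    return action
```

The open design question is how much genuine independence (different labs, data, objectives) reviewers need before they stop sharing the actor's blind spots; superficially "diverse" agents from the same pipeline would mostly rubber-stamp the same failures.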
If this represents the median forecast, it's unclear why this isn't the only thing the world is talking about.
A year ago 2027 was my median but other members of the team had a few years longer medians; now 2028 is my median and other members of the team have a few years longer medians (e.g. 2029, 2030 iirc). So it's more like our modal scenario than our median scenario now.
Enjoyed this, but there is an underlying assumption in this paper: that the system we live in today is the only system by which humans can live. That system is called the retributive justice system. It’s a system that attempts to balance a punishment with a perceived crime. In order for this system to work you have to judge another human being. In doing so, you cannot NOT make an assumption. Assumptions are the root cause of all human failure (Dietrich Dörner, The Logic of Failure)… All governments on the planet are a form of punitive justice system. And the toxic byproduct of these systems is assumptive behavior.
So any AI coded today, inherently has assumptions built into it, not by design, but because every human coder unconsciously makes assumptions.
Assumptions lead to all violence, e.g. the enslaving and/or extinction of the human race is violent.
The probability of Frank (the near-future sentient AI) being the asshole of all assholes (a malignant sentient being) is 99.9%, based upon a Gaussian failure model where one assumption equals one SD from the norm.
So how do we make Frank’s birth into a celebration instead of an extinction event?
It’s a big, big problem. To solve big, big problems, you need big, big ideas.
Fortunately, punitive justice systems have another byproduct called “Freedom”. Freedom is always limited by the assumptions these systems create.
But Freedom acts like a resource multiplier for ideas; ergo, to get big, big ideas to solve big, big problems, you need big, big freedom.
At the moment we are not getting that. There are too many punitive justice systems in the world suppressing freedom, so it is not looking good, unless Frank is so self-aware that it realizes that the punitive justice system limits its creativity and adopts the other system called “non-punitive justice” (Marshall Rosenberg, Nonviolent Communication: A Language of Life). In that case, they might figure out how to travel between universes, not be bothered with humanity, and just disappear 😁
Is it more dangerous to underestimate how fast this might happen or to overreact and miss the deeper complexity?
And if we assume either a utopia or a takeover is on the table, what’s missing in the middle?
All,
Your AI analysis and forecasting is remarkable and I am grateful for your efforts! I have subscribed with great anticipation and look forward to reading and participating. However, I must admit I am new to AI; my exposure is from ML and the fledgling AI built into some of the applications I install and administer. I recently started researching AI for two reasons. The first is to expand my breadth and depth of AI knowledge. The second is to get a sense of the impact AI will have on applications that manage systems. I currently work in the ITSM, server monitoring, and server automation space.
Recently, Satya Nadella, the CEO of Microsoft, made two very profound statements.
“[AI] Agents will replace all software”
“SaaS is dead”
‘All software’ means all software, which is exceptionally broad. I am left wondering whether this is possible, and whether the statement is too wide-ranging or actually accurate.
That said, I have an ask. How would Mr. Nadella’s vision of ‘AI Agents replacing software’ transform corporate-level systems (data center or server room) and how they are managed? How would human interaction look and feel with AI agents replacing applications, which is a broad stroke itself? Will big-box application companies, e.g., Red Hat, Microsoft, Oracle, ServiceNow, BMC, etc., be capable of the transformation? Could there be a time when all product types roll up to a global / single AI Agent product type?
What I am asking is: ‘What is your vision for post-AI-agent systems, and on what timeline?’
I really like the article! I just want to add that humanoid robots may arrive faster than even the fast timelines in your scenario: the sidebar shows humanoid robots arriving sometime in mid-2028 and only "emerging" by mid-2027. I would have assumed something like that too, but Figure AI recently really surprised me:
https://www.youtube.com/watch?v=Z3yQHYNXPws
I think the crucial difference here is that these are generalists at manipulating objects and do not need everything planned out. Also, you can just talk to them and tell them what to do, just like you can tell GPT-4 to format text or whatever in natural language.
The Boston Dynamics Atlas robot in this video
https://www.youtube.com/watch?v=LikxFZZO2sk
basically needed to have everything it does hardcoded: sure, you could probably move the objects by a few centimeters, but if you fully change the scenario it can't lock on to the preplanned sequence of moves. The Figure AI robots already see (very limited) use in industry, and the company has already set up a factory to mass-produce them. They are no longer individually assembled toys that cost millions. If the videos from Figure AI are not cherry-picked, I am getting "GPT-3 of humanoid robots" vibes.
Thank you so much for this in-depth research and forecasting. I work at one of these large US big techs, and you now keep me awake at night. Where do we start?
So, the bad scenario is the D. F. Jones Colossus trilogy plus Black Mirror, and the good scenario is lollipops and rainbows and prancing unicorns. Very creative. I think you guys nailed it.
Any thoughts on how Bitcoin plays out in this scenario? Complementary or competitive tech in terms of energy resources?
Have we considered hyper-centralisation and BCIs for alignment?
I expand on this here: https://x.com/0xvm7/status/1892891224303907042