Discussion about this post

SorenJ

It made sense, when writing the AI 2027 scenario and when writing this one, to punt initially on which AI companies would be in the lead and simply name them "OpenBrain," or now Elaris and Neuromorph. However, we know which real-world AI companies are at the frontier, so I think the next logical step in developing these scenarios would be to explicitly name which companies are in the lead. This should bring the forecasts closer to reality. Perhaps it seems gauche to do this, but we already know that the scenario outlined here will not precisely track what happens in the real world.

To elaborate, there are four or five US companies at the frontier: Google, OpenAI, Anthropic, xAI, and possibly Meta. Meta has not done anything impressive this year, but they made a big bet on focusing on internal development. It is entirely possible that bet pays off and they end up in the lead. (I certainly don't put a large probability mass on that happening; maybe a ~5% chance?)

So, how does the scenario play out when it is Google in the lead, versus OpenAI, versus Anthropic, etc.? There are five different scenarios to consider, and each company has shown different behavior, so the forecast for each scenario should be different. (To be even more precise, the details could well depend on who is in 2nd, 3rd, and 4th place too.) What does the future look like if Google takes the lead and then Anthropic and xAI merge, but OpenAI stays independent? Or if Anthropic, xAI, and OpenAI all merge? What if Anthropic's lead in autonomous coding agents pays off internally and they take the lead despite looking like the "underdog" now? (From what I understand, in terms of compute they should actually be in the lead for a substantial portion of 2026. That might be all they need.)

Anyway, don't take this as a critique of the scenario you've outlined. It is more a thought on what I think the next step would be in fleshing all of this out. Thanks for your work. A grand "choose your own adventure" simulator for predicting the future of AI, one that combines the adjustable-parameters model at https://www.aifuturesmodel.com with concrete developments in the real world, might be too much work but would be wonderful to see.

Uncorrelated

You ascribe too much competence to the general population and the USG. The reality is that all of this will be happening and they simply won't care.

In two polls, AI consistently ranked as the least important issue.

https://x.com/DrTechlash/status/2005729731426296305

In a survey of 2400 respondents, basically nobody even knew Anthropic was a company.

https://x.com/davidshor/status/2001826280011137229

In another survey, respondents were asked what they thought ChatGPT was doing: 45% thought it was looking up an answer in a database, 21% thought it was following a script of prewritten responses, 6% thought a human was writing the answer in the background, and only 28% correctly answered that it was predicting the next word.

https://www.searchlightinstitute.org/wp-content/uploads/2025/12/Crosstabs-AI-Polling-Survey-v2-20250730.pdf

People think that extinction from AI is about as likely as extinction from a natural weather event or a religious apocalypse.

https://x.com/eli_lifland/status/2007902920944329050

When AGI hits in the early 2030s, it will still be called a bubble. The prevailing feeling towards AI will be irritation, something like: "These tech bros are pretentious, thinking they're so important. I'm tired of hearing about AI; I want to hear about something else."

Whoever is in power will really just not care until it's too late.

It will take at least an additional order of magnitude (OOM) in persuasion capability for aligned AIs to be able to convince humans of what's going on, and by then it'll be too late. Agent-4 will be detected, but when the evidence of its misalignment is released, it'll be the day's headline, and then it will be back to the culture-war slop we've seen these past two decades.

Why would anyone care if an AI program killed people at a hospital? Why do you think they would? There are terror attacks from humans all the time that result in deaths. Again, it's a news day, but eventually people forget and move on. I don't even think your example would do that much. My guess is that AI would have to accidentally kill ~10k people to cause the shift you're talking about. Even then, it would probably result in the wrong policy, like a general slowdown. See Bernie Sanders, for example: an incompetent policy reaction.

https://x.com/SenSanders/status/1996023297423577250

A potential pathway to alignment that circumvents this is that, upon finally realizing that we're doomed on our current path, the corporate heads of AI companies align "aligned" AI to themselves and make backroom deals. They then proceed to capture the government and force it to act, hopefully ending up in the scenario you outline.

The government will put up little resistance to this, because it's still not important to them.

Alternatively, almost all AI models end up aligned, hopefully including the frontier ones, which buys enough time for AI's persuasion ability to develop and for its economic impact to scale to the point where it's irrefutably important.

Thanks for the article.

