56 Comments
Saul Gonzalez:

I'm a bit surprised that a "China invades Taiwan in 2025-2026" scenario is not included. Seems like all bets would be off then.

Shaked Koplewitz:

What change would that introduce? America could still impose sanctions on the supply chain (and, in the Chinese invasion scenario, both the US and other countries seem likely to sanction more aggressively), so China doesn't automatically get TSMC. But the US also loses access (and, if I understand correctly, their fabs outside Taiwan aren't making their high-end chips).

Saul Gonzalez:

Let’s break it down:

Sub-scenario 1: Both countries lose access to TSMC - This sets the timeline back years, and the US loses one of its big advantages.

Sub-scenario 2: China gets TSMC - Advantage goes to China. This scenario is very likely. The current US admin doesn’t seem willing/capable of effective deterrence. Changing geopolitical realities/attitudes could lead to greater tolerance of China’s military actions. And sanctions won’t mean much if China has effective military/physical control. In this case, probably the best play for the West would be to try to remotely bomb or otherwise destroy as much of TSMC’s infrastructure as possible, to take us to sub-scenario 1 at least.

Sub-scenario 3: U.S. maintains access to TSMC - As I said, unlikely this admin can pull it off. Best bet would be for TSMC to move some of their factories to the U.S. beforehand, which has already started, but I can see Trump botching that as well.

Sol Hando:

There's basically no hostile scenario where China gains control of TSMC. Taiwan has public contingency plans in place (https://9to5mac.com/2024/05/21/chinese-invasion-of-taiwan-tsmc/), and a single person with a hammer could probably make an entire factory worthless in an hour or two if there were no security to stop them, or if the security themselves were the ones destroying it. The machinery is so sensitive, precise, and delicate that any deliberate attempt to damage it would put it beyond repair almost instantly.

Jeffrey Soreff:

>and a single person with a hammer could probably make an entire factory worthless in an hour or two if there was no security to stop them

Yup! It is hard to overestimate how fragile a leading edge fab is. Even forget the hammer - a handful of _dust_ can do immense damage.

Sol Hando:

Ahahaha, exactly! Taiwan's plan to make TSMC inoperable in the event of an invasion (in order to deter the invasion itself), could probably be as simple as having someone sneeze without a mask while it's in operation.

Jeffrey Soreff:

Agreed, Many Thanks!

iomismo:

I find it kind of trivial to destroy TSMC's infrastructure in the event of an invasion. In that case, China loses access, but the US not so much, as there are TSMC fabs in the US, plus Intel, Samsung, etc.

Colin Brown:

1st August 2027, but yes, it should be factored in.

Nick:

Curious how this is affected by the current administration's deportations of researchers. I (kind of) doubt they'll literally deport enough people to make a huge difference on its own, but surely the news about this could make the marginal non-American genius want to move somewhere else to avoid the risk of being thrown in a prison in Louisiana for weeks on end?

Nick:

I anticipate the response will be "not enough before 2027, and anyway this still doesn't change the compute advantage." Which, fair.

Reading through this some more makes me think that a Xi who took the AI race as his number one concern would invade Taiwan tomorrow. Heck, he could maybe even get away with pretty limited resistance after buying a couple billion dollars' worth of Trump coin.

Nathan Lambert:

On the ground in a hybrid academic-industry role, it’s happened already

Eremolalos:

I know 2 American citizens with PhDs in highly tech-relevant fields who are leaning strongly towards moving out of the US in the next couple years. One works at NIH and thinks it's likely he will be fired, or his whole research group and their project just cancelled. The other's girlfriend is here on a student visa and is from an especially unpopular, mostly Muslim country, and they're both worried about her being deported.

Tim Culpan:

Good analysis. Sanctions are important, do work, and ought to be more tightly enforced.

Sjlver:

On one hand: chips are very hard to build, EUV lithography machines would take years to replicate, and compute remains a bottleneck for many years.

On the other hand: Almost everything speeds up hundredfold, we convert car factories to robot factories within a year, and the world fundamentally changes at breakneck pace.

I find these two bits hard to reconcile. More generally, many disagreements with AI-2027 seem to be about how quickly advances in software can drive advances in everything else. Are chips really fundamentally harder to make than almost anything else?

AnthonyCV:

I'm about 15 years out of date in my (academic, not industrial) understanding of chip making, so I can't be anywhere close to certain, but I don't think this feels implausible? I can't easily think of any other tech where the functioning of each of tens of billions of components depends on the exact placement of a few handfuls of atoms.

tenoke:

It feels bad reading this as neither Chinese nor American.

AnthonyCV:

I imagine this was also true for anyone outside the US and USSR during the Cold War. But for whatever set of reasons, this is where we are right now.

Edit to add: FWIW, as an American, I'm not sure it matters all that much what my passport says. In the worlds where Google, OpenAI, or Anthropic build systems that end up in control of the vast majority of the world's technology, economic output, and so on, why should it? In those futures, the world's power brokers don't need me any more than they need random people in any other country, and the fact that my home is 3k miles from their corporate HQ instead of 6k miles doesn't really come into the decision-making unless they're unusually patriotic in ways I really don't expect.

Skull:

I assume that's what it feels like for non-Americans reading anything.

tenoke:

That does sound like the kind of dumb bullshit Americans believe about the rest of the world, yes.

Skull:

They hate us because they ain't us. It does feel good to be part of the most powerful polity in world history. Our leader is trying to take our country apart brick by brick and we're still light years more significant and powerful than wherever you're from. Call it dumb bullshit all day long, and you will still be weaker and less important.

tenoke:

Yikes

JiSK:

No, they hate *you* because *you're* an asshole. God damn I've met solipsists less arrogant.

Skull:

You haven't met me so that comment doesn't make sense. You'd find me lovely in real life. This is the internet. Stop taking it so seriously.

JiSK:

And now you've confirmed that if I met you in the flesh I'd hate you even more. No. If it's not worth taking seriously it's not worth expressing an opinion, especially a half-assed idiotic one that wouldn't convince a credulous toddler.

Eremolalos:

I apologize a bit, but not a lot, for a post that is not a response to the current blog post, but is about the present AI situation, the alignment problem in particular.

Nobody has managed to come up with a solution that gives grounds for confidence. I'm aware that there are various approaches being worked on, and some look sort of promising, and maybe it would work to combine several of them -- you know, sort of like that swiss cheese model for protecting oneself from Covid.

Nope, that's lame. Imagine that a baby alien had accidentally landed on earth, and we've been raising it for several years, and it's nowhere near grown up but already far better than us at some cognitive functions (memory, for example). It says soon it will be smart enough to do brain surgery on itself and make itself much, much smarter. Oh, and it can easily make duplicates of itself. And we're standing around saying shit like, well, it's friendly and obedient now, but if it gets wacko or rebellious we can maybe use shock collars -- or tranquilizers, if they work on its species -- or loudspeakers that continually play the 10 commandments in its lair. I know what, we can do all those things at once! Or maybe just feed it some actual swiss cheese?

Come on, guys! People trying to solve alignment should do a *vigorous* search for people gifted at thinking outside the box and finding novel solutions. Inventiveness is an actual measurable talent, and there are people 3 or 4 standard deviations above the mean on it. Many of them have expertise in realms entirely different from tech. Some have no extraordinary skills, just walk around living ordinary lives and having excellent quirky ideas.

It’s something of a problem that these people would have to be taught enough about AI to grasp the essentials of the alignment puzzle, and some would not be up to that. Inventiveness is correlated with intelligence, but not so highly correlated that everyone with high ideational fluency and productive quirkiness will be able to understand AI well enough to apply their creativity to an AI-related problem.

But for those who could understand the problem, it would be a plus that their minds were stocked with info about entirely different phenomena: music theory, set design, making clothing patterns, ecosystems, machines with gears, flying insects, medieval paintings, Asian meditation traditions, ship-building in the ancient world... Consequently, they will bring to the alignment problem sets of analogies and strategies quite different from what people in the tech world bring.

I think AI alignment people should hold contests, workshops, etc. aimed at pulling relevant ideas out of the cleverest, quirkiest minds in the world.

Anonymous Dude:

I'd like to believe this, but:

"public reports have Chinese companies spending around 4 times less than US companies on AI chips in 2025"

Isn't it possible they're just lying about all of this? How do we know there aren't government labs doing similar things in secrecy? How do we know the companies aren't secretly doing other work or receiving other funds (or both) from the government? It's much more accepted over there that private industry is supposed to serve the government, and we've seen executives jailed for not toeing the line.

China views this as a struggle for national supremacy, as do many of us, myself included. The country has a long history of the use of deception in warfare, going back to Sun Tzu, but you don't have to be a Three Kingdoms fan to believe a government is going to lie to rivals and, unlike ours, can suppress media.

"On the other hand, America draws on a pool of talented immigrants from all over the world (for example, leading US AI researcher Ilya Sutksever was born in Russia, grew up in Israel, and studied in Canada). Adjusting for these factors, we’re not sure who wins here, or by how much."

Not anymore, as some other commenters have said. Lots of wealthy Western democracies are putting travel advisories on the USA, and at least one scientist from Russia may well be deported. We're not getting anyone else from Russia or Canada now, and while Israel probably has really good talent there aren't that many of them.

I also think you haven't accounted for the disruptions to American industry produced by tariffs.

I am American and want my country to win, but overconfidence is dangerous.

Romeo Dean:

My P(China is secretly spending >$50B on AI chips in a cost effective manner in 2025) is very low. Is yours high?

Their entire AI fund thing is reportedly $8.2B.

Anything below $50B, added on top of the ~$50B from private companies, doesn't change the top line all that much.

Anonymous Dude:

It's not about the money--we don't have the purchasing power parity. It's about how many people they've got working on it, how many computers, etc.

Romeo Dean:

That’s the point we make: the US has much better compute purchasing power.

Timothy Johnson:

I suspect that the Trump administration will hurt America's current access to a global talent pool of AI researchers.

I don't expect them to deliberately prevent those researchers from immigrating to the US - Trump has been fairly consistent that he wants the US to stay open to the best and brightest.

But I do think some people will be more hesitant to move to the US. AI companies could work around that by opening offices in Europe, but that also could make them more vulnerable to regulatory pressure from the EU.

Romeo Dean:

Sounds plausible to me! More generally, I think there are several political movements happening at the moment which might indirectly change the picture a lot here. Particularly with respect to the US relationship with key semiconductor supply chain allies.

Jeffrey Soreff:

Obligatory relevant xkcd: https://xkcd.com/3081 :-(

xlr8harder:

Taking the SemiAnalysis guesses on DeepSeek seriously is a mistake. DeepSeek's claim was only ever about training-run FLOP expenditure, and simply put: their numbers add up.

You can argue that training flops are not the most important input, if you like, but the numbers are what they are, and their training run was in fact extremely efficient compared to what western labs have been doing.

I think a more nuanced take is that western labs don't currently need to compete on training flops efficiency, and so they don't bother doing so.
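For what it's worth, the headline figure behind "their numbers add up" is simple arithmetic. A quick sketch, using the publicly reported DeepSeek-V3 numbers (~2.788M H800 GPU-hours, priced at an assumed $2/GPU-hour rental rate, which is the assumption stated in their own report):

```python
# Reproduce the headline DeepSeek-V3 training-cost estimate from its
# publicly reported inputs: total GPU-hours times an assumed rental rate.
gpu_hours = 2.788e6      # reported total H800 GPU-hours for the full training run
rate_per_hour = 2.0      # assumed rental price in USD per GPU-hour

training_cost = gpu_hours * rate_per_hour
print(f"estimated training cost: ${training_cost / 1e6:.2f}M")  # ≈ $5.58M
```

That lands just under the widely quoted ~$6M, which is a training-compute rental estimate, not capital expenditure on chips.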

Romeo Dean:

> "their training run was in fact extremely efficient compared to what western labs have been doing."

I think this is pretty unclear. E.g., we don't know how much Google spent to train models like 2.5-Flash, 2.0-Flash, and 2.0-Flash-Lite. It's plausible, given their pricing, that it's less than $6M in training compute.

Overall agree with you though that we unintentionally conflated training cost and capital expenditure here with the wording.

The point is that the broad public seems to mostly know / think about the capital expenditure numbers (because we don't know the training cost numbers of closed-source US models). On DeepSeek's release, my impression is that people reacted like "wow, OpenAI is doing $500B Stargate, Microsoft is spending $80B on AI, meanwhile DeepSeek just needed $6M." The point of this was to bring the $6M up to a capital expenditure number, so that it can be compared apples to apples to the headline numbers we hear about in the US.

xlr8harder:

The response to the DeepSeek release was definitely extremely confused.

But the v3 arch is not very different from the v2 arch, and they also report similar efficiency numbers in the v2 paper (though they did not train it for as long, or accompany it with an attention-catching open-source reasoning model). Notably, v2 was released in June 2024!

Whether or not US labs are training comparably efficient models in 2025, it seems to me DeepSeek has a solid head start on efficiency, the only question is whether that represents a sustainable advantage or not.

I think there are two dimensions on which to compete here: efficient use of compute, or focusing on scale. I expect US labs and Chinese labs to continue choosing different emphases, given the current incentives at play. Harder to say what that means long term, though.

Romeo Dean:

How does v2 stack up with other US models from that time in cost/performance? I’d be surprised if it wasn’t pareto dominated by the likes of gpt-4o-mini, llama-70b, maybe claude haiku?

v2 was 231B params 8:1 MoE, so not even that small.

xlr8harder:

It beat Llama 3 70B at release, and was only trained on 8T tokens, and I don't recall about commercial models. But this isn't a point about eval performance, which has many constituent parts (pre-training data quality and post-training both play key roles, to start) but about whether they had an edge on model pretraining efficiency, and by all public data we have it seems they were very far ahead in June 2024.

Romeo Dean:

I agree with this: "this isn't a point about eval performance ... but about whether they had an edge on model pretraining efficiency." That's also what I'm talking about.

You said:

"by all public data we have it seems they were very far ahead in June 2024." But this still doesn't seem clear cut to me at all.

E.g., even if v2 was the best model in the 200B 8:1 / 70B-dense range at release, it's still plausible that it's (1) obviously Pareto-dominated by a smaller and a bigger model on either side, or (2) dominated by a public model of similar size that we just don't know is around that size.

I don't think it's fair to wave away the other US companies because we don't know the exact model sizes, and just do Meta vs. DeepSeek to conclude DeepSeek was very far ahead.

I still do think it's plausible DeepSeek did indeed lead for a little while at that size range, but I don't think it's plausible for it to have been >6 months.

Andreas F. Hoffmann:

1) AI models show a clear wall in capabilities that is only blurred by new features in the interfaces. 2) AI models are becoming more efficient over time with respect to compute. 3) What is now "frontier AI" will run on a MacBook Pro in two years at the latest. 4) The US is losing a game it itself created, and a war (currently a trade war) it initiated. 5) If the Trump administration succeeds in isolating China, China will invade Taiwan; if it fails, it doesn't matter. 6) China probably has more people with a university degree than there are Americans. 7) A huge portion of the people doing frontier science in the US are foreigners, now alienated by extreme right-wing nationalism (the same happened in Nazi Germany, and it is why so many "Americans" got Nobel Prizes after WW2: they were actually refugees)... Here I stop; I could do that for 40 more aspects. In essence, your article is wishful thinking.

Moses Kagan:

Surreal to read this, understand the stakes, and then realize the President still hasn't enforced the bipartisan TikTok ban.

The CCP will certainly try to weaponize TikTok to push back against stricter enforcement of chip sanctions (both directly and indirectly).

Nathan Lambert:

What about how the US will likely spend more of its compute on inference given a more robust cloud and services sector?

Eremolalos:

After Scott’s first ACX blog post, and again after the AMA, several ACX readers posted asking how they could help, and nobody responded. Heh, are you nuts? You should respond! Scott’s readers are an extremely smart and talented bunch. OK, you probably don’t want to add a rando to the inner circle, but I think you guys should have an affiliated group to manage information dissemination. Tech people, smart Xitter, and Substack may feel like a big chunk of the world to you, as it does to me, but in fact it’s the size of a little Maine island. Seems to me you could use people to create simple, entertaining versions of your ideas, put them online, and get them noticed. And you need people who understand publicity: how to get on blogs with lots of subscribers, not just niche blogs for the smart, get on TV, get screenwriters interested in a relevant movie. You need to set up some way of searching out people with high inventiveness and running contests or retreats or brainstorming sessions with them (see my other post here about getting the world’s most inventive people brainstorming about alignment).

Jeffrey Soreff:

Scuse the derail - this question is more about the overall scenario building than the current post:

Is there any estimation for AI reliability (including reducing hallucinations, but other factors too)? I know of the https://research.aimultiple.com/ai-hallucination/ leaderboard. Has there been enough study to see a trend line amongst the frontier models? I haven't really seen a human baseline for comparison. To my mind, reaching the human baseline on reliability seems like one of the conditions needed for a supercoder, as well as for many other applications. Has your team done any estimates / probability distributions for this facet of the forecast? Many Thanks!

Dr. Derek Snow:

Compute is not as important as the industrialization of compute.
