12 Comments
Andy Haymaker

Looking forward to the deeper dive, but could you edit in some clear definitions for HEM and SME? I needed ChatGPT to guess that these mean Hardware-Enforcement Mechanism (or Monitoring) and Semiconductor Manufacturing Equipment.

Thomas Larsen

HEM = Hardware enabled mechanism

SME = Semiconductor Manufacturing Equipment

Torches Together

It's actually Hardware-Enabled Mechanisms: tools that enable verifiable reporting of AI training activities, e.g. quantity of compute used, training cluster configuration, or location. See: https://arxiv.org/abs/2505.03742

Olle Häggström

Thank you so much for this excellent preliminary framework for thinking about AI governance in the US! Truly important stuff.

And the promise made in the italicized intro puts me in the mood of a child longing for Christmas:

“To any adversarial readers who want to dunk on our organizational policy recs, just wait a couple of months. We intend to publish a much more comprehensive ‘positive vision for AGI’, which will have much juicer targets to criticize.”

Oscar Delaney

What did you make of the news about the creation of an 'AI Futures Steering Committee'? https://defensescoop.com/2025/12/08/fy26-ndaa-dod-ai-artificial-intelligence-futures-agi-steering-committee/

Seems tentatively promising to me, though ideally they would bring in a bunch of external experts like you recommend, rather than just having existing DoD people.

Jeffrey Soreff

My 0-th order concern is that "Plan A" could metastasize into strangling the AI industry like civilian nuclear power has been strangled. I'm _marginally_ in favor of regulations like RAISE _provided_ that the regulations are tightly circumscribed so that they don't turn into things like the NRC's ALARA policy, or into things like 600 page, 3 year environmental impact statements.

Re: "Without strong government wakeup (and ability to think through AGI strategy), we remain on-course for Plan C or D worlds, where we leave the future in the hands of lab leaders and race dynamics. Good policymaking requires an alert government."

Bluntly, given a choice between power held by Putin/Trump/AOC/Harris/Xi and power held by Hassabis/Altman/Musk/Amodei, I strongly prefer the latter.

Ondrej Kubu

Chip registry: Does this need to be done by the government? Government involvement would, of course, be better (it could, e.g., account for individual chips), but I think that, at least for now, data centers are so "big and run hot" that military-grade intelligence is not needed; it could be done even in the nonprofit/analytics space.

100YoS

One thing that came to mind while reading this (excellent post) was a reading from my Rethinking Nuclear Deterrence class.

https://muse-jhu-edu.ezp-prod1.hul.harvard.edu/pub/6/article/968888/pdf

The Chinese government (apparently) believes that *the weaker party should be less transparent than the stronger party*, a belief formed by decades of being threatened as the weaker party in the nuclear realm.

Neural Foundry

Compelling framework that reframes AI governance as primarily a coordination problem rather than purely a technical one. The Plan A/B/C/D taxonomy helps clarify what often feels like a muddled policy debate. The chip registry proposal is particularly smart since it creates infrastructure for verification without prematurely locking in specific restrictions. Most policy discussions assume we'll figure out coordination later, but prepositioning those capabilities now expands the bargaining range when the stakes get higher.

Herbie Bradley

If the true timeline turns out to be more like "AI 2035", would this count as a slow takeoff? At some later point, if this appears to be the trajectory, what policies would you endorse?

Daniel Kokotajlo

Re: 2035 and slow takeoff: yes probably. It depends of course on how it happens--if it's smooth scaling of AI capabilities and AI company revenue along current trends until the whole economy is AI and robots in 2035, then that's a slow takeoff and Paul Christiano was right about everything praise be His name. If instead, there's something more jumpy -- e.g. LLMs+RLVR stalls out in the next few years, maybe there's even an AI winter, and then a few years later a new paradigm comes along that obsoletes the previous stuff... well, that could be almost arbitrarily fast. But yeah overall if I imagine AGI happening in 2035 I think the slow takeoff is more likely.

As for what policies we'd endorse... the same ones basically? We are trying to make a plan that is robust to different settings for takeoff speeds, timelines, etc.

Tony

One difficulty with coordinating with China:

The Chinese govt doesn’t seem to be taking the likelihood of ASI very seriously; if the US govt does, that is a strategic advantage. To coordinate, the US govt would need to convince the Chinese govt of the likelihood of ASI, losing that strategic advantage.

This is not to say that every effort to prepare the ground for coordination is infeasible now, but some initiatives might be.