9 Comments
Andy Haymaker

Looking forward to the deeper dive, but could you edit in some clear definitions for HEM and SME? I needed ChatGPT to guess that these mean Hardware-Enforcement Mechanism (or Monitoring) and Semiconductor Manufacturing Equipment.

Thomas Larsen

HEM = Hardware enabled mechanism

SME = Semiconductor Manufacturing Equipment

Torches Together

It's actually Hardware-Enabled Mechanisms: tools that enable verifiable reporting of AI training activities, e.g. the quantity of compute used, the training cluster configuration, or its location. See: https://arxiv.org/abs/2505.03742
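
For intuition, here is a toy sketch of the kind of signed report a HEM might emit. Everything here is an assumption for illustration: the field names, the report format, and the use of a symmetric HMAC key standing in for a hardware root of trust are not taken from the linked paper, which discusses far more robust attestation designs.

```python
import hashlib
import hmac
import json

# Hypothetical report fields of the sort a HEM could attest to:
# compute used, cluster configuration, location. Names are illustrative.
report = {
    "chip_id": "H100-0042",
    "flop_count": 3.1e22,          # cumulative training compute observed
    "cluster_config": "512xH100",  # summary of the training cluster
    "region": "us-west",
}

# Stand-in for a secret provisioned into the chip's hardware root of trust.
DEVICE_KEY = b"secret-key-provisioned-at-fab"

def sign_report(report: dict, key: bytes) -> str:
    """Return an HMAC tag over a canonical encoding of the report."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_report(report: dict, tag: str, key: bytes) -> bool:
    """A verifier holding the key can check the report was not altered."""
    return hmac.compare_digest(sign_report(report, key), tag)

tag = sign_report(report, DEVICE_KEY)
assert verify_report(report, tag, DEVICE_KEY)
```

A real mechanism would presumably use asymmetric attestation (so verifiers never hold the signing secret) plus tamper-resistant hardware; the sketch only shows the basic shape of "the chip reports, and a third party can check the report wasn't forged."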

Olle Häggström

Thank you so much for this excellent preliminary framework for thinking about AI governance in the US! Truly important stuff.

And the promise made in the italicized intro puts me in the mood of a child longing for Christmas:

“To any adversarial readers who want to dunk on our organizational policy recs, just wait a couple of months. We intend to publish a much more comprehensive ‘positive vision for AGI’, which will have much juicer targets to criticize.”

Oscar Delaney

What did you make of the news about the creation of an 'AI Futures Steering Committee'? https://defensescoop.com/2025/12/08/fy26-ndaa-dod-ai-artificial-intelligence-futures-agi-steering-committee/

Seems tentatively promising to me, though ideally they would bring in a bunch of external experts like you recommend, rather than just having existing DoD people.

100YoS

One thing that came to mind while reading this (excellent post) was a reading from my Rethinking Nuclear Deterrence class.

https://muse-jhu-edu.ezp-prod1.hul.harvard.edu/pub/6/article/968888/pdf

The Chinese government (apparently) believes that *the weaker party should be less transparent than the stronger party*, a belief formed by decades of being threatened as the weaker party in the nuclear realm.

Herbie Bradley

If the true timeline turns out to be more like "AI 2035", would this count as a slow takeoff? And if that appears to be the trajectory at some later point, what policies would you endorse?

Daniel Kokotajlo

Re: 2035 and slow takeoff: yes, probably. It depends, of course, on how it happens. If it's smooth scaling of AI capabilities and AI company revenue along current trends until the whole economy is AI and robots in 2035, then that's a slow takeoff and Paul Christiano was right about everything, praise be His name. If instead there's something more jumpy (e.g. LLMs+RLVR stalls out in the next few years, maybe there's even an AI winter, and then a few years later a new paradigm comes along that obsoletes the previous stuff), that could be almost arbitrarily fast. But yeah, overall, if I imagine AGI happening in 2035, I think slow takeoff is more likely.

As for what policies we'd endorse... the same ones basically? We are trying to make a plan that is robust to different settings for takeoff speeds, timelines, etc.

Neural Foundry

Compelling framework that reframes AI governance as primarily a coordination problem rather than a purely technical one. The Plan A/B/C/D taxonomy helps clarify what often feels like a muddled policy debate. The chip registry proposal is particularly smart, since it creates infrastructure for verification without prematurely locking in specific restrictions. Most policy discussions assume we'll figure out coordination later, but prepositioning those capabilities now expands the bargaining range when the stakes get higher.
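
To make the "infrastructure without restrictions" point concrete, here is a minimal sketch of what a chip registry could look like as a data structure: an ownership record plus an append-only transfer history per chip. This is purely illustrative; the field names and operations are assumptions, not anything specified in the post.

```python
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    """Current owner/location of a chip plus its past transfers."""
    chip_id: str
    owner: str
    location: str
    history: list = field(default_factory=list)

class ChipRegistry:
    def __init__(self) -> None:
        self._entries: dict[str, RegistryEntry] = {}

    def register(self, chip_id: str, owner: str, location: str) -> None:
        """Record a chip at the point of sale or import."""
        self._entries[chip_id] = RegistryEntry(chip_id, owner, location)

    def transfer(self, chip_id: str, new_owner: str, new_location: str) -> None:
        """Log a change of custody; raises KeyError for unregistered chips."""
        entry = self._entries[chip_id]
        entry.history.append((entry.owner, entry.location))
        entry.owner, entry.location = new_owner, new_location

    def audit(self, chip_id: str) -> RegistryEntry:
        """Let a verifier look up where a chip is supposed to be."""
        return self._entries[chip_id]

reg = ChipRegistry()
reg.register("H100-0042", "CloudCo", "us-west")
reg.transfer("H100-0042", "LabX", "eu-central")
```

The design point matches the comment: registration and transfer logging impose no restrictions by themselves, but they make later verification (and, if ever needed, enforcement) possible without having to bootstrap the record-keeping under time pressure.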
