8 Comments
Alex Turner

I think the "is the world in equilibrium?" criterion really goes out the window with people like Trump. See: Tariffs, incoherent export control policies, and other moves seemingly driven by ego or whim instead of the interest of any nation or even large interest group.

From a policy perspective, I think it is EXTREMELY IMPORTANT not to apply rational actor theory to a nation unless that nation has stably followed the predictions of that kind of theory.

To be specific, a rational theory would say "it doesn't matter if Xi Jinping calls Trump a 'pussy on AGI development' on national TV. The nations' incentives remain basically the same." But I guarantee that such a statement would MASSIVELY shift US policy and politics. Granted, such brazen public provocation is unlikely, but I'd imagine that some button pushing happens regardless.

Daniel Kokotajlo

I don't want to claim that the world is actually in equilibrium! However, I think it's still important to use this as a sanity check for plans. If a plan *only works if at least one major actor behaves significantly contrary to its incentives* then that's a mark against the plan. It's not a dealbreaker necessarily. Like, "This plan requires POTUS to behave irrationally, but here's an argument that he probably will behave irrationally in the required way" is pretty reasonable. Is that the kind of reasoning you'd recommend we do instead / the shape of plan we aim to construct?

Craig Gordon

Filed under "missing what's important for AI":

“A thought transfixed me: for the first time in my life I saw the truth as it is set into song by so many poets, proclaimed as the final wisdom by so many thinkers. The truth — that love is the ultimate and the highest goal to which man can aspire. Then I grasped the meaning of the greatest secret that human poetry and human thought and belief have to impart: The salvation of man is through love and in love.”

— Viktor Frankl (1905–1997), “Man’s Search for Meaning”

Ormond

If we only knew what love is...or Salvation...or Man.

Life Engineering

This is a solid take on what is necessary for a functional policy, especially the point about questioning whether both major powers would agree on something when there's a power discrepancy. Overall the call for more scrutiny is absolutely valid. I see many hypothetical "policies," but they seem highly dependent on specific actors taking specific steps, which makes the entire scenarios shaky. That's not to say a policy that relies on this is impossible, as you stated, but it makes it a lot less viable in my opinion.

I also think there's an overestimation of the inherent goodness of businesses and nations. They all want to win, which might mean short-term benefits for their populations but long-term harm to society with AI as a whole, and I think more policies should take this inherent selfishness/wish to succeed into account.

Jamie Fisher

Who is the intended audience of these posts? I'm asking sincerely.

Because obviously you want to change policy *on the grandest scale imaginable*. And yet I earnestly don't know who the target actors/influencers are meant to be. The last I checked, no one on this blog runs a branch of government. And the people who 'like and restack' aren't exactly famous (no offense to any of them).

Take this line:

> So, if you have policy proposals to make advanced AI go well, we challenge you to articulate them and then subject them to scenario scrutiny!

Great! And then what? Submit them? To whom? To you? To hypothetical meetings in hypothetical halls of power?

I've pounded this drum again and again in this tiny rice-sized public-comment-section: Who are you actually talking to? Where/What is your interface to the non-AI community? Are you talking to individuals in "positions of concrete power"? Are you talking to more "grassroots" operatives and organizers? Are you talking to both the Left and Right? Are you talking to non-STEM people? Are there any skeptics who could still make useful allies?

***I think I'm allowed to keep pounding this drum as long as I see literally ZERO COVERAGE of AI Risk from any of the mainstream "Center to Left" news outlets I regularly watch, read, and listen-to. Not podcasts.***

(Unless you think you can change government policy on this topic *fully* under-the-radar of Mainstream News, Public Debate, Activism, and Election Cycles)

Just a reminder... the few mainstream magazines that *do* sometimes cover AI Risk are always skeptical of [if not actively ridiculing] the "Doomers".

Daniel Kokotajlo

Our general strategy is: our top priority is being correct, i.e. getting the ideas right. Our secondary priority is to articulate the ideas clearly and engagingly for a wide audience. Targeting specific powerful groups is maybe third priority. We do in fact talk with people in the government, with partisans on both sides, with miscellaneous other groups, etc., but it's not our main priority.

This strategy seems to have worked historically. I'm open to being convinced it's suboptimal though.

To answer your question about what to do with policy proposals & scenario scrutiny: We think they should typically be published, i.e. submitted to the public.

Jamie Fisher

https://www.youtube.com/watch?v=675d_6WGPbo

This is a "good and galvanizing" presentation of AI dangers, imo, even if it doesn't directly mention "extinction". The dangers, imo, are framed similarly to the "slowdown ending" of the AI 2027 forecast, which is still very dystopian.

And it's on a popular and famous television show.

Tristan Harris is great. He even did a Netflix documentary years ago about social media.

(In case my point's unclear: Tristan Harris manages to slip in "Doomer-adjacent" scenarios without coming across as a Doomer. Whether or not Tristan personally believes in extinction risk, he's calling for a lot of the Same Damn Proposals as you. He's a critical ally. And you need more of them!)
