9 Comments
Nathan Metzger:

One thing I would add to the non-professional activities is activism. Political cooperation on an issue is made significantly easier when there is a signal from the public that they view it as important.

I'm a volunteer with PauseAI, and while keeping my unrelated day job, I've been able to lobby state and federal congressional officials, do in-person outreach, host events, and hold a protest. I run a local PauseAI chapter with multiple members, and I was informed by my representative's office that I have personally had a positive impact on their awareness of AI risk, just through repeated phone calls and emails.

As the public becomes more aware of AI risk, it is important for people to know that they can have an outsized positive impact on the future just by speaking up, demanding reasonable regulations, and spreading further awareness of the risks and how to mitigate them.

Mo Diddly:

I will say that in the last 20 years, activism has been at least as likely to make things worse as to make them better. I hope you are being cautious with your activities.

Nathan Metzger:

Yes, PauseAI is very careful about how it engages in activism, in the ways where care actually matters. Generally, in the space of AI risk advocacy, the balance between caution and action has been almost entirely lopsided in favor of caution, to the point of being counterproductive. But the fact is that simply being a decent human and communicating what you believe to be true almost always goes well, or at least almost never makes things worse. Your early mistakes will not end the world, any more than your early impacts will save it. Do something rather than nothing (at least something that seems sensible), and then work on doing better as you go.

Jamie Fisher:

TRUMP AND VANCE. We NEED them.

Transparency fails without regulation.

Regulation FAILS without DONALD TRUMP (and probably JD Vance as well).

So who CONVINCES TRUMP and VANCE?

Peter Thiel? Here is a June 26 Peter Thiel interview. I won't parse his views, partly because I can't:

https://www.nytimes.com/2025/06/26/opinion/peter-thiel-antichrist-ross-douthat.html

Who else?

A plurality of tech CEOs? (I think the CEOs are already doing a good job of 'voicing their opinions' to Trump and Vance, given the Canada trade talk situation.)

So, could you convince the CEOs themselves to want regulation again? How would you do that?

Otherwise, who outside of SILICON VALLEY ITSELF would Trump and Vance actually listen to?

(And it's not enough to inform Trump + Vance. Vance already knows about AI 2027, right? And has anything changed? They need to really see the issue anew. They need to get confident. Solid. Because "accelerationist" CEOs will ASSUREDLY meet with them and try to persuade them into non-action.)

LambdaSaturn:

"The MATS Program is for entering the field"

There are programs with fewer requirements for entering the field, like MARS, ARENA, SPAR, and AISC.

Andrew Wetherell:

All good ideas, but how do you change the culture of the organisations leading this, and get the US and Chinese governments to collaborate?

The ideas of what can be put in place stem from some beliefs about safety and about what a desirable way to act as an AI company looks like. For the authors, it might be beneficial to figure out explicitly which norms you hold that the AI companies don't, and share them, so we can debate at the more fundamental level of which beliefs underpin a successful pursuit of AGI.

For example, I'd interpret your ideas as a shift towards stakeholder centricity - where companies act to deliver their perceived best net benefit to all stakeholders, not just shareholder value growth.

It is important to note the influence of background cultures as well. The West is steeped in Christian values; most people on Substack will probably come with this same value system without even knowing it.

If you think examining the kind of organisational culture that can lead to more responsible AI is useful public discourse, I'm happy to create and share something. The same interesting question applies to the AIs: if they in essence develop personalities, is there a way to bake in recursive alignment processes where the AIs consistently evaluate and adjust themselves towards some set of desirable norms that can be publicly debated? Validating this is hard, of course, given sycophancy.

Houston Wood:

I think of this article as a useful manifesto, a needed call to action for all of humanity. And there is a growing community of Substackers who embrace at least a portion of this call to work.

However, I worry that we are missing a very important way to rally a larger public to become passionate about this topic. We must make clear that AGI will transform basic human psychology. The outside world will change, certainly, but our inner worlds will likely change even more.

How we experience our selves, each other, and reality itself is already being transformed. Our calls to action must put this threat in the foreground in order to arouse a wider public response.

This is the basis of my Substack "Mind Revolution."

Jamie Fisher:

> All of their queries to the models [CEO's or POTUS's] are logged and monitored.

How would it be possible to monitor the activity of one person without monitoring the activity of *everyone* with access to the same system? (Because it's trivial to use a different username/person if one is available.)

And are we talking about monitoring the CEO's/POTUS's activity on just the "highest capability/danger/authority systems"? Or monitoring their activity on *any and all* AIs they interact with?

GasStationManager:

If you are interested in AI safety, especially as related to coding, I recommend learning more about formal verification, and in particular what a modern interactive theorem prover like Lean can do.

Coding (and math) are domains in which you can (in theory) have trustless guarantees about the safety and correctness of the output of a superintelligent AI. How? Just ask the AI to prove a theorem (in e.g. Lean) about the correctness of its code. You can verify the proof by running it in the Lean proof checker, and only accept the code if the proof passes.
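
For concreteness, here is a minimal sketch of that workflow in Lean 4. The function and the properties proved about it are hypothetical toy examples, not anything from a real pipeline; a real specification would of course be richer.

```lean
-- Sketch: the AI submits both the code (here, a toy `myMax` function) and
-- machine-checkable proofs that it meets a stated specification. We accept
-- the code only if Lean's proof checker accepts the proofs.

def myMax (a b : Nat) : Nat :=
  if a ≤ b then b else a

-- The specification the AI is asked to prove about its own code:
-- the result is at least as large as each input.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax; split <;> omega

theorem myMax_ge_right (a b : Nat) : b ≤ myMax a b := by
  unfold myMax; split <;> omega
```

The point is that you never have to read or trust the AI's proof yourself; you only have to trust the small Lean kernel that checks it, which is exactly the trustless guarantee described above.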

You could even do this across languages, e.g. with the code in Rust and the proof in Lean (see https://aeneasverif.github.io/).

In general, I would love to see more dialogue and exchange of ideas between AI Safety and formal verification & AI theorem proving communities.

Here's my own journey of discovery into this topic: http://lean4ai.org

I'm working on the challenge of helping coding AIs get good at proving the correctness of their own code.

Related reading: Towards Guaranteed Safe AI https://arxiv.org/abs/2405.06624
