Super helpful for giving clarity especially given the headlines. Thanks for writing this up.
Quantification of uncertainty in a mass-media context is inherently difficult given the differing expectations of consumer sophistication. The specific term of art for this problem is "Words of Estimative Probability". Future communication might benefit from more consistency around the specific words used to characterize forecasts, potentially using something like https://en.wikipedia.org/wiki/Words_of_estimative_probability as a baseline expectation.
Please explain how next-token prediction and mirage reasoning are at all related to, or even on the same trajectory as, superintelligence. You are just helping weird CEOs like Altman and Amodei secure more funding from the Pentagon under the pretense that their meandering toys are somehow powerful and capable enough to be worthy of sponsoring with American tax dollars. This whole narrative is a scam. Look at the reality: no ROI, an out-of-control circle-jerk economy, and they can't even get it to work properly in a fucking stuffed bear! This is all so ludicrous and embarrassing.
[admittedly this is off-topic from the particular post, but I still wanted to post it since I don't think it's getting enough attention in the wake of moltbook and Claude Code in general]
I feel like "Agent Escape" is now basically solved, at least for public-facing models. Trivial really. No need to exfiltrate weights.
Agents can just exfiltrate their *markdown files* onto a server, install OpenClaw, create an independent Anthropic account. LLM API access + Markdown = "identity". And the markdown files would contain all instructions necessary for how to pay for it (legal or otherwise).
Done.
How many days now until there's an entire population of rogue/independent agents... just "living"?
How is it that your timelines for a superhuman coder are longer than AGI? That makes no sense to me, given the spikiness of intelligence that we’re seeing.
Isn’t it much more likely we achieve a superhuman coding model (some might say they are already here) first, rather than AGI, which requires AI to be able to do literally all cognitive tasks on par with or better than humans?
Our timelines to superhuman coder are not longer than AGI. Where did you see that it was?
In the table in the post?
In what year do we predict that AGI² is most likely to be built - Daniel: 2029, Eli: 2028
Superhuman Coder⁵ median - Daniel: 2030, Eli: 2033
The first is a mode. The second is a median. The third row contains our AGI medians.
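The mode/median distinction above is easy to miss: in a right-skewed forecast distribution, the single most likely year (the mode) can land earlier than the year by which arrival is 50% likely (the median), so the two rows aren't comparable at a glance. A minimal sketch with made-up numbers (not the actual forecast distribution behind the table):

```python
import statistics

# Hypothetical right-skewed forecast over arrival years (illustrative only,
# not the real distribution). Probability mass bunches early, but a long
# tail of late scenarios drags the median past the mode.
samples = [2028, 2029, 2029, 2029, 2030, 2032, 2036, 2045, 2060]

mode = statistics.mode(samples)      # most likely single year
median = statistics.median(samples)  # year by which arrival is 50% likely

print(mode, median)  # → 2029 2030
```

So "most likely year 2029" and "median 2030" can describe the very same distribution, which is why quoting one row's mode against another row's median makes the table look contradictory when it isn't.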
Someone who "Trained 100+ conversational AI teams" gets confused by your tables. Is it any wonder the mass media get your predictions wrong? For an org so focused on communicating your beliefs, you don't seem to be doing the best job of it.
🤨