Nice post! Re each company needing to have a SOTA safety plan, clearly it can't be the case that every company has the best safety plan. So I suppose in practice this means something like 'your safety plan must be within X% of the best safety plan in the industry' on some fuzzy measure of safety plan quality.
Thanks Oscar! On the meaning of SOTA, it would obviously be nonsense to require that every AGI company's safety policy be the best safety policy. People who are in a position to know have told me that the AI Office will likely interpret "at least state-of-the-art" to mean "really good by industry standards".
I love these posts, but I remain disappointed that the AFP hasn't updated us in this publication with its new, pushed-back-from-2027 estimates.
We are working on both updated timelines/takeoff forecasts and scenarios that have longer timelines, but we are taking the time to make sure we are happy with their quality.
What are the stronger whistleblower protections you would like to see in the Code? My understanding is that the EU Whistleblowing Directive is already quite extensive (albeit its implementation varies across member states).
Hey, AI Futures people. I think you should drop the ex cathedra approach (you talk, we little folk listen and maybe discuss among ourselves). Scott Alexander can do that on his blog, and that works because most people there like the way he thinks and have committed to the blog. Also he has a large, pretty smart readership, and that gives people a chance to discuss the post with other people who are knowledgeable and clever enough to make discussion among readers worthwhile. You guys don’t have either of those things going on. This blog’s going to die unless you do more to generate reader involvement, and the only way I can think of to do that is to respond to reader comments. If you think you are too busy, too smart and too right to do that — well, you may be right about that, but you’re going to end up being smart and right without a forum.
I see you have responded to a coupla posts here, but they are the easy-to-respond-to ones. You have ignored the single contentious one. I’m not saying you have to respond to every civil post from a reader who disagrees with you, but I do think you should be responding to at least half of them.
Just saw this, sorry. Which contentious one are you talking about?
😆 The EU will achieve its goal of zero risk from AGI - by virtue of zero AGI appearing in the EU. How can I foresee that future, when the future is hard to predict? In this case - easy: the reason being, zero AI has appeared in the EU so far, a few years into the AI revolution. As AI is a prerequisite for AGI - it follows that zero AGI will appear too.
All 4 "godfathers of AI" are European. The 3 that moved over to the Americas changed the world. The 4th stayed behind (comfortable in his own lab?), and has been reduced to schmidhubering from the back of the class. There are reasons why, and those reasons haven't changed since. In fact - it's gotten worse on the EU side. Reading the above I was reminded of "If you have an extremely cultured bureaucracy, you get beautiful porcelain, but you also get negative half a percent economic growth per year." 😳
(https://www.persuasion.community/p/tyler-cowen-on-everything-c1f)
The AI Futures Project's expertise should be integral to the government's strategic planning and rulemaking. The situation's intricacies are almost unfathomable.