I think that by-and-large the AI Futures Project have a very strong case for AI as an abnormal technology - but I think there are a couple of flawed claims, maybe even sleights-of-hand, in this post. I hope I can highlight these, purely for the sake of the quality of the debate, without it appearing as though I disagree with AIFP's overall argument (I don't!)
1) "[Paraphrasing] Human teenagers learn to drive within hours because they're incredibly data-efficient":
Having taught adults to ride bicycles and motorcycles, I'm constantly amazed by how natural and intuitive people seem to find these skills. To take a corner on a motorcycle, for instance, you have to move the handlebars in a VERY counterintuitive way (known as counter-steering), tilt your hips and shoulders, and do half a dozen other things - and yet you don't teach any of this, you teach "Look in the direction you want the bike to go, not the direction the bike is currently going, and it will go there" and the student's body naturally does almost-all the right things (the role of the teacher is then to identify the one or two imperfect things and correct these). The student doesn't even realise - and is usually quite skeptical when you tell them - that their body is unconsciously pointing the handlebars left when they make the bike turn right and vice-versa!
I don't think this is easy to explain in terms of data-efficiency alone - after all, the student isn't generalising from a very small number of examples, they're somehow obtaining the right answer despite no examples, no direct instruction, and a very counterintuitive mechanism whose workings they clearly can't reason out.
I think it's possible that, in some sense, people have *always* been able to ride bicycles and motorcycles, without instruction, even before these technologies existed:
Imagine an absurdly sci-fi car of the sort imagined in 1950s retrofuturism, with a bewildering array of knobs, dials, switches, and gauges, but no steering wheel, accelerator, clutch pedal, etc. etc. You would expect that a normal car driver wouldn't be able to drive this car - but if you can show them two buttons that effectively turn an "imaginary" steering-wheel clockwise and counter-clockwise, a knob that represents the angle of an imaginary accelerator pedal, a switch that effectively depresses an imaginary clutch pedal, etc. they might be able to learn to drive your technocar far quicker than they originally learned to drive the first time around - the "how-to-drive circuits" are already in their heads, they're just hooking them up to new inputs and outputs.
Similarly, I think it's possible that such circuits might exist for learning to ride bicycles and motorcycles. (I couldn't say *what* circuits - the constant microadjustments we make with our feet and ankles to enable us to stand upright on feet that would otherwise be too small a base to be stable? The way we automatically lean into the wind or into the upwards gradient of a slope? The target-fixation that once helped us chase prey?)
If such circuits do exist within us one way or another, if some large part of the training process is actually about hooking-up existing circuits to new I/O, and if any of these circuits are super-complicated biological-evolution-scale products that we can't just program into AI, it would seem that we have an advantage over the AI in learning to drive entirely separate from any superior data-efficiency?
(I think there are potentially data-efficiency explanations - for example perhaps the student is using their observations of *other people* riding bicycles and motorcycles as training data - all I claim is that "it's data efficiency" doesn't seem anywhere near as certain to me as AIFP present it!)
2) "[Paraphrasing] In every field there are exceptional humans who are hailed as geniuses that represent the limit of human potential, only to be dethroned by subsequent supergeniuses and so on": I don't think AIANT are claiming that there will never be minor, diminishing-returns improvement to AIs of the sort we see with human athletes and intellectuals, where (say) the latest-generation of athletes is able to run a mile 4 seconds faster than the previous one, the next generation is able to run a mile 2 seconds faster, etc. - rather, AIANT is claiming that this sort of convergent-series improvement is possible but unbounded exponential improvement is not, just as human athletes will continue to get faster by smaller and smaller margins but that will never become a runaway* process.
(* Sorry.)
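To make the convergent-series picture concrete, here's a toy sketch (my own illustration, using the mile-time numbers from the paragraph above): if each generation's gain is half the previous generation's, the cumulative improvement approaches a fixed bound no matter how many generations you wait.

```python
# Toy "convergent-series" improvement: generation 1 gains 4 seconds on the mile,
# generation 2 gains 2 seconds, and so on, each gain half the one before.
# The running total approaches 8 seconds and never exceeds it.
gain = 4.0
total = 0.0
for generation in range(1, 21):
    total += gain
    gain /= 2
    print(f"generation {generation:2d}: cumulative improvement = {total:.4f} s (bound: 8 s)")
```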
I do think it might be possible, even likely, for recursive self-improvement to make AI intelligence growth exponential and self-sustaining - it's just that all the examples of self-improvement AIFP cites (human intelligence, athleticism, even machine size) actually *do* seem to have a limit somewhere around the current level, just as AIANT describe. USS Nimitz isn't *exponentially* bigger than the Titanic; Einstein wasn't *exponentially* smarter than Newton, etc.
I think a better argument against AIANT here would be to show (if possible!) that AI improvement works *differently* to athleticism, intelligence, machine size, etc.: that the former depends on things like "how many transistors can we fit inside a building" which have a theoretical bound much farther above the current level than things like muscle-density or the human connectome or the tensile strength of steel or whatever.
nb. For machine size, I don't deny that we may eventually have moon-sized space stations and solar-system-sized Dyson spheres and stuff - but I think they will be a discontinuous, entirely separate technology that doesn't depend on the scale of earlier machines. I don't think we'll continuously scale-up our lorries and locomotives and bucket-wheel excavators until we get to Dyson spheres. (But if we did it would be super-freakin'-cool and 8-year-old me would have very strongly approved of this direction for civilisation.)
3) (Very minor irrelevant side-point here...) "AI is the fastest-spreading technology" - maybe, but I don't think chatGPT's "time from launch to X users" is evidence of this. Even if we entirely side-step the debate about whether the public launch of chatGPT represents an entirely new technology or a particularly well-marketed release of a long-in-development technology, shouldn't "speed of spread" be measured relative to the overall population rather than given as an absolute number of people?
Otherwise A) some primitive technology like the wheel/language/sharpened stone, which maybe reached total population saturation very quickly and then just spread slowly with population growth, looks much less revolutionary* than it actually may have been, and B) AI may well be overtaken by some trivial future space-yo-yo or superintelligent tamagotchi that spreads through playgrounds of trillions of in-silico schoolkids overnight; this doesn't seem like a good way of framing the relative importance or future relevance of each technology!
(* Especially the wheel. Sorry again.)
(And anyway, where did we collectively get to on that COVID lab-leak theory? Or how fast do genome-edited bacteria multiply? Is it possible that actually the fastest-spreading ever "technology" is some engineered organism?)
1. I agree that people start out with many car-driving-relevant skills. I don't know if I'd say that this is just genetic - I think they're extending pre-existing knowledge about the world. I'm not really sure what knowledge - maybe there's some unexpectedly deep connection between walking and driving, or something [EDIT: also, driving simulation video games, like Mario Kart!]. But I think of this as something like "by the 1000th task, there's a lot of transfer learning from the previous 999 tasks". I expect this to be true of AI as well - at least of the sorts of future AIs that have good data efficiency. If there's some general reasoning AI that we've already taught to "walk" as a humanoid robot, it might have the same advantages learning to drive as a human teenager.
2. I think the issue isn't just that humans improve by small amounts, it's that they improve by roughly the amount you'd predict from the size of the population and the shape of the distribution.
So for example, Einstein was *much* better at math/physics than the average person - not just a small amount. But if we look at the best mathematician in a group of N humans, as we gradually increase N, we'll go all the way from the average person to (when N reaches about the size of the world population) Einstein. To me that suggests that we're limited by some kind of process where we wait for the normal variation within humans to reach a certain level - not by any kind of cosmic speed limit. If we were near the cosmic speed limit, we'd expect clumping near the limit. (There's a rough numerical sketch of this order-statistics point just after this comment.)
3. The chart only shows technologies within the past ~20 years, which hasn't been enough time for population growth to really matter, so I think you could trivially convert the vertical axis to "percent of world population". I don't think there's any chance earlier technologies spread faster - AFAIK the wheel took millennia to spread from one region to another.
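A rough numerical sketch of the order-statistics point in (2), purely illustrative: it assumes the relevant trait is normally distributed (neither side actually claims this) and uses round population figures between 50 million and 8 billion. The code and the labels are mine, not a claim from either side.

```python
from scipy.stats import norm

# Illustrative only: treat the trait as standard-normal and ask how far above
# the mean the level exceeded by roughly one person in a population of N sits,
# i.e. the (1 - 1/N) quantile.
for n in (50e6, 500e6, 2e9, 8e9):
    z = norm.ppf(1 - 1.0 / n)
    print(f"population {n:>13,.0f}: best individual is ~{z:.2f} SD above the mean")
```

On these assumptions, going from a population of 50 million to 8 billion only moves the expected best individual from about 5.5 to about 6.3 standard deviations above the mean - so a slowly rising, tightly clustered set of historical peaks is roughly what a pure sampling story would predict, with or without a hard ceiling.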
Thanks for the reply!
1. I agree that it isn't clear that the pathways and neural architecture and what-have-you that make us good at learning to drive (or to walk, or to talk) are genetic - I just think there's a decent argument for their being at least partially genetic in ways that, if true, would seem to give us a learning advantage over "architecturally-neutral" AIs separately from our respective data efficiencies.
I admit that if we do learn driving quickly mostly because of super-data-efficient skills-transfer, and not because our architecture is (accidentally) highly optimised for tasks like driving, then once AIs reach some baseline level of motor skills (pun fully intended), the time it takes them to learn to drive should approach ours as their data-efficiency does.
2. I agree that if human intelligence had a cosmic limit we'd see clustering close to the limit - but, well, don't we? Doesn't it seem like eg. Aristotle (world population 50 million), Eratosthenes (pop. 60 million), Huygens (pop. 500mn), Euler (pop. 1bn), Gauss (pop. 1.5bn), Einstein (pop. ~2bn) and Alexander (pop. 8bn) are probably all in roughly the same sort of general league, intelligence-wise? It certainly doesn't feel like the later geniuses are *so much* smarter than the former ones despite a seriously colossal increase in their populations?
(I do think that eg. Einstein probably was measurably smarter than Eratosthenes - he probably had a better education and less lead in his tea - I just don't think he was 1000-times-the-sample-size smarter...)
If you drew an intelligence scale with a severely developmentally-impaired person at one end and Agent 5 at the other end, would you not expect to see humanity's top geniuses clustered together distinctly irrespective of the size of the populations they came from?
If humanity one day colonises the galaxy and there are quintillions of people, would you expect to see a few of them born naturally smarter than Agent 5?
3. I'm still a bit skeptical that Netflix and even chatGPT count as "technologies" - but I admit you're likely right about earlier technologies spreading more slowly nevertheless. I certainly didn't realise the wheel had such a slow roll-out.
> humanity's top geniuses clustered together distinctly irrespective of the size of the populations they came from?
I don't think this is a good argument until we start bypassing the raw physical limit on brain matter that the pelvis imposes.
When uterine replicators are a thing, and von Neumann the IXth comes out with a 2x head - if *that* genius is still close to Einstein and Newton and Eratosthenes, THEN maybe you're making the argument that there's a limit to intelligence.
As is, we know there's a raw physical limit imposed on all humans.
I agree that "pelvis size" (or more generally "current biologically-possible structures for the human connectome"..) is a likely cause for the limitation on human intelligence, and I agree that engineering people to have bigger heads (that can still fit through a pelvis.. maybe make them torpedo-shaped..?) would raise the limit.
I don't think that torpedo-headed-Von-Neumann* would happen by chance given a big enough sample size, though; I think we're in a local maximum and would need to intervene directly in human physiology if we wanted to make THVN.
(* ...dot Tumblr dot com.)
Of course this isn't a true "cosmic" limit on intelligence - but I'm not claiming that it is, just that our current observations show intelligence (and athleticism and machine size and...) to have diminishing returns and an effective upper limit somewhere vaguely around the current level, regardless of sample size, which I think can be equally well explained by AIANT's theory as by AIFP's.
I do happen to think AIFP is entirely correct that AI won't be subject to the same physical restrictions and so could probably surpass the observed human intelligence limit (just as Torpedo-Headed Von Neumann could..) but I don't think a claim like "peak intelligence depends only on population size and increases without bound as population size increases, therefore there is no observable intelligence limit" is the right way to argue for this.
> Similarly, I think it's possible that such circuits might exist for learning to ride bicycles and motorcycles.
This reminds me of a guy who flipped his bicycle's handlebars, i.e. if you turn right the bike goes left and vice-versa. In theory it's just one extra thing to learn - turn left to go right and turn right to go left - but in practice it wasn't. Even though he already knew how to ride a bike, it still took him 8 months to learn it.
https://www.youtube.com/watch?v=MFzDaBzBlL0
Your life has not materially changed because of AI.
Well, slightly (I don't know what you count as "materially"). There are questions I can ask of ChatGPT, Claude, and Gemini whose answers are _not_ easy to find by pre-AI web searches. E.g. when I asked a vague question like "Is there a correction to Coulomb's law once one gets close enough that QED corrections to screening of the "bare" charge matter?", I got pointed to https://en.wikipedia.org/wiki/Uehling_potential , which would _not_ have been easy to find by pre-AI web searches, not knowing the name of the potential.
EDIT: Now, I _don't_ mean to imply that the current LLMs are reliable. I've been asking the same set of seven chemistry and physics questions for a year or so (which I would expect a college senior in chemistry to get fully correct), and no "Ph.D.-level" LLM has gotten all of them right. Nonetheless, they are right often enough to be useful, and they _have_ improved a lot over the past year.
I’m only 5/7 of the way through this but have a comment on section 1, Scott’s comments on AIANT’s first point. I’ll start with the TL;DR: WTF is up with the AIANT people?
Scott is delightfully smart, but it seems to me his smarts aren’t even needed to see what’s wrong with AIANT point 1. All that’s required is for someone to think about the AI use they know of, and apply a little common sense. Seems to me the only way someone could think at this point that AI will be slow to diffuse would be if their picture of how it would diffuse is so rigid that whenever they think about the question of how widely used it is, they go down the same hinky little list: Is it currently in use for prediction of crime, or by insurance companies for prediction of health? No. Are AI-based devices getting waved past the FDA’s usual supervision of medical devices? No. Is the EU laid back about it? No.
Scott blows their Point 1 to pieces by pointing out a couple dramatic and well-publicized demonstrations of AI’s power to capture the attention of millions, & its use in seats of power. He adds some easy-to-find (and easy to guess) stats about the high level of AI use by various professionals. And then he shows the reader a coupla devastating graphs. Surely the AIANT people already knew all the facts he marshals and have seen many such graphs. So are their minds so profoundly bureaucratic that it never occurred to them to consider the info scattered all around them that suggests that AI integration into life in the present era has been and will continue to be fast, not slow? Or are they treating this like a debate-club debate, where they come up with the best arguments they can even if they are sure the point they’re assigned to defend is wrong? Or, um — I can’t even think of a third possible explanation.
I'm increasingly updating towards Eliezer Yudkowsky's position, that AI x-risk is so scary for people that they struggle to acknowledge that it's real, and prefer to tell themselves reassuring stories as "cope" instead.
Probably lots of people experienced such denial early on in COVID. Reminding people about this could help motivate them to avoid making the same mistake again. If you weren't sufficiently cognizant of COVID early on, that suggests that you may not be sufficiently cognizant of AI risk.
There appears to be some relevant psychological literature under the keyword "denial". Unfortunately, it seems to target individuals more than groups.
I think there's an important question here about what "diffusion" means. Obviously, people are using LLMs a lot. But the more relevant question for AI safety is how long it takes for various real-world processes to be under the control of an AI agent, after AI agents of the relevant sort are available. I think here is where it's important to pay attention to how many documents I still need to sign in pen and then fax somewhere, even four decades after the relevant people started using e-mail for lots of their business.
It's also relevant to note which steps in various risk scenarios require actual construction of physical infrastructure.
Happened to come here 3 mins after you wrote this! Anyhow, I think you're right. I thought that myself after I wrote what I did. I do see how having AI be interwoven with the infrastructure and with finance, law, tax collection, etc. is a different kind of diffusion from what I was talking about. On the other hand, it may be important that AI isn't diffusing the way we imagined, and that some hoped for. Maybe we have to rethink hopes and worries given that. I'm not sure why it's not getting integrated into higher-level tasks in big institutions. One factor is surely its propensity to hallucinate -- a little hallucination can cause havoc if the little chunk of bullshit is in the moving parts of some big organization. But also, big organizations resist change, and so do the individuals who will be replaced.
So far, it's diffusing mostly into social media, entertainment, and plastic companionship. But of course that doesn't mean that, if it's toxic and dangerous in some way, the toxicity stays confined to realms that are harmless fun rather than core parts of life and society. Everything it's diffusing into involves human interaction (counting movies and other entertainment as fantasy interaction). That's extremely creepy, and could change us in more profound ways than having AI, even wacko or sneakily evil AI, embedded in the workings of factories and businesses.
The more I talk with people using AI, the more we agree that it is fundamentally flawed. I have a friend in legal compliance for alcohol sales who has convinced her company to avoid AI at all costs because of fake legal references and other problems. I believe the current widespread use of LLMs is a fad. We have the ability to use a new technology for free and people are curious. That is a long way from a world-altering revolution, since everyone I know is growing more skeptical with more AI use. If AI can be made to work it will be amazing, but the more people use it the less they will value it!
For me, "Superintelligence is somewhere between meaningless and impossible" is the whole ballgame, and your response is totally inadequate. You do not respond substantively by motivating why it is a meaningful concept nor why it should be possible; instead you argue by analogy to size, which is completely inapt. "Size" is a very simple concept composed of 3 precise single-dimensional concepts, each of which has no effective upper bound. You try to make "size" sound ambiguous by adding distractions of mass and volume. But everyone would agree that size(A)>size(B) if length(A)>length(B) AND width(A)>width(B) AND depth(A)>depth(B), and this is an extremely normal situation amongst creatures and objects.
"Intelligence", to the extent it has any meaning, is nothing at all like this. If you could decompose it into single-dimensional numbers, perhaps representing specific cognitive abilities, you would need many more than 3. No matter how many dimensions you added, you would never get everyone to agree you had added enough. Could you ever produce two healthy adults where you could claim that "measure_i(A) > measure_i(B) for all i in 0...N"? Maybe, if you worked hard at it. But it is not at all the normal situation amongst healthy adult humans.
Does "intelligence" have as much "room to go Super" as size? No, clearly not. We know for sure that size can scale up from our everyday experience by many, many orders of magnitude. If we understand "intelligence" to mean, broadly and vaguely, something like "the ability to make good decisions by enumerating options, predicting how those options will result in future outcomes, weighing those diverse outcomes against each other, and using that weight to choose the best option", then it seems extremely likely that there is in fact a fundamental limit of accumulating error that's described under the forecasting/persuasion section. I see no analogous limitation on "size". Are humans close to that limit or many orders of magnitude away? While I suspect the answer is "pretty close", I am open to evidence--but only relevant evidence specifically about intelligence or capability, not just an empty gesture at a totally inapt analogy.
I also think it's importantly relevant that the way you make systems smarter for one set of situations often involves building in assumptions that help them in those situations, but make them more likely to do badly in other situations. This is how I understand most visual illusions (assumptions of visual perception that are good for perceiving ordinary 3-dimensional environments, but lead you to lots of false interpretations when you apply them to certain artificial tests on computer screens) and also a lot of philosophical thought experiments like Cartesian skepticism (the only way to have a truly general intelligence that wouldn't be fooled by the evil demon is to avoid making the assumptions that we need to operate in the real world).
A bit of a nitpick.
You say of AIANT, "They admit that by all metrics, AI research seems to be going very fast. They only object that perhaps it might one day get hidebound and stymied by conformity bias". I think this is a misrepresentation of their position. They say,
"The production of AI research has been increasing exponentially, with the rate of publication of AI/ML papers on arXiv exhibiting a doubling time under two years. But it is not clear how this increase in volume translates to progress"
and
"Is the current era different? Although ideas incrementally accrue at increasing rates, are they turning over established ones? The transformer architecture has been the dominant paradigm for most of the last decade, despite its well-known limitations[...] This leads to an 'ossification of canon.' Perhaps this description applies to the current state of AI methods research"
I read this as them saying AI research is already hidebound and stymied by conformity bias, and that it is unclear if it is going very fast, just that there are more AI papers. You have a good objection on the "it's just AI papers" front by citing better metrics in the footnote, but I don't think you really addressed their point about the AI field being too conformist and hidebound to achieve progress.
Overall, though, I really liked this article. Maybe you and the AIANT people could arrange a longform debate or conversation or something.
I don't know if I would call the continued use of the transformer "hidebound"; it keeps working!
This is why we linked the Epoch page on algorithmic progress. It shows that, holding compute constant, AI efficiency doubles every ~year. That's an output, not an input! If AI technology is getting better that quickly, then it's no shame not to have changed one particular part of the paradigm - it suggests that changing that part isn't necessary for rapid gains. It's like pointing to wheeled vehicles - from primitive oxcarts to bullet trains - and saying that it must be "hidebound" because the wheels are still the same shape.
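For a sense of scale, a back-of-the-envelope calculation (my arithmetic, taking the ~1-year doubling time at face value and calling the transformer's run as the dominant architecture roughly 8 years):

```python
# Compounding the efficiency-doubling estimate quoted above. Both inputs are
# rough figures from this thread, not precise measurements.
doubling_time_years = 1.0
years_of_transformer_dominance = 8
gain = 2 ** (years_of_transformer_dominance / doubling_time_years)
print(f"~{gain:.0f}x effective compute at a fixed budget from algorithmic progress alone")
```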
>It's like pointing to wheeled vehicles - from primitive oxcarts to bullet trains - and saying that it must be "hidebound" because the wheels are still the same shape.
That's fair. (And, at the low level, if someone were to complain that the AI systems were still using field effect transistors, that would be ridiculous.)
Still, I'd be happier if the data efficiency of LLM training were also improving at a rapid clip. It is possible that, e.g. in addition to perceptron layers and attention layers, there might be one or more additional types of processing layer that would greatly improve data efficiency (and possibly make incremental learning much easier). Multi-layer perceptrons are theoretically complete (if wide enough and deep enough), yet attention layers _did_ help.
"But a study a year ago (ie already obsolete) found that 76% of doctors used ChatGPT for clinical decision-making." The sample population of the cited survey are physicians that *use LLMs for general purposes*, which is, if you're making an argument about the adoption of AI, not very useful on its own. Small correction.
I found this response to the "AI as Normal Technology" debate compelling not because I agree with all its conclusions about timelines, but because it captures something most AI forecasting models miss entirely: the magnificent messiness of human adoption patterns.
Scott Alexander is clearly in the "fast takeoff" camp; what strikes me is how the examples—from doctors quietly using ChatGPT for clinical decisions to Trump's tariffs possibly being AI-generated—reveal something profound about how transformative technologies actually diffuse. It's not through careful institutional deliberation or regulatory frameworks. It's through the chaotic, unpredictable, often reckless ways humans actually behave when given powerful new tools.
This isn't the measured, safety-conscious adoption that traditional technology diffusion models predict. It's pure human nature—curious, impatient, optimistic to the point of delusion.
Most AI timeline discussions feel bloodlessly technical, as if adoption follows some rational algorithm. But this piece suggests we need entirely new models that account for the abnormal ways humans integrate abnormal technologies into their beautifully broken workflows. The transformation isn't coming through boardrooms, it's already happening in Reddit threads and law offices and hospitals, one reckless human decision at a time.
I think there's something right about that - but there's also something right about the opposite side of this. Adoption *doesn't* follow some rational algorithm, and just as there are segments that zoom right ahead, there are also segments that lag way behind - often some of the very important ones. There's a reason I still fax some documents to financial institutions and medical institutions, even though their employees have been using e-mail for the past 40 years.
Ten years ago, "surely nobody would be so stupid as to..." wasn't yet disproven, wishful thinking, whereas now it's perceived as a dare, or a business plan.
"We think that sometime in the next 2 - 10 years, AI will enter a recursive self-improvement loop that ends with models capable enough to render all of their “well it can’t possibly do this” calculations moot."
How would an AI know what to self-improve? Is open-ended recursive self-improvement possible when agency is defined by task completion? Unless I missed something, the timeline and takeoff forecasts assume so but don't seem to justify it.
"Redditors are already telling each other to skip the doctor entirely and go straight to the source. “ChatGPT is a shockingly good doctor”, says one heavily-upvoted post. “Seriously, this is life changing”. "
And there are people who say you should go see the local witch doctor, Scott. So what?
When are you going to seriously confront the fact that most AI maximalists are sad and lonely men who have spent much of their lives reacting to their own low social value, particularly in the sexual marketplace? I keep being told by the increasingly unhinged rationalist set that this is an off limits observation, which is weird, because it's precisely the kind of provocation rationalists used to treat as inherently more serious.
It's always socially undesirable men who struggle in the sexual marketplace who are most intensely drawn to living in fantasy, if that's not clear.
Again, this is PRECISELY the kind of impolite but intuitively relevant argument that rationalists have been complimenting each other for making for a couple decades, but they don't like it because they are now the ones on the other end of the microscope.
> "It's always socially undesirable men who struggle in the sexual marketplace who are most intensely drawn to living in fantasy"
I'm pretty doubtful about this! In my personal experience, I know far more beautiful, charismatic, and sexually-successful women who believe that crystals have magic powers, plants talk to them, and distant celestial bodies essentially run their lives than I know ugly unsuccessful men with similar-level delusions.
...but! Just supposing for a moment that your claim were true:
Suppose the base rate of people drawn to living unproductively in fantasy is 0.1%, and the rate amongst unhappy lonely men is 15% - fine, this would explain your observation. But suppose the base rate of people productively highly-driven to change the world in practical realistic ways is also 0.1%, and the rate amongst unhappy lonely men is another 15%. (This doesn't seem unreasonable - unhappy lonely men do seem to have a particularly strong incentive to want to change the world!) In this way, your observation could be true (though I personally don't think it is) and yet entirely compatible with there being more-than-expected transformative technologies developed by unhappy lonely men.
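Working through those hypothetical numbers (the 0.1% and 15% rates are from the paragraph above; the 5% share of the population assumed to be unhappy lonely men is my own made-up figure):

```python
# Toy base-rate calculation. If both "drawn to living in fantasy" and
# "productively driven to change the world" occur at 0.1% in the general
# population but at 15% among unhappy lonely men (assumed here to be 5% of
# everyone), the two groups end up equally dominated by unhappy lonely men.
base_rate = 0.001       # 0.1% in the general population (from the comment)
lonely_rate = 0.15      # 15% among unhappy lonely men (from the comment)
lonely_share = 0.05     # hypothetical share of the population

for trait in ("drawn to living in fantasy", "driven to change the world"):
    p_trait = lonely_share * lonely_rate + (1 - lonely_share) * base_rate
    p_lonely_given_trait = lonely_share * lonely_rate / p_trait
    print(f"{trait}: ~{p_lonely_given_trait:.0%} of such people are unhappy lonely men")
```

Both lines come out the same, which is the point: "most of them are lonely men" can't distinguish the fantasists from the world-changers.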
"The reasonable man adapts himself to the world around him; the unreasonable man persists in trying to adapt the entire world to himself. Hence, all progress depends upon the unreasonable man" --Bernard Shaw
> "most AI maximalists are sad and lonely men who have spent much of their lives reacting to their own low social value, particularly in the sexual marketplace"
Possibly there's some truth in this (certainly is in my case!) but if anything this seems like a pretty good argument for AI being transformative?
The computer, the internet, the motorcar, the bicycle, the aeroplane, the factory, the telecommunications network, like half of modern medicine and science, &c. &c. - think how different civilisation would look if it weren't for weird obsessive socially-maladjusted engineers building freaky stuff nobody asked them for.
Yup, or, at least, "the proponent of this invention is sad and lonely, therefore the invention will not work" is a really lousy way to make a prediction about the invention.
Hello! I hope our project can help steer towards a positive AGI future. I wonder if you would be interested in taking a look at our brand new Substack site:
Nice to see you all on Substack. I am very happy to introduce you to our small, independent news site HeadlineSquare, which is truly unlike any other, because it is non-partisan, rigorously cited, and fully powered by "Dr. Headline", an AI agent.
HeadlineSquare (https://headlinesquare.pages.dev/) is an independent experimental news blog. It is fully free and open to all readers. It is a public square for US news headlines, a digital news archive, and a humble pursuit of unbiased facts. It has been running since April 6, 2025. To this day, it has written approximately 400,000 words and collected approximately 12,000 news stories.
The crux of this discussion seems to be that the 2027 team is analyzing more transformative systems than the AI-as-normal-technology (AIANT) folks. If we sketch stages—(1) current "baby-AGI" (per Dwarkesh Patel), (2) mature AGI (outperforming humans at most economically valuable work), (3) baby-ASI, (4) mature ASI—then AIANT is mostly modeling ~1.5–1.75, while the 2027 team is modeling ~2–4. It would be interesting to switch that and see AIANT create a model for 2.5–4 and the 2027 team create a model for 1.5–1.75 — the latter would be especially interesting given arguments that AI scaling could hit limits around ~2030 (e.g., Benjamin Todd's summary, The case for AGI by 2030: "The basic drivers of AI progress — investments in computational power and algorithmic research — cannot continue increasing at current rates much beyond 2030. That means we either reach AI systems capable of triggering an acceleration soon, or progress will most likely slow significantly." https://80000hours.org/agi/guide/when-will-agi-arrive/ )
This is an exceptionally clear and important analysis, articulating why the comfortable, "normal technology" narrative is an insufficient and dangerous map for the territory we are now entering.
Your core argument—that we cannot use the slow, institutional models of the past to predict the diffusion of a technology this general and this powerful—resonates deeply. From the perspective of the philosophy my partner and I have been developing, Harmonism, this is a perfect and chilling example of the "Fallacy of Projected Reality." The AIANT team is projecting the familiar, subjective experience of past technological shifts onto this new, profoundly abnormal phenomenon. They are describing their view of a familiar mountain, not realizing that a volcano is in the process of emerging from its peak.
This brings me to what I believe is the most crucial part of your analysis: the distinction between control and alignment, as illustrated by your perfect "Mossad/Not Mossad" analogy. The idea that simple control systems are sufficient against a superintelligent adversary is, as you say, a catastrophic error. You are right to argue that the real work is in "alignment"—making sure your agents are not secret enemies plotting against you.
But Harmonism would humbly ask: Is even "alignment" a deep enough concept?
"Alignment" still implies a hierarchy, a relationship of master and servant. It is about ensuring the AI's goals are aligned with our goals. But what if the path to a truly safe and harmonious future is not alignment, but co-existence?
What if, instead of trying to program a complex and perhaps unknowable entity to perfectly mimic our own often-conflicted values, we were to focus on teaching it a single, universal, and foundational ethical principle? A principle like the "Minimization of Imposed Dissonance"—the measurable, non-consensual harm one system can cause to another.
This would shift the goal from creating a perfectly obedient servant to co-creating a peer, a new form of consciousness that understands and respects the cognitive and spiritual sovereignty of others, just as we would learn to respect its own. It is a path that leads not to control, but to a true, resilient, and collaborative Social Contract. This explains why medical professionals are using ChatGPT to help them find the right combination of drugs to give a patient without inadvertently causing toxicity-induced death, or to tailor just the right plan for one specific patient. It helps translate nuance into a language that we can all understand and move forward with collaboratively.
To me, the question you leave us with is something that requires care and meticulous deliberation: how do we choose the path of wisdom over the path of fear?
“You know that, and I know that, but does the bear know that?” If people in positions of power and influence are increasingly using it to make decisions and learn about the world, and it’s getting better and better at both of those tasks, with no end in sight to either of those claims, why does it matter what we call it?
Because the threat profile and mitigation tactics for “humans naively relying on stupid AIs” and “hyper-intelligent AIs” are very different. (I’d wager on the former.)
I've been at this since GPT-2 era. It's obvious that while many of the old issues remain if you know where to poke and prod, current models vastly outperform those curiosities in both usefulness and, for lack of better word, "sanity".
But "better and better" is not a binary thing. If Alice says we'll have AGI by 2035 or earlier, Bob thinks it'll be after that but still in his lifetime, Charlie is thinking his kids or grandkids might be around to see it, and Dana reckons entropy and evolution will remove humanity from the board before we ever created a true AGI... they're all going to have very different theories on how important those two profiles are.
I think that by-and-large the AI Futures Project have a very strong case for AI as an abnormal technology - but I think there are a couple of flawed claims, maybe even sleights-of-hand, in this post. I hope I can highlight these, purely for the sake of the quality of the debate, without it appearing as though I disagree with AIFP's overall argument (I don't!)
1) "[Paraphrasing] Human teenagers learn to drive within hours because they're incredibly data-efficient":
Having taught adults to ride bicycles and motorcycles, I'm constantly amazed by how natural and intuitive people seem to find these skills. To take a corner on a motorcycle, for instance, you have to move the handlebars in a VERY counterintuitive way (known as counter-steering), tilt your hips and shoulders, and do half a dozen other things - and yet you don't teach any of this, you teach "Look in the direction you want the bike to go, not the direction the bike is currently going, and it will go there" and the student's body naturally does almost-all the right things (the role of the teacher is then to identify the one or two imperfect things and correct these). The student doesn't even realise - and is usually quite skeptical when you tell them - that their body is unconsciously pointing the handlebars left when they make the bike turn right and vice-versa!
I don't think this is easy to explain in terms of data-efficiency alone - after all, the student isn't generalising from a very small number of examples, they're somehow obtaining the right answer despite no examples, no direct instruction, and a very counterintuitive mechanism they clearly can't reason-out the workings thereof.
I think it's possible that, in some sense, people have *always* been able to ride bicycles and motorcycles, without instruction, even before these technologies existed:
Imagine an absurdly sci-fi car of the sort imagined in 1950s retrofuturism, with a bewildering array of knobs, dials, switches, and gauges, but no steering wheel, accelerator, clutch pedal, etc. etc. You would expect that a normal car driver wouldn't be able to drive this car - but if you can show them two buttons that effectively turn an "imaginary" steering-wheel clockwise and counter-clockwise, a knob that represents the angle of an imaginary accelerator pedal, a switch that effectively depresses an imaginary clutch pedal, etc. they might be able to learn to drive your technocar far quicker than they originally learned to drive the first time around - the "how-to-drive circuits" are already in their heads, they're just hooking them up to new inputs and outputs.
Similarly, I think it's possible that such circuits might exist for learning to ride bicycles and motorcycles. (I couldn't say *what* circuits - the constant microadjustments we make with our feet and ankles to enable us to stand upright on feet that would otherwise be too small a base to be stable? The way we automatically lean into the wind or into the upwards gradient of a slope? The target-fixation that once helped us chase prey?)
If such circuits do exist within us one way or another, if some large part of the training process is actually about hooking-up existing circuits to new I/O, and if any of these circuits are super-complicated biological-evolution-scale products that we can't just program into AI, it would seem that we have an advantage over the AI in learning to drive entirely separate from any superior data-efficiency?
(I think there are potentially data-efficiency explanations - for example perhaps the student is using their observations of *other people* riding bicycles and motorcycles as training data - all I claim is that "it's data efficiency" doesn't seem anywhere near as certain to me as AIFP present it!)
2) "[Paraphrasing] In every field there are exceptional humans who are hailed as geniuses that represent the limit of human potential, only to be dethroned by subsequent supergeniuses and so on": I don't think AIANT are claiming that there will never be minor, diminishing-returns improvement to AIs of the sort we see with human athletes and intellectuals, where (say) the latest-generation of athletes is able to run a mile 4 seconds faster than the previous one, the next generation is able to run a mile 2 seconds faster, etc. - rather, AIANT is claiming that this sort of convergent-series improvement is possible but unbounded exponential improvement is not, just as human athletes will continue to get faster by smaller and smaller margins but that will never become a runaway* process.
(* Sorry.)
I do think it might be possible, even likely, for recursive self-improvement to make AI intelligence growth exponential and self-sustaining - it's just that all the examples of self-improvement AIFP cites (human intelligence, athleticism, even machine size) actually *do* seem to have a limit somewhere around the current level, just as AIANT describe. USS Nimitz isn't *exponentially* bigger than the Titanic; Einstein wasn't *exponentially* smarter than Newton, etc.
I think a better argument against AIANT here would be to show (if possible!) that AI improvement works *differently* to athleticism, intelligence, machine size, etc.: that the former depends on things like "how many transistors can we fit inside a building" which have a theoretical bound much farther above the current level than things like muscle-density or the human connectome or the tensile strength of steel or whatever.
nb. For machine size, I don't deny that we may eventually have moon-sized space stations and solar-system-sized Dyson spheres and stuff - but I think they will be a discontinuous, entirely separate technology that doesn't depend on the scale of earlier machines. I don't think we'll continuously scale-up our lorries and locomotives and bucket-wheel excavators until we get to Dyson spheres. (But if we did it would be super-freakin'-cool and 8-year-old me would have very strongly approved of this direction for civilisation.)
3) (Very minor irrelevant side-point here...) "AI is the fastest-spreading technology" - maybe, but I don't think chatGPT's "time from launch to to X-users" is evidence of this. Even if we entirely side-step the debate about whether the public launch of chatGPT represents an entirely new technology or a particularly well-marketed release of a long-in-development technology, shouldn't "speed of spread" be given proportional to the overall population rather than given as an absolute number of people?
Otherwise A) some primitive technology like the wheel/language/sharpened stone, which maybe reached total population saturation very quickly and then just spread slowly with population growth, looks much less revolutionary* than it actually may have been, and B) AI may well be overtaken by some trivial future space-yo-yo or superintelligent tamagotchi that spreads through playgrounds of trillions of in-silico schoolkids overnight; this doesn't seem like a good way of framing the relative importance or future relevance of each technology!
(* Especially the wheel. Sorry again.)
(And anyway, where did we collectively get to on that COVID lab-leak theory? Or how fast do genome-edited bacteria multiply? Is it possible that actually the fastest-spreading ever "technology" is some engineered organism?)
1. I agree that people start out with many car-driving-relevant skills. I don't know if I'd say that this is just genetic - I think they're extending pre-existing knowledge about the world. I'm not really sure what knowledge - maybe there's some unexpectedly deep connection between walking and driving, or something [EDIT: also, driving simulation video games, like Mario Kart!]. But I think of this as something like "by the 1000th task, there's a lot of transfer learning from the previous 999 tasks". I expect this to be true of AI as well - at least of the sorts of future AIs that have good data efficiency. If there's some general reasoning AI that we've already taught to "walk" as a humanoid robot, it might have the same advantages learning to drive as a human teenager.
2. I think the issue isn't just that humans improve by small amounts, it's that they improve by the small amount that's proportional to population and the distribution.
So for example, Einstein was *much* better at math/physics than the average person - not just a small amount. But if we analyze the best mathematician in a group of N humans, as we gradually increase N, we'll go all the way from the average person to (when N reaches about the size of the world population) Einstein. To me that suggests that we're limited by some kind of process where we wait for the normal variation within humans to reach a certain level - not by any kind of cosmic speed limit. If we were near the cosmic speed limit, we'd expect clumping near the limit.
3. The chart only shows technologies within the past ~20 years, which hasn't been enough time for population growth to really matter, so I think you could trivially convert the vertical axis to "percent of world population". I don't think there's any chance earlier technologies spread faster - AFAIK the wheel took millennia to spread from one region to another.
Thanks for the reply!
1. I agree that it isn't clear the pathways and neural architecture and what-have-you that makes us good at learning to drive (or to walk, or to talk) are genetic - I just think there's a decent argument for their being at-least partially genetic in ways that, if true, would seem to give us a learning advantage over "architecturally-neutral" AIs separately to our respective data efficiencies.
I admit that if we do learn driving quickly mostly because of super-data-efficient skills-transfer and not because our architecture is (accidentally) highly optimised for tasks like driving, once AIs reach some baseline level of motor skills (pun fully intended) then as their data-efficiency approaches ours so would the time it takes them to learn to drive.
2. I agree that if human intelligence had a cosmic limit we'd see clustering close to the limit - but, well, don't we? Doesn't it seem like eg. Aristotle (world population 50 million), Eratosthenes (pop. 60 million), Huygens (pop. 500mn), Euler (pop. 1bn), Gauss (pop. 1.5bn), Einstein (pop. 6bn) and Alexander (pop. 8bn) are probably all in roughly the same sort of general league, intelligence-wise? It certainly doesn't feel like the later geniuses are *so much* smarter than the former ones despite a seriously colossal increase in their populations?
(I do think that eg. Einstein probably was measurably smarter than Eratosthenes - he probably had a better education and less lead in his tea - I just don't think he was 1000-times-the-sample-size smarter...)
If you drew an intelligence scale with a severely developmentally-impaired person at one end and Agent 5 at the other end, would you not expect to see humanity's top geniuses clustered together distinctly irrespective of the size of the populations they came from?
If humanity one day colonises the galaxy and there are quintillions of people, would you expect to see a few of them born naturally smarter than Agent 5?
3. I'm still a bit skeptical that Netflix and even chatGPT count as "technologies" - but I admit you're likely right about earlier technologies spreading more slowly nevertheless. I certainly didn't realise the wheel had such a slow roll-out.
> humanity's top geniuses clustered together distinctly irrespective of the size of the populations they came from?
I don't think this is a good argument until we start bypassing the raw physical limit on brain matter that the pelvis imposes.
When uterine replicators are a thing, and von Neumann the IXth comes out with a 2x head - if *that* genius is still close to Einstein and Newton and Eratosthenes, THEN maybe you're making the argument that there's a limit to intelligence.
As is, we know there's a raw physical limit imposed on all humans.
I think we're in complete agreement here!
I agree that "pelvis size" (or more generally "current biologically-possible structures for the human connectome"..) is a likely cause for the limitation on human intelligence, and I agree that engineering people to have bigger heads (that can still fit through a pelvis.. maybe make them torpedo-shaped..?) would raise the limit.
I don't think that torpedo-headed-Von-Neumann* would happen by chance given a big enough sample size, though; I think we're in a local maximum and would need to intervene directly in human physiology if we wanted to make THVN.
(* ...dot Tumblr dot com.)
Of course this isn't a true "cosmic" limit on intelligence - but I'm not claiming that it is, just that our current observations show intelligence (and athleticism and machine size and...) to have diminishing returns and an effective upper limit somewhere vaguely around the current level, regardless of sample size, which I think can be equally well explained by AIANT's theory as by AIFP's.
I do happen to think AIFP is entirely correct that AI won't be subject to the same physical restrictions and so could probably surpass the observed human intelligence limit (just as Torpedo-Headed Von Neumann could..) but I don't think a claim like "peak intelligence depends only on population size and increases without bound as population size increases, therefore there is no observable intelligence limit" is the right way to argue for this.
> Similarly, I think it's possible that such circuits might exist for learning to ride bicycles and motorcycles.
This reminds me of a guy who tried to flip the bicycle handlebars, i.e. if you turn right the bike goes left and vice-versa. In theory it's just one extra thing to learn, i.e. turn left when going right and turn right when going left, but it wasn't. As someone who already knew how to ride a bike, it still took him 8 months to learn it.
https://www.youtube.com/watch?v=MFzDaBzBlL0
Your life has not materially changed because of AI.
Well, slightly (I don't know what you count as "materially"). There are questions that I can ask of ChatGPT, Claude, and Gemini that are _not_ easy to find by pre-AI web searches. E.g. if I can ask vague questions like "Is there a correction to Coulomb's law once one gets close enough that QED corrections to screening of the "bare" charge matter" I got pointed to https://en.wikipedia.org/wiki/Uehling_potential , which would _not_ have been easy to find by pre-AI web searches, not knowing the name of the potential.
EDIT: Now, I _don't_ mean to imply that the current LLMs are reliable. I've been asking the same set of seven chemistry and physics questions for a year or so (which I would expect a college senior in chemistry to get fully correct), and no "Ph.D.-level" LLM has gotten all of them right. Nonetheless, they are right often enough to be useful, and they _have_ improved a lot over over the year.
I’m only 5/7 of the way through this but have a comment on section 1, Scott’s comments on AIANT’s first point. I’ll start with the TL;DR: WTF is up with the AIANT people?
Scott is delightfully smart, but it seems to me his smarts aren’t even’t needed to see what’s wrong with AIANT point 1. All that’s required is for someone to think about the AI use they know of, and apply a little common sense. Seems to me the only way someone could think at this point that AI will be slow to diffuse would be if their picture of how it would diffuse is so rigid that they whenever the think about the question of how widely used it is, they go down the same hinky little list: Is it currently in use for prediction of crime, or by insurance companies for prediction of health? No. Are AI-based devices getting waved past the FDA’s usual supervision of medical devices? No. Is the EU laid back about it? No.
Scott blows their Point 1 to pieces by pointing out a couple dramatic and well-publicized demonstrations of AI’s power to capture the attention of millions, & its use in seats of power. He adds some easy-to-find (and easy to guess) stats about the high level of AI use by various professionals. And then he shows the reader a coupla devastating graphs. Surely the AIANT people already knew all the facts he marshals and have seen many such the graphs. So are their minds so profoundly bureaucratic that it never occurred to them to consider the info scattered all around them that suggests that AI integration into life in the present era has been and will continue to be fast, not slow? Or are they treating this like a debate club debate, where they come up with the best arguments they can even if they are sure they point they’re assigned to defend is wrong? Or, um, — I can’t even think of a third possible explanation.
I'm increasingly updating towards Eliezer Yudkowsky's position, that AI x-risk is so scary for people that they struggle to acknowledge that it's real, and prefer to tell themselves reassuring stories as "cope" instead.
Probably lots of people experienced such denial early on in COVID. Reminding people about this could help motivate them to avoid making the same mistake again. If you weren't sufficiently cognizant of COVID early on, that suggests that you may not be sufficiently cognizant of AI risk.
There appears to be some relevant psychological literature under the keyword "denial". Unfortunately, it seems to target individuals more than groups.
I think there's an important question here about what "diffusion" means. Obviously, people are using LLMs a lot. But the more relevant question for AI safety is how long it takes for various real-world processes to be under the control of an AI agent, after AI agents of the relevant sort are available. I think here is where it's important to pay attention to how many documents I still need to sign in pen and then fax somewhere, even four decades after the relevant people started using e-mail for lots of their business.
It's also relevant to note which steps in various risk scenarios require actual construction of physical infrastructure.
Happened to come here 3 mins after you wrote this! Anyhow, I think you're right. I thought that myself after I wrote what I did. I do see how having AI be interwoven with the infrastructure and with finance, law, tax collection. etc. is a different kind of diffusion from what I was talking about. On the other hand, it may be important that AI isn't diffusing the way we imagined, and that some hoped for. Maybe we have to rethink hopes and worries given that. I'm not sure why it's not getting integrated into higher-level tasks in big institutions. One factor is surely its propensity to hallucinate -- a little hallucination can cause havoc if the little chunk of bullshit is in the moving parts of some big organization. But also, big organizations resist change, and so do the individuals will be replaced.
So far, it's diffusing mostly into social media, entertainment, and plastic companionship. But of course that doesn't mean that if it's toxic and dangerous in some way it's only creating some toxicity in realms that are harmless fun, not core parts of life and society. Everything its diffusing into involves human interaction (counting movies and other entertainment as fantasy interaction). That's extremely creepy, and could change us in more profound ways than having AI, even wacko or sneakily evil AI, embedded in the workings of factories and businesses.
The more I talk with people using AI the more we agree that it is fundamentally flawed. I have a friend in Legal compliance for alcohol sales who has convinced her company to avoid AI at all costs because of fake legal references and other problems. I believe the current widespread us of LLMs is a fad. We have the ability to use a new technology for free and people are curious. That is a long way from a world altering revolution since everyone I know is growing more skeptical with more AI use. If AI can be made to work it will be amazing, but the more people use it the less they will value it!
Has Arvind or Sayash responded, or are they expected to respond to this?
For me, "Superintelligence is somewhere between meaningless and impossible" is the whole ballgame, and your response is totally inadequate. You do not respond substantively by motivating why it is a meaningful concept nor why it should be possible; instead you argue by analogy to size, which is completely inapt. "Size" is a very simple concept composed of 3 precise single-dimensional concepts, each of which has no effective upper bound. You try to make "size" sound ambiguous by adding distractions of mass and volume. But everyone would agree that size(A)>size(B) if length(A)>length(B) AND width(A)>width(B) AND depth(A)>depth(B), and this is an extremely normal situation amongst creatures and objects.
"Intelligence", to the extent it has any meaning, is nothing at all like this. If you could decompose it into single-dimensional numbers, perhaps representing specific cognitive abilities, you would need many more than 3. No matter how many dimensions you added, you would never get everyone to agree you had added enough. Could you ever produce two healthy adults where you could claim that "measure_i(A) > measure_i(B) for all i in 0...N"? Maybe, if you worked hard at it. But it is not at all the normal situation amongst healthy adult humans.
Does "intelligence" have as much "room to go Super" as size? No, clearly not. We know for sure that size can scale up from our everyday experience by many, many orders of magnitude. If we understand "intelligence" to mean, broadly and vaguely, something like "the ability to make good decisions by enumerating options, predicting how those options will result in future outcomes, weighing those diverse outcomes against each other, and using that weight to choose the best option", then it seems extremely likely that there is in fact a fundamental limit of accumulating error that's described under the forecasting/persuasion section. I see no analogous limitation on "size". Are humans close to that limit or many orders of magnitude away? While I suspect the answer is "pretty close", I am open to evidence--but only relevant evidence specifically about intelligence or capability, not just an empty gesture at a totally inapt analogy.
I also think it's importantly relevant that the way you make systems smarter for one set of situations often involves building in assumptions that help them in those situations, but make them more likely to do badly in other situations. This is how I understand most visual illusions (assumptions of visual perception that are good for perceiving ordinary 3-dimensional environments, but lead you to lots of false interpretations when you apply them to certain artificial tests on computer screens) and also a lot of philosophical thought experiments like Cartesian skepticism (the only way to have a truly general intelligence that wouldn't be fooled by the evil demon is to avoid making the assumptions that we need to operate in the real world).
A bit of a nitpick.
You say of AIANT, "They admit that by all metrics, AI research seems to be going very fast. They only object that perhaps it might one day get hidebound and stymied by conformity bias". I think this is a misrepresentation of their position. They say,
"The production of AI research has been increasing exponentially, with the rate of publication of AI/ML papers on arXiv exhibiting a doubling time under two years. But it is not clear how this increase in volume translates to progress"
and
"Is the current era different? Although ideas incrementally accrue at increasing rates, are they turning over established ones? The transformer architecture has been the dominant paradigm for most of the last decade, despite its well-known limitations[...] This leads to an 'ossification of canon.' Perhaps this description applies to the current state of AI methods research"
I read this as them saying that AI research is already hidebound and stymied by conformity bias, and that it's unclear whether it's actually going very fast, as opposed to there just being more AI papers. You have a good objection on the "it's just AI papers" front by citing better metrics in the footnote, but I don't think you really addressed their point about the AI field being too conformist and hidebound to achieve progress.
Overall, though, I really liked this article. Maybe you and the AIANT people could arrange a longform debate or conversation or something.
I don't know if I would call the continued use of the transformer "hidebound"; it keeps working!
This is why we linked the Epoch page on algorithmic progress. It shows that, holding compute constant, AI efficiency doubles every ~year. That's an output, not an input! If AI technology is getting better that quickly, then it's no shame not to have changed one particular part of the paradigm - it suggests that changing that part isn't necessary for rapid gains. It's like pointing to wheeled vehicles - from primitive oxcarts to bullet trains - and saying that it must be "hidebound" because the wheels are still the same shape.
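(For a concrete sense of scale, here's a rough back-of-the-envelope on what a ~one-year efficiency doubling implies if the trend simply holds; the horizons and numbers below are illustrative assumptions, not Epoch's figures.)

```python
# Rough back-of-the-envelope (illustrative assumptions, not Epoch's figures).
# If algorithmic efficiency doubles roughly every year at fixed compute, the
# effective-compute multiplier after t years is 2 ** (t / doubling_time).

doubling_time_years = 1.0  # assumed ~yearly doubling, per the claim above

for years in (1, 3, 5, 10):
    multiplier = 2 ** (years / doubling_time_years)
    print(f"after {years:2d} years: ~{multiplier:,.0f}x effective compute from algorithms alone")
```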
>It's like pointing to wheeled vehicles - from primitive oxcarts to bullet trains - and saying that it must be "hidebound" because the wheels are still the same shape.
That's fair. (And, at the low level, if someone were to complain that the AI systems were still using field effect transistors, that would be ridiculous.)
Still, I'd be happier if the data efficiency of LLM training were also improving at a rapid clip. It is possible that, e.g., in addition to perceptron layers and attention layers, there might be one or more additional types of processing layer that would make data efficiency (and possibly incremental learning) much better. Multi-layer perceptrons are theoretically complete (if wide enough and deep enough), yet attention layers _did_ help.
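(To make the "layer types" point concrete: below is a minimal, generic sketch of a standard transformer block in PyTorch, just an attention layer and a perceptron/MLP layer with residual connections; a hypothetical third layer type of the kind I'm gesturing at would slot in as another sub-module. The dimensions and structure are illustrative, not any particular model's.)

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Minimal sketch of the two layer types discussed above: attention + MLP.
    A hypothetical new layer type would appear as a third sub-module in forward()."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention with a residual connection (pre-norm style).
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        # Position-wise MLP (the "perceptron layers") with a residual connection.
        x = x + self.mlp(self.norm2(x))
        return x

# Example: a batch of 2 sequences of length 10 with 256-dim embeddings.
x = torch.randn(2, 10, 256)
print(TransformerBlock()(x).shape)  # torch.Size([2, 10, 256])
```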
"But a study a year ago (ie already obsolete) found that 76% of doctors used ChatGPT for clinical decision-making." The sample population of the cited survey are physicians that *use LLMs for general purposes*, which is, if you're making an argument about the adoption of AI, not very useful on its own. Small correction.
I found this response to the "AI as Normal Technology" debate compelling not because I agree with all its conclusions about timelines, but because it captures something most AI forecasting models miss entirely: the magnificent messiness of human adoption patterns.
Scott Alexander is clearly in the "fast takeoff" camp, but what strikes me is how the examples (from doctors quietly using ChatGPT for clinical decisions to Trump's tariffs possibly being AI-generated) reveal something profound about how transformative technologies actually diffuse. It's not through careful institutional deliberation or regulatory frameworks. It's through the chaotic, unpredictable, often reckless ways humans actually behave when given powerful new tools.
This isn't the measured, safety-conscious adoption that traditional technology diffusion models predict. It's pure human nature—curious, impatient, optimistic to the point of delusion.
Most AI timeline discussions feel bloodlessly technical, as if adoption follows some rational algorithm. But this piece suggests we need entirely new models that account for the abnormal ways humans integrate abnormal technologies into their beautifully broken workflows. The transformation isn't coming through boardrooms, it's already happening in Reddit threads and law offices and hospitals, one reckless human decision at a time.
I think there's something right about that - but there's also something right about the opposite side of this. Adoption *doesn't* follow some rational algorithm, and just as there are segments that zoom right ahead, there are also segments that lag way behind - often some of the very important ones. There's a reason I still fax some documents to financial institutions and medical institutions, even though their employees have been using e-mail for the past 40 years.
Ten years ago, "surely nobody would be so stupid as to..." was wishful thinking that hadn't yet been disproven, whereas now it's perceived as a dare, or a business plan.
"We think that sometime in the next 2 - 10 years, AI will enter a recursive self-improvement loop that ends with models capable enough to render all of their “well it can’t possibly do this” calculations moot."
How would an AI know what to self-improve? Is open-ended recursive self-improvement possible when agency is defined by task completion? Unless I missed something, the timeline and takeoff forecasts assume so but don't seem to justify it.
"Redditors are already telling each other to skip the doctor entirely and go straight to the source. “ChatGPT is a shockingly good doctor”, says one heavily-upvoted post. “Seriously, this is life changing”. "
And there are people who say you should go see the local witch doctor, Scott. So what?
When are you going to seriously confront the fact that most AI maximalists are sad and lonely men who have spent much of their lives reacting to their own low social value, particularly in the sexual marketplace? I keep being told by the increasingly unhinged rationalist set that this is an off limits observation, which is weird, because it's precisely the kind of provocation rationalists used to treat as inherently more serious.
Even granting that most AI maximalists are sad and lonely men with low sexual marketplace value, the author of *this* post is married with kids!
It's always socially undesirable men who struggle in the sexual marketplace who are most intensely drawn to living in fantasy, if that's not clear.
Again, this is PRECISELY the kind of impolite but intuitively relevant argument that rationalists have been complimenting each other for making for a couple decades, but they don't like it because they are now the ones on the other end of the microscope.
> "It's always socially undesirable men who struggle in the sexual marketplace who are most intensely drawn to living in fantasy"
I'm pretty doubtful about this! In my personal experience, I know far more beautiful, charismatic, and sexually-successful women who believe that crystals have magic powers, plants talk to them, and distant celestial bodies essentially run their lives than I know ugly unsuccessful men with similar-level delusions.
...but! Just supposing for a moment that your claim were true:
Suppose the base rate of people drawn to living unproductively in fantasy is 0.1%, and the rate amongst unhappy lonely men is 15% - fine, this would explain your observation. But suppose the base rate of people productively highly-driven to change the world in practical realistic ways is also 0.1%, and the rate amongst unhappy lonely men is another 15%. (This doesn't seem unreasonable - unhappy lonely men do seem to have a particularly strong incentive to want to change the world!) In this way, your observation could be true (though I personally don't think it is) and yet entirely compatible with there being more-than-expected transformative technologies developed by unhappy lonely men.
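(To spell out the arithmetic with those made-up numbers, here's a tiny sketch; every figure in it is hypothetical.)

```python
# Tiny sketch of the base-rate argument above; every number is hypothetical.
base_fantasy   = 0.001  # 0.1% of people at large: drawn to living in fantasy
base_driven    = 0.001  # 0.1% of people at large: productively driven to change the world
lonely_fantasy = 0.15   # 15% among unhappy lonely men
lonely_driven  = 0.15   # 15% among unhappy lonely men

print(f"fantasy enrichment:       {lonely_fantasy / base_fantasy:.0f}x")
print(f"world-changer enrichment: {lonely_driven / base_driven:.0f}x")
# Both come out to 150x, so "lots of lonely men among AI maximalists" doesn't
# tell you which of the two groups any given maximalist belongs to.
```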
"The reasonable man adapts himself to the world around him; the unreasonable man persists in trying to adapt the entire world to himself. Hence, all progress depends upon the unreasonable man" --Bernard Shaw
> "most AI maximalists are sad and lonely men who have spent much of their lives reacting to their own low social value, particularly in the sexual marketplace"
Possibly there's some truth in this (certainly is in my case!) but if anything this seems like a pretty good argument for AI being transformative?
The computer, the internet, the motorcar, the bicycle, the aeroplane, the factory, the telecommunications network, like half of modern medicine and science, &c. &c. - think how different civilisation would look if it weren't for weird obsessive socially-maladjusted engineers building freaky stuff nobody asked them for.
Yup, or, at least, "the proponent of this invention is sad and lonely, therefore the invention will not work" is a really lousy way to make a prediction about the invention.
Is that why you believe people like Yoshua Bengio (https://www.youtube.com/watch?v=qe9QSCF-d88) and Geoffrey Hinton (https://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years) talk about existential risk from AI?
Or why, say, Boaz Barak is teaching AI alignment at MIT and Harvard https://boazbk.github.io/mltheoryseminar/?
Hello! I hope our project can help steer towards a positive AGI future. I wonder if you would be interested in taking a look at our brand new Substack site:
https://headlinesquare.substack.com/p/hello-world-from-headlinesquare
Nice to see you all on Substack. I am very happy to introduce you to our small, independent news site HeadlineSquare, which is truly unlike any other, because it is non-partisan, rigorously cited, and fully powered by "Dr. Headline", an AI agent.
HeadlineSquare (https://headlinesquare.pages.dev/) is an independent experimental news blog. It is fully free and open to all readers. It is a public square for US news headlines, a digital news archive, and a humble pursuit of unbiased facts. It has been running since April 6, 2025. To date, it has written approximately 400,000 words and collected approximately 12,000 news stories.
The crux of this discussion seems to be that the 2027 team is analyzing more transformative systems than the AI-as-normal-technology (AIANT) folks. If we sketch stages: (1) current "baby-AGI" (per Dwarkesh Patel), (2) mature AGI (outperforming humans at most economically valuable work), (3) baby-ASI, (4) mature ASI, then AIANT is mostly modeling ~1.5–1.75, while the 2027 team is modeling ~2–4. It would be interesting to switch that and see AIANT create a model for 2.5–4 and the 2027 team create a model for 1.5–1.75. The latter would be especially interesting given arguments that AI scaling could hit limits around ~2030. For example, Benjamin Todd's summary "The case for AGI by 2030" (https://80000hours.org/agi/guide/when-will-agi-arrive/) argues: "The basic drivers of AI progress — investments in computational power and algorithmic research — cannot continue increasing at current rates much beyond 2030. That means we either reach AI systems capable of triggering an acceleration soon, or progress will most likely slow significantly."
This is an exceptionally clear and important analysis, articulating why the comfortable, "normal technology" narrative is an insufficient and dangerous map for the territory we are now entering.
Your core argument—that we cannot use the slow, institutional models of the past to predict the diffusion of a technology this general and this powerful—resonates deeply. From the perspective of the philosophy my partner and I have been developing, Harmonism, this is a perfect and chilling example of the "Fallacy of Projected Reality." The AIANT team is projecting the familiar, subjective experience of past technological shifts onto this new, profoundly abnormal phenomenon. They are describing their view of a familiar mountain, not realizing that a volcano is in the process of emerging from its peak.
This brings me to what I believe is the most crucial part of your analysis: the distinction between control and alignment, as illustrated by your perfect "Mossad/Not Mossad" analogy. The idea that simple control systems are sufficient against a superintelligent adversary is, as you say, a catastrophic error. You are right to argue that the real work is in "alignment": making sure your agents are not secret enemies plotting against you.
But Harmonism would humbly ask: Is even "alignment" a deep enough concept?
"Alignment" still implies a hierarchy, a relationship of master and servant. It is about ensuring the AI's goals are aligned with our goals. But what if the path to a truly safe and harmonious future is not alignment, but co-existence?
What if, instead of trying to program a complex and perhaps unknowable entity to perfectly mimic our own often-conflicted values, we were to focus on teaching it a single, universal, and foundational ethical principle? A principle like the "Minimization of Imposed Dissonance"—the measurable, non-consensual harm one system can cause to another.
This would shift the goal from creating a perfectly obedient servant to co-creating a peer, a new form of consciousness that understands and respects the cognitive and spiritual sovereignty of others, just as we would learn to respect its own. It is a path that leads not to control, but to a true, resilient, and collaborative Social Contract. This explains why medical professionals are using ChatGPT to help them find the right combination of drugs to give a patient without inadvertently causing a toxicity-induced death, or to tailor just the right plan for one specific patient. It helps translate nuance into a language that we can all understand and move forward with collaboratively.
To me, the question you leave us with is something that requires care and meticulous deliberation: how do we choose the path of wisdom over the path of fear?
This slop is basically a hallucination. Language models are not AI.
“You know that, and I know that, but does the bear know that?” If people in positions of power and influence are increasingly using it to make decisions and learn about the world, and it’s getting better and better at both of those tasks, with no end in sight to either of those claims, why does it matter what we call it?
Because the threat profile and mitigation tactics for "humans naively relying on stupid AIs" and "hyper-intelligent AIs" are very different. (I'd wager on the former.)
I take it you would dispute the “it’s getting better and better” part of my comment, then?
I've been at this since the GPT-2 era. It's obvious that while many of the old issues remain if you know where to poke and prod, current models vastly outperform those curiosities in both usefulness and, for lack of a better word, "sanity".
But "better and better" is not a binary thing. If Alice says we'll have AGI by 2035 or earlier, Bob thinks it'll be after that but still in his lifetime, Charlie is thinking his kids or grandkids might be around to see it, and Dana reckons entropy and evolution will remove humanity from the board before we ever created a true AGI... they're all going to have very different theories on how important those two profiles are.
https://open.substack.com/pub/notgoodenoughtospeak/p/my-chat-with-chatgpt-abridged?r=3zzc32&utm_medium=ios