Discussion about this post

Pjohn:

I think that, by and large, the AI Futures Project have a very strong case for AI as an abnormal technology - but I think there are a couple of flawed claims, maybe even sleights-of-hand, in this post. I hope I can highlight these, purely for the sake of the quality of the debate, without it appearing as though I disagree with AIFP's overall argument (I don't!).

1) "[Paraphrasing] Human teenagers learn to drive within hours because they're incredibly data-efficient":

Having taught adults to ride bicycles and motorcycles, I'm constantly amazed by how natural and intuitive people seem to find these skills. To take a corner on a motorcycle, for instance, you have to move the handlebars in a VERY counterintuitive way (known as counter-steering), tilt your hips and shoulders, and do half a dozen other things - and yet you don't teach any of this. You teach "Look in the direction you want the bike to go, not the direction the bike is currently going, and it will go there", and the student's body naturally does almost all the right things (the role of the teacher is then to identify the one or two imperfect things and correct these). The student doesn't even realise - and is usually quite skeptical when you tell them - that their body is unconsciously pointing the handlebars left when they make the bike turn right and vice-versa!

I don't think this is easy to explain in terms of data-efficiency alone - after all, the student isn't generalising from a very small number of examples, they're somehow obtaining the right answer despite no examples, no direct instruction, and a very counterintuitive mechanism whose workings they clearly can't reason out.

I think it's possible that, in some sense, people have *always* been able to ride bicycles and motorcycles, without instruction, even before these technologies existed:

Imagine an absurdly sci-fi car of the sort imagined in 1950s retrofuturism, with a bewildering array of knobs, dials, switches, and gauges, but no steering wheel, accelerator, clutch pedal, etc. You would expect that a normal car driver wouldn't be able to drive this car - but if you can show them two buttons that effectively turn an "imaginary" steering-wheel clockwise and counter-clockwise, a knob that represents the angle of an imaginary accelerator pedal, a switch that effectively depresses an imaginary clutch pedal, and so on, they might be able to learn to drive your technocar far quicker than they learned to drive the first time around - the "how-to-drive circuits" are already in their heads; they're just hooking them up to new inputs and outputs.

Similarly, I think it's possible that such circuits might exist for learning to ride bicycles and motorcycles. (I couldn't say *what* circuits - the constant microadjustments we make with our feet and ankles to enable us to stand upright on feet that would otherwise be too small a base to be stable? The way we automatically lean into the wind or into the upwards gradient of a slope? The target-fixation that once helped us chase prey?)

If such circuits do exist within us one way or another, if some large part of the training process is actually about hooking up existing circuits to new I/O, and if any of these circuits are super-complicated biological-evolution-scale products that we can't just program into AI, then it would seem that we have an advantage over the AI in learning to drive that is entirely separate from any superior data-efficiency.

(I think there are potentially data-efficiency explanations - for example perhaps the student is using their observations of *other people* riding bicycles and motorcycles as training data - all I claim is that "it's data efficiency" doesn't seem anywhere near as certain to me as AIFP present it!)

2) "[Paraphrasing] In every field there are exceptional humans who are hailed as geniuses that represent the limit of human potential, only to be dethroned by subsequent supergeniuses and so on": I don't think AIANT (the "AI as Normal Technology" authors) are claiming that there will never be minor, diminishing-returns improvements to AIs of the sort we see with human athletes and intellectuals, where (say) the latest generation of athletes is able to run a mile 4 seconds faster than the previous one, the next generation is able to run a mile 2 seconds faster, etc. Rather, AIANT is claiming that this sort of convergent-series improvement is possible but unbounded exponential improvement is not - just as human athletes will continue to get faster by smaller and smaller margins, but that will never become a runaway* process.

(* Sorry.)

I do think it might be possible, even likely, for recursive self-improvement to make AI intelligence growth exponential and self-sustaining - it's just that all the examples of self-improvement AIFP cites (human intelligence, athleticism, even machine size) actually *do* seem to have a limit somewhere around the current level, just as AIANT describe. USS Nimitz isn't *exponentially* bigger than the Titanic; Einstein wasn't *exponentially* smarter than Newton, etc.

I think a better argument against AIANT here would be to show (if possible!) that AI improvement works *differently* to athleticism, intelligence, machine size, etc.: that the former depends on things like "how many transistors can we fit inside a building" which have a theoretical bound much farther above the current level than things like muscle-density or the human connectome or the tensile strength of steel or whatever.

NB. For machine size, I don't deny that we may eventually have moon-sized space stations and solar-system-sized Dyson spheres and stuff - but I think they will be a discontinuous, entirely separate technology that doesn't depend on the scale of earlier machines. I don't think we'll continuously scale up our lorries and locomotives and bucket-wheel excavators until we get to Dyson spheres. (But if we did it would be super-freakin'-cool, and 8-year-old me would have very strongly approved of this direction for civilisation.)

3) (Very minor irrelevant side-point here...) "AI is the fastest-spreading technology" - maybe, but I don't think chatGPT's "time from launch to X users" is evidence of this. Even if we entirely side-step the debate about whether the public launch of chatGPT represents an entirely new technology or a particularly well-marketed release of a long-in-development technology, shouldn't "speed of spread" be measured relative to the contemporary population rather than given as an absolute number of people?

Otherwise A) some primitive technology like the wheel/language/sharpened stone, which maybe reached total population saturation very quickly and then just spread slowly with population growth, looks much less revolutionary* than it actually may have been, and B) AI may well be overtaken by some trivial future space-yo-yo or superintelligent tamagotchi that spreads through playgrounds of trillions of in-silico schoolkids overnight; this doesn't seem like a good way of framing the relative importance or future relevance of each technology!

(* Especially the wheel. Sorry again.)
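The normalisation being argued for here is simple to state in code. ChatGPT's widely reported figure is roughly 100 million users within about two months of launch, against a world population of roughly 8 billion; the "ancient technology" numbers below are pure placeholders, not estimates of anything:

```python
# "Speed of spread" as a fraction of the contemporary population per month,
# rather than an absolute user count. Population figures for ancient
# technologies are hypothetical placeholders.

def spread_rate(users, population, months):
    """Fraction of the contemporary population reached per month."""
    return (users / population) / months

chatgpt = spread_rate(users=100e6, population=8e9, months=2)

# Hypothetical early technology saturating a tiny contemporary population:
ancient = spread_rate(users=900_000, population=1_000_000, months=12)

print(f"ChatGPT:      {chatgpt:.4f} of population per month")
print(f"Hypothetical: {ancient:.4f} of population per month")
```

On this measure, a technology that saturated a small contemporary population in a year spreads an order of magnitude "faster" than one that reached 100 million people in a world of 8 billion - which is exactly the framing problem the comment describes.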

(And anyway, where did we collectively get to on that COVID lab-leak theory? Or how fast do genome-edited bacteria multiply? Is it possible that the fastest-spreading "technology" ever is actually some engineered organism?)

Freddie deBoer:

Your life has not materially changed because of AI.

