Discussion about this post

Jonathan Mann:

Great thought experiment! The corporate comparison is a helpful intuition pump.

I think your core argument about symmetry has merit, but I still lean toward expecting significant diminishing returns in practice. My best guess is that the nature of discovery often involves conceptual bottlenecks that aren't easily parallelizable. While I agree with your point that we shouldn't assume we're at some special place in the development curve, I mostly expect that, in any sufficiently complex field, progress tends to follow a pattern where fundamental insights enable rapid advancement, followed by periods of consolidation and incremental improvement.

andrewb:

Interesting, and I can see how AI automation will speed up the research that is carried out within theoretical models.

At some point, though, something has to actually be built and used in the real world.

If the thing you are building is a better delivery-optimisation tool, or a chess or Go engine, "better" is relatively easy to define, and so progress can zoom off into the distance.

If the thing you are building is a word processor or a screwdriver, where "better" is not easily definable, you need humans.

As models improve you may be able to check results less frequently, but (generally, at the moment) you will want a human in the loop who can understand that a sonic screwdriver with a 500-yard range may be an awesome screwdriver, but in the hands of a teenager it is far from optimal.

Indeed, the first signs of superhuman reasoning may come when the AI states that something is a bad idea, and no human would have thought of that until they saw the explanation.

