
Skeptical of the superintelligence singularity


I’ve been thinking about this for a while: the intelligence singularity argument has always felt wrong to me. Today I had some time to think about it more, and I’ve reached a point where I can articulate why, at least somewhat. Whether it is outright wrong is another matter.

The argument, roughly: build an AI smarter than humans, it builds something smarter than itself, repeat, and eventually you get something incomprehensibly powerful. It feels almost mathematical, and maybe that’s the problem. It’s sneaking in assumptions that work fine for numbers but might not work at all for intelligence.

Two assumptions in particular:

  • Universal scale of intelligence — that “smarter” means something consistent across different kinds of minds.
  • Just keep adding more intelligence — once you’ve figured out how to make something smarter, you can keep doing that indefinitely.
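
To make those two assumptions concrete, here’s the argument written as a recurrence. This is my own sketch, with “intelligence” compressed into a single number I, which is already the first assumption doing its work:

```latex
I_{n+1} = f(I_n), \qquad f(I) > I \ \text{for all}\ I \ge I_0
```

where I_0 is human-level. The second assumption is that f(I) > I keeps holding forever. And even granting both, monotone improvement alone doesn’t deliver “incomprehensibly powerful”: the sequence I_{n+1} = I_n + 2^{-n} gets smarter at every step yet never climbs past I_0 + 2.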

Mountains, not ladders

Let’s start with maybe the less interesting one: the idea that there is some kind of intelligence ladder.

We say “humans are smarter than dogs” and it feels obviously true, but is it universally true, from all points of view? Humans are better at language, abstract reasoning, planning years ahead. Dogs are better at reading a room socially, tracking a scent across a city, coordinating with a pack under pressure (at least according to AI lol). I think this intuitive measure of intelligence is useful for us, but it isn’t universal; it’s a perspective.

If I had to pick a core mechanism for what intelligence is, I’d probably go with pattern recognition — the ability to detect regularities in the world and use them. Every cognitive system is doing some version of this, from a crow figuring out how to use a stick to an octopus solving a puzzle with a nervous system that’s nothing like ours. Minds differ not only in the “amount” of pattern recognition they do, but also in the modalities through which it is applied.

So the crow and the octopus aren’t below us on some ladder. They’re somewhere else entirely.

A picture that feels more right to me: imagine a landscape with many peaks. Each peak is a qualitatively different form of intelligence — a different way of applying that underlying pattern recognition. Humans may be near one peak, characterised by things like our metacognition, or the mass of our neural circuitry relative to our IO (our bodies). But ours may be just one peak among many, not a step on a ladder that everyone is climbing.

But the biggest problem with this picture is that it’s a simplification. Many dimensions are very hard to intuit, so it’s easy to fall back to 2D and the assumptions that come with it. With enough dimensions there might not be separate peaks at all; there might be a single super-peak of intelligence that isn’t obvious in 2D but is almost inevitable in higher dimensions.

Just keep adding more intelligence

I think what the mountain metaphor captures nicely is that there will be multiple paths to the peak, that there will be dead ends, and that you might need to descend before you can ascend again. With a ladder, once you can make the first step, you just do more of the same. On a mountain, different paths may require different skills and tools, and you can hit a dead end, or get lost and go around in circles.

I think intelligence may work the same way: there may be a path to peak intelligence, but taking the first step is no guarantee that you’ll get there, or even that you’ll make meaningful progress. You may get stuck; you may be climbing the wrong mountain.
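
Here’s a toy version of that in optimization terms (my sketch, with a made-up one-dimensional landscape): a greedy climber that always takes whichever small step improves things will stall at whatever local peak it happens to start near.

```python
import math

def landscape(x: float) -> float:
    """A made-up one-dimensional 'capability' landscape:
    a small peak near x = 1 and a much taller one near x = 5."""
    return math.exp(-(x - 1.0) ** 2) + 3.0 * math.exp(-((x - 5.0) / 1.5) ** 2)

def greedy_climb(x: float, step: float = 0.01, max_steps: int = 100_000) -> float:
    """Keep taking whichever small step improves things;
    stop when neither direction helps."""
    for _ in range(max_steps):
        here = landscape(x)
        if landscape(x + step) > here:
            x += step
        elif landscape(x - step) > here:
            x -= step
        else:
            break  # stuck: every nearby step makes things worse
    return x

# Two climbers, identical rule, different starting points:
for start in (0.0, 4.0):
    peak = greedy_climb(start)
    print(f"start at x={start}: stuck at x={peak:.2f}, height={landscape(peak):.2f}")
```

Both climbers follow the exact same “keep improving” rule; which summit they end up on is decided by where they started, not by the rule.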

And I think this is at the core of my skepticism. You can’t just “keep adding more intelligence”. Even if there is a singular summit of intelligence, each step forward is more like a discovery than an increment. Not rungs on a ladder, just footsteps on the ground, with no guarantee of where they will take you.