Exploring General Intelligence

General Intelligence as a Manifold Over a Task Space

The psychometric view of “general intelligence” as a latent scalar summarising an agent’s ability across a broad range of tasks has begun to feel less and less plausible to me, a sentiment shared by others in the field of AI. The idea that a single score reflects an agent’s ability across many domains strikes me as a bit reductive. Perhaps general intelligence is not a quantity but a structure?

Human capability, in the real world, is not limited to a select few domains. An expert who has specialised in one domain can at the same time perform competently at tasks similar to their field of expertise, and even at tasks well outside its neighbourhood: a mathematician cooking, say, or a musician doing logical reasoning. Their skill varies as they move from task to task, and provided each successive task is similar to the preceding one, my intuition is that this variation is continuous rather than marked by sudden spikes and steep valleys. If we imagine a landscape with tasks laid out so that similar ones sit in each other’s neighbourhood, a person’s competence across that terrain might resemble a continuous surface rather than a set of disconnected spikes.
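To make that intuition slightly more concrete, here is one toy formalisation (my own notation, not drawn from any of the work cited here): treat tasks as points in a metric space $(\mathcal{T}, d)$, where $d$ measures dissimilarity between tasks, and a person’s competence as a function $c : \mathcal{T} \to [0, 1]$. The “no sudden spikes” picture then amounts to something like a local Lipschitz condition,

$$
|c(t_1) - c(t_2)| \le L \, d(t_1, t_2) \quad \text{for neighbouring tasks } t_1, t_2 \in \mathcal{T},
$$

where a small constant $L$ says that competence changes slowly as you drift between nearby tasks.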

Lately, I find myself imagining general intelligence not as a score but as a manifold-like structure over this task terrain. It remains an intuition, unsubstantiated and unproven, yet it also feels compelling, at least to me. If anything like this picture turned out to be true, it would raise the question of what the geometry of the manifold reveals about the way competence flows between domains.

Something that resonated with this intuition was the paper on “fractured entangled representation” by Stanley et al. [1] Though not directly tied to this manifold hypothesis, it explores how structure is not represented coherently within contemporary large models. In networks trained with SGD, a tiny perturbation to a single weight produces a chaotic, incoherent change in what the model generates. By contrast, the serendipitous process behind Picbreeder produced a far more unified representation of what its models generated.
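As a crude illustration of that kind of perturbation test, here is a minimal sketch, entirely my own toy setup rather than the paper’s experiment: train a tiny image-generating MLP with SGD, nudge a single weight, and measure how widely the output image changes. The network size, target image, and change threshold are arbitrary choices made only for illustration.

```python
# Toy probe loosely inspired by the perturbation test in Stanley et al. [1]:
# train a small image-generating MLP with SGD, then nudge one weight and
# see how much of the generated image changes. Illustrative sketch only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Coordinate grid: the network maps (x, y) -> pixel intensity (CPPN-style).
n = 64
xs = torch.linspace(-1, 1, n)
xx, yy = torch.meshgrid(xs, xs, indexing="ij")
coords = torch.stack([xx.flatten(), yy.flatten()], dim=1)

# Target image: a simple filled circle.
target = ((xx**2 + yy**2) < 0.5).float().flatten().unsqueeze(1)

net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt = torch.optim.SGD(net.parameters(), lr=0.5)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(coords), target)
    loss.backward()
    opt.step()

with torch.no_grad():
    before = net(coords).clone()
    # Perturb a single weight in the middle layer.
    net[2].weight[0, 0] += 0.5
    after = net(coords)
    # Fraction of pixels that changed noticeably: a crude proxy for how
    # widely a single-weight nudge spreads through the generated image.
    changed = ((after - before).abs() > 0.05).float().mean()
    print(f"final loss {loss.item():.4f}, "
          f"pixels changed by one-weight nudge: {changed:.1%}")
```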

In that work, this disparity was attributed to the nature of SGD, the optimisation procedure used to train today’s large models, in contrast with the way evolutionary processes build a more factored internal world. At the very least, the fracturedness of these representations should make us ask what kind of internal geometry human cognition exhibits. Though speculative, perhaps the manifold model is a viable candidate for capturing the unification present within human cognition that enables us to function the way we do. Perhaps humans learn in a way that naturally builds continuous internal spaces. It should be noted that our understanding of human internal geometry is limited, so this should be read only as an analogy.

Perhaps we could make the jump from a flat, vectorial description of competence to one that imposes a richer geometric structure: a manifold over a task space. The problem, then, lies not in measuring competence across disparate domains, something we are already passably good at, but in finding a methodical and rigorous way to map the tasks themselves, to define similarity amongst them, their neighbourhoods, and the curvature of this cognitive terrain.
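To show what even a first step in that direction might look like, here is a rough sketch under heavy assumptions: the task embeddings and competence scores below are random stand-ins (in practice they might come from embedding task descriptions and from real evaluations), distances in the embedding space stand in for task similarity, and a k-nearest-neighbour graph stands in for neighbourhoods on the terrain.

```python
# Sketch of the "map the tasks themselves" step: embed tasks as vectors,
# treat distances as task similarity, build neighbourhoods, and check how
# smoothly a competence score varies across neighbouring tasks.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in task embeddings and a per-task competence score for one agent.
n_tasks, dim = 200, 16
embeddings = rng.normal(size=(n_tasks, dim))
competence = rng.uniform(size=n_tasks)

# Pairwise distances define the terrain: nearby tasks are similar tasks.
dists = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)

# k-nearest-neighbour graph = each task's neighbourhood on the terrain.
k = 5
neighbours = np.argsort(dists, axis=1)[:, 1:k + 1]  # skip self at index 0

# Local "roughness": how much competence jumps between neighbouring tasks.
# A continuous, manifold-like competence surface should keep this small.
jumps = np.abs(competence[:, None] - competence[neighbours])
print(f"mean competence jump to a neighbouring task: {jumps.mean():.3f}")
```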

I haven’t arrived at a solution. The question remains unanswered, and a resolution would perhaps help clarify what we even mean by AGI going forward. What if general intelligence is, bluntly put, nothing more than the continuity of a mind’s internal landscape?

References

[1] Stanley, K. O., Lehman, J., & Clune, J. (2025). Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis. arXiv preprint. https://arxiv.org/abs/2505.11581