A theory of intelligence that denies teleological purpose

What is needed to unite the four most popular psychological paradigms

From Narrow To General AI
16 min read · Jul 20, 2024

The mysteries of the mind have been around for so long, and we have made so little progress on them, that the likelihood is high that some things we all tend to agree to be obvious are just not so. — Daniel Dennett, Consciousness Explained

A researcher in the field of Machine Learning will encounter many presuppositions about how to start training a new AI model. The procedure always begins by defining a task for the AI to engage in, as well as an accompanying measure of success. Popular tasks include replicating some ground truth data set, optimising a robot’s performance, winning a game, etc. Success in these tasks is always calculated via a numerical value that, by its increase or decrease, indicates a more or less intelligent agent.
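
To make the pattern concrete, here is a minimal sketch of that standard setup in Python. Everything in it (the toy task of replicating a ground truth data set, the linear model, the numbers) is invented purely for illustration:

```python
# A minimal sketch of the standard ML setup: a task (fit ground-truth
# data) plus a numerical success measure (mean squared error). All names
# and values here are illustrative, not from any specific framework.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # the ground truth data set...
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w                          # ...the task is to replicate it

w = np.zeros(3)                         # the "agent": a linear model

def loss(w):
    # The metric: its decrease is read as increasing "intelligence".
    return np.mean((X @ w - y) ** 2)

for _ in range(500):                    # training = optimising the metric
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= 0.1 * grad

print(loss(w))                          # success is a single number
```

Everything about the agent's performance is condensed into that one scalar; the rest of this post asks whether any analogous scalar exists for humans.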

If Artificial Intelligence is intended to be an echo of human intelligence, then by implication there should be metrics by which you can gauge the success of human intelligence as well. This raises the obvious question(s):

What are human minds “supposed” to do?

What is the “human task”?

By what clear criteria would you measure human success or intelligence?

If our goal is to create an AI that is at least comparable to human intelligence, and AI agents are designed around measures of success on their respective tasks, then it seems imperative that we discover such a task and accompanying metric for humans. Any Artificial General Intelligence (AGI) would be measured against the same overall benchmark by which we measure ourselves.

The formulation of a detailed and rigorous theory of “what AGI is”, is a small but significant part of the AGI community’s ongoing research…the fleshing out of the concept of “AGI” is being accomplished alongside and in synergy with these other tasks. — Artificial General Intelligence: Concept, State of the Art, and Future Prospects, Goertzel

Many possible candidates for metrics of human success have been proposed over the years. Unfortunately, all are either over-generalisations that ignore numerous real-world cases, or definitions so vague they could apply even to contradictory tasks, and thus useless for measuring performance in any practical agent. This post will investigate this core question, that of defining the “human task”, and see if we can resolve the impasse by probing its underlying assumptions.

How did we get here?

In contrast to their historical predecessors (e.g. classical philosophy, religious doctrine), modern theories of mind all embed within them an implied functional purpose for the brain. This serves as a benchmark against which to measure intelligence. It is also a relatively recent paradigm in psychology. Prior to Thomas Hobbes, theories of mind were generally moral in nature; they described what you should do to be good. We have since replaced moral imperatives (as teleological goals) with functional ones like survival, reproduction, happiness, power, resources, equilibrium, knowledge, accurate prediction, and so on, as the purported reason the human brain exists.

The resulting theories rely on mechanical interpretations of the mind — you could even say that a mechanical interpretation is necessary for a theory to properly be “psychology”. Each implicitly defines what the human task is, forming the backdrop for research in Cognitive Science and Artificial Intelligence. In the latter field, such theoretical foundations generate the success criteria against which to measure the performance of a given AI.

It should go without saying that there is no consensus about which of these candidates represents the true purpose of human cognition in general. It is curious that the very purpose of the brain, the one thing we should all unanimously agree on, remains so elusive. We all feel we know, intuitively, what intelligence is, and yet few of us could describe it in functional terms, or in enough detail to apply it to a practical AI project. This is a serious problem since, if no one knows what the mind is supposed to do, then no one knows how to implement it either, and our work has yet to begin.

Perhaps the problem goes deeper than we think. Framing any candidate as encapsulating the purpose of human thinking may be precipitous, since that assumes the mind has a purpose. This is, at the very least, empirically dubious, as it assumes one goal can apply to all minds. There is much idiosyncratic diversity — quirks, predilections, etc. — in human behaviour that may not be easily accounted for with respect to a single overarching purpose. Evolutionarily speaking, we would reasonably expect to observe some variation in our species’ ambitions. There may be too many unique answers to what the brain does to define intelligence teleologically, as though evolution had a “plan” (or intelligent design) for what we should do.

Ultimately, the fact that no theory of human purpose plausibly covers everything that the mind does should cast doubt on the presupposition that such a purpose even exists. Perhaps it only appears to be so from our perspective — the perspective of humans who, once a purpose has been defined for the mind, hope to designate any given mind as good, bad, better, worse, etc. That is, the viewpoint of a moralising observer. A general human purpose, a “meaning of life”, may be a self-generated chimera, a projected mode of social censure or intellectual approbation.

Removing “purpose”

All that having been said, I think it would be equally wrong to assert that there are no common patterns at all in human thinking. No organ of the body is truly chaotic. Each has its regular functions, which are finite in number, and a space of variation beyond which it can be fairly said to have broken down. A better way to frame the mind, then, might be as a set of proper functions or procedures, rather than general purposes. We could usefully rephrase the question “what is the purpose of the mind?” to simply “what does a mind do?”. This perspective would allow us to make practical progress towards an explanation of the mind, while leaving open the possibility of defining it non-teleologically.

The main difficulty with this alternative is that you can no longer rely on the crutch of finding a high-level “purpose” for various aspects of the mind. So far this convenience has allowed researchers to avoid having to explain the mind in all its rich mechanical detail. They can instead refer to vague, high-level trends like awareness, learning, perception, or generalization. Consider, as an analogy, how you would describe a car. Were you to describe it teleologically, with respect to its purpose, the task would be quite easy: a car is used to transport people and goods along roads. No details need to be given; you don’t even need to know that it has an engine.

In contrast, if you eschew purposes, and stick only to processes, every detail must be described: the fuel injection into the cylinders, the rotation of the axles, the gear ratios in the transmission, etc. The same applies to the mind. If you want to avoid any possible biases or over-generalizations introduced by injecting a purpose, then every mental event — its nature, its causes, its alterations — must be elucidated. Current research into the human mind has been hampered, ultimately, by simply not understanding it in enough mechanical detail (a somewhat obvious observation). The task of AGI is nothing if not to create a new paradigm: one driven not by purposes, but purely by states and changes.

It is difficult to know how to start this second task. One possible avenue, the one we’ll take here, is to review the many existing paradigms under whose mandates people have been trying to explain what the mind does, and see if some consensus may be reached among them. The next section will provide an overview of these paradigms, grouped into four general categories. We’ll show how each of them only describes a partial image of the human mind, and from there try to find the common threads that connect them.

So to begin, here are the four most popular candidates for what the mind does:

Candidate 1: The mind tries to optimise some metric

As mentioned at the beginning of the post, this is the most popular paradigm employed across Machine Learning research. Researchers first define a task, and within that task they hard-code a metric to act as a signal for success (e.g. a loss function or reward). The AI model or agent must then optimise that signal in the quickest and most efficient way possible. This approach is meant to mirror our own, human pursuits of everyday desires like money, food, pleasure, comfort, etc.
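
As a concrete, purely hypothetical illustration, here is the paradigm reduced to its simplest form: a bandit agent whose hard-coded reward signal stands in for money, food, or pleasure, and whose only job is to drive that signal up. The payoff values and exploration rate are arbitrary:

```python
# A toy reward-maximising agent: the reward signal is hard-coded by the
# researcher, and the agent's sole task is to optimise it. The bandit
# setup and all numbers are invented for illustration.
import random

true_payoffs = [0.2, 0.5, 0.8]     # hidden task: arm 2 is the "best" choice
estimates = [0.0, 0.0, 0.0]        # the agent's running value estimates
counts = [0, 0, 0]

random.seed(0)
for step in range(2000):
    if random.random() < 0.1:                           # explore occasionally
        arm = random.randrange(3)
    else:                                               # otherwise exploit
        arm = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)    # performance is judged purely by accumulated reward
```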

Unfortunately for proponents of this approach, human cognition does not always follow a process of optimisation. Although we may in a few cases, and for a limited time, try to optimise some value like personal income, in most cases we lazily do only the bare minimum necessary to get by (satisficing). Whereas “optimisation” implies the agent is continually engaged in goal-oriented behaviour, being led from in front, so to speak, much of the time we humans need to be relentlessly pushed from behind. We must be prodded in order to do, or even to think about, anything.
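
The difference is easy to state in computational terms. Below is an illustrative contrast between the two policies, using hypothetical options and scores; the threshold is an assumption, not a number taken from any formal theory of satisficing:

```python
# Optimising examines every option and takes the best; satisficing stops
# at the first option that is "good enough". All names, scores, and the
# threshold are invented for illustration.
def optimise(options, score):
    return max(options, key=score)              # exhaustive, led from in front

def satisfice(options, score, threshold):
    for option in options:                      # lazy, stops as soon as possible
        if score(option) >= threshold:
            return option
    return None                                 # may settle for nothing at all

jobs = ["farmhand", "clerk", "engineer", "astronaut"]
salary = {"farmhand": 30, "clerk": 45, "engineer": 90, "astronaut": 120}.get

print(optimise(jobs, salary))        # astronaut: maximal income
print(satisfice(jobs, salary, 40))   # clerk: the bare minimum that gets by
```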

This lazy behaviour could be explained as optimising within constraints, e.g. to reduce energy consumption. But such an explanation would then have the opposite difficulty: to explain why many people still expend precious energy on seemingly profitless pursuits or hobbies, like climbing Mount Everest, or whittling figurines they will never sell. It seems that we can decide, capriciously and on a case-by-case basis, when we will optimise towards any goal, and when we will satisfice or act contrary to it. A viable theory of optimisation must therefore explain these individual policy choices within some higher-level policy governing their selection, and the mechanism for that is not clear from the theory itself.

There are too many odd or aberrant examples of human behaviour — martyrdom, altruism, suicide, asceticism — for goal-orientation as it stands to be a practicable framework for measuring success in real research, at least not without significant further elaboration. Any time opposite behaviours (e.g. asceticism vs. selfishness, martyrdom vs. the desire for long life) would fit a theory equally well, the theory becomes useless as a measure of performance for any practical AI agent. A theory that could apply to all observations is no theory at all.

Candidate 2: The mind is a problem solver

This second paradigm defines the human task as one of solving problems, both real and imagined, as they come up, rather than engaging in continual global optimisation. This would explain why humans tend to satisfice, since they only address problems as it becomes necessary to do so. The first question that arises is, of course, how the mind comes to know what is and isn’t a problem. Indeed, much of the time the mind is not solving problems, but rather defining them — such as when a child learns to feel shame at others’ opprobrium, or pride at their approval. This is not problem-solving per se, but problem-finding. And although such self-defined problems are rooted in deeper biological ones, framing cognitive behaviour as problem-solving inevitably raises the question: why does your mind define problems the way it does?

In practical AI research, agents are generally given their tasks (i.e. problems) by a researcher. These tasks hard-code their problem-definitions, as well as solution-definitions, via rigid constraints. The concepts involved, such as moving an object to a location, or distinguishing between images of dogs and cats, as well as what counts as having done so successfully, are directly injected into the agent by the trainer. In the real world, such direct intervention is, of course, unrealistic. Even in the context of school and education, educators cannot force their lessons on their pupils. There is little you can do for a student who simply does not care about the topic, or who mishears, misunderstands, or misinterprets their teacher. In contrast, within supervised didactic paradigms, such as those used in training AI, ignoring the trainer is not an option for the agent, and an active interest in the lesson is not a prerequisite.
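
A hypothetical sketch makes the point clearer. In the toy task below, the concept “move the object to the goal”, and the criterion for having done so, exist only in the researcher’s code; the coordinates and function names are invented:

```python
# A hard-coded problem definition: both the problem ("be at the goal")
# and the solution criterion (exact coordinate match) are injected from
# outside. The agent never defines either for itself, and cannot opt out.
GOAL = (4, 4)                # the researcher's concept, not the agent's

def reward(object_position):
    return 1.0 if object_position == GOAL else 0.0

print(reward((2, 3)))        # 0.0: not a "solution", by decree
print(reward((4, 4)))        # 1.0: "success", as defined by the trainer
```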

Rigid problem definitions are ultimately why we seem only to be able to create narrow AIs, and not general ones. Consider that whenever you define a problem for an agent, and gauge its intelligence accordingly, you are naturally compelled to introduce inductive biases which specialise for the task at hand, since a special-purpose algorithm tuned to the task will always perform better than a general one. The agent’s architecture and data pre-processing then become a large part of its success. The result is akin to the fable of Stone Soup, as data preparation and inductive biases end up doing most of the heavy lifting. To generalise an agent across multiple tasks, even ones we haven’t thought of yet, one would have to find the common substrate of all problems; i.e. discover the inductive bias that helps humans solve the general “human problem”. This requires that you discover a common driver behind all tasks, which is what we had difficulty finding in the prior section.
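
The Stone Soup effect can be shown in a few lines. In the invented task below (classifying points as inside or outside a circle), the “agent” is a trivial threshold; the hand-crafted feature supplied by the researcher does all the work:

```python
# Illustration of inductive bias via pre-processing: a trivial threshold
# "agent" succeeds only because the researcher's feature encodes the
# answer's structure in advance. The task and numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform(-1, 1, size=(1000, 2))
labels = (points ** 2).sum(axis=1) < 0.5     # ground truth: inside the circle

# On a raw coordinate, a threshold is no better than chance.
raw_acc = np.mean((points[:, 0] < 0.0) == labels)

# With the task-specific feature (squared radius), it is perfect.
radius_sq = (points ** 2).sum(axis=1)        # the researcher's inductive bias
biased_acc = np.mean((radius_sq < 0.5) == labels)

print(raw_acc, biased_acc)                   # roughly 0.5 versus exactly 1.0
```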

In the end, the greatest practical difficulty in applying “problem-solving”, as a system, to AI research is that a “problem” is a learned, subjective construct, specific to an agent’s momentary perspective. It does not have any obvious hard-coded correlate in underlying physical circuitry. It can only be defined within the context of a mind that already exists and is defining problems for itself, and therefore does not give you any purchase on how to implement it in an AI — unless you already know how to kick-start the process.

Candidate 3: The mind understands or models the world

This is another common paradigm in AI development. It is sometimes combined with goal-oriented behaviour or task optimisation (as in Model-Based Reinforcement Learning); in other cases it is treated as a standalone task. Truth and statistics regularly overlap here, since modelling the universe is done solely on the universe’s terms: whatever the universe presents to the agent’s sensory inputs becomes the standard or ground truth to be replicated in the agent’s mental model.
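
A minimal sketch of this paradigm, with invented dynamics and an invented linear model, shows its shape: the stream of observations is itself the ground truth, and the model’s only job is to reproduce it:

```python
# World-modelling on the universe's terms: whatever arrives at the
# senses becomes the target to replicate. The "laws of nature" and the
# agent's model are both invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def world(state):
    return 0.9 * state + 0.1 * np.sin(state)   # hidden dynamics

a, b = 0.0, 0.0                                 # mental model: next = a*s + b
for _ in range(5000):
    state = rng.uniform(-3, 3)                  # whatever the world presents...
    observed = world(state) + rng.normal(scale=0.01)
    err = (a * state + b) - observed            # ...is the standard to replicate
    a -= 0.01 * err * state                     # gradient step on squared error
    b -= 0.01 * err

print(a, b)    # the model converges on the world's own statistics
```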

The problems with this paradigm have already been elaborated in many prior posts. In brief, not only is the task of world-modelling intractable when applied to the fractal complexity of the real world, it is simply not useful without additional, task-specific cognitive structures that shape what is learned. Culling your world-model is necessary to pare down the infinite universe to a subset of conceptualisations that matters to you, rather than merely what is statistically true about your endless flood of daily experiences. This is most clearly demonstrated by those abstract concepts that cannot be modelled as empirically given by the world, such as space, time, beauty, good, or existence. Such concepts must come from within the mind itself.

The core underlying problem with this paradigm stems from the fact that all world-modelling in humans is driven and shaped by our desire to learn. Consider how humans arrange the world in certain ways so as to understand aspects of it better — e.g. building electron microscopes to study molecules, or litmus tests to measure acidity. The preponderance of such interest-driven, active learning suggests that there are preexisting motives already present in your mind before learning even began, which seek to build up your understanding of the subject matter according to their own requirements. Any knowledge gained must satisfy such cravings for knowledge, which means the motives preceded and drove the subsequent process of understanding. Therefore, understanding the world is not the fundamental layer of cognition; the motives are. And although empirical learning is still a critical part of the process — no one lives exclusively in their own fantasies — it is not the whole picture, even when restricted to cognition.

Candidate 4: The mind pursues spiritual development

This last option is generally ignored in scientific research, which shuns spirituality either out of diplomatic prudence or because it has no scientific language to describe it. However, given that many people privately believe that spiritual growth — and to a lesser degree artistic self-expression — is their life’s purpose, it should not be neglected in this list. The problem, of course, is how to define it in a way that allows the hypothesis to be accepted or rejected (not to mention applied in AI). Many people believe that spiritual understanding necessarily evades all definition. On the other hand, the fact that spiritual doctrines exist, and exhibit common trends across the world and throughout history, means the subject matter does not entirely escape description.

For the purposes of this post, we need only highlight that spiritual education is a means of confronting and dealing with the difficult transitions of human existence. It is, in this sense, an understanding of understanding itself. To explain: all of understanding is based on the assumptions, conceptualisations, and most importantly the motives you bring to it (see the previous section). Any change in those motives, as a consequence of some difficult transition in life, alters your mode of understanding the world around you. For example, a painful confrontation with some popularly held misapprehension — a harmful belief that no one seems to question — may, through shock or disillusionment, push you to study principles of critical thinking, and that would fundamentally alter your outlook going forward.

Your entire worldview can in such cases change without your intending it to. The hidden roots of all conscious human experience, namely the causes of those transitions, precede the act of understanding itself, and are largely beyond your immediate ken. And since the endless problems of humankind have no final solution, you can never be sure you have reached the ultimate, correct standpoint from which to survey truth.

Spirituality provides a means to address this ceaseless self-transformation, by facing up to the fact that the struggles of humans cannot be finally understood nor completely controlled. Any spiritual doctrine is therefore a reaction to the issues of how your mind interacts with its environment; it is not that interaction itself. This makes spiritual exploration secondary to (or a subset of) the original cognitive processes themselves, and thus not the solution — the “human task” — we are looking for. Spirituality is not the answer to the question of this post — it only directs itself to it.

Though each of these paradigms is individually insufficient for our purposes, there is still something in each that should be acknowledged as valid. For example:

  1. Task optimisation: Although your mind does not try to optimise any known metric, it is still driven by underlying motives in almost every activity, even when modelling the world, or applying logic and reasoning in that pursuit. There is likely a way of framing optimisation that would, at the limit, define the human task.
  2. Problem-solving: Your mind may not be completely characterised as a problem-solver, but its moment-to-moment experience is composed of solving the problems it has defined and that it encounters. Self-defined problems can be viewed as derived, long-horizon resolutions to more fundamental problems, like hunger or pain; and the same may be said of self-defined solutions. So depending on how you frame the theory, all of conscious life is an act of problem-solving.
  3. World-modelling: Though objectively modelling the universe may not be tractable, all cognition about the world, and all planning based on that understanding, still arrives empirically through your senses. Your mind may select some subset of experiences that is most useful to it, but it does not invent anything from whole cloth. Moreover, “understanding” the world can be reframed as understanding it in such a way as to achieve your goals, which matches what humans actually do. That would implicitly include planning as part of cognition, and make the system complete.
  4. Spiritual development: Spirituality, even on its own terms, may seem like a dead-end for scientific descriptions of the mind, since it apparently defies explanation. However, the pursuit of deeper knowledge of the fundamental drivers of human experiences is ultimately what this post is about. Such information, were it available, would be a key ingredient for any spiritual education to be effective. So a truly successful spiritual project would entail an answer to the question of the human task.

Each of these paradigms appears to have part of the answer, or is perhaps the complete answer, but in disguise, misunderstood, and misconstrued. The goal of AGI (general intelligence) research is to pull these together into a coherent picture of a common underlying process, while satisfying the requirement for completeness that was the original motive behind this post.

Sometimes the most obvious truths are also the hardest to initially recognise. As may be apparent to some readers, the common threads that unite these paradigms have already been laid out in the process of doing the analysis above, in addition to their elaboration across other posts on this site. For example:

  1. Wishful thinking and judging truth are on a continuum; what you want to believe, or what is useful to believe according to your motives, shapes what you end up accepting. This combines goal-orientation (candidate 1) with world-modelling (candidate 3).
  2. Any plans you create for resolving a problem are part of the act of world-modelling; understanding the world is the same as building causal plans about it. Thinking is therefore a form of control over the world. This combines problem solving (candidate 2) with world-modelling (candidate 3).
  3. Changes in your needs alter what you judge to be reasonable; goal-orientation and understanding are linked by a never-ending transformation of motives. This combines goal-orientation (candidate 1) and world-modelling (candidate 3) with spiritual development (candidate 4).
  4. Concepts are the expression of underlying motives; you cannot define a given concept except as an interaction of your perceived needs with what reality provides. This combines goal-orientation (candidate 1) with world-modelling (candidate 3).
  5. The creation of tensions, as well as their negations, is how the mind defines problem-solution pairs; your flow or stream of mental events is guided by these broader changes in motives. This combines goal-orientation (candidate 1) with spiritual development (candidate 4).

The goal of this and other posts on this site has always been to provide enough practical detail so one no longer needs to rely on purposes, and can finally stop asking what the mind is for. Early on, I began by describing the necessary primitives, such as tensions, recreations, and introspection. Subsequent posts discussed how purpose-based thinking affects your judgments and interpretations of the psychological evidence available to you, and how altering presuppositions can change the conclusions you reach.

It may still be difficult to see how all these, put together, could result in something that qualifies, completely and in all respects, as a mind. What we are hoping for is an agent which, set loose in an infinitely complex environment, can, of its own accord — and with social help of course — learn to contemplate quantum physics, or converse with others about possible ways of resolving its existential problems. The number of intermediate steps between a blank “infant” brain and this picture of an adult cannot be overstated, and I don’t believe any book will ever be able to encapsulate every moment of cognition for even one individual mind.

The best we can do is to describe the necessary primitives and their interactions in a way that allows them to be applied to all cases — from learning to solve calculus problems, to learning to contemplate your own existence. This is why the variety of topics covered on this site has been quite broad, since no mental event can be neglected without neglecting the mind itself. Together these posts treat the mind not as a machine with a single purpose, but as a set of processes and events which cumulatively appear as if they have multiple purposes.

To unite and resolve the contradictions within the four paradigms above requires that you first transcend any one of them, and — as Dennett proposed in the quote that started this post — question the most popular assumptions about how best to interpret what the mind does. It is unlikely that the same assumptions that have served so well for narrow AI can also help in making it general.
