Common hurdles to understanding the human mind

The most persistent challenges on the path from narrow to general AI

From Narrow To General AI
14 min read · Jul 27, 2024

The formulation of a detailed and rigorous theory of “what AGI is”, is a small but significant part of the AGI community’s ongoing research…the fleshing out of the concept of “AGI” is being accomplished alongside and in synergy with these other tasks. — Artificial General Intelligence: Concept, State of the Art, and Future Prospects, Goertzel

So far we’ve discussed how difficult it has been to define the core “human task”; that is, what the mind does. There are many reasons this definition has proven so elusive. An awareness of the obstacles that prevent progress or misdirect research may provide clues as to how to overcome them. In this post we’ll list those hidden ideological and practical impediments to self-understanding, dividing them into two broad categories. The first focuses on human factors, that is, how a researcher’s own attitudes and motives can stand in the way of their understanding. The second deals with pragmatic difficulties in developing a comprehensive theory of mind.

Part 1: The greatest impediment is yourself

The single greatest impediment to progress towards a full theory of mind is also the most rarely recognized. Understanding the mind is like understanding any topic — e.g. finance or literature — you must approach it with the right attitude, remaining receptive to the concepts involved. If there is any aspect of your beliefs about the mind that you are unwilling to question or challenge, your own dogma will likely stand in the way of arriving at a complete, comprehensive theory.

Here are a few of the reasons a wayward investigator may accidentally close doors to their own education:

Excessive moralising

With a topic as socially sensitive as the nature of the human mind, there is a tendency to moralise — to designate good vs bad thinking, right vs wrong judgments. When you theorise about the mind, you can’t help but be aware of the social and political setting in which your theories will be discussed. Any ideological pressure exerted by your peers will influence what you publicly say and even privately think about the topic, making you subconsciously reluctant to follow unpopular threads of argumentation. If it turned out, for example, that the mind were inherently irrational, or even immoral, such a premise would be difficult to promote to others.

And so it has been that historical theories which hint at distasteful implications — e.g. that human existence is a meaningless struggle in a dog-eat-dog world (Hobbes), or that morality is an expression of weakness (Nietzsche), or that psycho-sexual repression drives intellectual evolution (Freud) — had difficulty attracting a wide audience, being rejected by those who found the possibility either uncomfortable or socially deleterious. Whether or not the theories contained a nugget of truth, those avenues were largely shuttered in favour of more “sensible” (i.e. tasteful) opinions.

Equally, any attempt to promote a positive, pro-social, or optimistic perspective on psychology can be just as inhibiting to scientific progress. Foucault, in Madness and Civilisation, describes how moral imperatives permeated every aspect of the definition of insanity, the phenomenon being described at various times as a moral failing, an archetypal symbol of human frailty, or a cause for pity, charity, and empathy. Each of these mindsets frames aberrant psychology in its own moral light, and aligns it with a contemporary imperative to foster societal cohesion. Indeed, treating failures of the mind such as psychiatric disorders remains one of the strongest motivations for understanding how the brain works — and so the above ideological influences have not disappeared. And though useful in its own right, the pursuit may bias new discoveries by skewing them towards concepts and categories framed around helping the afflicted. Researchers inevitably evaluate a mind as healthy or unhealthy, competent or incompetent, and in so doing inject an observer’s moralising judgment.

A last, more subtle form of moralising is the evaluation of relative intelligence. This is frequently a driving force in the field of Cognitive Science. Rooted in a researcher’s judgment of what counts as “intelligent”, an experiment will situate its test subjects on a continuum according to what it implicitly asserts as valuable, gauging them as more or less rational, competent, or reasonable according to how well they perform on tests of logic and reasoning. This projects the expectations of the researcher onto the evaluation of the subject. On a similar note, scientific judgments of artistic merit regularly misconstrue the nature of artistic expression as identical with popularity or sales — relegating art to an industrial/consumer process.

Uncomfortable truths

The truth about the mind, whatever it may prove to be, governs every aspect of human existence and happiness. Its discovery may, in the process, expose uncomfortable truths that some people are not ready to confront. For example, to seriously consider any mechanical model of the psyche, you must first be comfortable with the notion of the mind as a deterministic machine, without demanding that the theory reserve space for free will. Not everyone is up to this challenge. There may be an unwillingness to accept that, say, human happiness cannot be maintained for any extended duration, or that all truths, even scientific ones, are relativistic, or that life and the world have no inherent meaning or purpose. To understand any theory requires a certain comfort with the latent connotations it carries; discomfort with any of these will be an impediment to understanding what the mind is and how it works.

Truth is not always empowering. Yet one of the main reasons people seek to understand the mind is psychologically pragmatic: to give them strength and confidence in their self-understanding, to reinforce an image of themselves as unified, self-consistent, and self-reliant. For many people, to deny the self, and insert in its place a directionless automaton, amounts to a denial of everything important about their own existence, a kind of overture to nihilism. We should not be surprised when the mind resists such a conclusion. A common symptom of this underlying prejudice is expressed in dogmatic statements like “I know that I am conscious, and that is beyond doubt”. Such assertions suggest an aversion to thoroughly analysing the brain or consciousness. A willingness to break down one’s ego and precious sense of identity may be necessary for progress along the path to general AI.

Lazy reductionism or abstraction

With a machine as complex as the brain it is often easy to abstract away various aspects behind hand-waving explanations rather than dealing with concrete details. This tendency is more a pragmatic necessity than anything nefarious, since it reduces the burden of work to a subset that can be addressed, while abstracting away the rest that remains out of reach. But the practice is not always benign. A pernicious example is the tendency to defer a given mental process to murky evolutionary “explanations” (i.e. “we evolved the ability to do X”), which hide any real explanation behind a plausible-sounding non-explanation. It redirects the conversation away from an analysis of systematic cognitive patterns, towards a naturalistic or ecological perspective, one that is ultimately unproductive when designing AI.

In the short term, reductionism may be unavoidable. Human minds are overwhelmingly complex, able to perform diverse functions across a variety of domains: art, science, philosophy, maths, socialisation, athletics, spirituality, playing games, self-understanding, and more. To explain the mind is to explain how it performs all of these, and that requires at least a passing familiarity with each of them, if not a deep empathy with the motives that drive each discipline. Few people are such polymaths as to claim familiarity with all fields of human exploration. The temptation, therefore, is to simplify or dismiss anything a researcher doesn’t understand as unimportant or unnecessary, since the alternative — trying to understand — may be too difficult or take too long.

AI researchers, for example, tend to be scientifically-minded, and this shows in models of mind which idealise a rational, logical image of human cognition. Art, spirituality, and poetry are consigned to the undifferentiated heap of irrational human artefacts. In the long run, however, to understand the mind is to understand all of it. Sticking to one field or perspective, and dismissing those you can’t relate to as erroneous or misguided, will always remain an impediment to developing a full theory of mind.

Species pride and species denigration

As members of our species, it is tempting to take pride in our intellectual superiority as rational creatures, and to reject any suggestion that we may not be as intelligent or objective as we think we are. Those with a rationalist bent want to believe that humans are more competent, thorough, and logical than we actually are. They may reject models that pull back from the full possibilities of logical AI, seeing them as unnecessarily limiting the agent’s capabilities.

On the other hand, it is also common to denigrate our species as irrational, animalistic, deeply flawed, immoral, predictable, and self-deluded. This is an equally unproductive perspective. It is occasionally rooted in resentment or misanthropy, though in most cases it is simply a desire to knock down our over-extended egos to more realistic levels. Either way, one can go too far in denigrating human intelligence. A recent tendency is for researchers of natural language models to reduce language production to a mere imitation of what others have said before — thus denying language, and ultimately thinking, the ability to evolve with human needs.

Private motives

The desire to deeply explore and fully understand the mind is not everyone’s cup of tea, and it is difficult to teach the nuances of psychology to someone who has no interest in the subject, or is actively averse to it. Those who engage in the pursuit are usually driven by private motives that are strong enough to push past the many disappointments they inevitably experience along the way. However, these motives often become their own roadblocks¹. The desire for priority (to be the first discoverer of AGI), or the need to be recognized for one’s contributions, or simply the wish to resolve one’s own private psychological issues may steer an individual away from certain more promising paths. Overweening pride or arrogance, the need for public esteem, a fear of failure or embarrassment, and any number of public-commitment fallacies are similarly detrimental to intellectual growth and continual progress.

Character conflicts, rife as they are in academia, may also impair one’s ability to publicly admit one’s own mistakes. Researchers are only human, after all, and their thinking can be derailed by the impetus to defeat their political or ideological opponents. Wherever there are party allegiances, their members will tend to misunderstand or reject anything from their opponents’ camps, even when a synthesis of ideas would prove more productive. Even simple clashes of personality may impede the attainment of useful knowledge. For example, a visceral distaste for cold, clinical researchers, or airy-fairy, wishy-washy artists, or idle armchair philosophers, can preemptively close off avenues of useful investigation if the prejudicial feelings are strong enough.

Part 2: Practical impediments

Lack of psychological primitives

Any model of the mind, right or wrong, must necessarily define fundamental elements or components from which to build up its explanations. Such psychological primitives include consciousness, concepts, tasks, sensory inputs, memory, phenomena, attention, and so on. All of these are, of course, human conceptualisations based on what we have observed in ourselves, and may not reflect any real neuro-anatomical structures in the brain. Moreover, all primitives have porous boundaries that bleed into nearby ones. For example, semantic memory overlaps with the process of generalising via concepts; alternatively it can be seen as episodic memory that has been entrenched through repetition. These and other ambiguities imply that the primitives discovered so far are only ad-hoc rules of thumb with broad (though not universal) applicability. They are not an actual foundation for a system, at least not one that is iron-clad, immutable, and applicable without exception.

Primitives are always defined, and only make sense, within a context of assumptions regarding what they will be used for. Much of the confusion in defining primitives stems from the theoretical framework of which a given primitive is a part (including the four theories of purpose described in the previous post). For example, if you believe that the mind is an information-gathering and world-modelling machine (paradigm 3), then all memories are pieces of some world-model — they are never ad hoc inventions relative to the needs of the agent. On the other hand, if you believe the mind intends towards spiritual growth, then every memory of an experience is instructive in the Jungian sense — there are no accidents.

Even when you concoct a set of primitives that are true in general, they may still not be instructive for building a mechanical model of the mind. For example, it goes without saying that a sufficiently well-functioning mind intends towards its benefit (or what it believes is its benefit) — be it physiological, spiritual, emotional, or social. But defining what “benefit” means, how it evolves over time, how the mind knows it has achieved a given benefit, and how to resolve conflicts between two such benefits is left vague and difficult to implement. Or, as another example, it is clear that the mind generalises from previous experiences to new ones, but how it does so successfully, and how it knows when a generalisation is appropriate, is as yet unclear.
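
To make this gap concrete, here is a minimal sketch in Python (all names hypothetical and invented purely for illustration; this is a picture of where the vagueness lives, not a proposal). The interface for a “benefit” primitive is trivial to write down; every method that would give it substance is exactly the part the primitive leaves unspecified:

```python
from abc import ABC, abstractmethod

class Benefit(ABC):
    """Placeholder for whatever 'benefit' turns out to mean:
    physiological, spiritual, emotional, or social."""

    @abstractmethod
    def is_achieved(self, state) -> bool:
        """How does the mind know it has attained this benefit?"""

    @abstractmethod
    def evolve(self, experience) -> "Benefit":
        """How does what counts as a benefit change over time?"""

def resolve_conflict(a: Benefit, b: Benefit) -> Benefit:
    """How are two competing benefits reconciled?"""
    raise NotImplementedError("precisely what the primitive leaves vague")
```

The primitive is true in general, yet offers no guidance on how to fill in any of these bodies.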

Introspection considered perilous

The mind presents many internal roadblocks to a full exposition of what it does. Notably, it tends to adapt in exactly the wrong ways, so as to undermine any theories you may build about it. For example, once it is well known that humans are subject to certain psychological tricks and fallacies — e.g. perceptual biases or marketing tricks — those who become acquainted with this fact develop defences, and the tendencies may largely disappear. Critical thinking has always been the study of how to overcome such all-too-human foibles. And the mind appears limitless in its ability to adapt in opposition to any pattern claimed about it. As a consequence, all theories of cognition are only probabilistic, more applicable to those individuals who are not yet aware of their own limitations. Sometimes an individual may even choose to act contrarily, to spite any predictions made about them, as Dostoevsky famously pointed out:

There is one case, one only, when man may consciously, purposely, desire what is injurious to himself, what is stupid, very stupid — simply in order to have the right to desire for himself even what is very stupid and not to be bound by an obligation to desire only what is sensible. — Dostoevsky, Notes from the Underground
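
This adaptive, self-undermining quality can be caricatured in a few lines of Python (a toy diagonalisation, not a model of any real cognitive process): an agent that hears the prediction made about it can always act to falsify it.

```python
def contrarian(prediction: bool) -> bool:
    """An agent that, once told what it will do, does the opposite."""
    return not prediction

def prediction_holds(agent) -> bool:
    guess = True                    # any fixed, published guess...
    return agent(guess) == guess    # ...fails once the agent hears it

print(prediction_holds(contrarian))  # False: no prediction survives being known
```

Any theory the agent can read becomes one more input it can adapt against, which is why theories of cognition remain merely probabilistic.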

Another challenge to data collection is that much of what your mind does is invisible to introspection. This is unfortunate, since introspection and self-report are the primary tools we have for mapping cognitive functions. Even brain scans are only useful if you know how to interpret their readouts with respect to a background theory; having a cognitive theory is thus a necessary prerequisite to neuroimaging research. These theories are frequently laden with biases inherited from folk psychology and older schools of philosophy. For example, the popular split between episodic memory and semantic memory may be an echo of the traditional split between subjective and objective understandings of the world — a split that has since been called into question by postmodern semiotics. And philosopher Daniel Dennett spent much of his career trying to undo our preconceptions of a singular “controller” in the brain, a bias that our culture has carried with it via a long tradition of social and political identity formation.

What little information introspection gives us may also be misinterpreted. The timing and continuity of mental events are difficult to gauge. We may see as continuous and gradual what is actually a series of punctuated, erratic steps. For example, a concept like “sandwich” may be viewed as a singular, unified mental entity, yet the appearance of ambiguous cases like “hot dog” suggests that concepts are more a process of case-by-case evaluation than a static entity.
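
One way to picture that distinction is in code (a toy sketch, with made-up features and weights, not a claim about how the brain implements concepts): the static view stores a concept as a fixed set and checks membership, while the process view re-evaluates each candidate in context.

```python
# Static view: the concept is a fixed extension; membership is a lookup.
SANDWICHES = {"BLT", "club", "grilled cheese"}

def is_sandwich_static(item: str) -> bool:
    return item in SANDWICHES                 # "hot dog" simply falls outside

# Process view: the concept is a judgment made case by case,
# sensitive to features and context, with genuinely ambiguous outcomes.
def sandwich_score(features: dict, context: str = "") -> float:
    score = 0.0
    if features.get("filling_between_bread"):
        score += 0.4
    if features.get("two_separate_slices"):   # a hot dog bun is hinged
        score += 0.4
    if context == "deli_menu":                # context shifts the judgment
        score += 0.2
    return score

hot_dog = {"filling_between_bread": True, "two_separate_slices": False}
print(sandwich_score(hot_dog))  # 0.4: neither clearly in nor out
```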

Sluggish language

All understanding of the mind must take place within the constraints of the human mind. And since the result is ultimately expressed in a descriptive symbolic language, researchers are subject to the limitations of existing conceptual systems, and these are slow to change. Definitions of psychological concepts carry historical implications and connotations that are difficult to erase from the words we use (e.g. the word “unconscious” still harbours Freudian implications). Each English word has evolved in the context of certain assumptions regarding how it will be used, which give the word its meaning. For example, the definition of “memory” connotes that its purpose is to be an accurate description of the way the world is, as presented to the senses. This makes it impossible to talk about a type of memory that is relativistic or functional: a memory that is meant to serve a purpose, and is as much a fabrication as it is a record of events. There may simply not be the right words to describe the mind as it actually is, at least not until new paradigms have been laid out first.

What applies to language also applies to scientific theories. Normal science (as described by Kuhn) can only proceed on the basis of shared assumptions of what to focus on, and how to frame and interpret experiences. And since the progress of science depends on peer-review and therefore peer-approval, popular paradigms are slow to change by default. Scientists will usually research topics they suspect other scientists will find interesting, using terms their contemporaries will understand, and concepts they are familiar with. The act of changing fundamental concepts and assumptions is never the work of normal science itself; one must look outside it, to philosophy.

Combinatorial complexity

One last practical roadblock to a description of what the mind does is that nearly all events in an individual’s psyche have their own complicated history. The question “why did I think of an elephant just now?” can never have a general answer. This dependency on history is more true of the brain than of any other organ. The brain can be broadly seen as an organ of “memory”, i.e. of incorporating the past into the present. Although other organs also incorporate the past to a limited extent — e.g. muscles grow larger with strenuous exertion — the brain is perhaps the only one that makes history its primary subject matter.

Exploring such a stateful system is a complicated task, since to explain any individual thought or behaviour you must trace it through its unique causal pathways, and these may have left no breadcrumbs for you to follow. Small differences can have enormous consequences. Weight training may make the difference between lifting 50 pounds and lifting 100, but education can make the difference between building a bow and arrow and building an atomic bomb, even if the necessary equipment and resources are available in both cases. The words you say and the actions you take will vary greatly based on small differences in past experiences.
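
A toy numerical illustration of this sensitivity (using the logistic map, a standard textbook example of chaotic dynamics; the brain is of course nothing so simple): in a stateful system where each step feeds on the last, two histories differing by one part in a million end in completely unrelated states.

```python
def final_state(x0: float, steps: int = 60) -> float:
    """Iterate a chaotic map; each state depends on the whole history before it."""
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)  # logistic map in its chaotic regime
    return x

print(final_state(0.500000))  # a baseline "experience"...
print(final_state(0.500001))  # ...and a minutely different one: unrelated outcomes
```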

The list above is as comprehensive as I’m able to compile. As I can’t be sure that I’ve included everything, feel free to add suggestions in the comments.

Across all these examples, one thing remains constant: any theory of the mind must fit into, and be formulated according to, the limitations and quirks of human understanding. It may seem as if, to understand the mind, you must possess a nearly perfect mind yourself, which is a tall order, and unlikely to be satisfied by any flesh and blood researcher. Fortunately, this is not really a requirement. An awareness of these pitfalls, along with a willingness to compensate for them through the help of others with complementary strengths, has always been the best path forward for theoretical progress.

¹ Note: the desire to recreate the mind through technology (AI research) is also a personal bias.
