From Narrow to General AI: Series Outline
A Comprehensive List of the Obstacles on the Road to AGI (link): The goal of AGI research is simple: to understand the mind in enough detail to recreate it, or perhaps create something better. Developing AGI presents a mountain of challenges, and it is easy to lose hope of ever accomplishing it. Introspection can’t always be relied upon, and is full of paradoxes and apparent contradictions which, as a researcher, you must overcome. In addition to such philosophical quandaries, you are also called upon to explain moment-to-moment thinking, the broad array of fields and disciplines that humans engage in, the avenues of personal growth, and so on, and to do so in a way that is both inoffensive and understandable.
Section 1: Unconscious Machinery
The first part of this series analyses thinking from the introspective side and establishes the basic, invisible, and automatic mechanisms that drive higher-level mental actions.
- Guidelines for How to Begin: Looking for functional patterns in cognitive behaviour is tricky, since they act on the mind before we are aware of them, and so they are mostly invisible to introspection. Any full explanation of mind, however, must commit to a rules-based account, without leaning on the crutch of free will. This post ends by cataloguing the behaviours the mind performs automatically; i.e. those that are unconscious and uncontrollable.
- The Smallest Units of Introspection: Introspection is a slippery task, since studying the mind simultaneously changes it. Positing static mental structures like “concepts”, “plans”, “memories”, etc., where one sees only a chain of fluid thoughts, may be jumping the gun. This post establishes a more granular, evolving interpretation of mental events as loosely connected, actively motivated reactions to other thoughts and experiences.
- Constraints on the Formalization of AGI: Any theory of mind must be one that the mind itself can understand and learn. When it comes to introspection, the act of observing and learning about oneself implies that there are two parts of the mind at work at any moment: one seeing the other. Continuing this trend recursively creates an infinite regress of self-studying minds. To resolve this problem, the role of self-study must be taken on by whichever parts are needed at the time. The conclusion is that at least two learning systems are required for introspection.
- Small Islands of Consciousness: This post tackles one type of learning, namely “becoming aware of something”, which is closely tied to forming memories. Becoming aware is a momentary process, which changes the mind in a way directly related to its cause. The post discusses how the mind can become aware of things in itself, just as in the external world, and shows the implications of this on the experience of consciousness itself.
- A Layer of Self-Generated Experiences: Concrete thoughts (i.e. visual and auditory ones) are often rough recreations of past experiences; the most common examples are “memories”. In another case the mind recollects its own thoughts — e.g. remembering the voice in your head, or an imagined fantasy setting. By remembering your thoughts, you can compose them into new thoughts or overlay them on top of external experiences. Together they make up the mind’s self-generated model of reality.
- Motivations, Minus Feelings: Motivations are difficult to study since they are only known indirectly by their effects; they are otherwise invisible. One such effect is that they drive learning and mental change. Subjective feelings are an introspective reinterpretation of motivations; but since introspection is also a type of self-awareness, and thus self-learning, the perceived nature of feelings is suspect, as the feelings you see are shaped by what you “need” to see.
- Thinking is Motivated: This post connects motivations, concrete thinking, and planning into one theory. Instead of splitting world-modelling (truth) and planning (goals) into two stages where the first feeds into the second, the two are combined into one simpler step, where the mind automatically learns to recreate experiences that previously solved problems. For example, the thought of the word “food” is both a fact (name of item) and a plan (what to say if you’re hungry).
- Mental Tensions: Continuing from the last post, we show how “tensions” drive learning, particularly the act of “becoming aware of” something. You become aware of some content because it solved a problem for you. Later, when in similar circumstances, that “memory” reappears as both an expectation (what will happen) and an intention (what I’d like to happen). This leads to the conclusion that conscious awareness is driven by underlying motives.
- Anxiety and the Third Way: This post addresses how tensions, the negative component of motivations, are formed. Tensions include things like hunger or not getting what you want. Those that are acquired, as opposed to innate, are formed when an existing tension (of any kind) registers as “inescapable”, that is, when it repeats quickly without an intervening solution.
- Concepts, Abstractions, and Problem-Solving: We finally address the question of abstract concepts, and show how they are tightly connected to problem-solving. Namely, the facets of an abstract concept are represented as thought-solutions to an underlying problem. This reverses the usual way of modelling concepts — generally we think that people first learn concepts, then use those to solve problems. In the process we show how abstract concepts can be created in AGI by using only concrete thoughts.
- Machinery for Generating Insights: This post covers the other half of problem-solving, that of defining solutions. To solve a problem, you must have at some point learned what counts as a solution. You learn the latter during “moments of insight”, when your mind defines criteria by which the ongoing tension is inhibited from appearing again — e.g. heights lead to tension, but heights combined with being in a plane do not. This process, repeated across multiple circumstances, defines a template for a solution.
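The trigger-plus-inhibitor mechanism in the entry above can be sketched in code. This is a toy illustration of the idea, not the post's implementation; the `Tension` class and `learn_insight` method are hypothetical names.

```python
# Toy sketch: a tension fires on its trigger unless a learned
# solution-context (acquired during a "moment of insight") is present.
class Tension:
    def __init__(self, trigger):
        self.trigger = trigger
        self.inhibitors = set()  # contexts learned during moments of insight

    def learn_insight(self, context):
        # Record the criteria under which the tension is inhibited.
        self.inhibitors.add(frozenset(context))

    def fires(self, situation):
        if self.trigger not in situation:
            return False
        # Inhibited when any learned solution-context is contained
        # in the current situation.
        return not any(inh <= situation for inh in self.inhibitors)

fear = Tension("heights")
fear.learn_insight({"heights", "in-plane"})
print(fear.fires({"heights"}))              # heights alone: tension fires
print(fear.fires({"heights", "in-plane"}))  # solution template matches: no tension
```

Repeating `learn_insight` across varied circumstances would accumulate the set of contexts that together act as the "template for a solution".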
- The Self, Free Will, and Agency in AGI: Our limited understanding of our minds leaves room for “agency” and “free will”, which are difficult to explain away. This post narrows down that intuition — that humans have “agency” — to the tendency to direct focused mental efforts. It also defines where the consequent thoughts come from, and describes how your motivations drive your spontaneous thoughts.
Section 2: From Narrow AI to General AI
The second part of the series approaches the problem from the other side, discussing how modern AI can be updated to carry out the behaviours discussed so far.
- How An AGI Can Survive Outside the Lab: AI research provides an AI with a safe space, an explicit ontology, and easy problems to train on and solve. The real world is not so clear cut, and “truth” is difficult to discover. This post begins to address the difference between the lab and the world, and how objective world-modelling is an inefficient way to interact with a dangerous world. Following on from previous posts, an alternative is presented: thinking about what you need in a given context, not about what is “true”.
- From Narrow to General AI: (Quote) “If there is one thing that has hindered AGI development it’s the lack of understanding of what exactly a complete human mind entails. Human researchers are necessarily limited, and few would claim to understand the full breadth of human psychology. As a consequence there is a risk of reducing the targeted capabilities of an AGI to what the researcher him/herself understands and values.”
- Fluid Concepts are Necessary for General AI: Concepts, goals, and abilities are interconnected. A narrow AI cannot redefine how its framing concepts are applied, which means it is not robust to changes in hard-coded assumptions. E.g. if a narrow AI is hard-coded to only take in UTF characters, it will falter when faced with hand-written pen strokes; its concept of “character” cannot be extended without human intervention.
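The brittleness described in the entry above can be made concrete with a minimal sketch. The function below is hypothetical and illustrative; its point is only that a hard-coded framing concept fails outright on input types outside its ontology.

```python
# Illustrative sketch: a narrow classifier whose concept of "character"
# is hard-coded as a single UTF code point.
def classify_character(ch):
    if not (isinstance(ch, str) and len(ch) == 1):
        # The framing concept cannot stretch to new input types on its own.
        raise TypeError("expected a single UTF character")
    return "digit" if ch.isdigit() else "letter"

print(classify_character("7"))          # works within the fixed ontology

pen_strokes = [(0.1, 0.2), (0.4, 0.9)]  # hand-written input: outside the ontology
try:
    classify_character(pen_strokes)
except TypeError as err:
    print("failure:", err)              # the concept cannot extend itself
```

A human can rewrite the function to accept stroke data, but the system itself has no means of revising its own concept of “character”.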
- Rethinking Symbol-Grounding in AGI: The post starts from a basic model-based RL agent and discusses how its world-model can be improved by selectively focusing on the aspects of experience that are useful for planning. This leads to a new way of defining concepts: by their “affordances”, rather than by “facts”. Doing so grounds concepts in their utility to the observer.
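One way to picture the affordance-based definition in the entry above is the following sketch. This is our own toy formulation, not the post's implementation; the `AffordanceConcept` class and its methods are hypothetical names.

```python
from collections import defaultdict

# Minimal sketch: a "concept" is defined by its affordances -- the actions
# that have proven useful with it -- rather than by a list of objective features.
class AffordanceConcept:
    def __init__(self, name):
        self.name = name
        self.affordances = defaultdict(float)  # action -> running usefulness

    def observe_outcome(self, action, reward, lr=0.5):
        # Keep only the planning-relevant aspect of experience:
        # what the action achieved, tracked as a running average.
        self.affordances[action] += lr * (reward - self.affordances[action])

    def best_affordance(self):
        return max(self.affordances, key=self.affordances.get)

cup = AffordanceConcept("cup")
cup.observe_outcome("drink-from", reward=1.0)
cup.observe_outcome("throw", reward=-1.0)
print(cup.best_affordance())  # the concept is grounded in what it is good for
```

Nothing here stores features like shape or colour; "cup" is whatever the interactions make useful, which is the sense of grounding the post argues for.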
- The Impossibility of Teaching Basic Numeracy to AI: A case study that focuses on our concept of number — where it comes from and how we learn it. We show that a fundamental feature of numbers, namely that they can be “more than” or “less than” each other, can neither be hard-coded in an AI nor learned from objective definitions. The only viable alternative is to base the concept around useful interactions (affordances).
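The affordance-based alternative in the entry above can be illustrated with a sketch of how "more than" might be grounded in a useful interaction (one-to-one pairing) rather than a formal definition. The function below is a hypothetical illustration of that idea.

```python
# Sketch: grounding "more than" in an interaction -- pairing items
# one-to-one and checking for leftovers -- rather than in a definition.
def more_than(collection_a, collection_b):
    a, b = list(collection_a), list(collection_b)
    while a and b:      # the interaction: remove one item from each pile
        a.pop()
        b.pop()
    return bool(a)      # leftovers in A afford the judgement "A has more"

print(more_than(["x"] * 5, ["y"] * 3))  # pairing leaves items in A
print(more_than(["x"] * 2, ["y"] * 4))  # pairing exhausts A first
```

No numeral or counting rule is hard-coded; the comparison emerges from the usefulness of the pairing interaction itself.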
- Why the word “existence” can’t be learned by a classifier: Learning by association, aka conditional prediction, is the bread and butter of most ML techniques, including language acquisition. Yet certain words can’t be correlated with any specific experiences, since they can be read into any experience, e.g. “existence”, “now”, “me”, etc. To address this challenge we should look not for the causes of words, but for their utility.
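The failure of association described in the entry above can be demonstrated with a toy co-occurrence model. The data and function below are hypothetical; the point is that a content word picks out a specific feature, while a word like "exists" co-occurs with everything equally, so conditional prediction learns nothing distinctive about it.

```python
# Toy demonstration: learning word meaning by co-occurrence between
# sensory features and uttered words in shared episodes.
episodes = [
    ({"fur", "bark"},    {"dog", "exists"}),
    ({"scales", "swim"}, {"fish", "exists"}),
    ({"wheels", "horn"}, {"car", "exists"}),
]

def association(word, feature):
    # Conditional probability P(feature | word) over the episodes.
    both = sum(1 for feats, words in episodes
               if word in words and feature in feats)
    word_n = sum(1 for _, words in episodes if word in words)
    return both / word_n

print(association("dog", "fur"))     # perfect correlate: a learnable meaning
print(association("exists", "fur"))  # uniform across features: nothing stands out
```

Since "exists" predicts every feature equally, no amount of data distinguishes its referent, which is why the post redirects the question from causes to utility.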
- The paradox at the heart of AI-based identification: Determining the source of truth leads to contradictions: Theories of object recognition currently require that the source of truth is given by a human or labelled dataset. The entry discusses the paradox of creating an AI that can identify objects, including abstractions like “love” and “time”, from scratch and without external help. It outlines the questions that will occupy the next few posts.
- Pragmatics precedes semantics: Explainable AI as post-hoc rationalization: Explicit identification, rather than being a step in the planning process, is a parallel process of converting experiences into expressions; making it a type of communicative problem-solving. Humans first learn how to interact with experiences, and only later create explicit world models as after-the-fact systematizations. This makes explainable AI a type of post-hoc rationalization.
- AI research needs a drastic inversion of its epistemology: “Meaning” does not arise from features, but from motives: In this critical entry in the series, we discuss how identification is always tied to a problem-solution context. The usefulness of the act of identification is its meaning. This is demonstrated in metaphorical thinking, which humans find more natural than deriving explicit definitions. The needs of expression are the roots of explicit meaning, and drive all thinking about identity.
- Why AI has difficulty conceptualizing “time”: How to bridge the gap between AI and transcendental concepts: AIs have difficulty conceptualizing transcendental concepts like time, which have no sensory correlates. Building on the last post, this entry addresses one of the most difficult problems in identification: how you learn to identify abstractions like time from the ground up. All conceptualization has rough and imperfect beginnings, based around solving immediate problems. Time is shown to be the solution to problems like waiting, running late, etc.
- Autonomous identification (part 5): Identification is an ongoing process of invention: This is the final entry in the sub-series on autonomous identification. It discusses how identification is always a motivated, effortful process of equating two dissimilar things, and how even logical consistency is based on the utility of this correspondence. The motive with which you come to identify an experience affects what you discover, and it is not a freely chosen action. We finally portray concepts as active agents trying to solve identification problems on their own terms.