From Narrow to General AI: Series Outline

Including summaries and links

From Narrow To General AI
8 min read · Jul 19, 2023

Introduction

A Comprehensive List of the Obstacles on the Road to AGI (link): The goal of AGI research is simple: to understand the mind in enough detail to recreate it, or perhaps to create something better. Developing AGI presents a mountain of challenges, and it is easy to lose hope of ever accomplishing it. Introspection can’t always be relied upon, and it is full of paradoxes and apparent contradictions which, as a researcher, you must overcome. Beyond such philosophical quandaries, you are also called upon to explain moment-to-moment thinking, the broad array of fields and disciplines that humans engage in, the avenues of personal growth, and more, and to do so in a way that is both inoffensive and understandable.

Section 1: Unconscious Machinery

The first part of this series analyses thinking from the introspective side and establishes the basic, invisible, and automatic mechanisms that drive higher-level mental actions.

  1. Guidelines for How to Begin: Looking for functional patterns in cognitive behaviour is tricky, since they act on the mind before we are aware of them and so are mostly invisible to introspection. Any full explanation of the mind, however, must embrace a rules-based account of outcomes, without leaning on the crutch of free will. The post ends by cataloguing all the behaviours the mind must perform, i.e. those that are automatic, unconscious, and uncontrollable.
  2. The Smallest Units of Introspection: Introspection is a slippery task, since studying the mind simultaneously changes it. Positing static mental structures like “concepts”, “plans”, “memories”, etc., where one sees only a chain of fluid thoughts, may be jumping the gun. This post establishes a more granular, evolving interpretation of mental events as loosely connected, actively motivated reactions to other thoughts and experiences.
  3. Constraints on the Formalization of AGI: Any theory of mind must be capable of being understood and learned by that same mind. When it comes to introspection, the act of observing and learning about oneself implies that there are two parts of the mind at any given moment, one observing the other. Continuing this trend recursively creates an infinite regress of self-studying minds. To resolve this problem, the role of self-study must be taken on by different parts of the mind as needed. The conclusion is that at least two learning systems are required for introspection.
  4. Small Islands of Consciousness: This post tackles one type of learning, namely “becoming aware of something”, which is closely tied to forming memories. Becoming aware is a momentary process, which changes the mind in a way directly related to its cause. The post discusses how the mind can become aware of things in itself, just as in the external world, and shows the implications of this for the experience of consciousness itself.
  5. A Layer of Self-Generated Experiences: Concrete thoughts (i.e. visual and auditory ones) are often rough recreations of past experiences; the most common examples are “memories”. In another case, the mind recollects its own thoughts, e.g. remembering the voice in your head or an imagined fantasy setting. By remembering your thoughts, you can compose them into new thoughts or overlay them on top of external experiences. Together they make up the mind’s self-generated model of reality.
  6. Motivations, Minus Feelings: Motivations are difficult to study since they are known only indirectly by their effects; they are otherwise invisible. One such effect is that they drive learning and mental change. Subjective feelings are an introspective reinterpretation of motivations; but since introspection is also a type of self-awareness, and thus self-learning, the perceived nature of feelings is suspect: the feelings you see are shaped by what you “need” to see.
  7. Thinking is Motivated: This post connects motivations, concrete thinking, and planning into one theory. Instead of splitting world-modelling (truth) and planning (goals) into two stages where the first feeds into the second, the two are combined into one simpler step, where the mind automatically learns to recreate experiences that previously solved problems. For example, the thought of the word “food” is both a fact (name of item) and a plan (what to say if you’re hungry).
  8. Mental Tensions: Continuing from the last post, we show how “tensions” drive learning, particularly the act of “becoming aware of” something. You become aware of some content because it solved a problem for you. Later, in similar circumstances, that “memory” reappears as both an expectation (what will happen) and an intention (what I’d like to happen). This leads to the conclusion that conscious awareness is driven by underlying motives.
  9. Anxiety and the Third Way: This post addresses how tensions, the negative component of motivations, are formed. Tensions include things like hunger or not getting what you want. Those that are acquired, as opposed to innate, form when an existing tension (of any kind) registers as “inescapable”, that is, when it repeats quickly without an intervening solution.
  10. Concepts, Abstractions, and Problem-Solving: We finally address the question of abstract concepts, and show how tightly they are connected to problem-solving. Namely, the facets of an abstract concept are represented as thought-solutions to an underlying problem. This reverses the usual way of modelling concepts: generally we assume that people first learn concepts, then use them to solve problems. In the process we show how abstract concepts can be created in an AGI using only concrete thoughts.
  11. Machinery for Generating Insights: This post covers the other half of problem-solving: defining solutions. To solve a problem, you must at some point have learned what counts as a solution. You learn this during “moments of insight”, when your mind defines criteria under which the ongoing tension is inhibited from appearing again; e.g. heights lead to tension, but heights experienced from inside a plane do not. This process, repeated across multiple circumstances, defines a template for a solution (a minimal sketch of this mechanism follows the list).
  12. The Self, Free Will, and Agency in AGI: Our limited understanding of our own minds leaves room for “agency” and “free will”, which are difficult to explain away. This post narrows that intuition (that humans have “agency”) down to the tendency to direct focused mental effort. It also defines where the consequent thoughts come from, and describes how your motivations drive your spontaneous thoughts.
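The posts above are conceptual, but the tension-and-insight mechanism described in entries 8, 9, and 11 can be made concrete. The following is a minimal, hypothetical sketch; the class, method names, and data structures are invented for illustration and are not taken from the posts themselves. It shows the core idea: an “insight” records the context features under which a tension counts as solved, and the tension is thereafter inhibited whenever those features recur.

```python
# Minimal, hypothetical sketch of the tension/insight mechanism (entries 8, 9, 11).
# All names and structures are invented for illustration.

class TensionSystem:
    def __init__(self):
        # Each tension maps to a list of learned "insights": sets of context
        # features under which the tension is inhibited from appearing.
        self.inhibitors: dict[str, list[set[str]]] = {}

    def feel(self, tension: str, context: set[str]) -> bool:
        """Return True if the tension actually surfaces in this context."""
        for criteria in self.inhibitors.get(tension, []):
            if criteria <= context:   # all inhibiting features are present
                return False          # e.g. "heights" plus "in a plane"
        return True

    def insight(self, tension: str, context: set[str]) -> None:
        """Record a moment of insight: this context counts as a solution,
        so the tension should no longer surface when the context recurs."""
        self.inhibitors.setdefault(tension, []).append(set(context))


# Usage: "heights" raises a tension until an insight marks "in a plane"
# as a context in which the tension is resolved.
mind = TensionSystem()
print(mind.feel("heights", {"heights", "on a cliff"}))   # True: tension felt
mind.insight("heights", {"heights", "in a plane"})
print(mind.feel("heights", {"heights", "in a plane"}))   # False: inhibited
print(mind.feel("heights", {"heights", "on a cliff"}))   # True: still felt
```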

Section 2: From Narrow AI to General AI

The second part of the series approaches the problem from the other side, discussing how modern AI can be updated to carry out the behaviours described so far.

  1. How An AGI Can Survive Outside the Lab: AI research provides AIs with a safe space, an explicit ontology, and easy problems to train on and solve. The real world is not so clear-cut, and “truth” is difficult to discover. This post begins to address the difference between the lab and the world, and how objective world-modelling is an inefficient way to interact with a dangerous environment. Following on from previous posts, an alternative is presented: thinking about what you need in a given context, not about what is “true”.
  2. From Narrow to General AI: (Quote) “If there is one thing that has hindered AGI development it’s the lack of understanding of what exactly a complete human mind entails. Human researchers are necessarily limited, and few would claim to understand the full breadth of human psychology. As a consequence there is a risk of reducing the targeted capabilities of an AGI to what the researcher him/herself understands and values.”
  3. Fluid Concepts are Necessary for General AI: Concepts, goals, and abilities are interconnected. A narrow AI cannot define how its framing concepts are applied, which means they are not robust to changes in hard-coded assumptions. E.g. if a narrow AI is hard-coded to accept only UTF characters, it will falter when faced with hand-written pen strokes; its concept of “character” cannot be extended without human intervention.
  4. Rethinking Symbol-Grounding in AGI: The post starts with a basic model-based RL agent and discusses how its world-model can be improved by selectively focusing on the aspects of experience that are useful for planning. This leads to a new way of defining concepts: by their “affordances” rather than by “facts”. Doing so grounds concepts in their utility to the observer (a minimal sketch of the distinction follows the list).
  5. The Impossibility of Teaching Basic Numeracy to AI: A case study that focuses on our concept of number: where it comes from and how we learn it. We show that a fundamental feature of numbers, namely that they can be “more than” or “less than” each other, can neither be hard-coded into an AI nor learned from objective definitions. The only viable alternative is to base the concept around useful interactions (affordances).
  6. Why the word “existence” can’t be learned by a classifier: Learning by association, a.k.a. conditional prediction, is the bread and butter of most ML techniques, including language acquisition. Yet certain words can’t be correlated with any specific experiences, since they can be read into any experience, e.g. “existence”, “now”, “me”. To address this challenge we should look not for the causes of words, but for their utility.
  7. The paradox at the heart of AI-based identification: Determining the source of truth leads to contradictions: Current theories of object recognition require that the source of truth be given by a human or a labelled dataset. The entry discusses the paradox of creating an AI that can identify objects, including abstractions like “love” and “time”, from scratch and without external help. It outlines the questions that will occupy the next few posts.
  8. Pragmatics precedes semantics: Explainable AI as post-hoc rationalization: Explicit identification, rather than being a step in the planning process, is a parallel process of converting experiences into expressions, making it a type of communicative problem-solving. Humans first learn how to interact with experiences, and only later create explicit world models as after-the-fact systematizations. This makes explainable AI a type of post-hoc rationalization.
  9. AI research needs a drastic inversion of its epistemology: “Meaning” does not arise from features, but from motives: A critical entry in the series; we discuss how identification is always tied to a problem-solution context. The usefulness of the act of identification is its meaning. This is demonstrated in metaphorical thinking, which humans find more natural than deriving explicit definitions. The needs of expression are the roots of explicit meaning, and they drive all thinking about identity.
  10. Why AI has difficulty conceptualizing “time”: How to bridge the gap between AI and transcendental concepts: AIs have difficulty conceptualizing transcendental concepts like time, which have no sensory correlates. Building on the last post, this post addresses one of the most difficult problems in identification: how you learn to identify abstractions like time from the ground up. All conceptualization has rough and imperfect beginnings, based around solving immediate problems. Time is shown to be the solution to problems like waiting, running late, etc.
  11. Creativity and innovation arise from motives: How the mind can invent brand-new concepts: Human creativity involves the invention and expression of novel concepts. You often create them from scratch, as necessity drives their invention. In fact, all concepts, as they exist in your mind, are ones you invent for yourself, even if others hold similar ones. Thus concept acquisition and concept invention are the same process.
  12. Interpretation is not a free choice: Conscious identification is always an act of interpretation, that is, of attaching a thought to an input that is not the same as the original. This means it is motivated by some reason, which draws attention to the object. You don’t merely identify things around you arbitrarily or at will. You are motivated to identify something, which makes the act subjective and coloured by the motive.
  13. [Logic and interpretation:] Logical thinking of any kind requires that you first identify the terms and predicates in your experiences (e.g. “Socrates”, “mortal”), and since identification is motivated, the foundations of logic are also motivated. We show that logical inconsistency is an inconsistency of goals.
  14. [AI and classical objectivity:] This post tackles the difficulty of getting an AI to learn to identify objects based on the features of an instance, and how this leads to a vicious cycle of abstractions built on abstractions.
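Entries 4 and 5 above contrast two ways of grounding a concept: by objective facts versus by affordances, i.e. the useful interactions the concept makes available. The sketch below illustrates that distinction only; the class, field, and affordance names are invented for illustration and are not an implementation from the posts.

```python
# Hypothetical sketch contrasting fact-based and affordance-based concepts,
# as discussed in entries 4 ("Rethinking Symbol-Grounding in AGI") and
# 5 ("The Impossibility of Teaching Basic Numeracy to AI"). Names are invented.

from dataclasses import dataclass, field
from typing import Callable

# Fact-based: a concept is a bundle of objective properties. The agent can
# report them, but they say nothing about what the concept is *for*.
fact_based_three = {"symbol": "3", "type": "integer", "successor_of": "2"}

# Affordance-based: a concept is defined by the interactions it makes
# available to the agent in a problem context.
@dataclass
class Concept:
    name: str
    affordances: dict[str, Callable] = field(default_factory=dict)

three = Concept("three", affordances={
    # "More than" / "less than" grounded as usable comparisons,
    # not as facts to be memorized.
    "enough_for": lambda need: 3 >= need,      # can three items cover this need?
    "count_out":  lambda items: items[:3],     # take three of something
})

# Usage: the concept is exercised through its affordances in a context.
print(three.affordances["enough_for"](2))                    # True
print(three.affordances["count_out"](["a", "b", "c", "d"]))  # ['a', 'b', 'c']
```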
