From Narrow to General AI

How assumptions and constraints define narrow AI

6 min read · Jul 8, 2023

This is the fifteenth post in a series on AGI. See the previous post here. See a list of all posts here.

The first 14 posts focused on basic, automatic rules that describe the foundational units of cognition. At this point in the series, we turn to the practical implementation of AGI.

Broad AI.

If there is one thing that has hindered AGI development, it’s the lack of understanding of what exactly a complete human mind entails. Human researchers are necessarily limited, and few would claim to understand the full breadth of human psychology. As a consequence, there is a risk of reducing the targeted capabilities of an AGI to what the researchers themselves understand and value. For example, a mathematician may easily see the value of an AGI that can quickly solve complex polynomial equations, but may not understand why anyone would want to resolve a complex philosophical quandary. They may read a treatise by Kant or Hegel and see in it only an irrational and quirky waste of time¹. The AGI that such a person creates will either ignore these aspects of human experience or reduce them to their behaviourist representations, in the same way that LLMs model language based on what they predict the average person would say, rather than what an agent itself would want to say.

Everyone has a different image of what an ideal human being is. It is tempting to project that ideal into your theory of AGI without realizing that it is not representative of any real mind, especially if scant few humans are ever able to attain that ideal. On realizing that humans can’t meet your ideal, a second temptation pops up: to say that such a model is beyond AGI, that it is an ASI (Artificial Super-Intelligence). This is its own trap: if you don’t know what the average human mind does, how can you decide what is involved in going beyond it? The only option in such a case is to resort to measuring external behaviour against some benchmark, e.g. how many polynomials it can solve per hour.

Such a benchmark inevitably embeds the values of its creator. A benchmark by definition must define “better” and “worse” performance, and therefore what counts as ideal behaviour. These ideals may be uncontroversial and universally accepted, but each one excludes or ignores some other set of values. Even a simple classifier implicitly encodes values. It hard-codes what the focus of the activity is (objective labelling of images that matches some common designation) and ignores, say, aesthetic appreciation or fantastical storytelling based on those images. Even within its own domain, its architecture assumes that there is always a single, objectively correct label for any given image, and that no disagreement or ambiguity exists: should you label a picture of a human as an “animal” or not?
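
To make this concrete, here is a minimal sketch of such a classifier. The label set, feature size, and weights below are hypothetical stand-ins rather than any particular system, but the structural commitments are typical:

```python
import numpy as np

# The ontology is fixed at design time: every image must map onto exactly
# one of these labels, and nothing outside this list can ever be expressed.
LABELS = ["cat", "dog", "human", "car"]  # hypothetical label set

def classify(image_features: np.ndarray, weights: np.ndarray) -> str:
    """A linear classifier: score each fixed label, then softmax and argmax.

    The argmax step hard-codes the assumption that there is always a single,
    objectively correct label. Ambiguity ("is a human an animal?") and
    alternative readings (aesthetic, narrative) simply cannot be represented.
    """
    scores = image_features @ weights      # one score per label
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                   # softmax over the fixed label set
    return LABELS[int(np.argmax(probs))]   # exactly one answer, always

# Example with random stand-in data: 8 input features, 4 labels.
rng = np.random.default_rng(0)
print(classify(rng.normal(size=8), rng.normal(size=(8, 4))))
```

The point is not the particular code, but that the ontology (the label list) and the decision rule (always pick one winner) are frozen before the system ever sees an image.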

Narrow AIs are made narrow by their architecture and design. Each such AI has a fixed goal, a purpose, and its design encodes that purpose. It also encodes a solution space, an ontology, and the developer’s assumptions about both. All of these become part of the unchanging structure of the AI.

Let’s make this concrete by looking at how modern Large Language Models (LLMs, such as GPT) embed implicit assumptions into their architectures. Such assumptions act as constraints on the LLM’s behaviour and, ultimately, on how it learns. There are many, so we’ll subdivide them into categories; after each category, a short code sketch illustrates how its assumptions can end up baked into an implementation.

Constraints of interpretation

  • The squiggles and shapes (letters) should be perceived as a fixed set of Unicode characters, not as arbitrary pen strokes or coloured pixels.
  • The LLM should pay attention to the letter characters, and interact with them. They cannot be ignored.
  • These characters form sequences which should be perceived in order, left-to-right or otherwise².
  • The sequences should be grouped into words, morphemes, or phrases.
  • Only some combinations of characters are correct or useful³.
  • The characters and symbols are ends in themselves — rather than being useful in relation to the external world or to any thoughts they may conjure up.
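
Here is a rough sketch of how these interpretation constraints get fixed in code before any learning begins. The character set and mapping below are hypothetical, but any real tokenizer makes equivalent commitments:

```python
# A minimal, hypothetical tokenizer. Every choice below is an assumption
# settled by the developer, not something the model can revisit.
VOCAB = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz ,.")}
UNK = len(VOCAB)  # anything outside the fixed character set collapses to "unknown"

def tokenize(text: str) -> list[int]:
    # 1. Input is a string of characters, never pen strokes or pixels.
    # 2. Every character is attended to; none can be ignored.
    # 3. Characters are read strictly in sequence order.
    # 4. Only symbols in the fixed vocabulary are representable at all.
    return [VOCAB.get(ch, UNK) for ch in text.lower()]

print(tokenize("Hello, world."))  # the model only ever sees these integer IDs
```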

Constraints on actions

  • A response is required from the LLM, such as a word prediction, a continuation, or an answer to a question.
  • Not all responses are equal; there are better and worse ways to respond.
  • The LLM should keep training as long as it is able to improve its responses, without stopping of its own accord because it is tired or bored.
  • The best way for the LLM to respond is to closely match a given distribution of responses that form a dataset. It shouldn’t invent its own words, or answer outside the expectations of the dataset, e.g. by staying silent or using expressive dance.
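
In practice these action constraints are expressed as a loss function and a training schedule. The sketch below is a toy: the vocabulary size, weights, and dataset are made-up stand-ins, but the shape of the commitment is the familiar one:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB_SIZE = 50
weights = rng.normal(size=(VOCAB_SIZE, VOCAB_SIZE)) * 0.01  # a toy "model"

def next_token_loss(context_token: int, target_token: int) -> float:
    """Cross-entropy of the model's prediction against the dataset's next token.

    The loss itself encodes the constraints above: a response (a probability
    for every vocabulary item) is always required, responses are ranked from
    better to worse, and "best" means matching the dataset's distribution --
    never silence, never an invented word.
    """
    logits = weights[context_token]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return -np.log(probs[target_token])

# Training runs for as long as the schedule dictates, not until the model
# decides it has had enough: a fixed number of steps, or until the loss
# stops improving.
dataset = [(3, 17), (17, 42), (42, 8)]  # hypothetical (context, next-token) pairs
print(sum(next_token_loss(c, t) for c, t in dataset) / len(dataset))
```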

Semiotic assumptions

  • Groups of these characters represent something meaningful. That meaning exists in a separate representation space outside the characters themselves.
  • The meaning of a word in a given context is singular. Two LLMs given one and the same context are not expected to arrive at distinct, equally valid interpretations of it. Any divergence is assumed to result from incomplete or differing training rather than personal preference.
  • Optionally, words can be combined into higher-level concepts, or grouped into a hierarchy of categories.
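
These semiotic assumptions also have a standard implementation shape: meaning is assigned to vectors in a representation space separate from the characters themselves. The sketch below is hypothetical and heavily simplified (real models learn their embeddings and condition on context through attention), but the underlying commitment is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["bank", "river", "money", "the"]  # hypothetical vocabulary
EMBED_DIM = 16

# Meaning lives in a separate space: each word is a point in a vector space,
# not anything intrinsic to its letters.
embedding_table = rng.normal(size=(len(VOCAB), EMBED_DIM))

def meaning_of(word: str, context_vector: np.ndarray) -> np.ndarray:
    """Return the single vector the model treats as the word's meaning here.

    Given the same weights and the same context, the result is deterministic:
    two copies of the model cannot "prefer" different but equally valid
    readings. Any divergence can only come from different training.
    """
    base = embedding_table[VOCAB.index(word)]
    return base + 0.1 * context_vector  # context shifts the meaning, but never splits it

print(meaning_of("bank", rng.normal(size=EMBED_DIM))[:4])
```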

The list above is only partial. It doesn’t include pre-processing like stemming, lemmatization, and removing stop words and punctuation. What it does show is the large number of assumptions built into any given narrow AI. We use such built-in assumptions because they are the only way we know to make the AI productive. In the process we also constrain its abilities to those assumptions.
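
For illustration, here is a hypothetical pre-processing pipeline of that kind. Each step encodes a further assumption about what can safely be discarded:

```python
import string

STOP_WORDS = {"the", "a", "an", "is", "of", "and"}  # assumed to carry no meaning

def preprocess(text: str) -> list[str]:
    text = text.lower()                                                # case is assumed irrelevant
    text = text.translate(str.maketrans("", "", string.punctuation))  # punctuation too
    words = [w for w in text.split() if w not in STOP_WORDS]           # stop words discarded
    # A crude stand-in for stemming: collapse inflected forms to a shared root.
    return [w[:-3] if w.endswith("ing") else w for w in words]

print(preprocess("The cat is sitting on the mat."))  # ['cat', 'sitt', 'on', 'mat']
```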

By contrast, for humans none of the assumptions listed above are built in or given; they are all learned. A quick review of the list shows that, for each of them, multiple counter-examples exist.

Imagine an AI that could generate the above constraints without needing to have them programmed by a developer. It would learn of its own accord that it should read words in a linear series, or that it should relate a word to other words as part of its socially agreed-upon definition. Such an AI could develop its own goals and approaches, solve a broader range of language problems, and handle exceptional cases, many of which a programmer may not have thought of. For example, it could learn to translate a new, unknown language from scattered fragments, as scholars did when they deciphered Egyptian hieroglyphs with the help of the Rosetta Stone.

The goal of this post and the ones that follow is to describe this alternative path: how an AI can transition from narrow to general by gaining the ability to create its own assumptions and goals. This capacity is a necessary and significant part of that transition, so it is worth trying to figure out how it can be achieved. What we’ll discover is that, in doing so, the AI will also be able to create its own ontology. That is, it will be able to derive working definitions for abstract concepts like number, optimize, or conversation, and do so from scratch, without explicit prompting or specialized architecture. This approach has many benefits: if you don’t hard-code the concept of number to a fixed set of functions, the AI is able to invent new ways of working with numbers, such as irrational, non-monotonic, complex, or imaginary numbers.

This approach brings with it some philosophical quandaries. If, say, you don’t hard-code what truth means or how to determine it, the AI will have to invent its own approach to establishing truth, which may not be to your liking (i.e. may not match your values). This is a creative space we allow it. Such an approach has historically proven fruitful: the notion of truth has been redefined multiple times throughout the history of logic and epistemology.

In the next post we will break down what exactly an “assumption” is when applied to an AI, and show that assumptions, constraints, goals, and concepts are all facets of the same core mental function.

Next post: Fluid Concepts Are Necessary for General AI

¹ Hegel’s Science of Logic ultimately equates opposites like being and nothing to each other. This may seem self-contradictory, and is unconventional to say the least.

² A Natural Language Processor doesn’t realize that it is reading left-to-right; it perceives letter sequences in the same way we move through time.

³ For comparison, numbers don’t behave that way; you can combine them in any way you like.
