Fluid Concepts Are Necessary for General AI

Narrow AIs are bounded by their hard-coded ontologies

From Narrow To General AI
8 min read · Jul 12, 2023

This post is the sixteenth in a series on AGI. You can read the previous post here. See a list of all posts here.

In the previous post we discussed how narrow AIs are defined by built-in assumptions about how they approach problems. We explored the case of Large Language Models (LLMs) and the constraints around language processing that are implicit in their architectures. An LLM assumes, for example, that:

  • Letter characters are important and the LLM should interact with them.
  • The LLM should react to these characters with its own letter sequences.
  • The meaning or usage of a word in a context is singular.
  • Not all responses are equal; there are better and worse ways to respond.
  • The best way for the LLM to respond is to closely match a given distribution of responses that form a dataset (a toy sketch of this last assumption follows the list).
  • and so on.
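
To make that last assumption concrete, here is a minimal toy sketch, in PyTorch, of the kind of training step in which it is baked in. Everything here (TinyLM, the vocabulary size, the recurrent layer) is a placeholder rather than a real LLM architecture; the point is only that the goal, matching the next character of a given corpus, is fixed in a couple of lines before any learning begins.

```python
import torch
import torch.nn as nn

# Toy stand-in for a language model. Real LLMs are transformers, not GRUs;
# the names and sizes here are illustrative assumptions only.
vocab_size = 128   # assumption: text is a sequence of known characters
embed_dim = 32

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.head = nn.Linear(embed_dim, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)
        h, _ = self.rnn(x)
        return self.head(h)  # logits over the *fixed* character set

model = TinyLM()
loss_fn = nn.CrossEntropyLoss()  # "success" is defined once, right here

tokens = torch.randint(0, vocab_size, (4, 16))  # stand-in for a text corpus
logits = model(tokens[:, :-1])

# The only goal the model can ever pursue: predict the next character of the
# given corpus. The assumptions listed above are implicit in these two lines.
loss = loss_fn(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
loss.backward()
```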

There is a curious connection between these assumptions and another well-known feature of cognition. Taking the listed assumptions in turn, each can be interpreted in four different ways:

  1. As an assumption that encodes the processes which are necessary or useful, given the problem domain.
  2. As a constraint on what is learned, and how it is learned; solving those problems in that specific way is literally the only thing the AI can do.
  3. As a shortcut or heuristic; the AI can cut through some foundational learning and jump right to solving the problem.
  4. And as an abstract concept.

The first three are self-evident; the last requires a bit more clarification. Each of the capabilities mentioned in the previous post can be thought of as embodying the functional side of an abstract concept. Together they form the implicit ontology of the AI. For example:

  • That letter characters are important and should be interacted with implicitly encodes the concept of “written language”.
  • That the LLM should react to these characters by producing its own words encodes the concept of “responding”.
  • That characters should be grouped into recurring patterns encodes the concepts of “words” or “morphemes” depending on the division.
  • That only some combinations of letters are correct encodes the concept of “vocabulary” or “grammar”.
  • That the meaning or usage of a word in a context is singular encodes the concept of a “definition”.
  • That the LLM should keep training as long as it is able to improve its responses encodes the concept of “optimization”.
  • That the agent should respond in the predefined format (e.g. words, not dance) encodes the concept of “appropriateness”.
  • That the words point to a space of meaning that is distinct from the words themselves encodes the concept of a “referent”, and more generally “semiotics”.

You can see that the above hard-coded assumptions — or capabilities, or constraints — are each closely tied to a concept. Together these comprise the ontology of the problem domain. However, the ontology is only implicit. These are not full “concepts” in the cognitive sense of the term. In fact, they can’t become full concepts in narrow AI — and that is the point. A narrow AI is not so much defined by the ontology available to it as by the ontology just outside its domain, which acts like a mould, shaping the learning within.

Every concept has implications and consequences which manifest when it is applied to some pragmatic use case. For example, the concept of “symbol” involves interpreting variable pen strokes into a finite set of entities (e.g. different pen strokes that all imply the letter “A”). The assumptions listed in the previous article are examples of how the relevant concept is usefully applied in that context. Narrow AIs try to build as many of these applications into their architecture as possible.
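
As an illustration (not a description of any particular system), here is one way the concept of “symbol” gets applied in code before a model ever sees the text: a hand-written normalization step that collapses many visually distinct glyphs into a single canonical entity. The function name and the normalization choices are assumptions made for this sketch.

```python
import unicodedata

# One hard-coded application of the "symbol" concept: every variant form of a
# letter is collapsed to a single entity. The concept lives in this function,
# written by a human, not in whatever model consumes its output.
def to_canonical_symbols(text: str) -> list:
    symbols = []
    for ch in text:
        # Strip accents and width variants, then fold case: "Á", "á", "Ａ" -> "a"
        decomposed = unicodedata.normalize("NFKD", ch)
        base = "".join(c for c in decomposed if not unicodedata.combining(c))
        symbols.append(base.lower())
    return symbols

print(to_canonical_symbols("Ápple"))  # ['a', 'p', 'p', 'l', 'e']
```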

However, these concepts aren’t really in the AI itself; they live outside it. They are applied piecemeal, to the best of the abilities of the humans who built the system, and they work as long as the developers’ own assumptions hold¹. They are at work both inside and outside the training/inference loop — they direct the design of the architecture itself and the tuning of its hyper-parameters. In a way, the humans building the AI are also part of the AI’s learning loop, constantly rewriting and adjusting it until it meets their needs.

Unfortunately, “living” concepts shift and adapt as conditions change. Consider how the concepts of property or murder must adapt fluidly to unusual circumstances like wartime. No concept can ever be so perfectly and clearly defined as to account for every edge case and contingency.

So when unexpected cases arise, a narrow AI can’t engage with its boundary concepts and adapt them — it is corralled within these assumptions as within a fence. It must accept their concrete applications as given, in the same way that you as a human can’t step outside your body and senses². So an LLM will struggle when circumstances change in a way its designers never intended, e.g. when:

  • it needs to solve cryptograms, where each visible letter stands for a different letter and must be substituted back before the sentence can be understood,
  • it encounters words that are written phonetically, like “peetsah” (pizza),
  • it must read a new language with unfamiliar structure, such as logograms (Mandarin), connected scripts (Arabic), phonetic alphabets (NATO), or any arbitrary invented script (Tolkien’s Elvish)³,
  • it must read words in which all but the first and last letters are jumbled — “typoglycemia”, which humans can still read — or in which letters are rearranged according to some contrived pattern, such as pig latin,
  • it must read letters that are placed together in odd arrangements, such as in company logos.

These and other cases are relatively simple for you to address, but would be impossible if the assumptions hard-coded into LLMs were hard-coded into humans as well. As you solve such problems, your concepts naturally expand to include them, whereas in an LLM each of the above situations would have to be addressed by a new, manually programmed rule or function. In doing so, the trainer extends what language means to include arbitrary manipulations of character order, as in pig latin, or extends ASCII to also include Mandarin logograms (as UTF does).
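
A hypothetical example of such a manually programmed rule: a small pre-processing function that undoes the simplest (single-consonant) form of pig latin so that the downstream model sees ordinary English. The function is illustrative only; the point is that every new deviation from the designers’ assumptions needs another patch like this, written by a human.

```python
# Hand-written patch of the kind described above. It handles exactly one
# contrived deviation (basic pig latin) and nothing else; cryptograms,
# phonetic spellings, logos, etc. would each need their own such rule.
def undo_pig_latin(sentence: str) -> str:
    restored = []
    for word in sentence.split():
        if word.endswith("ay") and len(word) > 2:
            body = word[:-2]                       # "igpay" -> "igp"
            restored.append(body[-1] + body[:-1])  # "igp" -> "pig"
        else:
            restored.append(word)
    return " ".join(restored)

print(undo_pig_latin("igpay atinlay"))  # "pig latin"
```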

The LLM couldn’t do this of its own accord. The assumptions and motivations live outside the ontological “fence”. Like train tracks, its design directs its learning by defining the allowed domain; and like train tracks, these constraints make narrow AI productive while at the same time keeping it “narrow”.

If AI is ever to be general, the assumptions and constraints that narrow AI builds into its processes must be devised or discovered by the AI itself, without requiring direct intervention in its architecture. Once learned, they would drive problem-solving within that domain, and the AI would remain adaptable to the “long tail” of exceptions. If a narrow AI could know that its goal was to match some dataset’s distribution, it could also, hypothetically, write code to accomplish it. If inference results deviated from expectations, it would no longer need a human to manually tune its hyper-parameters, since the agent would always know that it is failing to achieve its goal — a goal it itself created. But how can an AI make its own rules regarding such scenarios?
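
As a thought experiment only, here is the smallest possible sketch of what that might look like: an agent that carries its own success criterion and reacts when it fails to meet it. Every name in it (evaluate_fit, retune, TOLERANCE) is hypothetical, a stand-in for mechanisms this series has not yet described.

```python
import random

TOLERANCE = 0.05  # hypothetical: how much deviation the agent tolerates

def evaluate_fit(params, held_out_data):
    # Stand-in for measuring how far inference deviates from expectations.
    return random.random() * 0.2

def retune(params):
    # Stand-in for the adjustments a human engineer would otherwise make.
    return {**params, "learning_rate": params["learning_rate"] * 0.5}

params = {"learning_rate": 1e-3}
held_out = None  # placeholder dataset

# Because the goal lives inside the agent, it can notice that it is failing
# and respond, without a human rewriting it from outside the loop.
for _ in range(20):  # bounded so the sketch always halts
    if evaluate_fit(params, held_out) <= TOLERANCE:
        break
    params = retune(params)
```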

The best way to answer this question is to look at how you acquire or invent the above rules. You weren’t hard-coded with a particular perception rule or action pattern, such as reading from left to right, or matching a word against a fixed vocabulary list. You created those assumptions out of a deeper motivation. And the motivation was simple: you wanted to understand what was written.

When you see a logo with an odd arrangement of letters and spot a letter among them, you might suspect there is something to read there, yet be unable to decipher what it is. The need or drive to correctly interpret written symbols is acquired in childhood, when parents or teachers asked you to read a text and you were faced with indecipherable letters and their disapproving faces. It now leads you to try various configurations until you identify a word-like series, or you give up (also a type of solution)⁴. If you do manage to solve it by discovering a word in the jumble, the new action pattern is automatically elicited the next time you try to read something similar. Your concept of “reading” has now expanded in scope: it includes non-linear text.
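
The “try configurations until something word-like appears, or give up” pattern can at least be sketched mechanically, as below. The word list and the give-up budget are placeholders, and the sketch is not a claim about how human reading actually works; it only illustrates that both outcomes, deciphering and giving up, count as solutions.

```python
from itertools import permutations

KNOWN_WORDS = {"logo", "read", "word"}  # placeholder vocabulary
GIVE_UP_AFTER = 5000                    # placeholder effort budget

def try_to_read(jumbled_letters: str):
    for attempt, perm in enumerate(permutations(jumbled_letters)):
        if attempt >= GIVE_UP_AFTER:
            return None  # giving up is also a kind of solution
        candidate = "".join(perm)
        if candidate in KNOWN_WORDS:
            # Success: "reading" now covers this non-linear arrangement too.
            return candidate
    return None

print(try_to_read("ogol"))  # "logo"
```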

If, on the other hand, you see a heap of Scrabble tiles in a bag, or random squiggles or lines, you simply ignore it. You may have been fooled by such gibberish as a child, and realized that others ultimately accept that you can’t decipher it. Deciding that you “won’t understand what you see” is a type of solution in itself — it prevents you from exerting unnecessary effort. In an LLM, the act of ignoring gibberish would have to be hard-coded as part of the definition of a response. When you create your own rules and assumptions, your responses become robust, because they derive from a deeper underlying goal toward which you are striving, and can be adjusted or even circumvented as needed.

This approach has a potential weakness, namely that the AI’s performance will only be as correct and effective as the effort the AI is willing to put in. We mentioned earlier that the assumptions in narrow AI provide shortcuts that speed up learning — LLMs don’t have to go through the long process of learning to recognize letter characters, or to read from left to right. On the other hand, there is no limit to the amount of creativity a general AI can employ, as long as it meets the underlying needs. This is useful when it comes to solving difficult problems in which the definitions of the concepts are not clear-cut, such as, say, “increasing productivity while maintaining ethical standards”.

All this may sound reasonable at a high level, but the real challenge is in the implementation. How exactly does one get from basic needs to complex goals, thinking and acting? How can this approach lead to an agent that can devise, say, the mental steps for performing long division? In the next post we’ll dive into how an AGI can create and maintain fluid concepts by centring each around a group of goals.

[Next post: Reversing concept creation]

¹ When the assumptions prove brittle, we tend to say that the data wasn’t “clean”, which perhaps throws the blame onto the data rather than onto the brittleness of our algorithm.

² For now.

³ Unless they are first reduced to the UTF characters that it recognizes.

⁴ It should be noted that the problem which a concept “solves” cannot include the concept itself, since that would create a circular definition. For example, the concept of reading cannot be defined as a solution to the problem of “I can’t read this”, since the latter includes reading. Rather, it solves the problem “I must understand or decipher these shapes”. We will dig into this in more detail over subsequent posts.
