Rethinking Symbol-Grounding in AGI

World-models are built of affordances, not facts

From Narrow To General AI
10 min read · Jul 20, 2023

This is the seventeenth post in a series on AGI. Read the previous post here. See a list of all posts here.

In the previous two posts we’ve been working on the assumption that in order to create a more general AI one should take a narrow AI and gradually expand its bounding ontology to include more and more contexts. For example, you might begin with a Large Language Model (LLM) that only accepts UTF characters and expand it to read handwritten text. There is another approach available to us as well. You can start from the other end — that is, take an open-ended Reinforcement Learning (RL) agent and get it to build up concepts and constraints based on its experiences.

Whether RL itself qualifies as "general AI" may be up for debate; some versions purport to be general-purpose. In practice, however, they tend to be trained on, and are only useful for, a narrow set of tasks (e.g. playing Atari games), outside of which they have difficulty generalizing. Getting them to generalize well is the challenge confronting us now.

Current model-based Reinforcement Learning divides cognition into two parts: world-modelling and action planning. The rationale behind this separation is that an effective model of the world helps in planning the best actions to achieve the agent's goals. Intuitively, this makes sense, and seems to match how humans have come to master the world around them. The fact that model-based RL has, in practice, not been as successful as model-free RL is therefore curious. Model-free RL forgoes the world-modelling and planning phases, and focuses only on learning the correct actions (the policy) given the circumstances. And although model-based RL has proven effective in small, discrete environments where the actions and outcomes are constrained (e.g. chess, grid-worlds), in an open world the diversity of circumstances an agent can find itself in is overwhelming, unless they have been pre-filtered and pre-structured into a set of meaningful constraints.
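To make the split concrete, here is a minimal toy sketch in Python: a five-state chain-world in which a model-free learner updates action values directly from experience, while a model-based learner records a transition model and plans by lookahead inside it. The environment, hyperparameters, and names are invented for illustration; this is not a reference implementation of any particular algorithm.

```python
import random

# Toy chain-world: states 0..4; moving into state 4 yields a reward of 1, then we reset.
def step(state, action):                        # action is -1 or +1
    nxt = min(max(state + action, 0), 4)
    return nxt, (1.0 if nxt == 4 else 0.0)

ACTIONS, GAMMA = (-1, +1), 0.9

# Model-free: learn action values (a policy) directly from experience (tabular Q-learning).
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
state = 0
for _ in range(5000):
    action = random.choice(ACTIONS)             # explore at random
    nxt, reward = step(state, action)
    target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += 0.1 * (target - Q[(state, action)])
    state = 0 if nxt == 4 else nxt

# Model-based: record a transition/reward model of the world, then plan inside it.
model = {(s, a): step(s, a) for s in range(5) for a in ACTIONS}

def plan(state, depth=6):
    """Brute-force lookahead through the learned model; returns (value, best action)."""
    if depth == 0 or state == 4:
        return 0.0, +1
    best_value, best_action = float("-inf"), None
    for a in ACTIONS:
        nxt, reward = model[(state, a)]
        value = reward + GAMMA * plan(nxt, depth - 1)[0]
        if value > best_value:
            best_value, best_action = value, a
    return best_value, best_action

print("model-free choice at state 0:", max(ACTIONS, key=lambda a: Q[(0, a)]))  # expect +1
print("model-based choice at state 0:", plan(0)[1])                            # expect +1
```

In this tiny, fully constrained world both approaches end up choosing the same action; the difference the post is concerned with only appears once the environment stops being so conveniently small and discrete.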

For example, grid-worlds build functional assumptions into the available interactions: the agent can only move in a limited set of directions; moving into a wall has no effect; it can instantly take actions like flipping a light switch; and so on. None of these complex behaviours needs to be learned from scratch by the agent. Such hard-coded interactions define the implicit assumptions, and consequently the bounding ontology, of the agent. The real universe, however, is infinitely complex, and hard-coding constraints into a general-purpose RL agent makes it brittle to unexpected changes. An RL agent must instead build concepts like room or destination around its experiences, in a way that is useful to it.
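A toy grid-world step function makes the point visible; the layout and names below are invented for illustration and not taken from any specific benchmark. Notice that the walls, the available moves, and the light switch all live in the environment's code rather than in anything the agent learns.

```python
# A minimal grid-world step function. The "ontology" (walls, moves, switches) is
# hard-coded into the environment rather than learned by the agent.

WALLS = {(1, 1), (2, 1)}                 # cells the agent can never enter
SWITCH_CELL = (0, 2)                     # stepping here toggles the light instantly
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def grid_step(position, light_on, action):
    dx, dy = MOVES[action]               # only four moves exist, by construction
    candidate = (position[0] + dx, position[1] + dy)
    if candidate in WALLS:               # moving into a wall simply has no effect
        candidate = position
    if candidate == SWITCH_CELL:         # "flipping the switch" is a built-in rule
        light_on = not light_on
    return candidate, light_on

# The agent never learns what a wall or a switch *is*; those concepts are supplied
# by the environment code above. That is the bounding ontology described in the text.
print(grid_step((1, 0), False, "up"))    # blocked by the wall at (1, 1)
```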

Pragmatically, the only way for an RL agent to create concepts is to distill its thousands of experiences into something that will be useful to it in the future. The approach commonly used in model-based RL is to apply self-supervised training and recreate the agent's inputs. The model tries to "compress" past experiences into a more compact structure and representation. Variations on this exist, but underneath it all the assumption is that useful concepts, or even "truth", are somehow located in this compressed representation.
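As a rough sketch of that recipe, the snippet below fits a linear autoencoder in closed form via SVD (for a linear model this is equivalent to PCA) on made-up data. Real world-models use deep networks trained by gradient descent, but the reconstruction objective has the same shape.

```python
import numpy as np

# A stand-in for the self-supervised "compress and recreate your inputs" recipe.
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 4))               # hidden structure in the experiences
observations = latent @ rng.normal(size=(4, 32))  # 1000 past "experiences", 32 features each

# Fit: find the 4-dimensional code that best reconstructs the observations.
mean = observations.mean(axis=0)
_, _, vt = np.linalg.svd(observations - mean, full_matrices=False)
encoder = vt[:4].T                                # 32 features -> 4 latent dimensions
code = (observations - mean) @ encoder            # the compressed representation
reconstruction = code @ encoder.T + mean          # attempt to recreate the inputs

# Near zero here, since the toy data is exactly rank four.
print("reconstruction MSE:", float(((reconstruction - observations) ** 2).mean()))

# Note: nothing in this objective asks whether the compressed code is *useful* to the
# agent, only whether it faithfully recreates what was frequently observed.
```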

In practice it has been an uphill battle, and the success of this approach has been proportional to how much the human trainers curate the datasets, stage the training curricula, and impose a prior structure (like formal logic) on the world-model. All of these are varieties of externally imposed constraints, which once again make the models narrow and brittle in unexpected situations. Our chaotic universe continues to be far too complex for an RL agent to fully represent. And yet humans can somehow do this. So what’s missing?

It is perhaps worth stepping back and recalling why an RL agent needs to build a model of the world in the first place. It does this, as mentioned, to help with planning. If an agent were to collect only the information necessary or useful for making effective plans, then it may be able to whittle down the fire-hose of life experiences into that subset which matters. Then, by the time it reaches the planning stage, the necessary preparation for making decisions will already have been done.

This means we are no longer letting frequency of occurrence be the primary arbiter of the representation; the agent will no longer believe that X is true merely because X frequently happens. The agent’s plans and motives — its intentions — must be involved in the data collection somehow. We are blurring the line between world-modelling and planning, and this will influence the kinds of world-models the RL agent will conceive. At the limit, the two may even become the same thing.

There is one obvious problem with this approach: how can the agent know what will be useful to it in the future? The only way to resolve this conundrum is to assume, tentatively, that the experiences that have proven useful in the past will also be useful in the future. Since the agent can't foresee its upcoming needs, the best it can do is keep a memory of those experiences that were involved in getting it from a problem state to a solution state. For example, if it wants to leave a room, and seeing someone else pull down on a doorknob solved that problem, then remembering that experience would be useful; it would be a good thought to have in mind later on.
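A minimal sketch of such a memory might look like the following; the class name, keys, and strings are purely illustrative assumptions, not a specific algorithm from the literature.

```python
from collections import defaultdict

class IntentionMemory:
    """Keep only the experiences that carried the agent from a problem to a solution."""

    def __init__(self):
        self._memory = defaultdict(list)    # problem -> experiences that resolved it

    def record_solution(self, problem, experience):
        """Store the experience that was involved in getting from problem to solution."""
        self._memory[problem].append(experience)

    def intentions_for(self, problem):
        """What the agent would like to experience again when this problem recurs."""
        return self._memory.get(problem, [])

memory = IntentionMemory()
# The agent wanted to leave the room, and saw someone pull down on the doorknob just
# before the door opened; that observation is kept, with no action of its own attached.
memory.record_solution("stuck in room", "someone pulled down on the doorknob; door opened")
print(memory.intentions_for("stuck in room"))
```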

This approach is starting to resemble model-free RL quite a bit, with one difference: the model consists not just of an action policy, but also of knowledge about which experiences have proven useful before. The agent is effectively storing intentions regarding what it would like to experience, with no actions attached. A past positive experience becomes a future intention. Doing so combines the best of both model-free and model-based RL; it lets utility drive what is recorded. The agent's thoughts related to a given experience, and thus its world-model, can be narrowed down from all of "truth" to only what has been useful before.

The intersection of truth and utility may be difficult to intuit, since we generally think of the two as distinct. To clarify it, let's relate it to another well-known concept: affordances. An affordance is a mental model of some part of the world that takes into account what is useful to the observer. For example, understanding a doorknob means understanding that it is useful for opening doors. An affordance is the set of thoughts that equates the understanding of what a thing is with its utility to the agent. Our perception and interpretation of the thing revolve around that utility.

To say that "a doorknob is used for opening doors by hand" packs a complex of useful information into one statement: the problem that the object² solves, the plan of action to take, and the solution it accomplishes. This is far more useful to an RL agent than an objective description like "a doorknob is a metal protrusion on the front of doors that can rotate around an axle and releases a latch between the door and door-frame". The latter is barely useful at all, and the agent would have a lot of difficulty turning it into a plan given the vagueness of the terms involved (e.g. where is "the front" of a door?). Not only that, but the agent would struggle to collect such information in the first place. Most concepts are highly nuanced, and each instance or example of one can only impart a tiny fraction of the whole; thus a workable objective definition takes a long time to learn³. In any case, deriving an objective definition isn't as important as knowing how to react to an experience, and adapting your reaction as exceptions come up. Focusing an agent's world-modelling efforts on forming a comprehensive and impartial "truth" about the world may be too much effort for too little benefit.

The pragmatic approach would be to fill an agent’s world representation with affordances; that is, by defining “understanding” as “the best way to interact with a given object or experience”. Doorknob would now implicitly contain the thought of turning it, followed by the door opening. This combines solutions as actions (turning a doorknob) with solutions as thoughts (a memory or intention to turn a doorknob).
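One way to picture this is as a small data structure that bundles together the problem an object solves, the interaction to try, and the outcome that counts as success. The `Affordance` class below is a hypothetical sketch of that idea, not a standard representation from any library.

```python
from dataclasses import dataclass

@dataclass
class Affordance:
    problem: str          # the subjective problem this thing is "for"
    interaction: str      # the plan of action (or the thought/memory of it)
    outcome: str          # what solving the problem looks like

# "A doorknob is used for opening doors by hand" packed into one structure:
doorknob = Affordance(
    problem="I want to leave the room",
    interaction="grasp the knob and turn it",
    outcome="the door opens",
)

# The agent's "understanding" of doorknob is simply the set of such entries,
# rather than an objective description of axles, latches, and protrusions.
doorknob_concept = [doorknob]
```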

As an added benefit, an affordance can be learned quickly while solving problems. As the RL agent tries, then finally manages to open the door, it can store that last experience in connection with the object at hand, having learned something useful about interacting with doorknobs in the future. Whether the doorknob contains a rotating axle and latch, a magnet and electronic switch, a fingerprint sensor under the handle, etc. is irrelevant to its immediate use and can be discovered after the fact as needed, namely when things don't go as expected. Such exceptional cases would then become new problems that require new solutions; and thus the concept of doorknob expands on a problem-by-problem basis⁴.
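Continuing the hypothetical `Affordance` sketch from above, exception handling might look roughly like this: when no stored interaction produces the expected outcome, the failure itself becomes a new problem, and its eventual solution becomes a new entry in the concept.

```python
def act_on(concept, problem, try_interaction):
    """Try known affordances for a problem; return the one that worked, or a new problem."""
    for affordance in concept:
        if affordance.problem == problem and try_interaction(affordance.interaction):
            return affordance                   # things went as expected; nothing to learn
    return f"exception: no known interaction solved '{problem}'"

# Suppose turning the knob does nothing, because this door uses a fingerprint sensor.
result = act_on(doorknob_concept, "I want to leave the room", lambda plan: False)
print(result)                                   # a new problem, phrased as an exception

# Once the agent (somehow) gets the door open, the concept grows by one entry:
doorknob_concept.append(Affordance(
    problem="I want to leave the room and turning the knob did nothing",
    interaction="press a registered finger on the sensor under the handle",
    outcome="the door opens",
))
```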

Affordances are always contextualized around the agent's subjective problems. This means a concept will shift how it is applied given the specifics of what the agent needs in that context. For example, the concept of an object's colour seems like an objective feature, easily formalized and learned by an AI. But if you ask an AI to name the colour of a car, you don't want it to consider the entire spectrum of light hitting its camera, but rather only the body of the car (ignoring the windows and wheels). This is a type of contextual problem-solving; the agent must be taught to ignore certain colours in an object and focus on others before giving an answer. For flowers, the focus should be on the petals, not the leaves or seeds. It should also call a grey car "silver", which is an appropriate term for cars, though not for grey clothes.
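A toy sketch of this kind of contextual answering is shown below; the focus and renaming tables are invented for illustration and stand in for whatever the agent would actually have to learn, and there is no real perception pipeline here.

```python
# "The colour of X" is answered relative to which parts of X matter
# and which vocabulary the context expects.

FOCUS = {"car": ["body"], "flower": ["petals"]}          # parts that count, per kind of object
RENAME = {"car": {"grey": "silver"}}                     # context-specific vocabulary

def name_colour(kind, part_colours):
    """part_colours: mapping of part name -> detected colour for one object."""
    relevant = [part_colours[p] for p in FOCUS.get(kind, part_colours) if p in part_colours]
    colour = relevant[0] if relevant else "unknown"
    return RENAME.get(kind, {}).get(colour, colour)

print(name_colour("car", {"body": "grey", "windows": "black", "wheels": "dark grey"}))  # silver
print(name_colour("flower", {"petals": "red", "leaves": "green"}))                      # red
```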

The above examples show that affordances are at work in language too: a word is an affordance. The sound-label the agent gives to a concept is one aspect of how to usefully deal with the underlying phenomenon in a multi-agent (social) context. For example, the word "help" isn't just a verb, it's a useful utterance. Speaking the word, or even getting another agent to speak it, can alleviate ongoing problems. That word is like a doorknob, an affordance. To think of the word "help" when faced with a problem is to have an intention. Any word can be an affordance. In fact, since there are an infinite number of potential concepts that you could give names to (e.g. that weird feeling of missing a step while going up stairs, that blob of toothpaste stuck to a sink…) and only a few of them have names, it is fair to assume that the words a language does have are the ones its society found useful to communicate.
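In the hypothetical `Affordance` structure used earlier, a word slots in naturally: the interaction is an utterance, and the outcome involves another agent.

```python
# The word "help" expressed as an affordance, reusing the sketch from above.
help_word = Affordance(
    problem="I am stuck and cannot solve this alone",
    interaction='say "help" loudly enough for someone to hear',
    outcome="another agent comes and alleviates the problem",
)
```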

This is a non-trivial observation. We are taking a principled step back in AI epistemology: instead of asking the agent to define, say, the colour of a car as an objective attribute, we are defining the same knowledge as "if someone asked you what colour a car was, how would you solve that problem?" If you expand "word" to include any symbol, this has implications for how knowledge itself is defined in AI, and consequently for logic too.

Creating knowledge from scratch has always been a difficult practical challenge for AI research. The symbol-grounding problem is one manifestation of this challenge. Defining a concept only by how it relates to other concepts (e.g. WordNet, knowledge graphs) gives it no anchor in the agent's actions and experiences, and by extension no anchor in the agent's utility. The symbol-grounding problem is a problem because we start by trying to get the AI to define the objective truth about a concept (its definition), and only then apply that knowledge to the agent's use cases and planning. By reversing the direction of definition and starting with the agent's use cases, the symbols are already grounded. A grounded definition is one that is composed of affordances, not facts.

Reframing knowledge as an attempt to stitch together affordances greatly simplifies the task of world-modelling, and takes the process from slow and intractable to immediately useful for the agent. It also resolves many other outstanding challenges in agent cognition, such as:

  • It provides an avenue for RL to engage in learning by imitating others (not to be confused with imitation learning): seeing a problem solved by others is just as useful as solving it yourself; both cases provide you with a future intention.
  • The RL agent’s learning about the world becomes more experimental, like testing hypotheses.
  • The RL agent can avoid unnecessary and dangerous exploration, since it only learns something new when it is motivated to by a problem.
  • It keeps concepts natural, fluid, and adaptive to exceptions.
  • It allows social motivations to influence language use, e.g. it opens the possibility for an RL agent to lie about what it knows to an enemy. The question “where is the rebel base located?” is very different from “if an enemy asked you where the rebel base was located, how would you solve that problem?”
  • It allows the RL agent to acquire transcendental concepts that appear to have no concrete instances, like space and time, by bootstrapping up from the agent’s motivations (more on this in future posts).
  • And it even explains quirks of human psychology, such as why humans often believe what they want to believe, or remember emotional information more readily than banal facts.

There is a lot to take in here, and the reader might be struck by some apparent objections to this approach. For example, how can objective knowledge be possible at all if world-modelling and thinking itself are composed only of affordances? In the interest of keeping this post short, we will address such objections in subsequent posts, where we'll also discuss examples of how using affordances in world-models helps solve some of the more intractable challenges of AGI development.

Next post: Learning the concept of "number"

¹ For simplicity, you may include both a priori and a posteriori sources of intuition under the umbrella of “experience” if you like.

² “Object” is used loosely here; it can apply to any set of experiences, like a jog or a sound.

³ The language of this post tacitly assumes that an object, such as a doorknob, exists in reality, and we are gathering affordances about it. This is only a didactic convenience — the approach outlined doesn’t require that objects exist at all, only their useful interactions. This makes it useful for arbitrary learning environments like abstract computer simulations (e.g. a financial market).

⁴ This approach is reminiscent of the exemplar theory of concepts, except that affordances are the “applied” side of exemplars — e.g. reading, when applied to English, entails reading from left to right.

