Pragmatics precedes semantics

Explainable AI as post-hoc rationalization

From Narrow To General AI
12 min read · Feb 7, 2024

This post is a continuation of the previous one, which explains why identification using AI is so difficult. This is also the twenty-first post in a larger series on AGI. You can read the previous post here. You can also see a list of all posts here.

Identification, in modern cognitive research, is considered the first step in a broader process of explicit planning and problem-solving. The mind connects specific sensory experiences to general categories (e.g. the appearance of an elephant to the category elephant), and thereby creates the set of entities that the agent then manipulates to achieve its aims. For example, an autonomous vehicle might first identify other cars, as well as traffic lights and road signs, predict how they will behave, then decide how to make a left turn using that information. The whole process looks roughly like the following:
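As a rough illustration only, here is a minimal Python sketch of what such an identify-then-plan pipeline might look like. Every name in it (Entity, identify_entities, predict_behaviour, plan_left_turn) is hypothetical, not the API of any real driving system.

```python
# Hypothetical sketch of the classic "identify, then predict, then plan" pipeline.
# None of these names correspond to a real autonomous-driving API.
from dataclasses import dataclass

@dataclass
class Entity:
    category: str        # e.g. "car", "traffic_light", "road_sign"
    position: tuple      # where the entity was identified in the scene

def identify_entities(camera_frame) -> list[Entity]:
    # Step 1: map raw sensory input onto explicit, general categories.
    # A real system would run a detector here; we return a fixed example.
    return [Entity("car", (12.0, 3.0)), Entity("traffic_light", (8.0, 0.0))]

def predict_behaviour(entities: list[Entity]) -> dict:
    # Step 2: predict how each explicitly identified entity will behave.
    return {e.category: "approaching" if e.category == "car" else "green"
            for e in entities}

def plan_left_turn(predictions: dict) -> str:
    # Step 3: plan the manoeuvre using the identified entities as middlemen.
    return "wait" if predictions.get("car") == "approaching" else "turn_left"

def drive_step(camera_frame) -> str:
    entities = identify_entities(camera_frame)   # identification comes first...
    predictions = predict_behaviour(entities)    # ...then prediction...
    return plan_left_turn(predictions)           # ...then explicit planning

print(drive_step(camera_frame=None))             # -> "wait"
```

The rest of this post questions whether the first step in this chain is really as fundamental as the pipeline makes it look.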

These middlemen — the explicitly identified entities — are not always necessary¹. And for good reason: if they were, an infant couldn’t drink from a bottle without first having clearly-defined concepts of drinking and bottle. Explicit identification would be an undue burden in such cases. Similarly, a concept of smell is not needed to avoid bad smells, or even to indicate to others that you don’t like a smelly item. You don’t even have to consciously know the cause of your repulsion — you can simply react.

Or consider the word “hot”. Infants aren’t born knowing the difference between hot and cold — they often confuse those words. But a toddler would be poorly served if they first had to learn to identify a sensation as “hot” before they could tell others that they didn’t like it. Fortunately, in practice you start at the other end: you first learn how to respond to heat by quickly avoiding things that hurt you, then learn how to communicate to others that you don’t like those burning objects (e.g. “bad!” or “ow!”), and finally you can unite related thoughts and experiences under a label: “hot”.

Even then, the word need not be a conscious, descriptive label. When a toddler first learns to say “hot!” they are not giving a factual account of a type of bodily sensation. Rather, like yelling “help!”, they are using the word as a tool to attract attention. In summary, clear definitions for concepts are always a late arrival in the psyche, and are generally an afterthought.

Pragmatics precedes semantics

In fact, converging on clear definitions or definitive criteria isn’t even necessary. You may have noticed that you usually know how to use a word before you can clearly define what it means. People frequently spout newly-acquired terms while being only dimly aware of their true definitions; words like “cornucopia”, “peruse”, “paradigm”, “rigmarole”, or “wherewithal” often fall into this category. You might say that you know such words by their “feeling”. But — like the child yelling “help!” — what you actually know are the problem contexts or social motivations which would drive you to use them, and which they would be effective at addressing.

To more clearly explain that last statement, consider the word “here’s” in the phrase “here’s a block”. Its literal meaning (its semantics) indicates the location of the block. When you use it, however, what you really mean is “look at this block” (its pragmatic use). It should come as no surprise that when toddlers learn to use “here’s” in phrases like “here’s a block”, they’re not simply giving you information about its location. They want you to pay attention to it, and to them. The upshot is that learning a word’s pragmatics precedes learning its semantics. A toddler will often know how to use the word “here’s” before they even understand the meaning of the word “here” by itself. To them, the meaning of a word is not yet separate from its practical usage, which is to say, its effect on other people.

Every act of speech, whether spoken by children or by adults, has a reason for being uttered. In each case you are solving a problem related to communication. For example, you might be trying to look knowledgeable, or to please others, or to change their behaviour. This motivating need draws a set of thoughts and experiences together, and connects them to expressive sounds (words) that become tools for its purposes. You don’t impassively discover and label inert concepts as they float around in your mind. You assign experiences and thoughts to verbal expressions in order to solve a specific communication problem. And you evaluate the result on its moment-to-moment utility to your social goals.

It is in speaking that linguistic thinking is accomplished. The thought doesn’t pre-exist the expressive bodily activity of speaking because it is only in the act of talking to ourselves or to others that the thought is articulated, and becomes a determinate thought. — Scaling-up skilled intentionality to linguistic thought

Declarative knowledge is also a procedure

In modern cognitive science there is a widely-recognized split between procedural and declarative knowledge. Procedural knowledge corresponds to knowing how to do something, such as skilfully playing tennis. Declarative knowledge entails knowing facts about something, such as the weight of various tennis rackets.

However, this division ignores the obvious fact that declaring knowledge is also a linguistic procedure, one that must be learned in a social context. You must know how to communicate the weight of a tennis racket in English, what degree of precision is usually appropriate, as well as the social contexts in which it is useful to do so, such as when answering questions about rackets. So in the same way that you come up with intentions for what you will do, and carry them out, you also come up with intentions for what you will say, and carry those out. These latter intentions are what we call “declarative knowledge”.

Among the skills and abilities people develop in the human form of life are skills for expressing, either in the activity of speech or in writing, ways of thinking about the world. — Scaling-up skilled intentionality to linguistic thought

There is no inherent distinction between declarative and procedural knowledge — both are problem-solving processes. Every linguistic piece of knowledge you have is an intention² for how you (or someone else) would communicate it. This is why it’s phrased in your native language, and in a way that would be grammatically clear to others.

Since declaration is by nature communicative, other social motives and pressures are also brought to bear on it. For example, if your goal when speaking is to impress others, then your declared beliefs will be shaped by the desire to look good, to appease authority figures, to justify yourself, to influence others to your benefit, even to appear “impartial”, i.e. fair and objective. On the flip side, it is quite common for people to reject knowledge that conflicts with their political agendas, makes them look bad, or would be uncomfortable for them to admit.

Declarative knowledge is not some impartial, rational function that groups stimuli based on their inherent features or frequency of occurrence. Rather it is a parallel process of learning how to correctly interpret experiences as language, or as other expressive identifiers like symbols and diagrams. It is a social skill that must be acquired just like any other.

The diagram above shows two separate problems being solved given the same visual input. In the top row the agent must lift a box (procedural), and in the bottom row it must name the objects that are present (declarative). In the first row the task might be something like “how do I lift it?”; in the second, “what would I name this?” Note that in the first row the mind doesn’t explicitly identify the highlighted “grip points” when lifting boxes. The yellow areas are simply the sections of its sensory experience that it implicitly learns to respond to with the appropriate hand-actions. Such entities are often referred to as “affordances”, in that they highlight those aspects of the agent’s sensory inputs that are useful to its goals³.
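As a sketch of that “same input, two different problems” structure, consider the following toy PyTorch snippet. It is illustrative only (not the system from the diagram): one shared visual encoder feeds two heads, one trained to answer “how do I lift it?” and one trained to answer “what would I name this?”

```python
# Illustrative sketch: one shared visual encoder, two task-specific heads
# trained on different objectives over the same input. Names are hypothetical.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
    def forward(self, image):
        return self.conv(image)                  # shared features for both tasks

class GripHead(nn.Module):
    """Procedural task: per-pixel map of where to place the hands (an affordance map)."""
    def __init__(self):
        super().__init__()
        self.head = nn.Conv2d(32, 1, 1)
    def forward(self, features):
        return torch.sigmoid(self.head(features))

class NamingHead(nn.Module):
    """Declarative task: which word to say about the same scene."""
    def __init__(self, vocab_size=1000):
        super().__init__()
        self.head = nn.Linear(32, vocab_size)
    def forward(self, features):
        pooled = features.mean(dim=(2, 3))       # global pooling over the image
        return self.head(pooled)                 # logits over words

encoder, grip, name = SharedEncoder(), GripHead(), NamingHead()
image = torch.randn(1, 3, 64, 64)                # the same visual input...
features = encoder(image)
grip_map = grip(features)                        # ...answers "how do I lift it?"
word_logits = name(features)                     # ...and "what would I name this?"
```

The point of the sketch is only that the two heads are trained on different objectives, so there is no reason to expect the “entities” implicit in one to coincide with the entities made explicit by the other.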

The agent is not modeling the causal structure of the environment per se, but rather those aspects of the environment that are important within its specific niche. We think that what is “inferred” in active inference […] are not objects or properties of objects, but rather anticipatory patterns that specify a solicitation. — Self-organisation, free energy minimization, and optimal grip on a field of affordances

Affordances reverse the customary order of identification. The act of problem-solving now latches on to and defines useful, implicit “identifiers”³. In the same way, the yellow areas in the second row are the part of your visual input that you learn to respond to with words. This naturally implies that the yellow box areas are just as much affordances as the grip points above are, but this time for the act of speaking. So if we think of both sets of yellow areas as implicit identifiers, then explicit identification must always be preceded by an act of implicit identification. The two are, at bottom, the same.

Explicit identification is not, therefore, a prelude to planning, but rather a parallel process of learning the right responses for effective communication.

A practical example from robotics

At the company where I work, our robots perform pick-and-place operations on groceries. Over the years, we have approached this challenge in a few different ways. In an earlier iteration, we created a pipeline that first segmented the items it saw in the bin — i.e. identified them — then performed stages of reasoning about those segments to find the best spot to pick:

We soon realized that we didn’t actually care about the boundaries or identities of the items in the bin. Our ultimate goals were to pick the item successfully, to scan a bar-code, to place it correctly, to keep hold of the item in transit, etc. So why not target those instead? We already had clear signals for those outcomes on which we could train a model. So we trained a grasping model to predict which points, when grasped, would be most successful according to those criteria. The resulting image is a map, not of objects, but of affordances:
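For concreteness, here is a minimal sketch of the idea, assuming a simple convolutional model; it is illustrative only, not our production system. The network maps an image of the bin directly to a per-pixel estimate of grasp success, and is trained on logged outcome signals (the pick at a known point succeeded or failed) rather than on object segments.

```python
# Minimal, illustrative affordance-style grasp model (hypothetical, not the production system).
# Input: an image of the bin. Output: a per-pixel estimate of grasp success,
# trained directly on logged outcome signals instead of object identities.
import torch
import torch.nn as nn

class GraspAffordanceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                 # one "success" score per pixel
        )
    def forward(self, bin_image):
        return torch.sigmoid(self.net(bin_image))   # a map of affordances, not of objects

model = GraspAffordanceNet()
loss_fn = nn.BCELoss()

# Hypothetical training step: each logged grasp labels one pixel
# (1 if the pick at that point succeeded, 0 otherwise).
bin_image = torch.randn(1, 3, 128, 128)
pred = model(bin_image)
grasp_point = (64, 70)                           # where the robot actually picked
label = torch.ones(1)                            # that pick succeeded
loss = loss_fn(pred[0, 0, grasp_point[0], grasp_point[1]].unsqueeze(0), label)
loss.backward()
```

Nothing in this setup ever asks where one item ends and the next begins; the only thing the model learns to “identify” is whatever in the image predicts a successful pick.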

Systematization happens after the fact, and is incomplete

There are times when we as humans interact with objects in unconscious, habitual ways. In such moments you may be dimly aware that some implicit identification is taking place which is necessary to the interaction. “I picked up the cup”, you think, “so I must have identified the cup somehow”. But when you now turn your attention to what those key objects were — either in thought or in physical reality — a completely different process of identification occurs, with different goals and results. Your desire to put your experiences into words produces different solutions than your desire to interact with those experiences. For example, if you want to tell someone to lift a cup, it is useful to identify the whole cup, not just the handle.

There is no necessary correlation between how you explicitly label experiences and how you implicitly make use of the same. You should not expect a common category or concept to come out of both. Only in retrospect might it seem like an underlying concept of cup was there in both cases, guiding your actions and thinking. This concept, however, is an interpretation you invent after the fact. The only way you could even know that there is a causal concept of cup anywhere in your mind is to give it some sort of concrete identifying mark, like a word or image, in your thoughts. This act of “marking”, as explained in a previous post, is itself what makes concepts feel clear, discrete and unified. Because we communicate using discrete words, any concepts that rely on words end up appearing “discrete” too.

It should be obvious by now which of the two processes, interaction or explicit identification, comes first. Identification is an attempt to make an implicitly functioning process explicit. Words, diagrams, and other audio-visual identifiers are all the result of this process of post-hoc re-interpretation. Your explicit world-model is itself a collection of useful expressive thoughts⁴.

Any such formalizations you invent will never be perfect or complete representations of an implicit process. Consider how difficult it is to explain, for instance, the precise actions involved in walking on two feet. Your explanation will inevitably leave out many edge cases, such as maintaining balance, or intuiting which objects you will bump into⁵. Or consider how difficult it is to clearly set out what makes something morally “right” or “wrong”. Yet in practice you find you can both walk and deal with exceptional moral cases as they arise; you merely can’t express them systematically.

Explainability is a type of social justification

This separate-systems view of identification has implications for explainable AI (XAI), a field aimed at understanding why AI systems make the decisions they do. XAI, in principle, requires that the agent have basic, clearly identified representations — e.g. cups — which must be present and in use before the agent plans or carries out any activity. Afterwards, you can lay out the agent’s reasoning and evaluate it according to the rules it followed⁶. This means that the AI must have been playing by those rules a priori. For example, you may conclude that it planned its actions using sound logic, rather than going on gut feeling or spurious associations.

Yet affordances like the grip points in the image above are not by nature interpretable. They were originally learned in order to cause gripping actions, not to cause explanatory words, or to be communicated clearly. To be interpretable, a second affordance mapping between experiences and words must be created to serve the needs of explainability. And this mapping, as we saw in a previous post, will be shaped by the social motives that are inherent in all communication — if only basic ones, like the need to express ideas grammatically.

Why you did something and how you explain it are not the same.

For humans, explaining why you did something is usually an after-the-fact rationalization designed to defend your actions to others. The phrase “explain yourself” actually means “defend your actions”. When explaining yourself, you are trying to reframe the memories of your behaviour into a set of phrases that either justifies or systematizes them. Both serve the purpose of collaboration: justification is needed to assure others that you have the right intentions, and systematicity is needed to enable replication and consistent symbolic manipulation. It’s no wonder that these are exactly the same motives that drive the search for “explainable AI” — the AI is being asked to defend its actions.

Moving the act of identification into its own parallel system, and assigning it to a process of social expression, helps resolve a puzzle we noted in the post that began this series: why there are so many disparate and contradictory theories of representation learning. Now we can see that it’s because there is more than one underlying motive or goal driving identification, and it is this underlying goal that shapes what you recognize in the world. Yet we’ve only begun digging into the connection between identification and motives. In the next post, we’ll show the full ramifications of this inversion of thinking, and how it completely upends many assumptions about cognition itself.

Next post: The great epistemological reversal

¹ As demonstrated with successes using model-free reinforcement learning.

² This post simplifies thoughts as “plans” or “intentions”, though as discussed in another post, a more accurate description is “a sensory experience that previously preceded a solution, and which gets recreated as a self-generated sensory experience”.

³ The term “affordance” is usually defined in contrast to explicit knowledge.

⁴ The separation of useful thoughts vs useful actions will become the foundation for symbolic logic, which will be discussed in a later post.

⁵ The popular expression “if you can’t explain it, you don’t understand it” is thus a bit misleading; it equates “understanding” to “declarative understanding”.

⁶ The same is true of probabilistic models; in the end they must define clear, interpretable dimensions that are assumed to matter, or not to matter as the case may be.
