The paradox at the heart of AI-based identification
Determining the source of truth leads to contradictions
This is the twentieth post in a series on AGI. You can read the previous post here. You can also see a list of all posts here.
Autonomous identification is an unsolved problem
Real life is not an annotated dataset. Nature doesn’t come with pre-made annotations that help you correctly categorize all the things it contains, like “atoms” or “hurricanes”. All we have is a stream of ever-changing experiences, which never repeats itself. People, and ultimately AGI, must be able to create their own labels and ground truth from within that stream.
Any AGI would only be considered viable if it engaged with the world autonomously. It must be able to choose what it learns through exploration, without being sent back to a lab every week for mandatory brainwashing. No trainer should need to inject prepared datasets, or force regressions based on what the trainer deems to be the correct answers to life’s myriad questions. Doing so would cap the agent’s knowledge and growth at its trainer’s limited understanding.
If there is human intervention once the evaluation starts then we are really evaluating the programmers, integrators or curators, and not only the AI system (or component). […] It is not unfair to say that we evaluate the researchers that have designed the system rather than the system itself. — Evaluation in artificial intelligence: from task-oriented to ability-oriented measurement
The value of autonomous learning is demonstrated most clearly when it comes to an AGI’s ability to identify the things around it; e.g. to identify a window as a “window”, a familiar person by name, or the colour of a ball as “red”. The ability to identify objects, features, and events in its perceptual field is generally considered to be the foundation for most higher-level cognitive abilities, and thus of critical importance. A robot, it is commonly assumed, cannot determine how to stack boxes on top of one another unless it can first isolate and identify boxes within its stream of unstructured visual inputs.
Although identification appears to be a simple and straightforward behaviour — compared to more complex ones like solving physics problems — it is by far one of the most difficult. It is so difficult, in fact, that there is no existing theory in AI research that even attempts to explain how an agent can learn to identify entities autonomously — that is, without a trainer directly forcing this acquisition through regression and labelled datasets. The full breadth of this challenge becomes apparent when you consider how an AI could learn to identify abstract entities such as war, love, time, wealth, shape, direction, memory, etc. This is no small omission.
What is the source of truth?
The difficulty stems from a critical and apparently insurmountable paradox: it is hard to determine what the source of truth for identification is. Consider, for example, an AI that is trying to generate the category of elephant from scratch:
- To generate the mental category of elephant, the AI must somehow combine many examples of elephants and extract a common pattern or structure from them.
- To combine many instances of elephants, the AI must be able to recognize a sensory input as belonging to the category elephant, and exclude instances that belong to orange, dog, etc.
- To recognize a sensory input as being an instance of an elephant, it must assign it to the general category of elephant.
- Therefore, in order to discover the category of elephant by induction from instances, it must already have the category by which it can identify those instances.
This is a self-contradiction. Prototype theory, exemplar theory, inference learning, and feature learning all suffer from the same internal contradiction. They all expect that someone who knows which group the examples belong to is present to inject this knowledge from outside. Identification cannot escape its reliance on a supervised training regimen. If there is no external source of truth, the process cannot be bootstrapped autonomously by the agent itself.
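To see the circularity in concrete form, here is a minimal sketch in Python. The feature vectors and the `build_prototype` helper are hypothetical illustrations, not taken from any published model; the point is simply that the averaging step presupposes the very grouping it is supposed to produce.

```python
# A minimal sketch of prototype formation, illustrating the circularity.
# The features and helper are hypothetical, not a specific published model.
import numpy as np

def build_prototype(instances: np.ndarray) -> np.ndarray:
    """Average a set of feature vectors already known to share a category."""
    return instances.mean(axis=0)

# Hypothetical sensory features per sighting: [height, trunk length, greyness]
elephant_sightings = np.array([
    [3.1, 1.8, 0.9],
    [2.9, 1.7, 0.8],
    [3.3, 2.0, 0.9],
])

# The paradox lives in this call: to pass these rows in together, something
# must already have identified each one as an instance of "elephant".
elephant_prototype = build_prototype(elephant_sightings)
print(elephant_prototype)
```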
This is especially problematic if the concept in question is brand-new, and thus no source of truth outside the agent even exists. Consider, for example, the following paradox: which came first, the concept of a telephone, or the first working telephone? By the theories of object recognition listed above, either answer leads to a contradiction. If the first telephone existed before the concept of the telephone, how did its creator make it? If it came after, how did its creator learn about the concept if no examples existed?¹
The following series of posts will address this gap and resolve the underlying paradox. But first, in order to make clear how daunting the challenge before us is, let’s outline the true scope of the task, as well as the inadequacies of existing approaches.
The limitations of clustering
Say that your goal is to design an AI which, when set loose in an arbitrary space, can isolate and identify objects and events around it. At first glance, it seems such an agent must first be able to delineate physical ‘objects’ in its perceptual field so that it knows what to identify. It must separate instances of, say, “boxes” from the background sensory noise.
One intuitive approach to accomplishing this is to find regular patterns in the agent’s sensory experiences and cluster them together based on their similarity. From there, the AI can build up a statistical model of its experiences that highlights the most common trends, and call those “objects”. This is perhaps the most common unsupervised approach, e.g.:
Boltzmann machines have a simple learning algorithm that allows them to discover interesting features that represent complex regularities in the training data. — Boltzmann Machine
Completely nonredundant stimuli are indistinguishable from noise²… Thus, redundancy is the part of our sensory experience that distinguishes it from noise; the knowledge it gives us about the patterns and regularities in sensory stimuli must be what drives unsupervised learning. — Unsupervised Learning
If a category comprises two distinct clusters of examples, network models can create a separate hidden unit for each chunk — Oxford Handbook of Thinking and Reasoning
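To make the clustering intuition concrete, here is a minimal sketch in Python. It uses scikit-learn’s KMeans as a stand-in for any similarity-based unsupervised learner, and the “sensory” feature vectors are invented for illustration.

```python
# A minimal sketch of "cluster experiences by similarity, call the clusters
# objects". The features are hypothetical; KMeans stands in for any
# similarity-based unsupervised learner.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical sensory snapshots: [hue, saturation, roundness of outline]
green_balls = rng.normal(loc=[0.33, 0.80, 0.95], scale=0.03, size=(50, 3))
red_boxes = rng.normal(loc=[0.00, 0.70, 0.10], scale=0.03, size=(50, 3))
snapshots = np.vstack([green_balls, red_boxes])

# Group by similarity alone; no labels are ever provided.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(snapshots)

# Each cluster is now a candidate category ("object 0", "object 1"), but the
# agent has no way to know what, if anything, the clusters mean, and nothing
# here would ever yield a cluster for "causation" or "shape".
print(model.labels_[:5], model.labels_[-5:])
```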
This approach, however, will only sometimes work, and only for a small subset of items and events: those which have enough regularity to count as having been repeated. It may learn to identify a “green ball” based on a recurring colour and circular outline. But by what clear or repeatable criteria could it identify examples of “causation”, “existence”, “change”, or “shape”? As discussed in another post, the majority of words in the dictionary do not have well-defined sensory features that allow for empirical classification. Consider how even the majority of words in this paragraph do not have clear audio-visual correlates.
Although audio-visual similarity, within a certain margin of variation, does play a role in identification, a broad range of dissimilar sights and sounds can still be united under a common type. These are far too varied to be grouped by consistent repetition of sensory inputs.
There is also a lot of ambiguity and arbitrariness when it comes to identifying even concrete objects. For example, there is no objective way to determine where a given “mountain” ends and the “valley” begins. Such decisions are ultimately a matter of perspective, and are made by the agent, rather than on the basis of sensory criteria. If the source of truth for identification were derived from repeated sensory patterns, you would expect there to be no such indecision.
In fact, we may have jumped too far ahead with this first step. Before something can be identified at all, you must know what you are identifying it as, or what category or type you are assigning it to. That is not a straightforward task. There is often ambiguity regarding which label should be given to a particular instance (is it a “hill” or a “mountain”?). The same set of experiences can be interpreted in a number of ways, and as many different concepts³.
This brings us back to the “chicken and egg” paradox mentioned above — if you can only learn to identify a category or type based on common features, how did you know which features to connect to which type?
Words are not a reliable source of truth
One possible approach to defining categories or types is to base identification on the frequency of co-occurrence between groups of stimuli and a word or label. This is what modern supervised classifiers do. If the sight of various boxes frequently co-occurs with the word “box”, there may be a reason to unify all those experiences under one linguistic concept, since they all predict the same thing — they predict the word “box”.
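For contrast, here is a minimal sketch in Python of that label-driven approach, with a scikit-learn logistic regression standing in for any modern supervised classifier; the stimuli and features are again invented. Note where the source of truth enters: an external trainer must pair every stimulus with the word “box”.

```python
# A minimal sketch of label-driven identification. The category "box" exists
# here only because a trainer supplies the word alongside each stimulus.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical stimuli: [edge straightness, corner count (scaled), hue]
boxes = rng.normal(loc=[0.9, 0.8, 0.5], scale=0.05, size=(40, 3))
non_boxes = rng.normal(loc=[0.2, 0.1, 0.5], scale=0.05, size=(40, 3))
stimuli = np.vstack([boxes, non_boxes])

# The external source of truth is injected on this line: someone who already
# has the category pairs each stimulus with its label.
labels = ["box"] * 40 + ["not box"] * 40

classifier = LogisticRegression().fit(stimuli, labels)

# The classifier now predicts the word "box" for box-like stimuli, but it has
# inherited its trainer's category rather than bootstrapping one of its own.
print(classifier.predict(stimuli[:2]))
```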
Unfortunately that approach also breaks down in practice. Consider the word “star”. It can refer to a celebrity, a point of light in the night sky, the class of astronomical objects that includes the sun, or a five-pointed shape. These all use the same word, yet they feel like different, albeit related, concepts. There are deeper, more nuanced undercurrents of meaning running through these than merely their common label. The connection between them is not the word, but more like a feeling, a desire to exalt an object as “brilliant”.
Other homonyms have unrelated meanings; e.g. “date” can mean either a calendar time or a fruit. If words were the primary means of unifying concepts, homonyms like “date” would cause irresolvable confusion. Relying on the frequent association of stimuli with a label also ignores cases where you can learn to identify something via a single experience (one-shot learning), and need no repetition.
Linguistic identification is socially guided for collaboration
There is another challenge with learning to identify by clustering experiences and predicting their labels: even something as apparently straightforward as identifying the colour of an object is influenced by language and prevailing social conventions. For example, what we call “red meat” is actually maroon-brown and “white meat” is pinkish-beige. The designations “red” and “white” are not strictly descriptive, but rather help people distinguish between the two meats by heightening the contrast in their colours⁴. The labels did not arise as neutral or objective groupings; they were skewed to serve a useful purpose.
Subdividing the colour spectrum into explicit colours like “red” is ultimately an arbitrary decision. Every parent realizes this when their child asks them what colour a bluish-greenish-beige object is, and they must make up an answer. When you teach a child to name colours, you are not teaching them some essential “truth” about these colours, you are teaching them to recite socially useful words when presented with a certain set of stimuli. From the child’s perspective they are merely learning the responses to those exemplars that will get your approval. They are not deriving a deeper taxonomy of colours.
You could, if you wanted, teach a child haphazard or finely-sliced colour names that are different for each unique object — which we in fact do. The colour names we give to cars follow a different pattern than those for crayons. None of this should imply that the child is confused about the wavelength of colour that is entering their eyes. They are simply learning the responses that effectively answer your questions.
Think of a child learning to name colors: much of the child’s learning happens unobtrusively and in an unnoticed way through the imitation of others. In this kind of situation the child learns to care about the right thing [emphasis added], that is, acquires the concerns of his or her community. — A Rich Landscape of Affordances
Language is not an impartial set of identifiers for frequently occurring experiences. We only invent or learn words if they are useful for communication and collaboration. For example, we have no word for mismatched socks that are left at the bottom of a laundry basket. Yet we have a word for “betrayal”, despite the fact that we experience mismatched socks far more often than we experience betrayal. So why does only “betrayal” have an English word? Because it is useful to discuss betrayal with others — to warn them, or to get revenge. There is little value in discussing mismatched socks in the laundry with anyone.
Language is not an abstract construction of the learned, or of dictionary makers, but is something arising out of the work, needs, ties, joys, affections, tastes of long generations of humanity, and has its bases broad and low, close to the ground — Walt Whitman, Slang in America
“Betrayal” and “red” are ultimately English words. And English words are meaningless outside the community of English speakers. An AI learning to identify “white meat” would never be able to do so without involving itself in our cultural discourse. Words are not objective markers of truth; they are social tools. Identification using words is necessarily a socially-guided activity.
Cognitive research relies on language conventions
When we do experimental research into object recognition, we rely on test subjects being able to communicate their “private” thoughts to scientists using “public” words. Experiments can only address concepts that were communally defined before the experiment began, and only to the extent that you can assume shared definitions. Otherwise it is impossible to discuss the test subject’s thoughts with them. The test subject must first retrofit their idiosyncratic thoughts into English words, and abide by the common expectations we all have regarding them. By that point social conventions have already skewed the subject under study.
In a sense, when you’re doing scientific research into the human ability to identify entities, you are really only checking how effectively we’ve socialized the test subject into the prevailing norms. It is a kind of societal audit, rather than a study of how individuals identify or recognize objects. You may even end up exposing cracks and inconsistencies in this communal edifice, aka language. Despite this, much research into identification is written as though there is an inherently correct and incorrect way to identify the objects in question, unrelated to time-bound social conventions or the limitations of language.
All identification is skewed by social expectations
So far we’ve been discussing the influence of socialization on language and communication. Beyond that, socialization fundamentally alters what we believe is true about the world. Consider the following question: what colour are rivers? It is curious that we encourage children who are colouring drawings of rivers to colour them blue. Rivers as seen in real life are usually brown, grey, green, and white. On the other hand, photos of rivers tend to portray those relatively rare situations in which they are reflecting blue skies, because that is the representation we find most pleasing.
Since the colour blue is generally associated with water, due to oceans being blue, using that colour helps distinguish and identify rivers within a landscape. As a result we end up erroneously identifying the colour of rivers as “blue”, and even remembering real life rivers as being blue, despite the preponderance of experience to the contrary. It may be useful for us to identify rivers as “blue”; it is not, however, a faithful representation of our sensory inputs.
Identification is at the intersection of many challenges
In this post we covered the broad range of challenges involved in teaching an AI how to identify entities. They can be summarized as:
- How does an agent autonomously learn or invent the categories that need to be identified, without being force-fed labelled instances?
- How does the agent then learn to assign labels to individual experiences?
- How does the agent resolve ambiguous cases?
- How does language shape the categories the agent classifies by?
- How do preference and usefulness influence the process of identification?
The theory we are searching for must cover all these cases. Moreover, it must describe a process that an AGI can carry out autonomously, from the ground up. The widespread use of supervised training has created the illusion of a simple problem by side-stepping the hardest parts. But this apparently simple task is really one of the most challenging in the field.
The history of AI research into identification has repeatedly shown that trying to address one of the problems above tends to undermine others. Deriving categories from word-labels helps explain how socially-guided identification is, but makes it difficult to explain how homonyms or terms like “star” are kept distinct. Some researchers have, as a result, suggested that there must be multiple systems of identification, each useful for different tasks:
There is good reason to believe that the cognitive system uses many different kinds of representations in order to provide systems that are optimized for particular tasks that must be carried out…
…when evaluating proposals about representations, it is probably best to think about what kinds of representations are best suited to a particular process rather than trying to find a way to account for all of cognition with a particular narrow set of representational assumptions. — Oxford Handbook of Thinking and Reasoning
This approach, however, raises its own difficulty: how can the AI know it must use one system of representations for a task instead of another? How can an AI know, for example, that some hand motions should be interpreted as sign-language, and others as indicators of direction (pointing)? It must already have learned how to identify sign language, and to distinguish it from other hand-behaviours. So to identify, it must already know how to identify. This is the same paradox with which we started this post.
Dedicated systems only side-step the problem
In AI research this difficulty is generally ignored by building dedicated systems that can only perform one task, an approach referred to as narrow AI. Narrow AI models predefine the space of entities that will be identified by the agent, and even directly guide the act of identification. For example, an AI built to solve math equations would take as input only mathematical characters presented in sequence, as opposed to hand-written equations on a crumpled sheet of paper in a 3D classroom, which the AI may choose to attend to between doing chores and playing games at home. Dedicated systems merely push off the inevitable challenge of combining all types of identification under one umbrella. And in the end, such combination is the heart of AGI.
The design of unified architectures modeling the breadth of mental capabilities in a single system is a crucial stage in understanding the human mind — Principles of Synthetic Intelligence
The seeds of a more comprehensive answer were hinted at earlier when we mentioned that identification should serve a useful purpose. If you include achieving social goals as a type of utility, you’ll start to get a hint of a possible solution. Indeed the five challenges listed above, far from being incompatible, actually solve each other.
To see this, however, you must first step beyond certain preconceptions and assumptions about what it means to “identify” something; namely, the crutch of “objectivity”. The paradoxes of identification which hold back AI research are built on erroneous foundations about objective truth, and these need to be excised in order for the field to move forward. We’ll address them in the next post.
Next post: Pragmatics precedes semantics
¹ Clearly imagination has something to do with this, but imagination is generally downplayed in theories of object recognition. Empirical theories assume that real, physical examples of the concept already exist.
² If this were true, specific memories of singular events, such as an oil painting you only saw once, would also be “noise”.
³ Indeed the nature of concepts themselves — what they are and how they work — is poorly understood, and even their existence is suspect.
⁴ The same is true of the labels we give to human skin colours.