How Empiricism Hinders the Development of General AI

Concepts are aspects of the observer, not the world

From Narrow To General AI
15 min read · Aug 4, 2023
Concepts are pieces of yourself projected onto the world.

“Making computers learn presents two problems — epistemological and heuristic. The epistemological problem is to define the space of concepts that the program can learn…

…The heuristic problem of algorithms for learning has been much studied and the epistemological mostly ignored.” — Concepts of Logical AI, McCarthy

Humans have the remarkable ability to invent new concepts with minimal help or guidance. This underappreciated skill is the source of our species’ creativity; it underpins the development of new technologies, artistic movements, social trends, political revolutions, and even military strategies. Novel concepts like exponent, technocracy, or search engine are regularly invented and disseminated within societies, becoming tools in a broader, more expansive landscape of discovery and progress. Your own ability to extend your ontology (your set of concepts¹) increases the breadth and flexibility of your mind. Concept creation is therefore an essential stepping stone in the development of Artificial General Intelligence, especially one that intends to contribute to the progress of our species.

The ability to invent novel concepts from scratch is, however, conspicuously absent in modern AI research. This is no small omission. An agent’s ontology bounds its scope of understanding, its cognitive behaviour, and its ability to plan actions. Despite this, there are scant few methods to introduce concepts into AI. The only options are either to have them inculcated directly by a trainer — such as forcing classifiers to train to labels — or else to embed them into the architecture as priors²; the latter is often necessary for concepts that have no concrete representation, such as left, improve, or existence.
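
To make the first option concrete, here is a minimal sketch (with hypothetical data and class names) of how a concept is typically inculcated by a trainer: a supervised classifier receives its entire ontology up front as a label set, and learning only fits boundaries around categories it was handed.

```python
# Sketch: in supervised learning the concept inventory is fixed in advance
# by the trainer's label set; the model can carve boundaries between the
# categories it was given, but never propose a new one.
import numpy as np
from sklearn.linear_model import LogisticRegression

LABELS = ["dog", "wolf"]                          # the trainer decides which concepts exist
X = np.random.rand(100, 4)                        # hypothetical feature vectors
y = np.random.randint(0, len(LABELS), size=100)   # trainer-supplied labels

clf = LogisticRegression().fit(X, y)

# Whatever the model later encounters, it can only answer within the given ontology.
print(LABELS[clf.predict(np.random.rand(1, 4))[0]])
```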

In contrast, the only person who can insert a concept into your mind is yourself. Even when learning common, mundane concepts like shirt or game, your mind must still derive each for itself from its experiences, and of its own volition. Other people can, at most, be the occasion for a variety of unstructured sounds and images which you then perceive. It is up to you to organize those into structures and useful patterns, and to compile your personal versions of concepts. In fact, you will usually have the beginnings of a concept before you ever attach a word to it — it is difficult to teach someone a word unless they already have a notion of what it refers to. Sometimes you never attach a word at all. Aligning your own concepts with those of the wider society is an optional step. As a result, you generally won't know, when you derive a concept, whether others have already found a similar one and given it a name. And so people regularly invent concepts that didn't exist before, such as doomscrolling, phishing, or staycation.

This inside-out approach to inventing concepts is the reverse of how concepts are introduced into AIs. Research into the topic has so far focused on concept acquisition, and generally ignored concept invention as a near-insurmountable task. There is, in my estimation, not even a theory in the literature to explain how the mind invents hitherto undiscovered or novel concepts. Every existing theory presupposes that the concept being learned already exists and is known through instances. Rule-based, exemplar, prototype, explanation-based, and Bayesian theories all require that you codify knowledge about a concept into a set of examples around which the concept is moulded. Even approaches based on clustering assume that the agent knows what counts as “an instance” — as in “an entity that should be clustered” — which means the inputs have been pre-filtered.
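
To see the dependence on pre-existing instances concretely, here is a minimal sketch of a prototype-style learner, one of the theories listed above. All names and data are hypothetical; the point is that the "concept" is nothing more than a summary of examples someone has already designated as instances of it.

```python
# Sketch of prototype theory: the concept is the centroid of pre-labelled
# examples, so it cannot exist before instances of it are supplied.
import numpy as np

def learn_prototype(instances: np.ndarray) -> np.ndarray:
    # The "concept" is just the mean of examples already identified as members.
    return instances.mean(axis=0)

def belongs(x: np.ndarray, prototype: np.ndarray, threshold: float = 1.0) -> bool:
    # Membership is distance to the prototype, moulded around the given examples.
    return bool(np.linalg.norm(x - prototype) < threshold)

dog_examples = np.random.rand(20, 4)   # presupposed: these already count as dogs
dog_prototype = learn_prototype(dog_examples)
print(belongs(np.random.rand(4), dog_prototype))
```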

Behind the specifics of each theory lies a common hypothesis: that humans also learn concepts by being exposed to examples and (usually) their labels³. This, of course, implies that it is impossible to invent a concept if instances don't already exist, which categorically excludes creative thinking. For example, did the concept of a typewriter exist before the first typewriter was built, or after? If before, how was the concept learned when no examples were available? If after, how was the first typewriter conceived and built without it?

The reliance on examples is a by-product of a broader philosophical commitment in cognitive science, as well as AI, namely a commitment to empiricism. Empiricism, as epistemology, revolves around the premise that everything that is learned is learned from sensory experiences. It has its origins in such seminal philosophers as John Locke:

“Knowledge of the existence of other finite beings is to be had only by actual sensation.” — Essay Concerning Human Understanding, Locke, Book 4, Chapter 11–1

When applied to AI, it entails that “truth” may be gleaned exclusively from the data the AI is trained on. Subjective differences can only hinder its discovery. Although subjectivity and empiricism are not fundamentally in conflict, in AI empiricism has been extended beyond epistemology, into something akin to “empiricist ontology”. This is the proposition that concepts are imposed on the mind via experiences, with no consideration of the agent’s preferences or choice. The sole function of an agent is to mirror the truth already in the data, often by compressing or re-formulating the inputs, but never by adding anything from its own preferences, values, or goals⁴. A concept should have nothing in it regarding what the AI wants to believe — it should understand the world based on what it is given, and nothing else.

This commitment is rarely stated out loud, yet it permeates almost every facet of the field. Techniques such as calculating loss based on KL-divergence, or reconstruction loss, assume that the sole purpose of the AI is to compress the input data as accurately as possible. The ubiquitous practice of training models by randomly sampling data into train/validate/test sets also presupposes that the truth behind the data is indifferent to the observer. Two observers who have different experiences of the same underlying concept, or perhaps the same experiences in a different order, are expected to derive the same “truth” about it. Differences between agents represent only anomalies or margins of error. Random sampling explicitly tries to remove such subjective differences, which are deemed incidental, and are not part of the concept’s true essence — that lives outside the agent.
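
A hedged sketch of both practices, with shapes and names chosen only for illustration: a reconstruction-plus-KL objective (as in a variational autoencoder) scores the model purely on how faithfully it compresses its inputs, and a random shuffle before splitting deliberately erases the particular order in which any one observer encountered the data.

```python
# Sketch: the training signal rewards only faithful compression of the data,
# and random splitting treats each observer's particular history as noise.
import numpy as np

def vae_style_loss(x, x_recon, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior:
    # the model is judged purely on how well it mirrors its inputs.
    recon = np.mean((x - x_recon) ** 2)
    kl = -0.5 * np.mean(1 + logvar - mu ** 2 - np.exp(logvar))
    return recon + kl

# Random train/validate/test split: the order in which experiences arrived is
# destroyed, because the "truth" is assumed to be indifferent to the observer.
data = np.arange(1000)
rng = np.random.default_rng(0)
rng.shuffle(data)
train, val, test = data[:700], data[700:850], data[850:]
```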

Model Based Reinforcement Learning (MBRL) is another good example of the empiricist underpinnings of cognition in AI. It separates the agent’s policy training — where the goal is to interact productively with the environment based on a value function — from its world-model — where the goal is to mirror the world as accurately as possible, with no values or preferences introduced. This “church-and-state”-style separation makes the point clear: values have no place in the agent’s world-model.
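
The separation shows up in the shape of the two objectives. In a rough sketch of a typical MBRL setup (function names are hypothetical), the world-model loss contains no term for reward or value at all, while the policy loss never measures fidelity to the data.

```python
# Sketch of the "church-and-state" split in model-based RL: the world-model
# objective contains no values; the policy objective does no data-mirroring.
import numpy as np

def world_model_loss(predicted_next_state, actual_next_state):
    # Trained only to mirror the environment's dynamics accurately.
    # No reward, value, or preference appears anywhere in this term.
    return np.mean((predicted_next_state - actual_next_state) ** 2)

def policy_loss(predicted_values):
    # Trained only to maximize expected value under the (separately trained) model.
    return -np.mean(predicted_values)

# Training alternates: fit the model to the world, then plan against the model.
```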

Similarly, in Large Language Models, the very first piece of text on which the agent is trained may be a work by Proust, or Aristotle’s Ethics. There is little effort to build a foundation before turning to more complicated, composite ideas. Even when training curricula are applied, they are only a more efficient means to converge on this same end. The inescapable assumption in all the above is clear: the only metric of importance is the ability to efficiently reflect the training data.
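
The objective behind that assumption is simple to state: a language model is scored, token by token, on how much probability it assigns to whatever the corpus actually said next, whether that text is Proust, Aristotle, or anything else. A minimal toy sketch:

```python
# Sketch: next-token cross-entropy. The sole measure of success is how
# closely the model's distribution reflects the training text.
import numpy as np

def next_token_loss(predicted_probs: np.ndarray, actual_next_token: int) -> float:
    # Negative log-likelihood of the token the corpus actually contained.
    return -float(np.log(predicted_probs[actual_next_token]))

vocab_probs = np.full(50_000, 1 / 50_000)   # a uniform toy distribution
print(next_token_loss(vocab_probs, actual_next_token=42))
```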

Concepts, then, are expected to automatically arise out of a sort of compression of the inputs. Concept delineation — where concepts start and end — is presumed to be latent in the data itself, perhaps as a cluster in some latent representation space. An intelligent agent is therefore one that can find these hidden groupings and exploit them. It should add nothing of its own preferences in the process; at its best, it is a mill that grinds data.
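
In code, the assumption looks roughly like the sketch below (embedding dimensions and cluster count are arbitrary): embed the inputs, run an off-the-shelf clustering algorithm, and treat each cluster as a "concept". Nothing about the agent's goals enters the procedure.

```python
# Sketch: "concepts" as clusters discovered in a latent representation space;
# the delineation is assumed to be already present in the data.
import numpy as np
from sklearn.cluster import KMeans

latent_vectors = np.random.rand(500, 32)    # hypothetical latent embeddings
concepts = KMeans(n_clusters=10, n_init=10).fit(latent_vectors)

# Each input is assigned to the "concept" (cluster) it falls into; the agent's
# needs and goals play no role in where the lines are drawn.
print(concepts.labels_[:20])
```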

The ‘accusation’ of empiricism is not a controversial one, and is unlikely to be denied by any researcher. Indeed, it is worn as a badge of honour, credited with the apparent productivity of AI over the last few decades. This approach, however, also leads to the fundamental impasse noted above — there is little in empiricism to indicate how to create a new concept from scratch without being guided by the prescient hand of the trainer.

“Most AI work on heuristics, i.e. the algorithms that solve problems, has usually taken for granted a particular epistemology of a particular domain, e.g. the representation of chess positions” — Concepts of Logical AI, McCarthy

Locke also bumped up against this problem in his Essay. He suggested that when creating concepts, the mind first ingests a variety of experiences, then filters them into concepts based on “useful” groupings:

“But knowledge began in the mind, and was founded on particulars; though afterwards, perhaps, no notice was taken thereof: it being natural for the mind (forward still to enlarge its knowledge) most attentively to lay up those general notions, and make the proper use of them, which is to disburden the memory of the cumbersome load of particulars.” — Essay Concerning Human Understanding, Locke, Book 4, Chapter 12–3

However, he did not elaborate on how exactly the agent decides which concepts should be created. He left it as an act of “will”. To a modern, data-oriented approach, this is not practicable guidance, since adding large volumes of data won’t allow an agent to form concepts unless it also has a way to define what a “useful” grouping is.

This post will fill in that gap, and propose that much of the confusion surrounding concept learning can be resolved by letting agents include their motives and values in the process of concept creation. Personal motivations are not usually considered a fitting cradle for truth since reality is deemed to be objective, and must be given to us by the world itself. The purpose of the agent’s world-model is merely to reflect that truth. However, as we dig into what exactly is entailed by a “useful” grouping, you’ll discover that even the most apparently objective concepts have more in them of the subject’s preferences than you may at first have realized.

Consider, for example, the concept of dog. Most people would feel comfortable saying that dog is an objective concept. If ever any concept were based on impersonal criteria, dog would be it. So it may surprise you to learn that dog cannot in fact be distinguished from wolf by any known empirical criteria. The two species can interbreed — the result is known as a wolfdog. Their similarity in appearance also makes visual features ineffective for distinguishing the concepts — a husky looks more like a wolf than it looks like a chihuahua. Nor is their habitat the deciding factor — a wolf doesn't become a dog simply because you put it in a living room. To suggest that childhood education and socialization are the formative influences is merely to beg the question — where did the people who taught you learn it themselves? Who came up with the distinction in the first place?

Given these objections, you might propose that the two concepts are therefore on a gradient; but that also begs the question, since it requires you to define what makes something more dog or more wolf. For every empirical criterion proposed, you can find exceptions proving that it was not that feature which forced the distinction; something else must be used to justify the exceptions. One can hand-wave a subtle combination of the above attributes as the source of the concepts, or take a glance at their DNA, but that wouldn't explain why non-scientists, and even children, easily grasp a strong distinction between wolf and dog. What, then, drives the separation between them?

The answer is obvious: dogs are tame. It is humankind's own appraisal of dogs' friendliness, and their utility as helpers to our species, that gave rise to the distinction between the two concepts. “Tameness”, however, is ambiguous and lies on a gradient. You must choose where you draw the line — e.g. would you categorize a wolfdog as a wolf or as a dog? A zoologist may argue that tameness is in fact an objectively measurable property of the species, and so it can serve as the empirical force that pushes the two concepts apart. But not all dogs are tame — many are vicious. Similarly, wolves can be tamed with a bit of effort. And yet we don't label violent dogs as “wolves”, as would be demanded if empirical criteria were the only drivers of concept definition. When you see a snarling rottweiler, your mind is drawn to thoughts of what this creature could have been, if history had been different. It is still a dog, just a “bad” one. So it is not their tameness that continually creates the concept of dog in the observer's mind, but rather his or her desire for them to be tame. Dog is a wish.

Cognitive theories of concept acquisition all expect personal choice to be excluded from the act of designating concepts. Reality is supposed to tell you the truth; you don't tell it anything. And yet every animal and plant on earth that you would classify is subject to the same ambiguity as dog and wolf, since there is no clear way to distinguish a species from its evolutionary predecessor. Where the line is drawn depends on the classification needs of the scientist who studies it.

“Dog” is as much inside the observer as in the creature itself.

There was already a clue to the motivated roots of concepts in Locke's Essay. Concepts, he argued, are composed from experiences which the mind chooses to group together in a way that is useful to it, perhaps because it is efficient to communicate the idea. That glue — namely, what counts as useful — he assumed would be provided by one's free will, which can choose what concepts it will form and what examples will be included in each. He viewed the “will” as a neutral substrate, in that it did not add anything to the substance of the concept, only provided a way for it to coalesce. He didn't dig further into the issue, nor did he try to mechanize that “will” to determine why it groups some experiences together and not others.

Had he gone further, he would have realized that the entire foundation of his outside-in empirical theory rested on an inside-out process of concept invention. The specific motives that shape a given concept determine how it groups experiences, just as the motives for companionship and domestic help shape the designation of dog. Locke may not have thought there was anything of relevance to be discovered in human will and motivation. He, like the theories of concept learning above, placed his focus on the patterns in external stimuli. But to leave out the agent and their motivations is to cut the heart out of every concept. Empirical observations may provide the substance or the occasion for learning a concept, but the individual's motives decide how those experiences will be grouped together.

Every concept is created by the interaction of your needs with the possibilities presented by the environment. If you don't care, if it is not useful, if it doesn't solve a problem for you, then no concept forms — no matter how many labelled examples you see. This has implications for a concept's content, since each instance you attach to it must resolve the underlying motivation. The question isn't “what experiences are part of the concept X?” but rather “what experiences would it be useful for me to group under X?” For example, no matter how enticing an object looks, it is not food unless it addresses hunger or nutrition. On the other hand, odd examples like intravenous drips may qualify as food since they meet the underlying need.
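
One way to picture the proposed reversal is the toy sketch below (all names are hypothetical, and this is an illustration rather than a working theory): membership in a concept is decided by whether grouping the instance under it resolves the motive that gave rise to the concept, not by feature similarity to prior examples.

```python
# Toy sketch of a "motivated" concept: an instance belongs to the concept
# only if grouping it there resolves the underlying need.
from dataclasses import dataclass
from typing import Callable

@dataclass
class MotivatedConcept:
    name: str
    resolves_need: Callable[[object], bool]   # the motive behind the concept

    def includes(self, instance: object) -> bool:
        return self.resolves_need(instance)

@dataclass
class Item:
    name: str
    nutritive: bool

# "Food" is whatever addresses hunger or nutrition, however odd it looks.
food = MotivatedConcept("food", resolves_need=lambda x: getattr(x, "nutritive", False))

print(food.includes(Item("wax apple", nutritive=False)))   # False: looks edible, isn't food
print(food.includes(Item("IV drip", nutritive=True)))      # True: odd-looking, but qualifies
```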

Although bringing motivations into concepts appears to make training AI more complicated, it actually explains how you learn concepts that are otherwise impossible to define objectively. Consider the concept of game. Game has historically been difficult to define based on empirical criteria. It has variously entailed a system of rules, a goal or objective, obstacles, participants, and a myriad of other attributes. Yet no definition has been proposed that does not either exclude unusual games like Calvinball or Snakes and Ladders, or else include examples that are clearly not games.

One criterion, however, has been conspicuously absent from every objective definition of game: that a game must be “fun”. This is because “fun” is a subjective appraisal that varies per observer. Any definition that required a game to be fun would result in contradictory examples, where the same entity was both a game and not a game. And yet this omission is the very cause of the failure to define game, because it ignores how the concept is learned on an individual, subjective basis.

For most of us, the word “game” was acquired in early childhood, when you were invited to participate in activities that you deemed fun, and which involved made-up, often arbitrary rules: tag, hide and seek, and so on. At some point someone said the word “game” before you began playing, and you learned to use that word as a signal to initiate a game. Learning the word was therefore motivated by a desire to have rules-based fun.

At some point, you invited a friend to play a game that you enjoyed. But they rejected your offer, because they didn't enjoy your suggested activity. This may have come as a surprise: you hadn't imagined that something you enjoyed would not be fun for others. You eventually resolved the contradiction by accepting that different people enjoy different activities, and that people will call something a “game” as long as someone, somewhere, considers it “fun”.

This is about as far as most people go when defining game. But others, notably those who design games for a living, found that this introduced a conversational stumbling block. As part of the analysis involved in their profession, they needed some objective way of defining “game” around which they could have productive discussions. The goal was to create an empirically testable set of criteria that everyone could agree on. Subjective, idiosyncratic preferences were excluded as being unproductive, or not contributing to consensus. Thus “fun” was removed from the definition. In so doing they burnt their only bridge to the answer.

Since then, book after book has been written trying to square this circle, with no hope of success. Even Wittgenstein famously stumbled against this stone⁵. It seems no one could accept that game, like beauty, was in the eye of the beholder, because as part of the social discussion around that word, the definition needed to be one they could all objectively use and empirically test. Somewhere in that process they forgot that the idea of game was originally learned out of a personal desire to have fun. “Fun” is not so much a property of a Platonic ideal called “game”; it is the reason you invented the concept in the first place.

When you learned the word “game”, you were learning a useful verbal signal to solve a specific problem, just as the word “food” is a tool a toddler learns in order to satisfy a different need. The scope of the concept denoted by the word extends only insofar as it achieves those ends. The additional step of giving the concept a name happens because the needs underlying the concept have encountered social opportunities to resolve them — by getting help from others. A word is a social tool, after all.

“Game”, like “beauty”, begins its life in the observer.

The need for clear, objective definitions is a separate and additional motivation. You must already have an intuitive, informal version of the concept in your mind before you try to define it objectively. Any properties that you subsequently discover about the concept are a later addition, an elaboration, and not the original impetus that formed it. This process requires extra effort and inventive skill. You would only ever attempt it because it makes collaboration easier, or because you can no longer rely on saying “I know it when I see it”. Finding common ground requires that you excise your messy personal preferences from what is supposed to be the clear and unmuddied stream of impersonal truth. And so, as happened with game, a mismatch between truth and definitions often arises.

You may notice a common pattern between the case of dog and that of game — in both scenarios the objective definition is created by experts who care about accuracy. A lab assistant may, in the context of his research, classify tomatoes as fruits, or bananas as berries, because it suits his need for clear categorization based on identifiable, empirical features. And when he goes home, he may once again revert to his private concepts and view tomatoes as vegetables. Humans can do this: carry multiple conflicting interpretations in their minds based on contextual needs.
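
A toy sketch of that flexibility (purely illustrative): the same item classifies differently depending on which need or context is active, something a single fixed label cannot express.

```python
# Toy sketch: classification depends on the observer's current context,
# so a tomato can be a fruit in the lab and a vegetable in the kitchen.
def classify(item: str, context: str) -> str:
    if item == "tomato":
        return "fruit" if context == "botany_lab" else "vegetable"
    return "unknown"

print(classify("tomato", context="botany_lab"))   # fruit
print(classify("tomato", context="cooking"))      # vegetable
```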

Not so in AI. Empiricism as applied to AI expects the needs and motives of the agent to play no role in deriving concepts. A tomato can't be both a vegetable and a fruit merely because the observer wants it to be one or the other in different contexts. Yet in practice such flexibility is necessary since, as we saw above, the only definitions you can derive for yourself will be based on your needs. As long as AIs are limited to training on empirical criteria and ignore the motivated side of concepts, they will falter at creating concepts, and will depend on humans to externally introduce their ontology.

¹ This reflects the popular AI usage of “ontology”, rather than the philosophical one.

² In a few cases, researchers have observed concepts as “emergent” in an agent’s actions, such as Reinforcement Learning agents that “share” or “communicate”. These, however, are inferred by the observer, not explicitly expressed by the AI itself.

³ An undue dependence on words as the final arbiters of concepts would imply that bilinguals derive two concepts for every translated word, and that homonyms cause conceptual confusion.

⁴ You may include transcendental or a priori concepts — those provided by the form of sensibility — under the umbrella of this criticism. It would make little substantial difference, as this post is focused on the motivated aspect of concepts.

⁵ His answer, in my opinion, merely side-steps the problem and leaves it unresolved.

