Concepts don’t exist — as objective, phenomenological, cognitive, or neural structures

The case for removing concepts from cognitive science and AI research

From Narrow To General AI
14 min read · Oct 28, 2023

It can be difficult to convince someone that concepts don’t exist. Everyday experience appears to provide overwhelming evidence to the contrary. Concepts are not only intuitively perceived to be active in daily life, they are also a widespread feature of theories across AI and cognitive science, where they are assumed to be necessary for symbolic and logical thought¹. Most who read the title of this post would be tempted to brush off the argument as patently, demonstrably absurd. It’s akin to trying to convince a European 500 years ago that God doesn’t exist, when everything around them appears to be evidence of, and indeed presupposes God’s existence. Any contrary argument is likely to be taken as the result of sophistry or word-wrangling, or because some critical piece has been neglected.

Despite their seeming obviousness, it is worth noting that there is still no complete and unambiguous explanation for what concepts are, or how they work on thoughts — and indeed how to program them into AI. The human ability to learn and create concepts is multifaceted and complex. AI theories and implementations generally only touch on one or two of its features, while neglecting large numbers of counter-cases. This has led some researchers, notably Lawrence Barsalou, to suspect that the way we think of concepts is flawed. Perhaps the whole notion of concepts — as a native mechanism for grouping experiences — is untenable.

That is the challenge put forward by this post: it will deny that the ability to group experiences into concepts is a built-in cognitive function. Given the diversity of theories on the topic, some clarification of terms is in order. Across the sister fields of cognitive science and AI research, a broad range of formulations have been used to explain what concepts are, from functions or logical predicates, to “attractors” pulling together experiences within probabilistic spaces. There is one property, however, that is common to all theories: they all view concepts as discrete entities for mental organization. A concept is an identifiable thing or category; concepts are not fungible. Without this property, the notion of a concept stops making sense.

To say that concepts are discrete should not imply that they have clear or static boundaries. Ambiguity is inherent in all concepts, notably abstract or subjective ones like beauty and art. Even the line between salad and dessert may be difficult to draw in practice (e.g. fruit salad). I take this as a given. “Discrete” means that they are specific singular entities, like nodes, against which the mind may compare experiences, and around which it groups them. Connectionist theories of concepts categorize experiences into concepts probabilistically (e.g. this meal is 30% likely to be dessert, 50% salad, etc), but even they require that you have pre-defined the two concepts: dessert and salad. (Categorizing experiences is presumed to be necessary to subsequently reason about them — e.g. if an item is 90% dessert, then we should eat it after the entree.)
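The point can be made concrete with a minimal sketch (the concept names, scores, and numbers here are hypothetical, purely for illustration): however probabilistic the assignment, the concepts themselves must be enumerated in advance, and downstream reasoning operates on those discrete labels.

```python
import math

# Even a probabilistic (connectionist-style) categorizer presupposes
# a fixed, discrete inventory of concepts, declared before any input
# is ever seen.
CONCEPTS = ["dessert", "salad"]

def categorize(scores):
    """Softmax over per-concept affinity scores -> probability per concept.

    `scores` maps each pre-defined concept to a hypothetical affinity
    value for one meal; the output distribution is only defined over
    the concepts listed in CONCEPTS.
    """
    exps = {c: math.exp(scores[c]) for c in CONCEPTS}
    total = sum(exps.values())
    return {c: exps[c] / total for c in CONCEPTS}

# A fruit salad might score ambiguously on both concepts:
probs = categorize({"dessert": 0.2, "salad": 0.7})

# Subsequent reasoning still works with the discrete labels,
# e.g. picking the most probable category:
likely = max(probs, key=probs.get)
```

Note that a meal resembling neither category cannot be expressed here at all: the model can only redistribute probability among the concepts it was given.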

So discreteness is a fundamental aspect of all concepts, without which it is fair to say that concepts as we know them wouldn’t be meaningful. This post questions the validity of any formulation of concepts that relies on discreteness, including but not limited to GOFAI and Connectionist models. I’ll do this by showing that the reasons given for why people believe concepts exist depend on sources of evidence which are unreliable, if not erroneous.

Pro forma, we’ll start by dispelling the most common belief about where concepts come from: the objective world. Though this issue has been largely settled in philosophical circles, it is necessary to dispense with this ancient argument before moving on. Briefly, all concepts that you and I are aware of are at least partly subjective. They are psychological entities into which a given mind groups its various experiences. Different people faced with similar experiences can perceive any concept they are inclined to in those experiences (e.g. calling a flower a miracle or a coworker a jerk). None are obligatory. If due to preference, or error, or brain damage, or ignorance, or even artistic license I interpret a chair in front of me as “ice cream”, this is a function of my own mind, not of the world.

The correspondence theory of truth might argue that the correctness of an interpretation depends on how well an interpreted concept corresponds with reality. Interpreting a chair as ice cream would therefore be objectively incorrect. Yet, as has been noted, the universe will not enforce any one interpretation on all minds. So even within correspondence theory, “correctness” must still remain a human judgment call. To objectively prove that any given interpretation matches reality, you would somehow have to compare your subjective mental concepts against an objective view of the real situation. But the latter isn’t possible. No one can step outside their own mind and see the world as it truly is, so no comparison can be done. Therefore the notion that concepts come from the objective world is either inconsistent — different minds have different concepts — or can never be shown to be correct.

The above arguments were already settled by philosophers a few hundred years ago. Research into concepts has since turned away from the outside world to focus on the inner one. Yet despite being unmoored from objective reality, there must still be evidence of some kind for why people believe that concepts exist, which is where this post comes in. It will challenge those sources and the evidence gathered from them.

To begin with, there is no scientific experiment or empirical observation that can be used to prove that any given concept “exists”, and by extension that concepts exist at all. Experiments and observations require that the observer already have pre-defined the concepts under investigation as well as how to identify instances of them — e.g. the question “is this an instance of a mammal?” requires that mammal is already defined.

This leaves only two remaining ways by which you or anyone else can come to know about concepts. They are (1) linguistic communication from other people, and (2) introspection. The only reason you and I believe that concepts exist is because firstly, we communicate about them as a society; and secondly, you have looked inside your mind and think you see concepts there. Someone mentioned “cars”, and named a concept of car; then you looked inside your mind and saw that you indeed had a grouping, or pattern of internal experiences that was also car. You may have then compared notes with others and come to a tentative conclusion that there is a concept car. This is why you believe concepts exist; you have no other sources of evidence. However, as we’ll see, both language and introspection actually introduce conceptual structures into raw observations where there may not have been any to begin with.

Language structures its content

Let’s begin by looking at language. Language, in order to function, must make its content discrete. The communicative functions of language force us to align our diverse and fluid experiences into a set of finite, common words. If I made up words on the fly to reflect the uniqueness of my experiences, no one would understand what I was saying. Even composite words in languages like German (e.g. Orangensaft = orange + juice) are made of discrete, widely recognized entities; they are more akin to hyphenated words in English (e.g. “hobby-horse” or “old-fashioned”). For a truly new word to be conceived and to enter into use takes time. It can’t be done on the spot, without prior explanation.

Thus the discreteness of concepts is a built-in requirement of language itself, one that does not necessarily reflect what an individual mind is doing. When you have difficulty expressing an idea or feeling, that is a sign that there is a disconnect between your mercurial thoughts and the limited number of words you are being forced to shoehorn them into. Concepts in the context of language are only “concepts-as-words”; they reflect the entries in the prevailing dictionary, and they change only slowly as societies evolve and coordinate their activities.

So much for language. But what about introspection? Surely as you look into your mind and see patterns of events, they appear to coalesce around various concepts like car or beauty. There must be some innate cognitive structures shaping your thoughts around these, right?

Introspection creates structures

Introspection is problematic. Although we may feel confident that we can see clearly into our own minds, it is prudent to think twice about what you observe in there. For example, imagine that I were to think of an image of a house. If I then ask myself what concept that image is a part of, the first thing that might pop into my mind is “house”. That is, I may think of the English word “house”, either as a sound or as its written letters. That word, and other concrete thoughts which appear to me are the only way that I can identify what concept I have assigned the image to. I do not have direct visibility into the underlying forces that made this connection happen (more on that in a bit).

However, the original image and subsequent word are not the same thing. The latter is an interpretation; one that depended on the set of English words I happened to know. I have stepped outside the image itself and added a thought-interpretation onto it. Depending on what I was trying to do, I might make recourse to any of a number of interpretations. If I were contemplating housing prices, the words “semi-detached”, “mansion”, “duplex” or “townhouse” may have occurred instead of “house”. Other images, symbols and feelings may also occur and become attached. Regardless of how I understand my own thoughts, and even if I don’t use English words, the act of perceiving what concept something is assigned to is always an act of perceiving its interpretation, which means I am connecting or transforming it into something different from what it is.

The label I assigned to it is not necessarily “true” or accurate to the content of the image either; it may merely have been based on convenience, as being “close enough” given what I have. I was, in a sense, forced to shove the experience into one of many existing interpretive categories, whether correct or not, because that is the only way my act of introspection could have determined the meaning of the image. Connecting the image to the word “house” had meaning because the word “house” had preexisting connections that made it useful². If I had instead interpreted the image of the house into a vague set of thoughts, such as a jumble of letters or images that I felt uniquely identified it (e.g. “⇰↺♫◭🔴✭$⌥”), this would not be a useful interpretation. It would be meaningless.

Thus the act of introspection itself required that I connect the specific, unique image of the house to available, common identifiers. This is analogous to what happened in the case of language above. The image was not necessarily attached to those identifiers before I tried to understand my thoughts. Indeed it often takes a bit of effort to know how to correctly interpret a thought — most thoughts are vague and nebulous. This makes the assignment an intentional act, not an automatic mental function. Had I not wanted to introspect, or to interact with my mental images in some other way, the connection may never have arisen. I began looking at my thoughts with a question: “what is that in there?” This act of looking demanded an answer. And the answer I got was the interpretation: “house”.

This argument alone does not prove that concepts aren’t native mental structures. There may be some underlying force that drew my mind to connect that image to the word “house” and not, say, to “giraffe”. In this sense, concepts may be more like attractors — drawing together thoughts along common threads. The English word “house” may have been my attempt to name that underlying force. Were you to exclude the reference to that word, you may still sense an underlying, non-linguistic feeling of “house-ness” in the image. Perhaps therein lies the concept; the feeling may be a sign that there is something common to both the image and the word, pulling them together.

It is worth noting that the feeling of “house-ness”, which we are taking as a sign for the existence of the concept, is not identical to the feeling of “duplex”, or “mansion”³. The feeling of the image changes — and so presumably does the concept — depending on the interpretation. And the interpretation changes based on what you’re trying to do. The word you connect the image to can even influence the feeling of the original image. For example, knowing that a house you just saw is technically designated as a “mansion” may shift how you feel about the same image of the house. Thus it is an oversimplification to say that there is an underlying concept which acts as a one-way, centralizing cause connecting thoughts together.

More importantly, the feeling itself is also an introspective entity, just like the images and words. Your feeling about something may change over time. The concept that appears to be designated by the word “mansion” may feel glamorous when you are young, but may start to feel exploitative or wasteful as you get older. So neither feelings nor words clearly indicate the existence of a singular, distinct concept.

Critically, the feelings and even words that you connect to thoughts migrate for personal, motivated reasons, and not because the data changed. In all the examples above, my interpretation shifted based on what I wanted or what I needed to accomplish. The consideration of house prices and affordability got me thinking about “townhouse” and “duplex” and their associated feelings. Considerations of politics and fiscal responsibility reframed my feelings about “mansion” into a less complimentary light. Were I to start thinking about family and safety, the word “home” and its associated feelings might arise, all from the same image of a house⁴.

So what are we left with? Where is that innate, intuitively obvious structure called “concept” in all this mess? There doesn’t seem to be any one place we can find it: neither feelings, nor words, nor images, nor patterns of thinking are clear indicators of their existence. Nor are concepts necessary for cognition, as is so often presumed. As discussed in previous articles⁵, your motivations and goals can, by themselves, determine which thoughts you connect with which other thoughts, thus obviating the need for concepts that would perform that function. This raises an obvious question: if concepts apparently don’t exist, why do we have the word “concept” at all?

Because we want to see them

The answer is that concepts do exist inasmuch as it is useful for you to believe they exist. The title of this post asserted that concepts don’t exist as phenomenological, neurological, objective, or cognitive structures. And this is still true: concepts do not arise automatically in the mind as though driven by some innate neurological or cognitive capacity. But concepts do exist in two other forms mentioned already: as social inventions, and as introspectively imagined structures.

We already saw how, as social inventions for communication, concepts and words are roughly aligned. In everyday conversation, a concept is a thing we can somehow describe using existing words — or by inventing new ones, as long as our language group agrees. Words are the communal, discrete, and stable basis on which discourse around concepts revolves; subjective, personal concepts are too fluid and differ too greatly between individuals. Any ambiguity or confusion that arises during discourse only occurs when people attempt to clearly align on what a given word actually means; e.g. when they try to apply “salad” to specific meals.

Introspectively, on the other hand, concepts are imagined structures; they are artifacts of the very act of introspection, and they serve its purposes. We imagine we see them in the pattern and flow of our mental events because they are useful building blocks for self-understanding. This is true both for individual concepts (e.g. desire, happiness) and also for concept itself. Just as the imagined concept of me or self is useful for understanding and communicating my needs, designating a mental event as a concept can be useful for understanding my own cognition. Both inventions are nonetheless still drops of paint in a chaotic river of thoughts.

And just like all other concepts, the need to structure my introspective activities in terms of concepts may disappear at any time if it becomes useless. It is always possible for me to think about my own experiences without believing that I have concepts. I can consider the usefulness of each specific thought connection, with utility being the mechanism pulling thoughts together and apart; all while remaining oblivious of concepts. Concepts are only a passing, convenient invention of introspection, adequately serving the purposes of self-understanding.

Concepts exist in the same way beauty exists: as social constructs and introspective creations.

However, if you are trying to design AI, concepts can distort the project by injecting into the mechanical underpinnings what are merely introspectively useful constructs. This, I believe, is the main reason concepts have been a hassle for AI research, and why the resulting models end up being rigid, brittle, over-constrained, or confused; we are chasing a phantom architecture.

Ignoring concepts is difficult

Letting go of the belief in concepts is not easy. The very premise seems self-contradictory. It is difficult to write (in words) about the fact that concepts don’t exist, since the functions of language and human communication require that you frame everything into discrete units, which strongly suggests concept-like structures. Introspection also requires that you re-interpret your thoughts into common, centralized labels in order to understand and communicate them. Both processes continually mislead us; just as for centuries we believed that subjective colours are part of the external world because our eyes interpreted them in every visual experience.

No matter how hard you try to recognize that concepts don’t exist, and no matter how compelling the evidence, the brain defaults to them at every turn. Concepts are useful; they provide a stable foundation, a sure footing on which to begin analyzing the flow of your thoughts. This makes them intellectually attractive. In contrast, without concepts introspection feels unstable, like standing on a plank in the middle of the ocean. Few people are comfortable with that situation. The clarity and productivity of our own ambition for self-understanding both require us to make up these units of cognition called concepts. Otherwise you might feel like you have to give up on your introspective ambitions altogether.

To eliminate concepts from cognitive science demands that we bring in a replacement — namely a more accurate or detailed account of moment-to-moment contemplation that does not involve concepts. Just as cognitive science long ago eschewed consciousness as the driving force of all mental activity, recognizing the illusory nature of concepts is the next step in the development of the field. This recognition, and the subsequent change of perspective, are indispensable if our goal is not merely social facility or introspective comfort but rather progress towards a more productive and more mature stage of AI research.

¹ For a breakdown of why concepts are not necessary for logical thinking, see this series on the roots of logic.

² This is in fact the root of the word “concept” — it means to “take together”.

³ Whether or not “duplex” and “house” could be placed in a hierarchy of concepts is a later elaboration and invention; their connection is not immediately obvious from the words themselves.

⁴ Each interpretation is more like a transient affordance; it reflects the momentary usefulness of that entity to your goals.

⁵ Also here and here.



From Narrow To General AI

The road from Narrow AI to AGI presents both technical and philosophical challenges. This blog explores novel approaches and addresses longstanding questions.