Why is it so hard to know what thoughts are “about”?

Brentano’s intentionality confronts Hegelian self-identity

From Narrow To General AI
14 min read · Sep 15, 2023

In 1874, Brentano’s Psychology from an Empirical Standpoint proposed that every mental event — thought, belief, feeling, etc. — is directed towards some object or content. This notion, which he labelled intentionality, has an intuitive appeal. A moment’s introspection seems to confirm that every one of your thoughts is “about” something, which suggests that “about-ness” may be a fundamental feature of mental activity. Yet the more you dig into this apparently simple observation, the more it reveals an unsuspected, even self-contradictory depth.

The goal of this post is to peel back the notion of “about-ness” layer by layer and get down to its roots. To focus the discussion we’ll address the most clearly intentional type of mental events: thoughts. We’ll start from a naive interpretation of what thoughts are “about” and develop it, with each layer addressing issues arising in the previous one.

Layer 1: Thoughts have a referent in real life

The simplest way to think about intentionality is to relate every thought to some referent in real life. That is, you assume that whenever you have a thought about some identifiable “thing”, there is, in the world, a real thing that somehow matches the thought; or at least it matches an experience you had of it.

The first and most obvious objection is that humans can imagine and invent things that don’t exist in any meaningful sense. Even in cases where there is an actual referent or experience, such as when you think of a bird you saw the day before, it is possible that it was an illusion or a misunderstanding. Perhaps it wasn’t a bird, but a plane that you interpreted as a bird.

These examples suggest that thoughts inhabit their own space, separate from objective reality, and that their connection to that reality is difficult to establish. The last four hundred years of epistemology, from Locke and Berkeley through the post-modern era, have been an extended analysis of this discrepancy between mind and reality. Given that, at best, a thought can be correlated with an experience — not with reality itself — perhaps we shouldn’t try to look for a real-world correlate for every thought.

Layer 2: Thoughts have explicit content

To address the above issues we can update what “about-ness” means. The notion that thoughts are “about” something may not be a statement of their connection to objective truth, but simply that they are directed towards some content, whether taken directly from experience or invented.

Take a look through your own thoughts. Are there any that don’t involve some content? Even thoughts of vague images and feelings are “about” those images and feelings. For every thought you have, you should be able to identify what it is “about”. So when you think of a bird, you might communicate that to others using the English word “bird”. You can do the same for parts of the bird, like the wing that is pointing down and to the left, the three feathers you imagine on it, etc.

Unfortunately there is no single word or phrase for the specific combination of features you are imagining. Words are always generalizations. It wouldn’t make sense to come up with a word for something that will only be experienced once, and by one person. It is their recurrence across experiences, such as the many birds we see every day, that makes a word useful. If you try to describe a particular experience, its time and details, you can only do it using words that are common, general, and universal. As Hegel observed in the Science of Logic:

By “this” we mean to express something completely determinate, overlooking the fact that language, as a work of the understanding, only expresses the universal.

If you say your mental image is about a “yellow bird”, the image your listeners will conjure will be either mistaken or incomplete. Perhaps they are lacking some detail that is vital to what the thought is truly about. Even if you were to try to draw your mental image on paper, your physical drawing wouldn’t match the vague, nebulous thought-content exactly — artists know this well. So you can never completely and comprehensively express the content of your thought to others. But perhaps this is only a limitation of language and of drawings. Surely you yourself know, when you look into your mind, what the exact content of your thought is.

Layer 3: Thoughts have phenomenal content

You may never be able to communicate your thoughts to others exactly, but no matter — you can look into your own mind and see the content yourself, and maybe that’s enough to know what it’s about. For example, I can inspect the image of a yellow bird in my mind and study its details, the direction of the feathers, the angle of the beak, the colour of the body…

Or can I? I’ll be honest, I don’t know exactly what colour the bird in my mind is. I couldn’t, say, give you an RGB number like #F9E801. When I look at the mental image, all I can say is that parts of it are somewhat yellow. I can’t identify or name it any more precisely than that, even just for myself.

It feels strange to say that I can’t precisely identify the details of my own thoughts. It’s kind of like not knowing my own birthday. We generally think we have a pretty strong grasp of what our thoughts are about (I mean, it’s right there, I can see it), yet this simple act is more complicated than originally supposed. I guess I shouldn’t be surprised; introspection — looking into my own mind and analyzing it — doesn’t grant perfect knowledge of its contents.

The act of identifying anything, whether in your mind or in the world, is not just the act of sensing it and confirming that something is there; it requires you to assign it some identity, which is to say a common idea such as “yellow”. Kant, in his Critique of Pure Reason, proposed that having an experience requires that you subsume and unify it with other perceptions in a way that “understands” it. Otherwise, he argued, you can’t be said to have “cognized” anything:

All cognition requires a concept, however imperfect or obscure it may be […] the latter is always something general, and something that serves as a rule.

If I were to learn more precise hues, like canary yellow or egg-yolk yellow, I might be able to interpret my thoughts more precisely. Yet the interpretation will never be complete. I couldn’t be sure that the colour I matched the yellow to is exactly the one in my mental image, only that it’s close. Nor can I confirm the angle of the wing to an accuracy of 0.1 degrees; I can only assign it to a rough category, like 45 degrees down-left. The mental image itself is too imprecise to do any better.
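To make this concrete, here is a toy sketch in Python (my own illustration, nothing from Brentano or Hegel) of what assigning a stimulus to a rough category amounts to: the raw stimulus is never reported as itself, only as the nearest label in whatever catalogue of colour concepts I happen to possess. The hex value and the coordinates assigned to each named colour are illustrative assumptions.

```python
# Toy model: "identifying" a colour means snapping it to the nearest
# concept in the current catalogue. All values here are illustrative.

def hex_to_rgb(hex_code):
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def nearest_concept(stimulus, catalogue):
    """Return the named colour closest to the raw stimulus."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(catalogue, key=lambda name: dist(stimulus, catalogue[name]))

stimulus = hex_to_rgb("#F9E801")  # the "actual" colour of the mental image

coarse = {"yellow": (255, 255, 0), "blue": (0, 0, 255)}
refined = dict(coarse, **{"canary yellow": (255, 239, 0)})

print(nearest_concept(stimulus, coarse))   # -> yellow
print(nearest_concept(stimulus, refined))  # -> canary yellow
```

Note that learning a finer-grained concept changes the answer even though the stimulus itself never changed, a point that Layer 5 returns to below.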

In this sense, the content of my thoughts is inaccessible to me except by way of roughly similar ideas to which I connect them. To say that a thought is “about” something is actually to assign it to a similar or related thought, like the words “canary yellow” or “45 degrees”. So regardless of what I believe the thought is about, my answer will either be imprecise or impossible to verify. This makes it more difficult to assert that thoughts in and of themselves are about anything.

Layer 4: Thoughts are stimuli

None of the above, however, detracts from the fact that the thought itself is specific and definite. All it implies is that every thought is merely a set of stimuli or inputs. Its “thing-ness”, which is required for a thought to be about something, is a later addition or interpretation. When I say my thought was about a “bird”, “bird” is a subsequent label that I add onto the original stimuli. The thought itself simply exists. At any one moment it may have specific sensory details, but to identify these is, as Hegel pointed out, to unite them with something other than what they are, to re-interpret the content as something that is not identical to itself. No matter how similar the thoughts I connect to the image are, I can never capture its identity.

[Image: Thoughts as sensible points in space.]

Is the case really that hopeless? Perhaps if I try very hard and focus intently, in the limit, I could somehow precisely identify what the thought contains, either as a whole or in parts. Surely there is something there, even if I have difficulty identifying it. I may not know the precise shade of yellow, but I know that it isn’t blue. Something about the mental image is driving me closer and closer to identifying the actual colour. So let’s follow this trail, and see where it takes us.

Suppose I tried to “inspect” the details of the bird in my mind, to know exactly what it is. I may focus or zoom in on the wing in the image, and there see three feathers on it, or the outline of some feathers. If I zoom in on the feathers, I may see individual clumps, and so on. I could keep going, to the level of strands, and even atoms. However, the original image didn’t contain that much detail. This implies that what I’m “seeing” is actually an ad hoc invention. It’s what I might expect to see if I were studying a real object. The mental image is being altered by the act of inspecting it.

[Image: Details of the mental image are invented.]

The same is true of its colour. When I try to focus on its colour, I am either adding a word (“canary yellow”) or a new thought of a patch of yellow on top of the original thought. I know this because when I look at parts of the bird that I hadn’t explicitly given a colour, the act of looking makes a colour appear in my mind. Curiously, focusing on it further seems to brighten or darken its shade — it’s hard to keep it the same colour. Although it feels like I’m just observing what’s already there, there is nothing actually there for me to dig into — there is no real image that I can focus my eyes on.

[in] its subject matter, thinking […] is essentially elaborated within it; its concept is generated in the course of this elaboration and cannot therefore be given in advance — Hegel, The Science of Logic

Despite this, there is still something driving the specificity of the interpretations. When I focused on the wing, I saw feathers, not sports cars. Why? Because I had somehow identified it as a wing, and that is what I expect wings to have. By unconsciously assigning it to a type, I could then extract or invent its specifics. The more precise the type is, the more precise the subsequent invention will be — e.g. a canary vs. a bird in general. Even if I decided to interpret the wing as nebulous and blurry, I would be assigning it the idea of a nebulous yellow shape, and from there generating specific thoughts of blurry yellow lines and corners.

[Image: A nebulous and vague interpretation.]

Layer 5: Thoughts are about concepts

Perhaps I’ve been going about this all wrong; perhaps these groupings are what the thought is “about”. Maybe it doesn’t matter that my interpretation isn’t an exact recreation of the mental image itself. In fact, how useful would that be? To simply say that a thing is equal to itself is to copy and paste it a second time. What does that tell me about it? It is better to associate the original image with something different, like the word “bird” — that, at least, is doing useful work.

[Image: Concepts let you generate specifics.]

However, if I just say that my thought is about a “bird”, that word itself is only a set of sounds particular to my culture. So is every concrete thought I can think of. To associate concrete thoughts with other concrete thoughts does not necessarily identify what they are about. The latter could be an incidental association or memory, like associating the image of a canary with a thought of a trip you took to the zoo. There must be an underlying pattern that makes the connection meaningful. We need something else, something that has no specific content, but somehow defines what the content is about: we need a “concept”.

The concept of wing doesn’t have one specific image associated with it, rather it allows you to imagine a variety of wings as needed — a black wing, a yellow wing, a wing in flight, etc. It also lets you generate words like “wing”, “Flügel”, and other related, concrete thoughts. “About-ness”, then, seems to be the act of assigning to every thought one or more concepts, concepts which precede the thought and come from outside it. Once assigned, you can interpret a plethora of new thoughts that go beyond the original stimuli and give it meaning. The lack of specificity that we considered a problem is really a boon.
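As a rough sketch of this generative picture (again my own illustration, with invented names, rather than anything proposed by the sources above), a concept can be modelled as a recipe that produces concrete instances on demand, instead of a single stored image:

```python
import random

# Hypothetical model of a concept: no fixed image, only the capacity
# to generate varied concrete instances and labels when asked.

class Concept:
    def __init__(self, name, colours, poses, words):
        self.name = name
        self.colours = colours  # plausible colours for an instance
        self.poses = poses      # plausible configurations
        self.words = words      # concrete labels, across languages

    def imagine(self):
        """Generate one concrete instance; a different one each call."""
        return (random.choice(self.colours), random.choice(self.poses))

wing = Concept(
    name="wing",
    colours=["black", "yellow"],
    poses=["folded", "pointing down-left", "in flight"],
    words=["wing", "Flügel"],
)

print(wing.imagine())             # e.g. ('yellow', 'in flight')
print(wing.imagine())             # e.g. ('black', 'folded')
print(random.choice(wing.words))  # e.g. 'Flügel'
```

The design choice matters: nothing stored in the object is the wing; there is only a mechanism for producing new concrete thoughts about wings.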

As you learn new concepts, like canary yellow, or light canary yellow, your interpretation of a given mental image may become more refined and more precise. The stimuli that make up the thought may not change, yet your interpretation of them will. This means that “about-ness” is only a momentary determination, since it can shift as you acquire new and better concepts.

However, there’s a trade-off: the more precisely you match the concept to the image, the less meaningful or useful it becomes, since it is connected to fewer and fewer outside thoughts. Which end of the spectrum is the right interpretation? Is it good enough to say that my thought is about a bird in general, or do I need to be precise to truly define what a thought is “about”?

Perhaps it’s the whole spectrum. Perhaps there are an infinite number of concepts that a given thought can potentially be “about”: bird, canary, flying thing, yellow thing, pet, etc. Which of them is connected at a given time depends on your current catalogue of concepts. This may be all well and good; however, Brentano’s notion of intentionality requires that there be a definite identity for what every thought is about, at least in a given moment. The only conclusion, then, is that a thought is not in itself “about” any concept until you make it about that concept.

Layer 6: About-ness is a stimulus too

It may feel intuitively satisfying to end the investigation here. Unfortunately, there is just one more layer left to unwrap. Suppose I accept that concepts are ultimately what a thought is about. How can I find out which concepts a given thought, say, that of the yellow bird, is connected to? This is not easy, since concepts are not themselves visible in the mind; you can only perceive their concrete interpretations, like the word “bird” or the image of a wing. The fact that I have a single word “bird” doesn’t necessarily mean that I have only one concept related to birds — language and concepts don’t line up perfectly. I might call it a “bird” when I mean “budgie”, since I haven’t learned the latter word yet.

Moreover, every time I look at a particular mental image, the accompanying thoughts, including the words, will be slightly different. This might mean that the concept I connected it to is different as well. How would I even know? Perhaps concepts are fluid and blend together dynamically. In fact, the more I mull over this particular image of a yellow bird, the more it feels like I’ve ended up creating a new, specific concept of such a bird with those features — i.e. a yellow, canary-like bird looking to the right.

[Image: I may create a new concept as I think of the bird.]

Asking what a thought is “about” can only lead my mind to connect that thought to other, varying thoughts and images: e.g. the words “bird” or “canary”, or an image of a wing. To identify the concept behind these new thoughts I must now ask what the new thought is “about”, e.g. what concept is the word “bird” referring to? That only raises new and different thoughts, and new questions. This is troubling, since it seems I can never identify the concept; I can only have new thoughts related to it.

[Image: A rolling process of thoughts leading to thoughts.]

Thoughts can only lead to other thoughts, and trying to find their stable basis only leads to even more thoughts. There is no clear or definitive moment — to use a Hegelian term — at which its “about-ness” can rest. There is no fixed structure to discover, only an ongoing process of thinking of new images and sounds. The act of trying to identify what a thought is “about”, and the answer you come up with, are both ephemeral moments along the way.

You can see how the proposition that thoughts are “about” something hides a lot of complexity. When I tell you that my thought is about a “bird”, all I’ve done is attach a word to it, which you then take up and interpret as you wish, in a way that is likely wildly different from what I had in mind. Nor can I identify what a thought is about for myself in a way that gets at it specifically — every act of perception is an interpretation of the thing as something different from what it is. This interpretation is also invisible to me until it becomes a concrete sound or image, like “bird”, or a swatch of yellow. That makes it a new thought, the “about-ness” of which must now be investigated.

Between Kant and Hegel, much of this analysis has already been done. The distinction between the two philosophers maps roughly onto the two ends of this post. Which of them you side with depends on how much you value stability when investigating your own mind. People find it uncomfortable to think of the mind as fluid, or to admit that we lack a clear and stable way to identify what thoughts are about. Intentionality attempts to give a stable foundation on which to build an understanding of your own mind. Without stable identifiers like concepts, the structures and relations between varying mental events are exposed as mercurial and illusory.

Although my blog generally addresses the topic of AGI, this foray into philosophy is a critical one. Modern thinking about AI centres around creating stable ontological structures — physically consistent spaces, task definitions, logical relationships, semantic networks, etc. — rather than fluid processes. The notion of intentionality therefore finds a welcome home in AI, since it gives clarity to the researcher and lets them test whether a given model is working correctly.
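To make “stable ontological structure” concrete, here is a minimal sketch (my own toy example, not any particular system) of a semantic network as a fixed set of triples, whose appeal to a researcher is precisely that a model’s output can be checked against it:

```python
# Toy semantic network: a fixed set of (subject, relation, object)
# triples. Its value lies in its stability: answers can be verified
# against it, unlike the fluid process described in this post.

ontology = {
    ("canary", "is_a", "bird"),
    ("bird", "has_part", "wing"),
    ("canary", "has_colour", "yellow"),
}

def entails(triple, kb):
    """True if the knowledge base contains the triple directly."""
    return triple in kb

print(entails(("canary", "is_a", "bird"), ontology))           # True
print(entails(("canary", "has_part", "sports car"), ontology)) # False
```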

Even Brentano and those who followed in his wake concerned themselves with the logical relations and truth values of the objects of thought — as singular or general, real or fictitious, etc. But all of that is irrelevant if we can’t even establish that something definite is being referred to. It would be like trying to figure out the blood type of a fictional character. If, on the other hand, you interpret “about-ness” as a protean, unstable flow of possibly contradictory moments, that may more accurately reflect what the human mind is about, albeit at the expense of an easy set of categories.
