Thinking is Motivated

Optimistic world-modelling is useful when planning

From Narrow To General AI
8 min read · Jun 14, 2023

This is the eighth entry in a series on AGI. See previous post here. See a list of all posts here.

Summary of posts 1 to 7: To develop a theory of Artificial General Intelligence (AGI), begin by focusing on invisible, automatic mental behaviours that drive higher-level, complex ones. One such behaviour is “becoming aware of something”, either something external or within one’s own mind. This results in the formation of concrete memories/thoughts (sights and sounds). Re-experiencing such stimuli as thoughts, the mind may once again remember them as if they were really experienced, thus allowing you to compose thoughts together.

So far we’ve discussed concrete thoughts, such as when you think of an image of a bear, or the sound of a tune. These are recreations of experiences that, at some prior time, you became aware of. As they are re-experienced in the mind, they continue to be a type of sensory experience, similar to a mild hallucination. This means you can become aware of them once again. You can even think of two concrete thoughts in series, and notice their combination as a new, composed thought. Or you can combine current stimuli from the external world together with self-generated thoughts.

Put this way, the faculty of introspection is the same as that used for external perception. The same set of mechanisms by which you perceive and remember a car in front of your eyes is used when you perceive a memory of a car recreated for your senses by your mind.

There is a separate aspect of introspection, namely perceiving your own mind while it learns, which we’ll put to one side for now. And we continue to defer the discussion of complex or abstract thinking until a later post. At this stage I’d like to pick up a question that was raised in A Layer of Self-Generated Experiences, namely why these concrete thoughts are created in the first place.

“Becoming aware” is a momentary action. It has time-bound, specific content. It is not about compressing a universe’s worth of information into some compact format. It is instead selective. It appears to be driven by some immediate curiosity or attention-grabbing event.

Look around you and find some red object. When you spot it, i.e. become aware of it, you will form a concrete memory of it. Later, if I asked you to describe what you found, you would recreate this memory and describe the self-generated image in your mind. The moment you engaged with the original request to find a red object, your “attention” was drawn to the world around you. It was looking for something. The new thought or memory of the red object solved the problem in a way that nothing else until then had. Attention, in this sense, had criteria. It was a problem looking for a solution.
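To make the trigger-and-criterion idea concrete, here is a minimal toy sketch in Python (my own illustration, not from the post; the `Percept` type and `attend` function are hypothetical). It shows attention as a search driven by a criterion: a concrete memory is formed only when something satisfies the active problem.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    description: str
    is_red: bool  # stand-in for whatever feature the current problem is seeking

def attend(percepts, criterion):
    """Attention as a problem looking for a solution: scan the scene and
    store a concrete memory only for percepts that satisfy the criterion."""
    memories = []
    for p in percepts:
        if criterion(p):          # "becoming aware": this percept solves the open problem
            memories.append(p)    # only now is a concrete memory formed
    return memories

scene = [Percept("blue mug", False), Percept("red book", True), Percept("grey lamp", False)]
found = attend(scene, lambda p: p.is_red)   # problem: "find a red object"
print([m.description for m in found])       # -> ['red book']
```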

The same is true of self-generated experiences. Think of a brand of desktop computer. Any number of thoughts may pass through your mind before a brand name shows up, but only a brand name will solve the problem and create a memory. You will “become aware of” your thought of the brand name in the same way you became aware of the red object.

Sometimes the trigger for paying attention comes from your own thoughts, at other times it comes from outside. You hear a loud bang (problem: “what was that?”). Then you look around to find an explanation, and spot a book on the floor. Seeing the book triggers a memory learned from a prior experience: you think of a book falling from above and making a noise. This thought explains the noise and solves the original problem. You now become aware of that thought. When you think of the noise again, the explanatory memory of a falling book pops into your mind again. A new thought has been created that wasn’t there before.

The key point in these experiments is not the solution (which we’ll address in a later post) but rather the trigger: each was initiated by a need. Attention was looking for something; it had a goal.

Everyday thinking is often highly motivated. Your mind is constantly searching for answers, solving the riddles of life, planning for eventualities, figuring out what the right choice is, and so on. Even when learning about the world, the more motivated you are, the more attention you pay, and the better you learn. For example, when looking at a bookshelf, you don’t memorize all the books, only the ones you notice, i.e. become aware of.

Despite all this, we rarely consider knowledge, such as learning how physical objects fall, to be driven by motivations. The assertion feels counter-intuitive. The idea that “I did X so that I could achieve Y” is well understood. But the notion “I thought X so that I could achieve Y” sounds awkward. And even if learning certain types of thoughts were motivated, it may not be true that all such learning is. It is possible that other thoughts are instead based on, say, frequency or truth, in whatever way “truth” is defined.

The common theory of cognition¹ splits thoughts into (at least) two parts — those dealing with facts, and those dealing with desires and plans. In the first stage, the mind attempts to statistically model and reflect the way the world works. Memories are one example of this pattern. The purpose of such world-modelling is to set up enough prior understanding so that in the second planning stage you can make good choices. This second stage is where motivations start to influence thinking.
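To make the contrast explicit, here is a rough sketch of that two-stage picture in its simplest form (the toy transition table and function names are my own illustration of the standard model-based view, not anything proposed in this series): first a motivation-free world model is learned, and only afterwards does a goal enter, during planning.

```python
def learn_world_model(experiences):
    """Stage 1: statistically model how the world behaves, with no goal in sight.
    Here, a crude deterministic transition table: (state, action) -> next state."""
    model = {}
    for state, action, next_state in experiences:
        model[(state, action)] = next_state
    return model

def plan(model, start, goal, actions, max_depth=10):
    """Stage 2: motivations finally appear, as the goal the planner searches for."""
    frontier = [(start, [])]
    for _ in range(max_depth):
        next_frontier = []
        for state, path in frontier:
            if state == goal:
                return path
            for a in actions:
                nxt = model.get((state, a))
                if nxt is not None:
                    next_frontier.append((nxt, path + [a]))
        frontier = next_frontier
    return None

experiences = [("home", "walk", "street"), ("street", "bus", "office")]
model = learn_world_model(experiences)
print(plan(model, "home", "office", ["walk", "bus"]))  # -> ['walk', 'bus']
```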

If you dig into this theory a bit more, you unfortunately run into a host of practical problems. The universe is far too vast and complex to model comprehensively. Since there is so much to learn, the next best option is to learn only as much of the “truth” as is useful for making good plans: immediate factors, critical occurrences, noteworthy events. But how can the mind know beforehand which thoughts will be “useful” for planning and which won’t? How can you pay attention in a way that will be relevant to a future you don’t yet know?

The solution is unexpected, yet it has been staring us in the face all along. We already mentioned it above: you learn to think of things which solved a problem for you. A toddler who is hungry may hear the word “food” right before food arrives. The word appears to “cause” a solution, so he remembers it. This sound pattern, “food”, now becomes both a thought and an intention. Next time he is in a similar context², the word “food” will appear in his mind as a solution, a desired outcome. All that’s left is to learn how to produce it vocally. This approach combines the two stages, world-modelling and planning, into one step: you learn to think of what you have found to be useful.
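A minimal sketch of this combined, one-step approach might look like the following (again purely illustrative; the dictionary keyed by problem context is my own stand-in for whatever associative mechanism the mind actually uses). The only thing learned is the thought that resolved a problem, stored against the context in which it worked.

```python
solutions = {}  # maps a problem context to the thought that resolved it

def on_problem_solved(problem_context, thought):
    """Learning step: remember the thought precisely because it solved the problem.
    World-modelling and planning collapse into this single association."""
    solutions[problem_context] = thought

def recall(problem_context):
    """Next time the same problem arises, the stored thought is re-generated
    as both a memory and an intention (a desired outcome)."""
    return solutions.get(problem_context)

on_problem_solved("hungry", "the word 'food'")
print(recall("hungry"))  # -> the word 'food'
```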

In a previous post we showed how the many so-called types of thoughts, such as facts or plans, generally overlap one another and are difficult to disentangle. For example, when you see a cow and think the word “cow”, that thought could be both a fact (part of the world-model) and a plan (what should I call this when asked what I saw?). The approach described in this post treats those types of thoughts as one, namely as self-generated, imagined solutions to problems. It asks: “What would I want to see or hear in this context? What has solved my problem in the past?” Whatever did so now appears in the mind once more.

There are a few apparent gaps in this approach. To begin with, why only learn to think of solutions? Wouldn’t it be equally useful to learn to think of an upcoming negative event in order to avoid it? If you were in the midst of having an experience that you considered a problem (for example, you went to a website and it installed a virus), your main goal at the time would not be to remember the problem, but to solve it and remember the solution. The mind refuses to come to rest on a bad or unpleasant experience; it immediately tries to think of a solution. Indeed, part of the definition of “bad” is that the mind innately tries to avoid or overcome it.

It does sometimes seem like your mind is fond of imagining problems. Pessimists, “whiners”, or “negative Nellys” are often faulted for doing so. In such cases the person is communicating to others something they see as a potential problem which needs to be addressed. They are imagining the words they can use to warn others and mobilize a solution. “We’ll lose money!” they proclaim, with the hope that saying so will change the course of others’ actions. Naming a problem using language is therefore a type of solution.

Another possible issue with this theory is simply that you can think of problems, so the suggestion that most or even all thoughts in the mind are solutions seems contradictory. The answer is that the mind has many different problems, and the solution to one may be a problem for another. You can think about the fact that eating cake is making you unhealthy (a problem), but only because eating the cake is itself appealing. Your mind drives you toward these imagined problems because there is another reason to do so. For example, you may want to know what would happen in a dangerous setting (say, a dark alley). This knowledge may give you some control over the situation, help you warn others, or let you avoid the alley altogether. But you may not like the answer if you are currently in said alley.

Because motivations are nuanced and intertwined, much of the above may still seem inadequate. The assertions made in this post seem to demand that we delineate what exactly constitutes a “problem” and a “solution” for the mind. How do these two opposing forces come into existence? How do they interact? This will be the topic of the next post.

Next post: Mental Tensions

¹ Namely, model-based AI.

² The phrase “in a similar context” may need clarification. In a few posts we’ll talk about how solutions — both actions and thoughts — created by certain problems (referred to as tensions) are more likely to be elicited the next time you are in the same problem situation. In the example given, the problem context is the toddler’s hunger, and the solution is the sound-thought of the word “food”. Two components together make up a context: the problem, and the current sensory inputs.


Written by From Narrow To General AI

The road from Narrow AI to AGI presents both technical and philosophical challenges. This blog explores novel approaches and addresses longstanding questions.