The moral obligation to have a memory

How you invent a memory to fit in and collaborate with other people

From Narrow To General AI
Oct 25, 2024

This very animal [human] who finds it necessary to be forgetful, in whom, in fact, forgetfulness represents a force and a form of robust health, has reared for himself an opposition-power, a memory, with whose help forgetfulness is, in certain instances, kept in check — Nietzsche, Genealogy of Morals

Human memory is a tricky thing. Everyone believes they have one, and few doubt that it is a core function of our species’ psychology. The evidence for its existence is undeniable; without it, it seems, you could know nothing. You also know what it means to lose one’s memory — you see it in others and perhaps in yourself. On the other hand, the existence of a memory is not in itself something you can sense directly; you cannot see or touch a memory the way you can see or touch a ball. As with all psychological functions, you can only infer that you have a memory, or identify certain cognitive entities as constituting memories. This raises the curious question of how exactly you came to believe that your memory exists.

The question is not as simple as it sounds. It is surprisingly difficult to decide which of your cognitive experiences is or is not a memory. You have no direct indicator or label for memory like you do for, say, the colour red. And the term itself is ambiguous. If “memory” is defined as any persistent change in the mind, then any thought that goes through your mind could potentially be labelled a “memory”. Even what we call “sensory experiences” or “experiences of feelings” are only identified as being so in retrospect, through memories of them. For example, when you say “that was a sensory experience of a chair” what is being designated as “sensory” is not the experience itself, since that has gone by before you chose to label it. If you later ask “what was that?” the reference must be to a memory of the experience, not to the experience itself.

There are many other cases where it is difficult to tell if something counts as a memory: e.g. remembering thoughts, generating plans, forming opinions or beliefs, creating fantasies, concocting lies, intermediate steps in a mental calculation, logical conclusions, etc. All of these are thoughts, and all of them can be made to reappear in your mind long after they were originally present, days or even years later¹; so they all seem to be memories of a sort. Even the act of deciding that a particular thought counts as a “memory” itself qualifies as forming a new memory — you could recall that decision later on. Yet these are rarely called “memories”; and when they are, the designation frequently depends on the frame of reference from which you approach the question.

Clearly, memory is a tricky thing to classify, and separating it from other types of cognitive events is not entirely straightforward. And yet you do have a notion of memory, and feel pretty confident that you know what it is — at least you did before you read this post. Where did that belief come from? And if the term is not clearly defined, what exactly is it that you think you have?

To answer the question we should first note that our understanding of memory is heavily influenced by having a single, discrete word for it. This despite the fact that, as we saw above, you have no clear, singular understanding of memory underpinning the word. In practice, as you try to appropriately analyze and interpret your diverse mental events, all you see is a flurry of thoughts running around your mind, blending into each other. Occasionally you may find that the term “memory” is provisionally suited to a situation. I say “provisionally”, since with further investigation you may change your mind and update the label — perhaps it wasn’t a memory after all.

Yet it was not mere happenstance that made you label a thought as a “memory”. There must have been some reason. And though that decision may change over time, in any given moment the reason is quite specific. So what makes a memory stand out for you among its peers as a special type of mental event? One feature you might point to is that memory has a sense of being representative, or somehow influenced or shaped by reality. When you say that a person has a “good” memory, you mean that they faithfully recall events as we all objectively agree they happened. Any injection of preference, reinterpretation, or exogenous elaboration arguably detracts from a memory being “good”.

Of course, even the most accurate memory is still just a memory of subjective experiences, and not of reality itself. You can only remember colours and sounds your brain actually sees and hears, and these will be different for every individual, even of the same event. When a memory is judged as accurate, its accuracy is only praised insofar as two people can agree on an equivalence in their subjective experiences. If two people standing at different positions assert that they saw the victim leave the bar — and this is compared later to video footage — then both are judged as having good memories despite the difference in their vantage points and the vantage point of the camera. If you then dig in and find they diverge on the details, and this difference is caused by interference from their subjective perceptions, then one party may be said to have a better memory than the other. This makes the accuracy of a memory contingent on the reason for which it is being evaluated.

The idea that memory is a form of objective representation raises another thorny issue: can you remember your own thoughts? In practice you clearly can, as in “I remember thinking that Tim was trying to attack me”. Unfortunately such facts cannot be confirmed by anyone else. In such cases the rest of us may decide to provisionally accept that the speaker is able to remember his thoughts, as is necessary to evaluate a particular situation; e.g. if he argues that he preemptively struck Tim because he thought he was under attack.

Were we to accept this latter proposition, then by implication we also accept that thoughts themselves are objective entities — things that exist — and that they can be remembered more or less accurately. The above justification for the attack could only be accepted if you assume the person is able to accurately remember their own thoughts. The same argument may be made for fantasies: e.g. “I remember coming up with that fairy tale based on my own childhood experiences, and not by any similarity to another author’s work”; and also lies: e.g. “I remember telling him the deadline was tomorrow, even though I also remember thinking it was a lie. The fact that it turned out to be true was incidental”.

In summary, it seems that people generally feel comfortable designating something as a “memory” to the degree that its contents represent something objectively “real”. And it is only a legitimate memory insofar as it mirrors the corresponding event. If the initial memory is altered over time, then the altered parts cease to be a memory, and become “fabrications” (though until your error is shown to you, you will continue to call your fabrication a “memory”²).

Any time you identify the source of your memory as being in reality, however, you are not actually referring to reality itself. When I say that my memory of a sunset was of a real event, I am referring one of my thoughts (the thought of the sunset) to another thought that is supposedly of the time I saw the sunset in real life. I can’t refer the memory to reality itself, since the event happened years ago. The separation of memory from reality is thus an artificial split between two different sets of thoughts. The relation between a memory and its real object is something I assert after the fact. Crucially, it is not an inherent feature of the thought itself, i.e. a feature that causes me to immediately recognize it as a memory. The property of being a memory is appended to the thought later, as a way of establishing its representative nature.

The artificial nature of the split between memories and reality is especially notable when memories are of abstract entities. The veracity of expressions like “I remember you were angry” or “I remember when our government was competent” is tricky to evaluate, since they involve abstract interpretations (anger, government, competence) that can’t be verified except by communal agreement — e.g. that the government was indeed competent. Memory, here, rather than aligning individuals on concrete sensations like the colour of a ball, is being used to align subjective interpretations. This stretches the definition of memory so broadly that it becomes indistinguishable from “shared opinions”. The only thing the notion of memory is good for in such cases is building a consensus with other people about what happened, based on the content inside one’s head.

And here we finally get to the roots of the invention of memory. We’ve seen so far that to label any mental entity a “memory” is always a judgment call — it can never be concretely verified, nor can we refer to any inherent features to classify it. The duality between a memory and its real object is also an artificial one; underneath it all, you have only thoughts indiscriminately taken from empirical experiences, some of which are supposed to be of reality itself. And finally we’ve seen that a memory does not have to represent any objective set of stimuli or events — thoughts, interpretations, opinions are also included under its umbrella. In fact, the verifiable content of a memory is never the “pixels” within it, but the interpretation of those colours and sounds you experienced that can be compared between two moments or two people. The experiences in themselves may be subtly or entirely different, and so arriving at a consensus always involves a negotiation of terms.

The only thing these various examples have in common is that when you designate a thought as a “memory” you are asserting that it is a reflection of some real event that you and other parties agree happened. “I remember that you said you were going to the store, so I didn’t go myself,” you may say. Any such judgment could always be wrong, of course, but the inherent veracity of the claim is not a causal factor in designating it a memory — as long as you wished to assert that it was correct at the time. This is why the term “memory” often becomes an argumentative tool — you quibble over whose is more accurate.

What is important is that you are establishing some thought content as a representation of something real, meaning everyone — including you — should agree to its validity and act according to it. And you only need to assert that a thought is a memory to the degree that you want everyone to accept the point — the rest of your memory’s contents may be grossly in error, and it wouldn’t detract from the veracity of the part that was designated as true.

All this may be difficult to stomach. Until now, you probably felt you had a good grasp of what memory was. Even so, you might have a prototypical example that you deem a solid case of memory, e.g.: “I can think of that time I went to the seaside and watched the sunset, and my detailed thoughts regarding it are certainly memories. If they were not a reflection of reality, where did these images come from?” And to be clear, nothing in this post is meant to suggest that you can’t have thoughts that reflect empirical experiences — clearly you can. But since you never really know how much of those thoughts is an addendum to the actual events, or reflects what you were thinking at the time rather than what you were experiencing, your belief that something is a memory cannot come from its relationship to reality itself. Some other force compelled you to decide that it was.

The question, then, turns to why you come to believe a particular memory is a reflection of reality. What are the factors that push you in that direction? One reason may be that you’ve personally confirmed it by checking its coherence against other experiences. But as discussed in another post, apparent contradictions in experiences need not be taken as disconfirmation, since they can always be rationalized as exceptions or misunderstandings. There is not, and indeed cannot be, any automatic cognitive mechanism that ensures consistency, since consistency has no definition where any experience is possible — and any experience is possible through the senses.

Nor can we point to the individual memory itself as its own source of truth. Even when they are of external events, human memories, unlike computer memory, are liable to evolve every time you recall them, to the point that two people may end up with incompatible memories of the same experience. The sources of these modifications are difficult to trace, but they are certainly not aimed at increasing the veracity of their content. Your mind doesn’t inherently seem to care how accurate a memory is, as evidenced by how easily it lets your memories be altered over time. Rather, your mind cares about the utility of the thought; that is, it is looking for an interpretation that serves your aims.

Increasing the accuracy of one’s memory is usually a conscious activity and aspiration. You may at some point in life decide you need to study various skills (e.g. mnemonics), and apply them when you think it is important to buttress your mind’s natural retention capabilities. The fact that you must learn these skills implies that such improvements wouldn’t have happened naturally without your intentional effort. Such effort also implies the presence of a driving motive which is specific to the circumstances — school, important social encounters, examinations, etc.³ Without these motives, the effort would not have been undertaken.

For example, there are cases where your social credibility rests on the accuracy of your memories. This is, incidentally, one of the reasons why it is difficult to let go of the notion of memory as an impartial reflection of reality: to doubt that your thought of the seaside was a memory is to cast aspersions on your credibility, which is uncomfortable. The term “memory” is used here to assert to others that those thoughts are a clear and reliable representation of truth — even if they manifestly aren’t and indeed never could be.

There are other situations where an inaccurate memory may actually be a benefit. It is a common error to assume that having a good memory is ipso facto useful for survival. Many people throughout history have been put at a social disadvantage by remembering what everyone else would prefer they forget or misremember. Indeed if credibility is what is at stake, then “memory” as a reflection of objective reality is only useful if it matches your society’s selective construction of reality. To have a good memory is to pay attention to what others deem to be of value, to interpret and remember experiences as others deem to be correct, and to disregard all else⁴.

If we accept these arguments, then the whole notion of memory, as we have come to understand it, collapses. It doesn’t feel like it’s “memory” anymore if it can be arbitrarily altered based on what is useful to you. Again, this is not to say that your mind does not contain impressions of experiences from your senses — clearly it does. It’s just that if the criterion by which we define the term “memory” is that it is an accurate reflection of those experiences, then it wouldn’t be going too far to say that humans don’t have a natural memory, but only a learned one. More importantly, your belief in the existence of a memory cannot have arisen from an impartial evaluation of the verisimilitude of your thoughts; rather it came about because at some point it became useful for you to align your thoughts with those of others.

This conclusion would likely be rejected by many as being simply ridiculous. It is hard to overcome millennia of prejudice about what the mind does, to accept that our desire to be accurate recorders of reality was never built into our neurology, and is more a personal preference than a natural function. Some may even feel that to argue the latter is to cast unnecessary doubts on the communal edifice of truth that we have erected.

Nor do most people even want to be asked how they know their memories are reflections of something true — they brush it off as philosophical navel-gazing. And in practice we live (and should live) as though memories are roughly accurate, despite all the aforementioned issues. However, when we are building AI systems, systems that are designed to learn about the world, we must question even these basic epistemological assumptions. Those who dismiss doing so as a “waste of time” are simply admitting they don’t care enough about the nature of intelligence to dig into the details. They prefer to rely on their lay intuitions of how the mind works, and build their architecture on such shifting sands.

This is more or less the case throughout the field of AI. Nearly all Machine Learning paradigms are strictly based on the assumption that the purpose of memory is to be an accurate reflection of empirical experience. This principle is so foundational to the field as to be beyond dispute. It is part of AI’s legacy as an offshoot of data science. For example, Autoencoders are designed to replicate their input dataset as closely as possible in a compressed format. Compressing the source distribution — by discovering and exploiting underlying patterns — is merely a convenience, however, a means to an end; and the end is always accuracy. Generative models — of which large language models (LLMs) and image generators are a subset — are also trained to replicate their source material, and are evaluated by their adherence to the originals (e.g. by Fréchet Inception Distance). And finally, adherence to an imagined underlying distribution of a dataset (e.g. measured by KL-divergence) is the gold standard of all supervised and semi-supervised Machine Learning.
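As a minimal sketch of this fidelity objective (assuming PyTorch; the architecture, sizes, and variable names below are illustrative, not taken from any particular system), consider how an autoencoder is trained: the only quantity being optimized is how closely the output reproduces the input, so the accuracy of the model’s “memory” is the entire criterion.

```python
import torch
import torch.nn as nn

# Illustrative autoencoder: compress the input to a small latent code,
# then reconstruct it. The training signal is purely reconstruction fidelity.
class Autoencoder(nn.Module):
    def __init__(self, input_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, input_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # the sole objective: faithfulness to the input

x = torch.rand(64, 784)      # a stand-in batch of "experiences"
optimizer.zero_grad()
loss = loss_fn(model(x), x)  # how accurately was the input "remembered"?
loss.backward()
optimizer.step()
```

The compression through the latent bottleneck is, as noted above, merely a means to an end; nothing in the loss rewards the model for anything other than reproducing what it was given.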

Accuracy is a goal, and an imposition we have placed on ML models, just as we have placed it on one another. An accurate memory is lauded by society as being profitable to its aims; namely, progress through collaboration. It is no wonder, then, that we each invent our own concept of memory out of a desire to build consensus between ourselves and others regarding what is real, and that we subsequently use the word as a means of enforcing that shared agreement. Memory becomes a moral injunction, as in: “because we value consensus, you should have an accurate memory”. It is similar to blame — not a concrete mental function, but a subjective interpretation that is useful in serving some purpose. The phrase “you should have a good memory” is a variation of “you should be a good person”.

¹ The boundary between long-term and short-term memory is not an easy one to draw.

² And vice versa — if someone could convince you that your accurate memory is false, then you would reclassify that memory as a “fabrication”.

³ The degree of accuracy deemed sufficient is also a contextual decision; the evaluation criteria are defined by the underlying motive.

⁴ As Nietzsche pointed out in the Genealogy of Morals, memory is the glue that holds society together.


Written by From Narrow To General AI

The road from Narrow AI to AGI presents both technical and philosophical challenges. This blog explores novel approaches and addresses longstanding questions.