The long tail of AI failures, and how to address it
A schema for mapping the space of all skills
Introduction
It is well known among Machine Learning developers that many apparently cutting-edge demos are staged to look impressive by training a model to perform well on a single, controlled task. If anything unexpected happens during the demonstration, the agent reveals its underlying brittleness. This brittleness is the primary obstacle to wider AI adoption in industry: the technology’s pervasive inability to address the so-called long tail of exceptional scenarios. The long tail is the indefinable, open-ended space of unexpected situations that arise in real life, to which agents are unable to adapt on the fly. Many such scenarios are unforeseen even by the developers themselves, who are then forced to write ad hoc, special-case code to account for the growing number of failures they observe.
Consider a commonplace task regularly performed in warehouses: picking, scanning, and placing goods. The core flow is relatively straightforward — pick an item, scan the barcode, place it in a slot. But a number of things can go wrong along the way:
- The robot’s gripper may have covered up the barcode, making it impossible to scan.
- The gripper may drop the item, or part of the item, halfway through the movement.
- The robot may pick up more than one item.
- The gripper may damage the item it is holding.
- The arm may be jostled out of its route by a collision.
- The package may be empty due to an error upstream.
- The robot arm itself may be misaligned, miscalibrated, or damaged.
- Etc.
As a developer, you may not always predict that such cases will arise because the task of picking and placing items (aka pick-and-place) feels so intuitive and simple. You habitually deal with unexpected scenarios without realizing they are exceptional. For example, you may automatically readjust your grip on a box if your fingers are slipping, or unconsciously rotate an item slightly to fit into a tight shelf-space. You could even discover and apply these behaviours in real time. Once you’ve learned to address a particular challenge, repeating the same sequence becomes a thoughtless habit added to your ever-growing repertoire.
When you later conceptualize these diverse activities using words, you tend to forget the number and subtlety of the skills involved, or how often you made minor adjustments on the spot. Rather, you draw them all together under a single concept — picking and placing. Yet every skill you have a name for — business skill, artistic skill, basketball skill, etc. — is the sum total of dozens, perhaps thousands, of overlapping pockets of individually learned problem-solving behaviours, a fact you may overlook when you begin training a model.
Understanding how humans learn these networks of behaviours, and how they evolve over time, would be a huge step in the direction of robust AI, one that can handle the long tail — beyond what is currently possible through Behavioural Cloning or Reinforcement Learning. It would also move AI towards being a true knowledge worker, the equivalent of its human counterpart in breadth and adaptability.
To claim that the human mind is boundlessly creative and adaptable, of course, would be going too far. You are still driven to meet your underlying needs within the constraints of reality and what you have learned so far. And even the best human (or AI) cannot solve a problem it is unaware of. Nevertheless, what ultimately differentiates you from a robot is your ability to proactively address a problem by stepping outside the immediate task domain and bringing in new abilities. As a human picker, if a damaged item arrives in your hands and you don’t know what to do, you might signal the floor manager and get their advice. Or you may study the topic independently, if the opportunity is available, and ask clarifying questions. Each of these is a separately acquired skill, conscripted to your immediate need. The goal of this post is to describe how your overall space of skills grows, adapts, and is applied in practice, and how the process may be replicated in a robot.
How to recognize a problem
Situations in the long tail initially arise out of a discrepancy between what humans recognize to be problems and what the AI recognizes. As a developer, it is often disheartening to watch your robot blithely veer off-course with no awareness of the chaos it’s sowing. The robot is like a child who can’t grasp why putting cash in a toilet bowl might be fiscally problematic. When you subsequently tell the child, “don’t put money in the toilet”, you are hard-coding rules, so to speak; you are explicitly enforcing commands without waiting for him to first learn the reasoning and justification behind them. How much better would it be if the child were naturally driven to update his behaviour by his own recognition of the issue?
Every failure of an AI to address an edge-case scenario begins with a failure to recognize the situation as a problem. For example, a simple routine to scan barcodes can easily be learned via Behavioural Cloning, by training an agent to copy a set of expert demonstrations. But the model will not be able to react appropriately when something unexpected happens and it is unable to finish the task. To do so, it must first recognize that there was a failure, and somehow relate it to the need to scan. Otherwise it will continue to replicate what it has been taught to the best of its abilities, with no awareness that something has gone wrong.
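To make the gap concrete, here is a minimal Behavioural Cloning sketch in Python; the dimensions, the linear policy, and the data are assumptions invented for illustration, not a reference implementation. The only objective is to copy the expert, so nothing in the learned function can register that a scan has failed at run time.

```python
import numpy as np

# Illustrative behavioural-cloning setup: a linear policy fitted to expert
# (observation, action) pairs. All names and dimensions are assumptions made
# for this sketch, not part of any real warehouse system.
rng = np.random.default_rng(0)
obs_dim, act_dim, n_demos = 8, 3, 500

expert_obs = rng.normal(size=(n_demos, obs_dim))               # recorded sensor states
expert_act = expert_obs @ rng.normal(size=(obs_dim, act_dim))  # the expert's responses

# Fit the policy by least squares: the ONLY training signal is "copy the expert".
policy_weights, *_ = np.linalg.lstsq(expert_obs, expert_act, rcond=None)

def act(observation: np.ndarray) -> np.ndarray:
    """Replay the cloned behaviour. Note what is missing: no term here checks
    whether the barcode was actually scanned, so an unexpected situation
    yields a confident action rather than a recognized failure."""
    return observation @ policy_weights

# Even a wildly out-of-distribution observation still gets an answer.
print(act(rng.normal(size=obs_dim) * 100.0))
```

A real policy would of course be deeper than a linear map, but the absence of any failure-related signal in the objective would remain.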
When that happens, your only remaining option is to inject your own broader understanding of how the problem is detected and resolved into the robot. You may do so via a supporting model, a custom reward structure, or even a plain old software routine. You may also have to set up intricate monitoring systems to spot what the AI itself does not see. You would prefer, of course, that your robot just “understood”, meaning you wish it shared your concerns without having to enforce them explicitly.
What makes you notice and address such problems, whereas the AI itself remains oblivious? That something has gone wrong must ultimately be signalled by some input. As you, the human observer, watch the precipitating events, they seem to trigger a tension in your mind, one that the AI is apparently lacking. This input is salient because you know, from past experience, that it presages higher-level issues if unchecked. For example, covering the barcode with the gripper is a problem because you require that the barcode be scanned — or to phrase it another way, it is a problem for you when it isn’t scanned.
The fact that you recognize a situation — e.g. a box that won’t fit in a slot — as a problem points to the influence of a higher-level motivator driving that interpretation; it signals in what way the outcome was bad. Even a task as routine as picking-and-placing serves a higher-level purpose from which it gains its direction, corrective force, and validity. Only by reference to this base motivator can an agent resolve ambiguities and conflicts that arise during the task. While picking items, for example, if a package has split into two parts, a naive robot trained to pick-and-place would refer to what its education has imparted; it would try to pick one of the pieces as best it can. You, on the other hand, move up one level and consider more holistically what you are trying to achieve. You are hoping to satisfy a customer, and they would want a complete product. So you unite the two parts. All justification for your decisions comes from higher-level motives. These provide the context and reference to guide you aright.
Consider how difficult it would be to explain to a socially unconcerned AI that you are trying to satisfy your customers. How could you describe to an AI what a satisfied customer is, or what the definition of an intact package is, and how these should influence its actions? Anytime an AI fails to spot a problem, it is because it lacks this “correct” relationship to a higher-level motivation. This gap in understanding is the reason the long tail is a problem.
This is what it means to recognize something as a problem. By interpreting the sight of the gripper covering the barcode as an issue (an intermediate tension), your mind has the opportunity to avoid a bad situation down the line, a situation from which it may not recover. It is trying to prevent the more serious failure by avoiding its causal precursor. It is being proactive: not just responding to circumstances based on what it has learned, but also preempting issues before they actually arise.
Preempting a problem always involves creating a new proxy task — in this case, learning to avoid covering barcodes. A new task means that a new skill forms around it: your goal is now to avoid covering the barcode, and you make efforts to improve on that outcome. You could even preempt this intermediate task with another task — you could learn to avoid grabbing packages where the barcode is facing up, thus reducing the chances of covering a barcode. That would create a third-level task to support the second-level one. The chain of preempting tasks can go on indefinitely.
Note how this differs from defining “failure” as a deviation from historical examples — as being “out of distribution” (OOD) or unexpected. Any number of deviations from expectation may occur during the random course of events — e.g. lighting changes, or wobbling cables on the robot arm. Most of these are inconsequential to the outcome. In fact, robustness in the face of such immaterial variations is a critical requirement of AI systems. The robot must not panic at every flutter or bit of noise. The AI must only seek to change its behaviour when the variation is significant. This, of course, raises the question of what “significant” means. We have already given a definition: an event is significant if it anticipates — i.e. is a precursor to — a problem with respect to the agent’s goals. This is true regardless of the degree of the deviation; even a small flickering light may be significant.
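One way to picture this definition, as a rough sketch with invented task names and an assumed `Task` structure: a deviation counts as significant only if some task, or one of its proxy tasks, has learned to treat it as a precursor to failure. The chain of preempting tasks described above appears here as nested precursors.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A goal plus the learned precursor events that preempt its failure.
    Each precursor is itself a Task, so chains of proxy tasks can grow
    indefinitely (avoid covering the barcode, avoid barcode-up grasps, ...)."""
    name: str
    precursors: list["Task"] = field(default_factory=list)

    def preempt_with(self, event: str) -> "Task":
        proxy = Task(name=f"avoid: {event}")
        self.precursors.append(proxy)
        return proxy

def is_significant(event: str, task: Task) -> bool:
    """An event is significant only if some task (or proxy task) treats it as
    a precursor to failure. Arbitrary deviations such as flickering lights or
    wobbling cables are ignored unless such a link has been learned."""
    if task.name == f"avoid: {event}":
        return True
    return any(is_significant(event, p) for p in task.precursors)

scan = Task("scan the barcode")
covered = scan.preempt_with("gripper covers barcode")     # second-level task
covered.preempt_with("grasping with barcode facing up")    # third-level task

print(is_significant("gripper covers barcode", scan))   # True: learned precursor
print(is_significant("lighting changed", scan))          # False: no learned link
```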
To see, in the sense of being aware, is to address something as a problem; what you perceive depends on what you need to find. Each new tension forces you to pay attention and look for its resolution; it is the reason you address the situation as a problem in the first place. For example, the agent will not improve the efficiency of its motions unless it is aware of the timing and cost of its actions, and considers their reduction an ongoing concern. Safety cannot be improved unless the agent can allay the causes of damage and harm, which means it must first be able to spot damage and harm. As new needs cause you to see new things, you create new concepts, like safety and efficiency, around them.
Is it possible for a model to learn to build these conceptual structures on its own? Let’s not forget that we humans conceived of every concept through which we identify and define the problems and sub-problems of a task. We derived notions like damaged item, missing item, loose packaging, etc. As our experiences of failure forced us to discover/invent¹ new problem domains, our skills grew and became more comprehensive. So, in principle at least, these skills can be acquired, even self-generated, by an AI. The question is: how?
Concepts as bookends
We hinted above that the concepts you recognize, like safety or damaged items, arise from a need which changes how you perceive the world. Note that the word “concept” as used in this post is much broader than just those we have English words for. For example, say you are picking items in a warehouse, and you have learned that you should reach for small packages with two fingers instead of, as is usual, your whole hand. This distinction originally entered your mind because of some experience of failure, when you couldn’t pick small items using your regular approach. The action you learned — nuanced, intricate picking — was the resolution. You may add variations of this behaviour as needed, such as particular ways of grabbing variously-shaped small objects with two fingers. This specialized activity may eventually come to constitute its own “skill”.
It may be difficult to think of the example above as constituting a “concept”. The behaviours you learned blend seamlessly into nearby activities, and any concept inherent in the group is only an implicit one, latent within the actions. By “latent”, I mean that a person can manipulate packages dexterously without ever forming or defining a concept to describe and categorize them. To become recognized as a concept, the hazy set of behaviours must first be solidified into identifying symbols or words. Until then, the concept remains latent.
The reason you might form explicit concepts around these behaviours is to communicate and align your actions with those of other people. If a robotics team worked on the above problem for long enough, they would likely come up with a hyphenated term to designate the task, such as “fine-pick” or “nimble-pick”. The consistency of the term’s usage across team members is determined by how much they are willing to align and coordinate their definitions.
Given more time and social penetration, the concept could even acquire its own word in the English language. Every new word that has entered the vocabulary has done so based on a common need. A word serves as a banner around which to rally communal efforts and intentions. So the linguistic side of a concept is just how you use words, rather than other actions, to resolve the driving tension. There is no functional difference between an implicit concept and an explicit, verbal one. Concepts are ongoing behaviour-modifiers, and their verbal expression in words is just one of those modifications. Even your introspective self-observation, by which you discover which concepts exist in your mind, is such a modification (see footnote 2). Underneath it all, your motives draw out these “concepts” in continual activity and practice.
It is important to establish this distinction here because, in a written article, I can only describe the topic using common words, and these are distinct from the actual instances of skills in people’s heads. So for the purposes of this post, concepts will be defined as the patterns of how you deal with sensory inputs on a subjective, needs-based level. For example, the concept of efficiency is how you address your dislike of expending energy. Safety arises from a fear of harming others or yourself. Fragility is rooted in your anxiety about damaging something beyond repair. Any internal unity that a concept displays arises from how effectively your mind transfers responses from one experience to another. (More on this later.)
For now, all I want to highlight is that concepts, in the sense used here, encapsulate a skill, because their meaning contains both the nature of a problem (a failure) and also its resolution³. Consider the concept denoted by the word empty. It implies that a container exists, and this container would otherwise have something in it. Being filled is part of the concept of a container, so this container can also be filled (the resolution). You cannot learn the concept of empty without also learning these implications. As another example, damaged items are not inherently distinct from non-damaged ones; whether or not an item is damaged is a subjective call. Damaged implies a form that you find less valuable, and that there was another version of the item in the past that was preferable (undamaged). Thus the concept points to an activity that draws the agent from the tension to its resolution.
Between these two bookends (the tension and the resolution) resides the skill-domain of the concept. The initial failure is the reason you pay attention, the reason you made space in your brain for responding to these particular experiences. The resolution is where you bring your involvement with the subject to a close; where it leaves your mind. Everything that enters and exits your mind does so as a consequence of these two events. The meaning of any concept is identical with the motive that drives it, as well as the skilled response you learn to address it. These three (concept, motive, skill) come into existence at the same time. What must now be explained is how a skill forms within these boundaries.
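A minimal sketch of what such a concept carries, with purely illustrative field names: the tension that opens it, the resolution that closes it, and the learned response that spans the two. The example encodes the “empty” concept discussed above.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Concept:
    """A concept as bounded by its two bookends: a tension (the failure that
    makes you pay attention) and a resolution (where the matter leaves your
    mind). The skill is whatever response carries the agent between them;
    all three come into existence together. Field names are illustrative."""
    name: str
    tension: Callable[[dict], bool]      # recognizes the problem in a situation
    resolution: Callable[[dict], bool]   # recognizes that the matter is closed
    response: Callable[[dict], dict]     # the learned behaviour in between

# "Empty" implies a container that could be filled; filling it is the resolution.
empty = Concept(
    name="empty",
    tension=lambda s: s["contents"] == 0,
    resolution=lambda s: s["contents"] > 0,
    response=lambda s: {**s, "contents": s["contents"] + 1},
)

situation = {"contents": 0}
if empty.tension(situation):              # the bookend that opens the skill domain
    situation = empty.response(situation)
assert empty.resolution(situation)        # the bookend that closes it
```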
Abstractions lack concrete reality
Some readers may have noticed a possible issue in the theory above. For example, we mentioned that a concern for customers should be the driving force enabling AI to make decisions about the treatment of goods. However, concepts like customer and concern are abstract, as are damage, item, move, etc. By “abstract” I mean that there is no reliable pattern in sensory stimuli that can tell you what is and is not a customer, or damage⁴. Shipping boxes vary widely in appearance. A piece of tape stuck on a box may not count as damage, even though the box looks slightly odd. But in some cases it may indeed be damaged. To identify it as damaged is a judgment call, and a case for or against that designation may be argued after the fact. The answer may depend on predicted downstream effects — will the tape get caught on machinery later, and cause havoc?
So it is impossible to hard-code rules by which an AI can identify damage, rules that would apply in all cases now and in the future. This layer of recognition must be continually learned and refined through practice. Assuming we want an AI to do so without human supervision, that in turn requires other layers below it to guide its learning — e.g. customer satisfaction, or machine malfunction. These are also abstract, so they need other layers below them, and so on. No basic, atomic concepts exist to serve as a firm foothold. It’s turtles all the way down. The only solid foundation to be discovered anywhere is what we might call the hard-coded biological tensions: pain, hunger, etc. Everything built on top of those is mercurial and open to adaptation. Even the most fundamental cognitive constructs, such as the definition of food or the methods used to treat pain, have changed vastly over time, as has every other concept built on top of them.
This may cause readers some concern. It appears that an artificial agent may never fully be able to deal with a simple situation like shipping a product without being trained from the ground up as a complete human being, with social needs and psychological competence. Its ignorance of human relationships may hinder it even in situations as routine as moving packages in a warehouse. And in a sense this is true; but it need not be an obstacle. Our human perspective is not the only valuable one. A bricklayer doesn’t have to care about architecture to provide a useful service. Rather, this post will come at the problem from the other side: what breadth of skill can we expect from an agent that defines its own skill domains?
This post began with the goal of creating adaptive artificial intelligence: an agent that can learn many tasks in various environments and handle the long tail of emergent issues. One prerequisite is that we not presume to know the right approach to a given task beforehand, nor inject inductive biases prematurely. Instead, this post will try to describe how individual domains of problem-solving emerge and evolve to fit the underlying needs. The goal is to allow the agent to discover for itself what the right conceptual approach is, and what counts as a problem or a solution. In the process, it may build up the concepts we take for granted: e.g. smooth motion, or efficiency, or safety, or fragility. It may later choose to align its own interpretations with those of other people around it, perhaps because it is useful to communicate with them; though this is not a necessity. All that is required of us is to give it the option to do so, and that requires that we tie skills and concepts to the motives that generate them.
The specificity/generality loop
We should pause here for a moment. At some point we have to render the machinery described above explicit. Given what was discussed, one would be forgiven for feeling a little lost as to where to go next. This is because the various modes of studying and interpreting “skills” frequently lead us in contradictory directions. Here’s why:
When we first think of skills that people (or AI) can have, we always start by abstracting them as generalities. This is a limitation of language; to schematize anything — even the subtle idiosyncrasies of an individual mind — the subject matter must be shoehorned into common words and symbols denoting entities we are all familiar with. So you may say “she has strong business skills, just like her father did”, as though business skill were a single, universal psychic entity shared by all who have it. But it is only universal insofar as we all try to converge on our shared use of the term. In practice, reality is specific; no two moments are the same. And you must address reality as it confronts you in all its devilish details.
As your mind tries to deal with specific situations and problems which arise, it also begins to find unity and commonality between dissimilar moments; that is part of its function. A response you learn for one case can often, though not always, transfer to new circumstances. A particular technique of readjusting your grip on a box, once mastered, will work for boxes of similar size. But if the box is too big, or the orientation of your grip is too different, a new routine may have to be learned. Your perception of a skill’s universality is your ability to transfer responses from one experience to another. The skill of having a good grip is ultimately the perceived sum total of these small, specific abilities; just like business skill, or any other skill.
This you may readily accept, until you start to discuss the details of these individual behaviours with other people. Even if you try to invent brand new words to describe them, you nevertheless must group similar instances under common category labels — otherwise, how would you describe them? It is useless to invent a word for something that will only be present once, in one person’s mind; all definitions require repetition and correlation across multiple examples. That is what it means to understand: to connect specific moments to general concepts. And so we come full circle to the need to use general terms.
To summarize, when analysing what skills are, there is a three-way tug-of-war between:
- The reality that skills are aggregates of unique, individual behaviours: e.g. you must individually learn how to carry boxes of specific shapes, and do so in the immediate moment,
- Skills as transferable trends in behaviour: e.g. you can transfer a learned routine for carrying one box to another box, if doing so is appropriate or useful,
- Interpreting skills as general concepts: e.g. you may discuss how a person is generally good at carrying boxes, regardless of their size and shape.
These three ways of interpreting skills often get confused for one another, which leads to equally confused theories of skill acquisition. Now that they are in front of us, however, we can bring them together to create a schema of how skills — all skills — are formed. We start, naturally, at the first step: dealing with specific situations. From there we can work upwards to skills as coordinated systems of learning.
An active workspace
Life must be dealt with in the moment. What causes you to pay attention, to engage, and to learn is immediate, specific, and always triggered by a perceived need. When you first start to build up any skill, you are responding to a particular event, which registers as a one-off interaction; i.e. solve this problem, now. You don’t yet know it is a skill. Categories, like movie genres, are rarely recognized when their first instances are being created.
Consider what might happen when a naive pick-and-place robot encounters its first fragile object. When the item breaks, the agent will have no sense of what happened; it can only judge that it was unable to continue towards its goal. It can perhaps guess that something about how events transpired must be tied to, or must have caused, its failure to satisfy the underlying motive. Going forward, the appearance of this same item on the picking pallet may now trigger it to be cautious — though it knows not for what reason, because to “know the reason” would be to attach a useful thought to the event; and that is currently lacking.
With no established path to resolve the concern, it searches for some other behaviour that may lead to success. It now reaches into its storehouse of existing routines. Perhaps it discovers that moving such objects more slowly does not result in failure. This is what it would do with objects that are loosely gripped — moving slowly allows loosely gripped items to be held in hand, and thus successfully transported.
At this point, it is treating the fragile object as though it were a loosely gripped one. You and I know this is misguided, but lacking any alternative, it is a fair hypothesis for the robot. It is not perfect, though, and may occasionally fail. Only once the robot realizes that, to consistently resolve the underlying issue, it must lessen the force of its grip rather than decrease the speed of movement does it self-correct and create a new routine for the item. As more situations pile up for which a delicate touch is effective, these will borrow their responses from the latter (fragile) examples, and not the former (loose-grip) ones, simply because that is what works best.
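The borrow-then-correct process just described can be sketched as follows; the routine names, the success test, and the probabilities are invented stand-ins for real sensorimotor feedback, not a proposed algorithm.

```python
import random

# Routines already learned, keyed by the tension that created them.
# The fragile item starts with no routine of its own.
routines = {
    "loose grip": "move slowly",
    "default": "move at full speed",
}

def outcome(item: str, routine: str) -> bool:
    """Stand-in for reality: fragile items survive only a gentler grip.
    Moving slowly merely helps sometimes, which is why the borrowed
    response eventually proves inadequate."""
    if item == "fragile item":
        if routine == "soften grip":
            return True
        if routine == "move slowly":
            return random.random() < 0.6    # occasional failures remain
        return False
    return True

def handle(item: str) -> str:
    # Use the routine learned for this tension if one exists; otherwise
    # borrow the closest existing response (here: the loose-grip routine).
    routine = routines.get(item, routines["loose grip"])
    if not outcome(item, routine):
        # The borrowed hypothesis failed: search for a response that resolves
        # the tension consistently, and record it as this item's own routine.
        routines[item] = "soften grip"
        routine = routines[item]
    return routine

random.seed(3)
for _ in range(5):
    print(handle("fragile item"), "|", routines.get("fragile item", "<borrowed>"))
```

Once the new routine exists, it supersedes the borrowed one, and later fragile-adjacent situations borrow from it instead.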
You can observe an equivalent process play out in language. Imagine you are watching a fellow employee reach brusquely for a fragile item, one that you have previously seen shatter. As you foresee an accident and experience the accompanying anxiety (tension), you have the opportunity to take action, this time using words, to avert disaster. But should you say “careful that’s fragile” or “careful that’s loosely gripped”? The choice of explicit linguistic category you assign to the object will be the choice of wording that would allow the other person to treat it the right way — i.e. you use the term that is effective. Words are no different from actions; both are merely the right response.
This example demonstrates two core features of building skills. The first is that a skill is always a reaction in a particular moment — you address some emergent situation because you foresee a bad consequence. All learning is a result of an immediately perceived reason or need, even if this underlying driver is invisible to you on introspection. A lack of any such motive only results in complacency and idleness. Its presence, on the other hand, triggers caution or attention, and carves out a temporary workspace to address it, with its own concepts and reasoning. Compare this to Reinforcement Learning, where actions are constantly being taken with no immediate propelling motive except that the agent must optimize a global loss function.
Every response in your repertoire was learned because the status quo, the existing response, proved ineffectual. Without that catalyst, your mind falls back to some prior default, since you have no reason to distinguish this situation and act differently (the ultimate default response is no response at all). If the default works, you continue to do it; if it doesn’t — i.e. if the underlying need is not satisfied — then a new, more targeted routine must be searched for on the spot. As you discover new responses for new situations, they build on top of one another as incrementally fine-tuned ways of dealing with emergent experiences. Like sedimentary layers, later acquisitions supersede former ones.
Being responsive to an immediate tension also means you don’t wait for a delayed reward and a later regression step (as in Reinforcement Learning) and hope to do better next time, since by then you will no longer be in the same situation, and will have lost the opportunity to discover a better response. You experiment, both in thought and in reality, and don’t let up the effort until you find a resolution.⁵
The path you ultimately decide on, the one that resolves the issue, is the one you record as the appropriate response. Everything else is discarded. Ideally, the behaviour you learn helps consistently avoid the tension in the future, at which point it becomes an unconscious habit. This is adaptivity.
The need to compartmentalize
The span of time between recognizing a tension and the appearance of its resolution splits the agent’s time into chunks of learning. The learned behaviour is self-contained — it achieves a specific end, and it can be removed wholesale if needed. As discussed in a previous post, the only way to split up a stream of fluid experiences into chunks is through signals that are timed and targeted to an immediate need⁶.
Here we encounter the second aspect of learning a new skill: the separation of concerns. The individual, granular units of learned responses are separate, distinct routines that can be transferred from one situation to another. Like a reusable phrase (e.g. “that’s a wrap!”), each may find many situations in which it is applicable. The segregation of responses also prevents any interference between what is learned at various times. Neglecting this segregation is an oversight that currently limits the complexity of what can be trained via Machine Learning (ML). Much of the instability inherent in training ML models can be traced to indiscriminately blending all learning into one large pot. Exploding and vanishing gradients, catastrophic forgetting, and other failures to converge can all be traced to the cumulative errors of repeatedly adjusting the whole model⁷ when only one part needs to be involved. But to know which part, you must segregate the model into sections beforehand.
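As a hedged illustration of what such segregation might look like (the class and its interface are assumptions for this sketch, not a proposed architecture): each learned response lives in its own small model, keyed by the tension that created it, so updating one lesson cannot wash away another.

```python
import numpy as np

class SegmentedLearner:
    """Sketch of the separation of concerns argued for above: each learned
    response is a separate small model, keyed by the tension that created it,
    so adjusting one cannot disturb the others."""

    def __init__(self) -> None:
        self.segments: dict[str, np.ndarray] = {}

    def learn(self, tension: str, obs: np.ndarray, act: np.ndarray) -> None:
        # Only the segment belonging to this tension is (re)fitted.
        weights, *_ = np.linalg.lstsq(obs, act, rcond=None)
        self.segments[tension] = weights

    def respond(self, tension: str, obs: np.ndarray) -> np.ndarray:
        # The record of *why* a behaviour exists is the key itself.
        return obs @ self.segments[tension]

rng = np.random.default_rng(1)
learner = SegmentedLearner()
learner.learn("covered barcode", rng.normal(size=(50, 4)), rng.normal(size=(50, 2)))
before = learner.segments["covered barcode"].copy()

# Learning a new, unrelated lesson leaves the earlier one bit-for-bit intact.
learner.learn("split package", rng.normal(size=(50, 4)), rng.normal(size=(50, 2)))
assert np.array_equal(before, learner.segments["covered barcode"])
```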
Consider how failure scenarios are addressed in supervised setups like Behavioural Cloning. When you discover a failure case, you retrain the whole model with additional samples that account for that scenario and indicate the right answer. You adjust the entire network to accommodate the new lesson. From the inside — the model’s point of view — the new case is in no way distinct from the others it has learned. But from the outside — from the trainer’s perspective — the exceptional case was distinguished by the motives that drove it. There was a particular set of experiences that triggered you to pay attention there and then, and a particular problem underneath those. Because all behaviour is treated as homogeneous, the AI is deprived of this extra information. The relation between what was learned and the motive that caused it is missing. To give the behaviour meaning, the model must maintain a record over time of the causes of what it has learned.
It would be equally incorrect, however, to attempt to introduce this information explicitly. When you, the developer, try to describe and address the issues in the long tail, you intuitively split them into what you perceive to be the correct problem categories (e.g. “this was a multi-pick, that was an empty-grasp”). These divisions are artificial inventions, and may not faithfully represent how you actually deal with the situation. Were you to enforce these explicit categories onto the behaviour of a robot, through custom code or accessory models, all the underlying nuance and flexibility would be erased, and the organic boundaries of learning would disappear.
As the author of this epistemological (ontological?) rupture, you must now become the caretaker of the separation as well. You are constantly on call to fix the divisions whenever they fail. The AI abdicates all responsibility for maintaining a division it was forced into accepting. This was why Connectionist approaches originally gave up the discrete logic of Good Old Fashioned AI (GOFAI): out of a fear of imposing too many of our poorly informed opinions on what the agent does. They preferred that the AI learn to differentiate these cases on its own, without a trainer enforcing it through custom code or corrective training.
In doing so, they went too far in the opposite direction, merging all learning into one giant function. They left the question of what kind of structure would form within the black box of the neural network to the winds of fate. Transformer architectures are starting to reintroduce a separation of concerns through the attention mechanism, and this has been relatively successful. What is still missing, however, is a separation of motives. It is these that comprise the skill domains.
Comparisons to Reinforcement Learning
Much of what is written above may sound like common practice when designing Reinforcement Learning (RL) agents. RL also begins with a set of basic rewards, out of which the agent builds its own “intrinsic” values about the world (Q-values). Q-values might be seen as analogous to sub-motivations. For example, a robot may place a low value on having a loose grip on an item, since it has learned that doing so is often on the path to dropping the package. And Q-values also preempt higher-level issues by preceding them in time. However, the space of learning in the RL agent’s value model is homogeneous. The presence of a low-value situation, such as grabbing a box and covering its barcode, does not trigger a “sub-problem” routine; that is, a recognition of an emergent concern with its own discrete domain that must be resolved on its own terms. No focused experimentation is possible that could give rise to domains of skill; RL systems are always regressing towards a global reward.
In a giant “problem space” there is also no way to connect a learned value or behaviour to the parent value state(s) that created it. If you ever want to investigate why the robot decided to pick objects from the left side, you cannot trace the cause to, say, that time when it dropped an imbalanced item — that information is lost, deemed unimportant. All values are presumed to be caused by the main rewards, and there is only ever one task.
In order to resolve this issue, value states must be treated as hierarchies of distinct entities, each triggered by a concrete set of sensory inputs, where parent motives generate child motives and responses. That way, the destruction of a parent — the sudden recognition of the delusive nature of a particular value — can delete all its children, including values, thoughts, and actions. This allows the mind to create domains of understanding. Yet to do so requires that we disentangle and separately catalogue each piece of learning, each thought, each action chain. As you can imagine, that is entirely at odds with the fundamentals of Deep Learning networks (though it is more in line with logical AI). We must therefore reconsider what the concept of a Neural Network entails. Rather than adjusting the parameters of a single large function, we must catalogue a large collection of individually transferable behaviours.
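A minimal sketch of such a hierarchy, with invented names: each motive records the parent that generated it and the behaviours learned under it, so the provenance of any behaviour can be traced upward, and retracting a parent removes everything that depended on it.

```python
from dataclasses import dataclass, field

@dataclass
class Motive:
    """A value state in the proposed hierarchy. Parents generate children,
    and each child carries the behaviours learned under it. Names and the
    interface are illustrative assumptions."""
    name: str
    children: list["Motive"] = field(default_factory=list)
    behaviours: list[str] = field(default_factory=list)

    def spawn(self, name: str) -> "Motive":
        child = Motive(name)
        self.children.append(child)
        return child

    def retract(self, name: str) -> None:
        """A parent recognized as delusive deletes all of its descendants."""
        self.children = [c for c in self.children if c.name != name]

deliver = Motive("deliver intact packages")
avoid_drop = deliver.spawn("avoid dropping items")
pick_left = avoid_drop.spawn("pick imbalanced items from the left")
pick_left.behaviours.append("approach from the left with a two-finger grasp")

# Why does the robot pick from the left? The chain of parents answers it.
# If "avoid dropping items" is later judged misguided, the left-side habit
# and everything else learned under it disappear with it:
deliver.retract("avoid dropping items")
print([c.name for c in deliver.children])   # []
```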
Transfer and generalization are widely acknowledged to be the keys to handling the problem of the long tail. In this post we have shown how individual skills are created as a hierarchy of specific problem-resolutions, and also how they can be applied to new situations on the basis of experiments proving their immediate efficacy.
Afterword: a schema for skills
A catalogue of skills is like an enormous tool-shed with room for millions of entries. Every time you add a new tool, you do so for a particular reason and in a particular moment — usually because none of the existing tools resolved the issue you were faced with. As new situations are discovered, new tools become necessary beyond the capabilities of the existing ones. You may sometimes find that existing tools are useful in many different situations, and so you transfer their application from past to present ones. This may happen due to the similarity of the problem circumstances, through trial and error, or else by reasoning through what the consequences of the transfer would be⁸. Once applied, if the tool is effective, using it becomes an automatic habit. Yet even then, the skill always carries within it the tension that drove its design⁹.
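The tool-shed picture can be condensed into a few lines; the candidate behaviours and situation strings below are invented for the example, and the matching rule is a crude stand-in for real trial and error. Existing tools are tried first, and a new entry is forged only when nothing in the catalogue resolves the tension.

```python
from typing import Callable

# The tool-shed: skills accumulated so far, each keyed by a name.
catalogue: dict[str, Callable[[str], bool]] = {}

def resolve(tension: str, candidates: dict[str, Callable[[str], bool]]) -> str:
    # First try transferring existing tools to the new situation.
    for name, tool in catalogue.items():
        if tool(tension):
            return name                      # an old tool proved effective
    # None worked: build a tool for exactly this tension, then keep it.
    for name, tool in candidates.items():
        if tool(tension):
            catalogue[name] = tool
            return name
    return "unresolved"

# Behaviours the agent can experiment with on the spot (invented examples).
candidates = {
    "two-finger pick": lambda t: "small package" in t,
    "soften grip": lambda t: "fragile" in t,
}

print(resolve("small package stuck in slot", candidates))   # forges a new tool
print(resolve("small package, barcode up", candidates))     # transfers the same tool
```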
But the analogy above is imperfect, and must be amended. Unlike real tools, you cannot purchase skills ready-made from an external source. Although the raw materials out of which you build them are whatever is immediately at hand, you must nevertheless make each additional tool yourself, with features that serve your current purpose. The difference is a small one, but it is critical for understanding the ontology of categories. No skill or concept exists outside of what you invent; you must build it all up anew, piecemeal. This allows your mind to be adaptable to novel, unexpected circumstances, at the expense of a long educational period. The fact that your own inventions end up being similar in many ways to those in others’ tool-sheds is a consequence of the shared nature of the physical reality in which you both create them; that, and the utility of coming to common terms with other people when working on a shared project.
There is another way the tool-shed analogy must be adjusted to capture the essence of skill formation. There is no separation between the owner who identifies the problem and the tool to resolve it. Whereas real-life tools are only a solution to a problem, every skill is simultaneously the discovery and recognition of a problem and its solution. Those without a skill generally cannot even see the situation as a problem. In philosophical circles, we commonly tend to identify the one searching for solutions as the “conscious self” — the owner of the tool-shed in this analogy. Yet in reality the tool-shed has no owner. It is the tools themselves that are creating one another as they recognize their own shortcomings.
¹ To discover and to invent are identical when it comes to skills.
² Say you observe your own thoughts while watching someone handle a small, fiddly object. You may note that they demonstrate “dexterity”. This is a judgment made in the moment, by attaching a label to the experience. The thought of the label becomes an associated thought going forward. But it was not because the original experience somehow contained “dexterity skills” in itself. The fact that you group together many different situations by attaching the same thought-response to each of them in no way indicates a naturally occurring mental grouping, nor even a similarity on a sensory level; only that it momentarily proved useful to interpret them as the same. It was a synthetic, not an analytic judgment. And your judgment could change in the next moment. Anytime you perceive a concept in your mind, you are actually creating a communicative representative for it (usually a word) and over time attaching the latter to as many specific thoughts and experiences as you momentarily deem appropriate: you are making many separate judgment calls. So the apparent unity of such cases under one concept occurs not in space but in time, as you transfer a label from one experience to another. This may create the illusion that the space of skills itself is split into inherent groups — business skills, sports skills, etc.
³ Even identifying or naming a problem, e.g. a “catastrophe”, a “tragedy”, is a type of resolution, since it directs other people towards resolving it. Naming a problem is like a cry for help.
⁴ Indeed if we define “abstraction” this way, there is hardly a word that escapes being “abstract”.
⁵ Real-time experimentation is also necessary for learning causal relationships about the world. Causal learning requires intervention: you make a change, then observe the result. This means that learning (training) and inference must be united into one self-reinforcing process, where the immediate learning goal drives the experimentation.
⁶ Discrete divisions can only be created by discrete signals. Combined with footnote 2, we conclude that skill domains are discrete in time, but fluid in space. More generally, the subjectively experienced reality of stable, discrete categories is a time-bound illusion.
⁷ Data-hungriness is also an attempt to compensate for this inefficient use of datasets. As particular lessons get washed away, they need to be repeatedly reestablished.
⁸ You are also able to plan and think through a solution in imagination (reasoning), but this behaviour is supplementary and supports the higher-level goal.
⁹ If you have spare time, you may also choose to categorize your thousands of tools into groups. This helps you direct a visitor to the right section, or communicate the set’s virtues without going over each individually.