Moravec’s paradox is no paradox

Logical reasoning was always designed to be simple and consistent

Apr 24, 2025

By Yervant Kulbashian. You can support me on Patreon here.

Since the early 1960s, AI researchers have observed a recurring paradox across their projects: challenges that seem easy for humans are nearly impossible for AI, while those that seem difficult for people are relatively easy for AI to accomplish. Excelling at chess is a rare and difficult feat compared to putting on a shirt and tie, yet the former was mastered in 1997 by Deep Blue, while the latter remains largely unsolved. Many researchers noted this counterintuitive result in their work, but the paradox is named after Hans Moravec, who articulated it in his book Mind Children (1988):

It has become clear that it is comparatively easy to make computers exhibit adult-level performance in solving problems on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.

This paradox persists through to the modern AI age¹. Authors like Minsky and Pinker have offered their own hypotheses for the phenomenon, and Moravec himself proposed an evolutionary explanation in his book:

We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.

Put simply, we’ve had hundreds of millions of years for evolution to perfect our sensory-motor systems, yet when it comes to refining our abstract reasoning capabilities we are still novices. Had evolution better prepared us for higher thinking, we would presumably be more efficient at it. As an analogy, climbing a mountain is difficult for us but easy for mountain goats, who have evolved refined anatomy and skills to master the domain. According to Moravec, logical reasoning, of which mathematics is a subset, is a “domain of challenge” for which our skills have not yet been attuned — a future human would be a better evolutionary fit.

Though it has a certain intuitive appeal, there is a strange assumption behind his hypothesis. It implies that we stumbled upon mathematics in the same way a mountain goat’s evolutionary ancestors stumbled across mountains, and we have yet to acclimatize. Another way of framing it is that logical reasoning is an environmental “niche” that, unlike other animals, we have captured for ourselves, though imperfectly.

This proposal is strange because mathematics and logic are not environmental niches. The challenges they present to us exist only in our minds. Even a logical realist — one who believes logic is part of the real world — would say that numbers and logic exist everywhere, all the time. We weren’t presented with them all of a sudden as an external problem to which we adapted; they were always there, merely undiscovered.

There is another issue as well. Logical reasoning is not itself a motivator of survival or competition, as you’d expect from any driver of evolutionary adaptation. It isn’t even a “problem” per se; logic cannot eat you, kill you, hinder you, or steal your resources. It only presents issues when you fail to put it to effective use, at least relative to some other person who uses it better. The more effective your application, the greater your advantage. Logic and reasoning are like tools we have invented, defined, and applied, as a form of power and mastery over the world. We invent and reinvent the space all the time. They are solutions.

In fact, calling logic a “tool” may be misleading, since it implies that, like an extra finger or tail, it is embedded within one’s genetic code; whereas mathematical theorems and logical frameworks are all learned. When a new theorem is introduced, it spreads like a meme among its practitioners, usually without regard for each practitioner’s genetic history. All that is necessary to capture it is the basic ability to comprehend it, and a bit of time and interest. Whether some people have a greater genetic ability to integrate and apply a logical principle cannot be known before they are given a chance to try, and that in turn cannot happen until someone comes up with the principle. For all we know, had the ancient Sumerians lived in modern times, they might have been just as able to comprehend modern symbolic logic as we are.

In Moravec’s defence, he would likely say that the various forms and details of logic are not themselves a set of evolutionary tools, as though they were built-in modules in the brain. Rather, they represent specific applications of fundamental reasoning skills that have evolved further in humans than in other animals. Mathematics is one specific application, but it is the underlying ability that has really allowed us to master our world.

Although it has been difficult to pinpoint exactly what that underlying skill entails, we can at least enumerate some of its benefits. The first is that logic is abstract — its rules can be transferred and applied across a variety of specific cases without knowing the concrete details beforehand. 1 and 1 equal 2, regardless of whether they involve apples, cars, or countries. As a rational abstraction, logic also provides precision and correctness — logical and mathematical conclusions are guaranteed. Reliable consistency has often been noted as its greatest feature:

Mathematics catalogues everything that is not self-contradictory. — Greg Egan
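The abstraction point above can be made concrete with a minimal sketch (the function name and examples are invented for illustration): one counting rule applies identically no matter what concrete things are being counted.

```python
# A minimal sketch: the rule "1 and 1 equal 2" is abstract --
# the same counting logic applies regardless of what is counted.
def combine(count_a: int, count_b: int) -> int:
    """Add two counts; the rule ignores the concrete things counted."""
    return count_a + count_b

apples = combine(1, 1)     # counting apples
cars = combine(1, 1)       # counting cars
countries = combine(1, 1)  # counting countries
assert apples == cars == countries == 2
```

The same transferability is what lets a single theorem, once learned, apply across every domain that satisfies its premises.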

Of course, one of the recurring problems of logical thinking is that although it is clean and abstracted, it often doesn’t match the messy realities of the world in which we actually live. The machinery of logic is only as useful as its inputs, and too often these inputs get oversimplified. Exploring the real world is not so straightforward as it appears in logical formulae.

This hints at the true explanation for Moravec’s paradox. When we derive new logical methods or mathematical frameworks, we are specifically trying to make it easier and more efficient for us to understand our world; just as the formulas of Newtonian physics were invented to make complex dynamic motion easier to predict and work with. This always comes at the expense of full accuracy, by ignoring many irritating details and consequences. But the overall benefits derive from the fact that, unlike the messy real world, logic and maths transpose our thinking into a simplified, consistent model, which makes it easier to address and extrapolate.

Much of this simplification and consistency also serves the needs of communication. As argued elsewhere, logic is primarily a social tool, and symbols are communicative constructs; together they facilitate collaboration. Like French or English, these systems provide a common language of terms and operations. Consider how business logistics are built around standardized procedures that make interoperation easier: barcodes and SKUs turn a variety of products into unique, discrete numbers; roads and logistic chains make it easy to plan when items will arrive; databases encode the diverse world into a homogeneous searchable/filterable index. In the process a lot of nuance is lost, but maintaining these simplifications helps clarify communication more often than it impedes it. William Kent discussed this in detail in his excellent book, Data and Reality (1978):

The data processing community has evolved a number of models in which to express descriptions of reality. These models are highly structured, rigid, and simplistic, being amenable to economic processing by computer.

Life and reality are at bottom amorphous, disordered, contradictory, inconsistent, non-rational, and non-objective…. Rational views of the universe are idealized models that only approximate reality. The approximations are useful. The models are successful often enough in predicting the behavior of things that they provide a useful foundation for science and technology.
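The SKU-and-database simplification described earlier can be sketched in code. This is a hypothetical illustration (product names, fields, and values are invented): once diverse, messy items are flattened into uniform records, a single generic query works across all of them — at the cost of the nuance the records leave out.

```python
# Hypothetical product records: diverse real-world items encoded as
# homogeneous, discrete entries, as a barcode/SKU system would do.
products = [
    {"sku": "0001", "name": "Heirloom tomato, slightly bruised", "price": 1.25},
    {"sku": "0002", "name": "Tomato", "price": 0.99},
    {"sku": "0003", "name": "Cherry tomato pack", "price": 2.50},
]

# Once flattened, one generic operation filters them all uniformly;
# the bruising, variety, and packaging nuances simply disappear.
cheap = [p["sku"] for p in products if p["price"] < 2.00]
print(cheap)  # ['0001', '0002']
```

The lost nuance (how bruised is item 0001, really?) is exactly the “amorphous, disordered” residue Kent describes — excluded so that the remaining structure stays searchable and consistent.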

Through logic we have forced consistency onto the world. This is why logical reasoning in AI only works in simplified, self-consistent domains. Moravec even hints at this in his book, when he notes that Shakey (a robot developed at the Stanford Research Institute in the 1960s–70s) only worked in dumbed-down environments:

[Shakey’s sensors] worked only when the scene consisted solely of simple, uniformly colored, flat-faced objects, so a special environment was constructed for the robot. It consisted of several rooms bounded by clean walls, containing a number of large, uniformly painted blocks and wedges.

And even today, Machine Learning models are far more likely to succeed if you narrow or constrain their task to clearly defined, regular entities and processes.

Even if Moravec’s hypothesis had been correct, the fact that evolution had not “prepared us” to perform logic still would not explain why computers and AI are so good at it. The answer is that logical thinking, like any kind of symbolic system, is an invented structure that makes the world easier to work with and more consistent in its extrapolations. The fact that computers and AI can do it easily is therefore no surprise — it was always intended to be so. So when Moravec claims:

[logic] is not all that intrinsically difficult; it just seems so when we do it.

he assumes that AI can solve logical problems well because they are actually easy problems in “reality” — as opposed to problems artificially designed to be easier than dealing with the complexity of the real world.

“Easy” is a relative term based on the skills the agent brings to the task. Computers are machines that perform a specialized job — which is why they do it well. The same is true of all human technology. Planes can fly higher and for longer distances than birds, but they can do little else that a bird can do, such as heal or reproduce. Given any straightforward, finite task, e.g. grinding corn, we can always invent a machine to do that specified task very well, e.g. a mill.

Evolutionary development certainly explains why humans, unlike other animals, can reason symbolically, but it need not be invoked to explain why computers can also reason well. Exploring and extrapolating the possibilities of simplified, consistent rules and entities is precisely what we have built computers to do, as long as the base terms and operations remain finite and fixed. Much of modern logic theory was even shaped around making such machines more efficient at their task. It is not that we are evolutionarily unprepared, as the first mountain goats would have been; it is that computers are specialized machines designed to solve these simplified problems, using the very conceptual structures we use while reasoning.

A better analogy would have been to sports. It is notably easier for both humans and AI to learn a sport than to learn to live in general, because the terms, rules, and operations involved in sports are narrow and well-defined. They represent a subset of all activity, which makes their practice smoother and more straightforward for all involved. This is true of games in general, which is why AI has been successful in such highly clarified domains (e.g. Atari games). We designed sports, games, and logic to be simple. We did not design life.

Playing a narrow sport is easier than “general living,” in part because the environment is kept relatively clear of distractions.

In the end, Moravec’s paradox isn’t a paradox. We have conceived of symbolic systems to simplify, regularize, and universalize the fluid complexity of the world, and then designed machines that specifically carry out those same select operations. That they do so well should not surprise us.

¹ The balance may have shifted a bit with large vision-language models (ChatGPT, etc.), which make errors across both reasoning and perception. However, they are still not suited to the open-ended robotics applications to which Moravec was referring.

Written by From Narrow To General AI

The road from Narrow AI to AGI presents both technical and philosophical challenges. This blog explores novel approaches and addresses longstanding questions.