AI enthusiasm and the false promise of democratizing intelligence

Why we seem enamoured with unwarranted AI speculation

From Narrow To General AI
Apr 6, 2024

By Yervant Kulbashian. You can support me on Patreon here.

Recently, Amazon announced it was winding down checkout-less services within its own Amazon Fresh stores, as the technology has apparently proven financially and technically impractical. Half a decade ago, these stores emerged to great media fanfare, promising a frictionless shopping experience built on cutting-edge AI. Branded as a “just walk out” service, participating stores would let customers save time by skipping checkout, while Amazon cut costs on human cashiers. Whether or not the original demos involved selective editing or other technical theatrics, and thus misled the public into believing the tech was more ready than it actually was, is not important. The key point is that, despite the initial enthusiasm and optimistic projections, the service proved financially infeasible:

it took a vast array of sensitive equipment and 1,000 people staring at video feeds to do the job of one or two people sitting behind cash registers at each store. — Engadget

This misfire is only the latest in a spate of promising new AI businesses and tools that, at launch, seemed poised to upend entire industries, but proved impractical once engineering teams dug into their devilish details. IBM’s Watson, which promised to replace doctors but ended up being sold for parts, was another high-profile example. Autonomous vehicles, into which billions of dollars have been poured over the last decade, may soon be another victim, as seen in the scaling back of many high-profile self-driving projects. In each of these cases, the fatal setbacks involved some combination of poor technical performance and bad cost structures.

Little of this is surprising to those within the technical field itself. AI has a widely recognized “long-tail” problem: AI agents can perform a large portion of their tasks quite skilfully, but fail in an equally large number of edge cases and exceptional situations that they are entirely unable to address. In those situations, AI systems, unlike humans, have no capacity to adapt and resolve the issue organically; separate systems must be built to spot and handle such failures, which ultimately balloons a project’s costs. There are many possible reasons for this deficiency, such as a lack of grounding or model brittleness. But whatever the underlying cause, the issue is clear: a superficial appearance of impending technological revolution can be deceiving.
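To make that cost dynamic concrete, here is a minimal, purely illustrative sketch. Nothing in it comes from Amazon’s actual system; the function, the model interface, and the confidence threshold are all hypothetical. The point is only the shape of the architecture: the model absorbs the common cases, and everything it is unsure about falls through to a separately built, human-staffed review path.

```python
# Illustrative sketch of the "long-tail" cost problem (hypothetical names and
# threshold): the model decides the common cases, but every low-confidence
# case is escalated to a separate human-review queue.

CONFIDENCE_THRESHOLD = 0.95  # below this, the model's answer isn't trusted

def handle_checkout(event, model, human_review_queue):
    """Decide a shopping event automatically, or escalate it to humans."""
    label, confidence = model.predict(event)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # the easy majority of cases

    # The long tail: occlusions, unusual items, shared carts, and so on.
    # Each escalation consumes reviewer time, so the marginal cost of the
    # "automation" grows with every edge case the model cannot absorb.
    human_review_queue.append(event)
    return "pending_human_review"
```

In a sketch like this, the expense lives almost entirely in the escalation path, which has to be staffed and maintained alongside the model itself; that is where the quoted “1,000 people staring at video feeds” end up.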

Of course, misrepresenting the performance of a new offering is not unheard of in product marketing. Fudging the truth is a common industry tactic for maintaining investor confidence, even outside of tech. The question this post asks is not why companies mislead their audiences (that much is self-explanatory) but why audiences want to believe them so badly.

It is not difficult these days to find, across both official institutions and independent observers, gushing encomiums about where AI will end up in the next five years. Even the CEO of NVIDIA is not immune to such starry-eyed projections. As Erik Larson argued in The Myth of Artificial Intelligence, the desire to trace a meteoric trajectory for artificial intelligence, and to extrapolate some transcendent inflection point in the near future, appears to be an unavoidable feature of the field, undeterred, apparently, by its numerous historical setbacks. Disappointment reliably arrives in the cold light of morning, but by then a new set of AI theatrics has captured the limelight and the public’s imagination. A sober, level-headed belief in incremental, year-on-year improvements does not seem to be enough; we are addicted to dreaming big.

This is a strange cultural phenomenon. What benefit is there in announcing, today, a fantasy future of exponential growth? Why not simply wait for the event to happen, and only then celebrate humanity’s accomplishments? Instead of gleefully proclaiming that AI will put artists out of jobs in the next five years, or will provide medical care better than seasoned physicians, why not simply hold tight until that future is realised, and count our chickens then?

Why not simply wait for this AI future to happen, and only then celebrate our species’ accomplishments? Why do we repeatedly count our AI chickens long before they’ve hatched?

News outlets, of course, have their reasons for engaging in sensational hype and ebullient speculation: “AI kills cancer cells” or “AI drives cars more safely than humans” are better headlines than “AI helps technicians analyse data”.

Readers who only glimpse the headline will get an incorrect impression of the state of research.

And NVIDIA’s CEO may simply have drunk too much of his own marketers’ Kool-Aid. But what about private individuals, so-called “AI bros” who have seemingly little to gain, and possibly a lot to lose, from seismic shifts in the industrial landscape? These proponents come in all varieties, and you’ve likely encountered them in the comments section of some article or opinion post. As late inheritors of Kurzweil’s singularity prophecies (though perhaps more pragmatic than he was), they push technological hype with an almost religious zealotry, lambasting doubters as anti-progress Luddites.

Why expend such energy to defend a vision of AI’s impending, near-total domination of industry? What’s the motive? Is it fear? That doesn’t seem likely, since such commentators appear to relish the prospective changes instead of forewarning against fatal hubris. Is it an admonition to armchair stock investors, reminding them to bank on future paradigm shifts instead of complacently relying on business continuing as usual? Again, unlikely: there is no way to tell which specific organization will develop the next leap forward, so investment advice can only hint at vague trends. Are they perhaps inspired by a compelling vision of technological utopia, one that will shatter poverty and disease, and bring humankind to a euphoric oneness of joy and brotherhood? I hope not, since the technology is more likely to consolidate wealth in the hands of those organizations that wield the most computational power, which is (partly) why the rest of us look on such developments with wary trepidation.

Another plausible hypothesis is that they have simply swallowed corporate propaganda wholesale, driven by a desire to be part of something greater than themselves. This would be one more instance of the pursuit of company profits being sold to consumers as the fulfillment of their personal aspirations. Like fanatics of certain video games or movies, who aggressively defend an upcoming release based only on a hope and a trailer, they seem desperate for a product, one around which they can form a social identity. Corporations need only feed this hunger as part of a well-rounded social media strategy. Yet even this explanation seems lacking, since you don’t hear enthusiasts parroting specific company talking points, and they will just as easily switch allegiance from OpenAI to Elon Musk, or follow both, if they are so inclined.

So what’s left? Perhaps it is just a latent desire to be conversationally interesting, by being the “first” to herald, John-the-Baptist-like, the coming transcendence of mind and body. Since the technology hasn’t been fully developed yet, and it’s perpetually on the cusp of revelation, they can make any number of dreamy prognostications without worrying about being swiftly refuted. The attraction of being different, of being “in the know”, has always been a major draw of any sub-culture that lays claim to esoteric knowledge. Yet hidden behind every such movement there is always a deeper agenda, a hard kernel of social frustrations masquerading as privileged insight. This, I believe, is where the real clue is hidden.

Among the AI maximalists, it’s easy to notice a strain of us-vs-them acrimony, of putting those pretentious artists in their place, of cutting the legs out from under those expensive (in the US at least) doctors. AI, like many past technologies, makes emancipatory promises — false ones at that — about a coming equalization; but this time it equalizes human intelligence. When everyone can be an artist, there’s no sense in overvaluing individual artistic talent. When anyone can be a programmer, all those high-paying jobs for self-satisfied nerds disappear. And why shell out thousands in fees for doctors or lawyers when you can run a comparable piece of AI software on your laptop for pennies?

The world today is a marketplace of minds, and some people long for a redistribution of that wealth. That is the ideal — to level the playing field. They foresee a time when, as Isaiah prophesied, “every valley shall be raised up, and every mountain and hill made low”. It’s easy to see where their passion comes from, the ardour and zealous belief. Such a future can’t come soon enough for some people. Comparisons to Christian eschatology are apt here, as they dream of the day that AI will bring down the mighty and arrogant ones:

Source: https://www.reddit.com/r/aiwars/

Yet this ambition appears to me to be ultimately self-centred rather than egalitarian. They are not prophets of a better, post-capitalist future, or technological utopians dreaming of a world without work. They want a level playing field, a fair fight, because that finally gives them an opportunity to win. The tone of their commentary suggests that it is not even really about fulfilling their vision in the next five years, but about sticking it to their opponents right now, in this moment, through words and veiled threats. This is why they don’t wait for their dream to actually come true before pronouncing their righteous opprobrium on the world.

It’s a shame, then, that history has shown over and over that technology doesn’t necessarily create fairer social conditions. It just as often gives existing power a new shape or a new tool. Nor can AI, in its current form, unseat human talent. Modern AI is built on oceans of human-generated data. Without this lifeblood its systems grind to a halt; they are not (yet) self-sustaining, and there is no extant theory for how to get them to that state. People within the field already know this. AI enthusiasm is instead the product of wishful thinking, not of research and careful evaluation. And as with the cryptocurrency craze, maximalist dreams are destined to fade away in the face of a more pragmatic, mundane reality.

It’s perhaps fitting then, at least for now, that the anticipation of a coming revolution is more captivating than the event itself will ultimately prove to be. Disappointment can wait till tomorrow; today has always belonged to the dreamers.
