The other day, almost simultaneously, two colleagues emailed me with the same request, though worded differently — namely, to ask for a good argument to show that AI is no match, and indeed can be no match (however far it is developed), for human intelligence.
Here’s how my colleagues posed their questions.
COLLEAGUE 1: I was listening to [a podcast] on fears of AI raised by Bill Gates, Stephen Hawking, and Elon Musk. Apart from Goedel’s theorem, are there known information science barriers to AI being able to think like a human?
COLLEAGUE 2: What’s exactly wrong [with AI] philosophically? … Even to say “as machines become smarter” is simply wrong. We don’t have a scientific definition of intelligence itself… What, exactly, does that even mean? A simple calculator is a genius, because it multiplies large numbers effortlessly… What’s gone wrong with this most important modern debate?
Both these colleagues are skeptics of strong AI, the view that machines will catch up to and eventually exceed humans in intelligence. Moreover, given the current limitations of what AI has in fact accomplished (for instance, natural language processing and accurate translation between languages remain duds for AI), their skepticism has some justification. But the worry is what AI might accomplish given faster machines and more advanced programming — that given better hardware and software, machines will eventually surpass us.
What my colleagues and I would like (I’m also a skeptic of strong AI), then, is some good reason for thinking that however far AI is developed with advances in hardware and software, there will always remain a sharp discontinuity between machine and human intelligence, a discontinuity that cuts so deep and marks such a hard divide between the two that we can safely set aside the worry that machines will supplant us.
It would be an overstatement to say that I’m going to offer a strict impossibility argument, namely, an argument demonstrating once and for all that there’s no possibility for AI ever to match human intelligence. When such impossibility claims are made, skeptics are right to point to the long string of impossibilities that were subsequently shown to be eminently possible (human flight, travel to the moon, Trump’s election). Illusions of impossibility abound.
Nonetheless, I do want to argue here that holding to a sharp distinction between the abilities of machines and humans is not an illusion of impossibility. Quite the contrary. The drumbeat that says machines are just about ready to supplant humanity rests on an illusion of possibility, one that, as I shall argue, deserves no credence because it confuses what can be imagined with what is really possible (HAL 9000 in Kubrick’s film is easily imagined and portrayed; getting a real HAL is another matter).
DIGRESSION:
Before we get into my actual argument, let me dispense with Goedel’s Incompleteness Theorem, which is sometimes invoked to justify the impossibility of strong AI. Strong AI holds that human intelligence is simply the operation of an algorithm (a super-complicated algorithm composed of numerous subalgorithms, no doubt). Goedel’s theorem shows that an algorithm cannot identify its own Goedel sentence, because doing so would require going outside the computational framework in which the algorithm resides and operates. But humans are able to identify Goedel sentences. Therefore humans can do things machines can’t.
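For readers who want the formal statement being gestured at here, the usual textbook rendering (in my notation, not anything specific to this argument) goes as follows. For a consistent, effectively axiomatized formal system $F$ containing enough arithmetic, the diagonal lemma yields a sentence $G_F$ with

$$F \vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner),$$

and Goedel’s first incompleteness theorem then says that if $F$ is consistent, $F$ proves neither $G_F$ nor (given a mild further assumption) its negation. Identifying $G_F$ requires standing outside $F$ and reasoning about $F$ as a whole.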
I’ve never found this argument persuasive, even when it is stated more carefully than my oversimplification above. When humans identify Goedel sentences, they do so for algorithmic systems separate from themselves, systems whose entire logical structure they can survey and then turn against itself, as it were, to identify a Goedel sentence. But even if human intelligence is algorithmic, humans don’t have the capability of, so to speak, looking under the hood (lifting the tops of their skulls?) and therewith identifying their own Goedel sentence.
Consider two things: (1) the nature of our physiology, even if it were running a super-complicated algorithm that equates with our intelligence; and (2) the problem of quantum interference effects, which would prevent getting a full “read” on the algorithm that (supposedly) constitutes our intelligence (reading the algorithm would, by quantum interference, change the algorithm). Together physiology and quantum interference would make it effectively impossible for us to identify our own Goedel sentence.
Goedel’s theorem thus becomes moot. Yes, we can identify Goedel sentences for formal systems external to ourselves. But computers can likewise be programmed to find Goedel sentences for formal systems external to themselves (as becomes clear in any reasonably thorough book on computability and recursion theory). And since we are not external to ourselves, we can’t identify our own Goedel sentences. In consequence, in locating or failing to locate Goedel sentences, we don’t exhibit a capability that computers lack.
So how can we see that AI is not, and will likely never be, a match for human intelligence? The argument is simple and straightforward. AI, and that includes everything from classical expert systems to contemporary machine learning, always comes down to solving specific problems. This can readily be reconceptualized in terms of search (for the reconceptualization, see here): there’s a well-defined search space and a target to be found, and the task of AI is to find that target efficiently and reliably.
In chess, the target is to find good moves. In Jeopardy!, the target is to find the right answer (or question, as the case may be). In medical expert systems, the target is to find the disease associated with certain symptoms. Etc.
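To make the search framing concrete, here is a minimal sketch (the names and code are mine, purely illustrative): a well-defined space of candidates, a target criterion, and a procedure that tries to locate a target. Real systems replace the brute-force loop with domain-specific strategies, but the shape of the task is the same.

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def search(space: Iterable[T], is_target: Callable[[T], bool]) -> Optional[T]:
    """Locate a target in a well-defined search space.

    This exhaustive loop is the crudest possible strategy; chess engines,
    Jeopardy! systems, and diagnostic expert systems swap in far cleverer,
    domain-specific strategies (minimax with pruning, ranked retrieval,
    rule-based inference), but each is still finding a target in a
    prespecified space.
    """
    for candidate in space:
        if is_target(candidate):
            return candidate
    return None

# Toy instance: find the first integer whose square exceeds 2000.
print(search(range(100), lambda n: n * n > 2000))  # 45
```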
If intelligence were simply a matter of finding targets in well-defined search spaces, then AI could, with some justification, be regarded as subsuming intelligence generally. For instance, its success at coming up with chess playing programs that dominate human players might be regarded as evidence that machines are well on the way to becoming fully intelligent. And indeed, that view was widely advertised in the late 1990s when IBM’s Deep Blue defeated then world champion Garry Kasparov. Deep Blue was a veritable “genius” at chess. But computers had been “geniuses” at arithmetic before that.
Even to use the word “genius” for such specific tasks should give us pause. Yes, we talk about idiot savants or people who are “geniuses” at some one task that a computer can often do just as well or better (e.g., determining the day of the week of an arbitrary date). But real genius presupposes a nimbleness of cognition: the ability to move freely among different problem areas and to respond with the appropriate solutions to each. Or, in the language of search, it means not just being able to handle different searches but knowing which search strategy to apply in a given search situation.
Imagine, for instance, a library of every well-defined search along with the algorithm that solves it (I’m using “solve” here loosely — computer scientists often talk about “satisficing” solutions — solutions that are “good enough,” not necessarily optimal in some global sense).
AI has a long string of accomplishments that belong in this library: chess playing programs, Go playing programs, and Jeopardy! playing programs just scratch the surface. Consider Google’s PageRank, Facebook’s news filters, and the robotics industry. But in every case, what one finds is a well-defined search space with a specifically adapted algorithm (often requiring inordinate amounts of human input and know-how) for conducting a successful search.
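To make the hypothetical library concrete, here is a minimal sketch (every name below is invented for illustration, and the solvers are stubs standing in for real, painstakingly engineered systems):

```python
# Stubs standing in for real first-order solvers.
def play_chess(position): ...
def answer_jeopardy(clue): ...
def diagnose(symptoms): ...

# The hypothetical library: a bare lookup table of specialized algorithms.
LIBRARY = {
    "chess": play_chess,
    "jeopardy": answer_jeopardy,
    "diagnosis": diagnose,
}

def dispatch(problem_kind: str, problem_instance):
    """Apply a first-order solver, provided someone has already labeled the
    problem with the right key. The labeling step, recognizing what kind of
    problem one is facing amid messy, contingent circumstances, is exactly
    what the library itself does not supply."""
    solver = LIBRARY[problem_kind]  # raises KeyError for anything novel
    return solver(problem_instance)
```

The coordination work has simply been pushed into the string passed as problem_kind, which is to say, back onto a human.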
Now the point to realize is that this huge library of algorithms is not itself intelligent, to say nothing of being a genius. At best, such a library would pay homage to the programmers who wrote the algorithms and the people whose intelligent behaviors served to train them (a la machine learning). But a kludge of all these algorithms would not be intelligent. What would be required for true intelligence is a master algorithm that coordinates all the algorithms in this library. Or we might say, what’s needed is a homunculus.
A homunculus fallacy is most commonly associated with the study of perception. When we think about perception as physical stimuli that affect our senses and then are conducted as nerve signals to be interpreted by the brain, it’s natural (and wrong) to think of the nerve signals deposited in the brain as constituting a sensorium that a homunculus (a little version of ourselves) then in turn senses, observes, and interprets. This, of course, then leads to a regress in which perception in the homunculus needs to be accounted for, and so on ad infinitum.
A similar problem arises for AI. Successful AI would need to be able to solve all sorts of specific problems. But successful AI would also need to match its capability for solving those specific problems to the actual problems as they arise under highly contingent circumstances. It’s this matching of AI’s ability to solve all sorts of specific or “first-order” problems (doing arithmetic, playing chess, etc.) to the varying circumstances in which those problems surface and demand solution that I call AI’s homunculus problem.
AI needs a homunculus to work: if you will, a “higher-order” problem-solving capability that harnesses the first-order problem-solving capabilities and adapts them to widely varying contingent circumstances. In other words, a successful homunculus would need to know all the different first-order problem-solving algorithms at its disposal and the right circumstances in which to apply each. This means that all of AI’s first-order successes, however impressive and elaborate, will do nothing to advance the higher-order success of its homunculus.
What sort of program can adequately match algorithmic problem-solving capabilities to the problems that the algorithms are capable of solving? Advocates of strong AI largely ignore this question, but reflection indicates that it is huge and totally unresolved. It would do no good, for instance, to use a chess playing algorithm in a problem situation similar to chess, but in which the knight moves like a bishop and a bishop like a knight. Many of the moves that the chess playing program would make would be invalid.
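A toy sketch of the mismatch (not a real chess engine; the move generators below are invented for illustration): a solver hard-wired to standard knight moves proposes nothing but illegal moves once the knight is stipulated to move like a bishop.

```python
# Standard knight offsets, as a conventional engine would encode them.
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def standard_knight_moves(f, r):
    """Knight moves under ordinary chess rules from square (f, r)."""
    return [(f + df, r + dr) for df, dr in KNIGHT_OFFSETS
            if 0 <= f + df < 8 and 0 <= r + dr < 8]

def variant_knight_moves(f, r):
    """In the imagined variant, the 'knight' slides diagonally like a bishop."""
    moves = []
    for df, dr in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
        nf, nr = f + df, r + dr
        while 0 <= nf < 8 and 0 <= nr < 8:
            moves.append((nf, nr))
            nf, nr = nf + df, nr + dr
    return moves

proposed = standard_knight_moves(3, 3)
legal = set(variant_knight_moves(3, 3))
print(sum(m not in legal for m in proposed), "of", len(proposed),
      "proposed knight moves are illegal in the variant")  # 8 of 8
```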
Here, then, is the nub. AI, as an enterprise, attempts to construct algorithms that solve problems or, if you will, conduct successful searches. But what AI needs to be successful is to construct an algorithm that conducts a higher-level search, finding the algorithm that’s needed to solve a given problem or, equivalently, successfully search a given search space. In other words, AI, to be successful, needs to perform a search for a search: it needs, in confronting contingent and unpredictable circumstances, to find the right algorithm to deal with a given circumstance (solving the problem it raises, performing the search demanded).
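Here is what the search for a search looks like if one tries to write it down naively (a sketch under strong assumptions of my own: that the candidate solvers have already been enumerated and that we can even test whether an answer is adequate; neither assumption holds for open-ended, real-world circumstances, which is the point):

```python
def search_for_search(problem, candidate_solvers, is_adequate):
    """Try each first-order solver in turn; return the first whose output
    the checker accepts.

    Note that this is itself just another first-order search, conducted
    over a fixed, human-curated space of solvers with a human-supplied
    adequacy test. It says nothing about where the candidates or the
    checker come from when a genuinely novel situation arises.
    """
    for solver in candidate_solvers:
        try:
            result = solver(problem)
        except Exception:
            continue  # solver not even applicable to this problem
        if is_adequate(problem, result):
            return solver, result
    return None, None
```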
This is a task on which AI’s practitioners have made no headway. The library of search algorithms described above is a kludge — it simply brings together all the algorithms. No doubt there will be hierarchical connections in this library, so that an endgame playing program in chess would be part of a full chess playing program, which might be part of a still broader algorithm covering common board games (checkers, Go, etc.).
But what’s needed is not a kludge but a coordination of all these algorithms. A master algorithm, or what I’m calling a homunculus, that achieves such coordination is the holy grail of AI. But there’s no reason to think it exists. Certainly, work on AI to date provides no evidence for it, with AI, even at its grandest (automated vehicles?), still focused on narrow problems or, equivalently, well-defined searches.
Absence of evidence for such a “search for the search” master algorithm (homunculus) might prompt a response from strong AI’s valiant defenders: give us more time and effort to devote to this problem of finding a successful search for the search, and we’ll solve it. There is a deeper conceptual problem here, however: the search-for-the-search algorithm required for the master algorithm (homunculus) must be a very different animal from the ordinary search algorithms that make up AI as we know it. What we need is an algorithm capable of directing the algorithms in our hypothetical library and applying them appropriately to given problem circumstances.
Now it might be argued that this search-for-the-search master algorithm (homunculus) is a red herring because it depends on a hypothetical library representing the past achievements of AI researchers, and that when an algorithmic homunculus capable of human intelligence, with its nimbleness in solving a vast range of problems, finally arrives, it will not be built on such an existing library. But if not, what then? Will it be an algorithm with the power to create its own capabilities for resolving the vast array of problem situations into which humans, and thus the newly intelligent machines, are invariably thrust?
In any case, whether it comes as an addendum to my hypothetical library or as its own super-algorithm capable of adapting itself to a multitude of problem situations, we have no precedent for, and no idea of, what such a homunculus would look like. And obviously, it would do no good to see the homunculus itself as a library of AI algorithms controlled by a still more deeply embedded homunculus. Essentially, to resolve AI’s homunculus problem, strong AI supporters would need to come up with a radically new approach to programming (perhaps building machines in analogy with humans through some form of machine embryological development). The search for a search, in which any lower-level search algorithm that might be needed to resolve a given search situation could be the outcome of an effective general-purpose higher-level search, is nowhere in the offing.
The search for the search remains unresolved and appears unresolvable. Results from the computational literature on No Free Lunch Theorems and Conservation of Information (see the work of David Wolpert and Bill Macready on the former as well as that of Bob Marks and me on the latter) provide further evidence that this problem is computationally intractable: it has no general solution but must be adapted to specific circumstances, which is precisely the point at issue. AI always seems to end up solving specific problems.
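For readers who want the formal version, here is the Wolpert–Macready result as it is usually stated (my transcription, so treat the notation as a sketch): for any two search algorithms $a_1$ and $a_2$ and any sample size $m$,

$$\sum_{f} P\big(d^y_m \mid f, m, a_1\big) \;=\; \sum_{f} P\big(d^y_m \mid f, m, a_2\big),$$

where the sum runs over all possible objective functions $f$ on a finite search space and $d^y_m$ is the sequence of cost values the algorithm has seen after $m$ distinct evaluations. Averaged over all problems, every search strategy performs exactly as well as every other; gains on one class of problems are paid for with losses on another.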
Intelligence, by contrast, is a general faculty capable of matching a wide-ranging, diverse repertoire of abilities for solving specific problems to the actual and multifarious problems that arise in practice. Descartes put it this way in his Discourse on Method:
While intelligence is a universal instrument that can serve for all contingencies, [machines] have need of some special adaptation for every particular action. From this it follows that it is impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our intelligence causes us to act.
Just to be clear, I’m no fan of Descartes (my own philosophical sensibilities have always been Platonic), and I regard much of his philosophy as misguided (for example, his view that animals are machines). Even so, I do think this quote hits the nail on the head. Indeed, it is perhaps the best and most concise statement of what I’m calling AI’s homunculus problem: the computer science community’s total lack of insight into, and progress toward, constructing a homunculus (a master algorithm, or what Descartes calls a “universal instrument”) that can harness the first-order solutions of AI and match them to the problem situations in which those solutions apply.
Good luck with that! I’m not saying it’s impossible. I am saying that there’s no evidence of any progress to date. Until such progress appears, there’s no reason to hold our breath or feel threatened by AI. The only real worry is that we buy into the illusion of possibility, convince ourselves that we are machines, and thus denigrate our own humanity. Our humanity itself remains secure.