The other day, almost simultaneously, two colleagues emailed me with the same request, though worded differently: they wanted a good argument to show that AI is no match for human intelligence, and indeed can never be, however far it is developed.
Here’s how my colleagues posed their questions.
COLLEAGUE 1: I was listening to [a podcast] on fears of AI raised by Bill Gates, Stephen Hawking, and Elon Musk. Apart from Gödel’s theorem, are there known information science barriers to AI being able to think like a human?
COLLEAGUE 2: What’s exactly wrong [with AI] philosophically? … Even to say “as machines become smarter” is simply wrong. We don’t have a scientific definition of intelligence itself… What, exactly, does that even mean? A simple calculator is a genius, because it multiplies large numbers effortlessly… What’s gone wrong with this most important modern debate?
Both these colleagues are skeptics of strong AI, the view that machines will catch up to and eventually exceed humans in intelligence. Given the current limitations of what AI has in fact accomplished (natural language processing and accurate translation between languages, for instance, remain duds for AI), their skepticism has some justification. But the worry is what AI might accomplish with faster machines and more advanced programming: that given better hardware and software, machines will eventually surpass us.
What my colleagues and I would like (I’m also a skeptic of strong AI), then, is some good reason for thinking that however far AI is developed through advances in hardware and software, there will always remain a sharp discontinuity between machine and human intelligence: a discontinuity that cuts so deep, and marks so hard a divide between the two, that we can safely set aside the worry that machines will supplant us.