AI (artificial intelligence) poses a challenge to human work, threatening to usurp many human jobs in the coming years. But a related question, too often ignored, needs to be addressed: will this challenge come from AI actually matching and exceeding human capabilities in the environments where humans currently exercise those capabilities, or will it come from AI also manipulating our environments so that machines thrive where otherwise they could not?
AI never operates in a vacuum. Rather, any instance of AI operates in an environment. We often think that AI will leave an environment untouched and simply supersede human capability as it operates in that environment. But what if the success of AI depends not so much on being able to rival human capabilities as on “changing the game” so that AI has an easier job of it? The mathematician George Polya used to quip that if you can’t solve a problem, find an easier problem and solve it. Might AI in the end not so much supersede humans as impoverish the environments in which humans find themselves so that machines can thrive at their expense?
To see what’s at stake with this line of concern over AI, consider the prospect of automated vehicles. Automated vehicles, we are told, are poised to take over all of human driving. Once the machine learning for the automation of driving matures just a bit more, we are assured that human drivers will be out of a job — the machines will drive so much better than us that it would in fact be unethical for humans to continue to drive. And, of course, as an unfortunate side effect, humans whose jobs depend on driving will all be put out of work (truck drivers, taxi drivers, etc.).
Now the actual progress of automated vehicles has not reflected this rosy picture (rosy if you’re an AI advocate, dismal if you’re out of a job). Recent fatalities involving automated vehicles have undercut this picture in the eyes of the public at large. But the deeper conceptual problem with the automation of driving is that there appear to be just too many contingencies on the road, contingencies that human drivers can handle without difficulty but for which machines require specialized training.
The worry, in other words, is that the list of such contingencies may be so long and indefinite that fully automated driving never becomes a reality, and that the best we can hope for is some form of hybrid automation in which the machine driver does routine tasks but then hands the reins over to the human driver when things get dicey. But such a solution is no solution at all: automated driving that requires human intervention at crucial points is like a god of the gaps, unacceptable to science and unacceptable to the technology of AI.
Following Polya’s dictum about transforming hard problems into easier problems, AI engineers tasked with developing automated driving but finding it intractable on the roads currently driven by humans might then resolve their dilemma as follows: just reconfigure the driving environment so that dicey situations in which human drivers are needed never arise! Indeed, just set up roads with uniformly spaced lanes, perfectly predictable access, and electronic sensors that give vehicles feedback and monitor for mishaps.
My colleague Robert Marks dubs such a reconfiguration of the environment a “virtual railroad.” His metaphor is spot on. Without such a virtual railroad, fully automated vehicles simply face too many unpredictable dangers and are apt to “go off the rails.” Marks, who hails from West Virginia, especially appreciates the dangers. Indeed, the West Virginia back roads are particularly treacherous and give no indication of ever submitting to automated driving.
Just to be clear: I’m not wishing that automated driving fail and that human drivers thereby keep their jobs. I would feel bad for the drivers who lost their jobs if fully automated driving did succeed. But at the same time I think it would be very interesting, as an advance for AI, if driving — in fully human environments — could be fully automated.
My worry, however, is that what will happen instead is that AI engineers will, with political approval, reconfigure our driving environments, making them so much simpler and more machine-friendly that full automation of driving happens, but the result will bear little resemblance to human driving capability. Just as a train on a rail requires minimal, or indeed no, human intervention, so cars driving on virtual railroads might readily dispense with the human element.
But at what cost? Certainly, virtual railroads would require considerable expenditures in modifying the environments where AI operates — in the present example, the roads where fully automated driving takes place. But would it not also come at the cost of impoverishing our driving environment, especially if human drivers are prohibited from roads that have been reconfigured as virtual railroads to accommodate fully automated vehicles? And what about those West Virginia back roads? Would they be off limits to driving, period, because we no longer trust human drivers, and fully automated drivers are unable to handle them?
In his Introduction to Mathematical Philosophy, Bertrand Russell described how in mathematics one can introduce axioms that shift the burden of what in fact needs to be proven and thereby call forth “the advantages of theft over honest toil.” To this he rightly exhorted, “Let us leave them [i.e., the advantages] to others and proceed with our honest toil.”
I would pose a similar exhortation to the AI community: If you are intent on inventing a technology that promises to match or exceed human capability, then do so in a way that doesn’t at the same time impoverish the environment in which that capability is currently exercised by humans.
AI has many successes where humans have been bettered without compromising the human environment (e.g., chess-playing programs). But it also has many failures in which machines have been substituted for humans and do worse (does anybody prefer the automated order-takers at McDonald’s to humans? the automated voices in customer service to real human voices?). Note, however, that these failures occur when machines are placed into existing human environments.
The worry is that our environments will soon be altered to accommodate AI. It goes without saying who here is going to get the short end of the stick, and it won’t be the machines!