The Future of Meaningful Work

Here’s a reprint of an article I published at TheBestSchools.org in April 2017 on the challenge of artificial intelligence to meaningful human work:

—————-

In the last days of the Obama administration, on December 20, 2016, the White House published a white paper (no pun intended) titled “Artificial Intelligence, Automation, and the Economy.” Listed among the contributors were John Holdren, the President’s science advisor, as well as world-class experts in economics, technology, and domestic policy. This paper was a big deal.

Its upshot was that advances in artificial intelligence (especially machine learning) threaten to phase out as many as 50 percent of U.S. jobs over the next 20 years, and that this disruption to the economy will require a vigorous multi-pronged governmental response, ranging from improved education to increased social safety nets. MIT Technology Review as well as Wired picked up on this white paper and underscored its concerns. The title of the Wired piece captured the worry for the future: “The AI Threat Isn’t Skynet. It’s the End of the Middle Class.” (Skynet is, of course, the malevolent conscious artificial intelligence intent on enslaving and destroying humanity in the Terminator films.)

Threats to jobs from technology have been with us since the Industrial Revolution. But in reading the White House paper, one gets the sense “this time is different.” An extended box on page 20 titled “The End of Work?” (the question mark is especially troubling) raises the possibility of AI phasing out not just some jobs but all human work, period, rendering human work passé because, if the promise of AI comes true, everything that we can do machines will eventually do better. But even leaving aside such a grand dystopian vision (which these days is promoted with a smile by singularity and transhumanist enthusiasts), the paper argues convincingly that many jobs as we know them now won’t be around much longer. The case study it presents about AVs (automated vehicles) and the human drivers soon to be displaced is hard to discount.

At TBS Magazine [now The Quad], Erik Larson wrote a series of insightful articles questioning the more extreme view of AI, which sees AI as superseding human intelligence and rendering humans (and thus human work) obsolete (cf. his general critique of AI as well as his history of computer chess and account of IBM Watson’s computer Jeopardy! player). Against this “strong AI” view, he shows that AI (especially its machine- and deep-learning offspring) has not only failed to prove itself a match for human intelligence but also seems intrinsically incapable of matching it (human intelligence considered broadly, and not just for narrow tasks such as playing chess or Jeopardy!).

AI advocates tend to treat expressions of such skepticism as lame attempts by technological unsophisticates to preserve a humanistic vision of the world that is destined to founder against the onslaught of technological advance. But Larson’s argument, and a similar one that I’ve been advancing for over 25 years, is motivated not by a desperate need to preserve human exceptionalism over machines but rather by a sober assessment of the nature of computation and its limitations.

This is not the place to rehearse such fundamental debates over computation in any depth (though I’ll offer a few observations here and there). Rather, my point in this article is to offer some constructive recommendations on how to think about human work in light of the challenges that AI is bringing to it and, above all, why we should think that meaningful human work not only has a future but is the future!

The White House report raises important concerns, not just for the economy in broad terms but for individual human lives and families, who must negotiate a job market that will be radically reconfigured over the next two decades as many existing jobs are phased out. The most maddening thing about the White House paper is its lack of specificity about what new types of jobs, if any, may become available to supplant the jobs that disappear.

Granted, the paper treats certain general themes, such as a negative correlation between level of education and the likelihood of a job getting phased out by AI (less education greatly increases the probability of one’s job being phased out). There’s also a negative correlation between hourly wage and the likelihood of a job getting phased out by AI (the higher the pay, the smaller the likelihood of the job standing in jeopardy). Thus, for people earning the minimum wage and for people with less than a high school diploma, the probability of one’s job being phased out by AI is, according to the white paper, better than 80 percent (what does this mean for minimum-wage laws?).

But short of recommending that we obtain higher-paying jobs ($40-an-hour jobs tend to be immune to AI-phase-outs) and that we get at least a bachelor’s degree (the jobs held by people with at least a bachelor’s degree tend to be less automatable), the White House paper offers no concrete proposals for how to handle the coming AI-induced jobs crunch. At best, one finds vague assurances that as technologies displace jobs, the technologies will require new human minders and thus open up new opportunities for working with those technologies, and that this will provide additional jobs. But how many new jobs will thus be created, and will the new jobs keep pace with the old jobs that were rendered obsolete?

Such assurances (“build the technology, and new jobs will materialize”—never mind the old jobs that dematerialized) sound especially hollow against the specter, raised earlier in this article, of the end of work, period. Certainly, it is disconcerting to see this paper question the very possibility of ongoing productive human work. More troubling is to see the paper end with a call to “modernize and strengthen the social safety net.” Included here are sections titled “Strengthen Unemployment Insurance” and “Give Workers Improved Guidance to Navigate Job Transitions.”

There’s nothing wrong with safety nets as such, but it’s always better if they don’t need to be used. If we were confident in human creative ability to overcome the challenge of machines, finding new ways of interacting with them so that meaningful human work, far from contracting, could instead be expected to expand, there would be no need for this white paper to focus on social safety nets. Safety nets may be necessary, but they are never ideal: people in a society are doing better the less such protections are needed.

The White House paper leaves the reader with a sense of inevitability: lots and lots of people are going to lose their jobs to AI automation, we don’t know what new jobs will arise in place of these lost jobs, and we can offer, at best, vague glimmers for moving forward, such as improved and more extensive education (specifics, please?) and better social safety nets.

This assessment is certainly bleak. But is it warranted? I would say no. To justify my optimism (or, perhaps better, protest against pessimism), I offer four recommendations for moving our society constructively forward in the face of massive AI-induced disruptions to the job market:

(1) Societal Commitment to Meaningful Work

As a society, we must be absolutely committed to people finding employment in productive and meaningful full-time jobs. Jim Clifton, the CEO of Gallup, published The Coming Jobs War in 2011. As the Gallup organization investigated the topic of jobs, it found that productive and meaningful full-time jobs were essential to human satisfaction and happiness. Small mystery here. People need a reason to wake up in the morning. They need to be doing things with their lives that they think will make the world a better place, for themselves, their families, and their communities.

How they attempt to realize that end may be misguided (criminal gangs presumably see themselves as doing meaningful work), but the impulse will always be there. To deny that impulse because AI has rendered human work obsolete can only have an adverse effect on the human spirit. Note that it’s not enough to have a modicum of productive and meaningful work. We want to throw ourselves into that work, to be “all in,” to do it full-time to create maximal benefit for ourselves and others. That’s been one of the problems with the aftermath of the 2008 economic crisis, in which many of the jobs created have been part-time, often requiring workers to cobble together a number of poorly paying jobs just to make ends meet.

(2) Machines as Servants Rather Than Masters

As a society, we need to see machines (and that includes our most sophisticated technologies, such as those that use AI) not as masters but as servants. Unfortunately, we are increasingly tempted to let machines determine our ends because we are continually adapting to what the latest technologies can and can’t do. Texting, for instance, is great for many purposes, but as a substitute for the richness of full face-to-face interactions, it represents an impoverishment.

We need to keep a tight rein on technologies, not to limit what they can do for us but rather to make sure that human intelligence is getting the most out of them and not artificially limiting itself in the bargain. But that means treating machines and the technologies they use as servants to help us accomplish our humanly chosen ends. What’s at stake here is less a specific act than an attitude that says “my humanity is precious and my goals are important, and machines are here as an instrumental good to help me realize my dreams.”

Suppose, for instance, that automation renders driving as we know it obsolete (this, by the way, is not a done deal — see the Appendix below). Suppose AI can make AVs (automated vehicles) drive more quickly and safely than humans. Perhaps AVs will drive so much better than we do as to reduce vehicular accidents by 99 percent, so that instead of over 30,000 people dying each year on American roads through traffic accidents, this number goes down to 300. That would be great. But where are those vehicles going to be driving? What useful tasks will they be doing for us? Machines won’t decide that. We will.

(3) The Primacy of Human- Over Machine-Intelligence

If the previous recommendations were more about public policy, this one is about maintaining the right philosophy of technology. Without such a philosophy, we’ll find ourselves increasingly adrift in a technology-bloated world. At issue is this: Do we seriously entertain the prospect that machines can be smarter than us and eventually replace every aspect of our work? Are real flesh-and-blood humans who refuse to upgrade themselves into cyborgs or upload themselves onto computers on the wrong side of history?

Let’s be clear that all the machines to date, even those with the most sophisticated machine-learning algorithms, are no match for the language abilities of a four-year-old child. The concern here, however, is not with what machines are capable of now, or even with the disruptions they are likely to bring in coming decades (as with automated vehicles, say), but with their long-term potential.

When Darwin first publicized his theory of natural selection and the implication that humans are descended from primates, the wife of a British cleric is reported to have remarked, “My dear, descended from the apes! Let us hope it is not true, but if it is, let us pray that it will not become generally known.” Her point, it seems, is that even if our primate ancestry were true, it would be best if we did not know it to be true.

Is something similar the case with AI, namely, even if machines can in principle achieve and exceed human intelligence, wouldn’t it be better to postpone that realization as long as possible, keeping humans thinking that they’re smarter than machines until the machines eventually rise up and prove otherwise?

I do think it serves humanity better to regard human intelligence as qualitatively superior to artificial intelligence. Yet such a view is destined to be overturned if artificial intelligence is inherently capable of matching and superseding human intelligence. But how do we decide, from our present vantage, if AI can or can’t supersede HI (human intelligence)? Proponents of Strong AI, who hold that HI must in the end give way to AI, regard such skepticism as a failure of imagination, charging skeptics like me with incredulity. But credulity—the impulse to believe too much—can be as culpable as incredulity—the impulse to believe too little.

It all depends on what ultimately is true. Strong AI proponents charge skeptics with promoting the illusion of impossibility, and point to seeming impossibilities of the past that were subsequently proven to be eminently possible (human flight, space travel, Donald Trump becoming president). But the illusion of possibility can be equally problematic (paranoia and conspiracy theories depend on such illusions).

Though I’m no fan of Descartes (my own philosophical sensibilities have always been Platonic) and though I regard much of his philosophy as misguided (e.g., his view that animals are machines), I do think he hit the nail on the head when he urged in his Discourse on Method that human intelligence can never be adequately captured by machines:

While intelligence is a universal instrument that can serve for all contingencies, [machines] have need of some special adaptation for every particular action. From this it follows that it is impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our intelligence causes us to act.

A very brief history of AI confirms Descartes’ point. Back in the 1980s, when I was doing Lisp programming to help develop an expert system for modeling professional statisticians, the idea was to take an extensive database of statistical subroutines and place them under a set of rules and heuristics that had to be explicitly coded, with these rules and heuristics then serving as the expert. Expert systems worked well for some things (medical diagnosis) but not others (statistics, among other things—my efforts were ill-fated).
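For readers who want to see the contrast concretely, here is a minimal sketch of the rule-based idea, written in Python rather than the Lisp we were using then. The rules below are invented purely for illustration; a real expert system would encode hundreds of carefully elicited rules and heuristics.

```python
# Minimal, invented sketch of a rule-based "expert system" for recommending
# a statistical procedure. The rules below are toy examples; a real system
# would encode hundreds of explicitly coded rules and heuristics.

def recommend_procedure(facts):
    """Walk an explicitly coded rule base and return the first recommendation
    whose condition matches the known facts."""
    rules = [
        # (condition on the facts, recommendation)
        (lambda f: f["outcome"] == "categorical" and f["groups"] == 2,
         "chi-square test of independence"),
        (lambda f: f["outcome"] == "continuous" and f["groups"] == 2,
         "two-sample t-test"),
        (lambda f: f["outcome"] == "continuous" and f["groups"] > 2,
         "one-way ANOVA"),
    ]
    for condition, recommendation in rules:
        if condition(facts):
            return recommendation
    return "no rule fired: consult a human statistician"

print(recommend_procedure({"outcome": "continuous", "groups": 3}))
# -> one-way ANOVA
```

The point is that every bit of “expertise” has to be anticipated and hand-coded in advance; nothing here is learned from data.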

The revolution in AI of the last few decades has taken a different approach. It was originally called computational intelligence (neural nets, fuzzy sets, evolutionary computing) and is now typically called machine learning. Here, one takes big data sets that embody successful (human) activities and then trains general-purpose algorithms (usually neural nets) on these data sets so that they can also be successful on new data.

Basically, the problem is like fitting a line to points on a plane. Each point represents an input on the x-axis and an output on the y-axis, and the goal is to find a line that not only fits the existing points as well as possible (according to some criterion, such as least squares) but also extrapolates helpfully to novel inputs and outputs.
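If it helps to see the analogy spelled out, here is a toy version in Python: fit a line to made-up (input, output) points by ordinary least squares, then use the fitted line to predict the output for an input it has never seen. Machine learning replaces the line with a vastly more complicated function, but the logic is the same.

```python
# Least-squares fit of a line y = a*x + b to known (input, output) points,
# then extrapolation to a new input. The data are made up for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares solution for slope (a) and intercept (b)
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(f"fitted line: y = {a:.2f}x + {b:.2f}")
print(f"prediction for a new input x = 6: {a * 6 + b:.2f}")
```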

Only, with machine learning, the programs are much more complicated: not lines or linear functions but neural nets or evolutionary algorithms that determine outputs for much more complicated inputs (e.g., forming text from speech by taking .wav files and computationally outputting written transcripts). I’m skipping a lot here, not least the distinction between supervised and unsupervised learning (all such learning, it turns out, falls under search).

The take-home lesson here is that machine learning is always done in narrow contexts for solving specific problems, and there’s invariably little or no transference of machine intelligence from one problem situation to another. In a sense, any machine that learns starts each learning task as a baby that must be brought to maturity by the programmers carefully feeding it data and/or objective functions by which it can adequately handle the narrow task for which it is being created. Descartes’ criticism that machines are not universal instruments of reason thus is proving itself in practice.

(4) Education That Stresses Human-Human Interaction and Learning to Learn

Suppose I’m right about the last three points. Suppose we’re convinced that human intelligence (HI) is, or at least can be, qualitatively superior to artificial intelligence (AI) (humans can act with less intelligence than they’re capable of, so let’s discount that possibility). In that case, what should education look like, especially with an eye toward preparing people for meaningful work?

If HI is indeed superior to AI, then we can expect AI to increase, but at the hands of HI, and so STEM (Science, Technology, Engineering, Math) fields will continue to be in hot demand. People with STEM skill-sets will keep pushing forward what AI can do, and in this way keep their jobs by staying ahead of AI.

Yet, leaving aside STEM, it seems that education can focus productively on two other areas: human-human interactions and learning to learn. If humans are qualitatively superior to machines, machines will never in the end be satisfying partners for them. We see this even in sales and customer support: when was the last time that a mechanical voice successfully sold you something or answered a challenging customer support question?

To the extent that education better prepares people to engage with other people, to that extent education will create and enhance jobs centered on human-human interactions. Such an education will cover everything from increasing emotional intelligence to cultivating empathy and communication abilities to optimizing collaboration in teams (cf. Scrum) to enhancing leadership and negotiation skills. Business schools, counseling psychology programs, social work programs, and even pastoral care programs may thus all be expected to see swelling numbers.

When an AI machine-learning program learns how to perform some task, it typically learns from big data that results from the activity of many humans leaving tracks on the Internet. The learning task in these cases tends to be precisely circumscribed, and performance on other tasks will require going back to the drawing board and running the same drill over, teaching the neural net to perform, with a certain level of competence, the next task to be performed.

Where humans thus have the advantage is not just in being able to learn given tasks but in gaining the ability to learn for themselves. Most doctoral programs in graduate schools, for instance, are reluctant, or even refuse outright, to admit someone who already has a PhD in a different area, on the assumption that such a person, having earned one PhD, already knows how to do research and thus can learn whatever he or she needs to retool and change fields.

So much of education today spoon-feeds students discrete, regurgitatable items of information, which are to be memorized, demonstrated on an exam, and then promptly forgotten. But what if education focused less on learning isolated items of information and more on the skill of learning to learn? Imagine learning a foreign language not as a bunch of grammatical rules and vocabulary words, but as a skill in which you learn enough of the language so that you can ask native speakers to explain to you in their language what you are missing.

In other words, this is acquiring the skill to determine what you don’t know and then also to figure out how to remove your ignorance. This is being able to seek out the right teachers. This is knowing what you don’t know and being able to ask the right questions so that at the end of the day you know what you need to know. This is a habit of mind that says I can figure things out for myself.

In It Takes Ganas, a book about the celebrated math teacher Jaime Escalante, my coauthor Alex Thomas and I put it this way (emphasis added):

Education should empower us to enjoy the beauty and good things of life, not as a form of self-indulgence, but for our mutual benefit—to make the world a better place for ourselves and others. Yes, education is a key to financial security. But success is more than money in the bank; it is also a sense of contentment; a curiosity about the world we live in; the satisfaction of learning how to learn so that any knowledge is within reach; and above all the confidence that whatever the world throws at us and however it changes, we have the nimbleness of mind and fortitude of spirit to deal with it. The ability to make money and a career thus becomes a byproduct of education, whose aim is to prepare students for a life in which no good thing need be withheld, in which all good things become possible.

Bottom line: The White House paper on automation rightly draws our attention to the challenges society faces from the coming disruptions to the job market on account of AI, and machine learning in particular. A real and imminent threat exists here, in which the middle class could get severely hurt. But this threat can be averted if we rise to the occasion, demanding more of ourselves and of our educational system, focusing on those areas where human intelligence has primacy. Simply put, we’re smarter than machines, and we need to play to our strengths where the superiority of human over machine intelligence is palpable.

——————–

APPENDIX: Will automated vehicles indeed replace human drivers?

Early on, as I was writing this essay, I was inclined simply to accept that automated vehicles were on their way to replacing human drivers. And then I came across this report from the professional organization probably most intimately connected to the rise of AI, namely, the IEEE.

“Humans aren’t necessarily perfect at doing very precise things,” says Smith [Tory Smith, the program manager of Drive.ai, which does cutting-edge work in AVs]. “But they’re great at improvising and dealing with ambiguity, and that’s where the traditional robotics approach breaks down, is when you have ambiguous situations like the one we just saw. The nice thing about developing a system in a deep-learning framework is when you encounter difficult situations like that, we just have to collect the data, annotate the data, and then build that module into our deep-learning brain to allow the system to be able to compensate for that in the future. It’s a much more intuitive way to solve a problem like that than the rules-based approach, where you’d basically have to anticipate everything that could ever happen.”

But in context, it’s precisely because the automated vehicle in the situation described could NOT improvise that a human handler had to intervene and relieve the robot of driving. This raises the obvious question, namely, whether robots will ever be able to drive freely instead of only in carefully controlled settings. Most of the deep-learning people seem convinced that fully automated vehicles are just around the corner. Perhaps they are. But perhaps the problem will prove intractable. I could be happy either way, but it will be interesting to see how long improvisation of the sort humans can do behind the wheel is going to be a problem for automated vehicles.

As it is, self-driving cars have been operational (more or less) for at least a couple of decades, but they didn’t get anywhere close to human performance until big data and deep learning, which is to say, until vast amounts of human information had been uploaded to the Web and computers, growing ever faster in line with Moore’s Law, gained the ability to mine those data.

So, what are the real prospects for self-driving vehicles? Machine learning is always taking a snapshot of something and then “learning” the features in the snapshot. As the snapshot changes, humans annotate and re-annotate, train (or more likely refine) a model, and release it to production again. Thus, any unexpected contextual issues that arise and were not seen in that snapshot will pose new challenges—in the form of false negatives or false positives.
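To make the cycle concrete, here is a toy sketch in Python. The “model” here is nothing but the set of situations present in its training snapshot, which is of course a caricature, but it captures the loop of collect, annotate, retrain, and redeploy described above.

```python
# Toy sketch of the snapshot/annotate/retrain cycle described above.
# Here the "model" is just the set of driving situations seen in its
# training snapshot; anything outside that snapshot is a production
# failure that must be annotated by humans and folded back in.

def handles(model, situation):
    # The model only covers what appeared in its training snapshot.
    return situation in model

def retrain(model, annotated_cases):
    # Fold the newly annotated cases into the next release of the model.
    return model | annotated_cases

model = {"clear highway", "four-way stop", "parking lot"}
production_stream = ["clear highway", "cyclist swerving", "officer hand-signaling"]

for situation in production_stream:
    if not handles(model, situation):
        print(f"failure in production: {situation!r} -> annotate and retrain")
        model = retrain(model, {situation})

print("next release covers:", sorted(model))
```

The toy exaggerates, but the moral is the one just stated: the system only ever covers its snapshot, and everything outside it must wait for the next cycle.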

If the task is driving, then, of course, this presents potentially life-threatening possibilities to the designers of the system. This is why we may be unlikely to see fully autonomous driver-less cars anytime soon. Some “revolution” in computing may figure out a way to connect contextual dots together in real-time given a background of ever-changing events. But, at this time, no one has the slightest idea what that revolution would be.

Hybrid systems with efficient and safe “hands-off” features (from machine back to person) already exist and will no doubt get better. But who wants to be “handed-off” to drive when sitting there, reading a book, or what have you?!

And so, I invite the reader to share my skepticism that this particular AI task, the full realization of fully automated AVs, will be completed anytime soon. I also invite the reader to marvel at how facts and fiction are largely irrelevant in popular discussions of AI!