The Walter Bradley Center for Natural and Artificial Intelligence was launched in Seattle the evening of July 11, 2018. The launch was streamed live, and here is the YouTube record of it:
The new center has two websites:
Here is the text of my remarks, delivered in my stead by Robert Marks because I was unable to attend on account of my Mom’s impending death (she passed away July 15):
Good evening. Thank you for attending this launch of the Walter Bradley Center for Natural and Artificial Intelligence. In my talk tonight, I’m going to address three points: (1) why the work of this center is important, (2) what its impact is likely to be, and (3) why it is appropriately named after Walter Bradley.
First, however, I want to thank friends and colleagues of Seattle’s Discovery Institute for their vision in forming this center and providing a secure home for it. Thanks go especially to Bruce Chapman and Steven Buri for making this center a full-fledged program of Discovery; to John West for working through the many crucial details that an initiative like this entails; to Robert Marks for his towering presence in the field of computational intelligence and his willingness to lead the center; and finally to Walter Bradley for giving us not only his name but also his example and inspiration (more on this later).
The Walter Bradley Center, to the degree that it succeeds, will not merely demonstrate a qualitative difference between human and machine intelligence; more than that, it will chart how humans can thrive in a world of increasing automation. Such a vision ought to be praiseworthy and non-controversial. But in an age of computational reductionism inspired by scientific materialism, where so much of the mainstream academy views our humanity as unexceptional and even obsolete, such a vision is anything but.
Such a vision requires a home whose residents take a principled stance for humanity over and above machines, and who won’t be cowed by a materialist culture and worldview that regards computational reductionism as mandated for all right-thinking people. Happily, Discovery Institute provides such a home.
Yet the Walter Bradley Center exists not merely to argue that we are not machines. Yes, singularity theorists and advocates of strong AI continue to vociferate, inflating the prospects and accomplishments of artificial intelligence. They need a response, if only to ensure that silence is not interpreted as complicity or tacit assent. But if arguing, even persuasively, with a Ray Kurzweil or Nick Bostrom that machines will never supersede humans is the best we can do, then this center will have fallen short of its promise.
The point is not merely to refute strong AI, the view that machines will catch up to and eventually exceed human intelligence. Rather, the point is to show society a positive way forward in adapting to machines, putting machines in service of rather than contrary to humanity’s higher aspirations. The twelfth-century theologian Hugh of Saint Victor, writing in his Didascalicon, argued that the aim of technology is to improve life and thereby aid in humanity’s restoration and ultimate salvation. His insight applies to the preeminent technology of our day, AI.
Unfortunately, rather than use AI to enhance our humanity, computational reductionists increasingly use it as a club to beat our humanity, suggesting that we are well on the way to being replaced by machines. Such predictions of human obsolescence are sheer hype. Machines have come nowhere near attaining human intelligence, and show zero prospects of ever doing so. I want to linger on this dim view of AI’s grand pretensions because it flies in the face of the propaganda about an AI takeover that constantly bombards us.
It is straightforward to see that zero evidence supports the view that machines will attain and ultimately exceed human intelligence. And absent such evidence, there is zero reason to worry or fear that they will. So how do we see that? We see it by understanding the nature of true intelligence, as exhibited in a fully robust human intelligence, and not letting it be confused with artificial intelligence.
What has artificial intelligence actually accomplished to date? AI has, no doubt, an impressive string of accomplishments: chess-playing programs, Go-playing programs, and Jeopardy-playing programs just scratch the surface. Consider Google’s search business, Facebook’s tracking and filtering technology, and the robotics industry. Automated cars seem just around the corner. In every case, however, what one finds with a successful application of AI is a specifically adapted algorithmic solution to a well-defined and narrowly conceived problem.
Nothing wrong with any of this. The engineers and programmers who produce these AI systems are to be commended for their insight and creativity. They are, if you will, building a library of AI applications. But all such AI applications, even when considered collectively and extrapolated in light of an ever-increasing army of programmers working on ever more powerful computers, still don’t get us any closer to computers achieving, much less exceeding, human intelligence.
For a full-fledged AI takeover (think Skynet or HAL 9000) to become a reality, AI needs more than a library of algorithms that solve specific problems. To date, AI has done nothing more than build such a library. But that’s hardly sufficient. Instead, an AI takeover needs a higher-order master algorithm with a general-purpose problem-solving capability, able to harness the first-order problem-solving capabilities of the specific algorithms in this library and adapt them to the widely varying contingent circumstances of life.
Building such a master algorithm is a task on which AI’s practitioners have made zero headway. The library of algorithms just described is a kludge: it simply brings together all existing AI algorithms, each narrowly focused on solving specific problems. Yet what’s needed is not a kludge but a coordination of all these algorithms, appropriately matching algorithm to problem across a vast array of problem situations. A master algorithm that achieves such coordination is the holy grail of AI. But there’s no reason to think it exists. Certainly, work on AI to date provides no evidence for it, with AI, even at its current outer reaches (automated vehicles?), still focused on narrow, well-defined problems.
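To make this gap concrete, here is a minimal sketch, written in Python purely for illustration: every name in it is hypothetical, the specialized solvers are stubs, and the one step a genuine master algorithm would have to perform, recognizing which problem a raw situation actually poses, is simply assumed away by requiring the situation to arrive pre-labeled.

```python
# Purely illustrative sketch: hypothetical names, stubbed solvers.

def play_chess(position: str) -> str:
    """Specialized solver: chess and nothing else (stubbed)."""
    return "e2e4"  # stand-in for deep, domain-specific search

def plan_route(start: str, goal: str) -> list:
    """Specialized solver: road navigation and nothing else (stubbed)."""
    return [start, goal]  # stand-in for graph search over a road network

# The "library": each entry is hand-crafted for one well-defined problem.
SOLVER_LIBRARY = {
    "chess": lambda task: play_chess(task["position"]),
    "navigation": lambda task: plan_route(task["start"], task["goal"]),
}

def master_algorithm(situation: dict):
    """The missing piece: take an open-ended, real-world situation, work out
    which problem it poses, and adapt a library solver to it. Here the hard
    part is skipped: the situation must arrive already labeled."""
    label = situation.get("kind")            # the unsolved recognition step
    solver = SOLVER_LIBRARY.get(label)
    if solver is None:
        raise NotImplementedError("no general-purpose fallback is known")
    return solver(situation)

if __name__ == "__main__":
    print(master_algorithm({"kind": "chess", "position": "startpos"}))
```

Adding entries to the solver library is what AI research has been doing; writing a master algorithm that does not depend on the pre-labeled "kind" field is what it has not done.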
Absence of evidence for such a master algorithm might prompt defenders of strong AI to dig in their heels: give us more time, effort, and computational power to devote to this problem of finding such a master algorithm, and we’ll solve it. But why should we take such protestations seriously? We simply have no precedent or idea of what such a master algorithm would look like. Essentially, to resolve AI’s master algorithm problem, supporters of strong AI would need to come up with a radically new approach to programming, perhaps building machines by analogy with humans in some form of machine embryological development. But such possibilities remain for now pure speculation.
Results from the computational literature on No Free Lunch Theorems and Conservation of Information (see the work of David Wolpert and Bill Macready on the former as well as that of Robert Marks and me on the latter) imply that all problem-solving algorithms, including such a master algorithm, would need to be adapted to specific problems. But such a master algorithm would have to be perfectly general, transforming AI into a universal problem solver. No Free Lunch and Conservation of Information demonstrate that no such universal problem solvers exist.
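For readers who want the formal statement behind this claim, the core result of Wolpert and Macready’s 1997 No Free Lunch theorems for optimization can be written as follows (the notation is theirs, and the equation simply restates their published theorem, not a new result):

```latex
% No Free Lunch (Wolpert & Macready, 1997): for any two search algorithms
% a_1 and a_2, summed over all possible cost functions f,
\[
  \sum_{f} P\bigl(d_m^{y} \mid f, m, a_1\bigr)
    = \sum_{f} P\bigl(d_m^{y} \mid f, m, a_2\bigr)
\]
% where m is the number of distinct points evaluated and d_m^y is the
% sequence of cost values observed after those m evaluations.
```

In words: averaged over all possible problems, every search algorithm performs exactly as well as every other, so any advantage an algorithm enjoys on one class of problems is paid for by a corresponding disadvantage elsewhere. That is the sense in which, as just noted, no universal problem solver exists.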
Yet what algorithms can’t do, humans can. True intelligence, as exhibited by humans, is a general faculty capable of taking wide-ranging, diverse abilities for solving specific problems and matching them to the actual and multifarious problems that arise in practice. Such a distinction between true and machine intelligence is nothing new. Descartes and Leibniz understood it. Descartes put it this way in his Discourse on Method:
While intelligence is a universal instrument that can serve for all contingencies, [machines] have need of some special adaptation for every particular action. From this it follows that it is impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our intelligence causes us to act.
Just to be clear, I’m no fan of Descartes (my own philosophical sensibilities have always been Platonic) and I regard much of his philosophy as misguided (for example, his undue emphasis on philosophical doubt and the havoc it created for metaphysics). Even so, I do think this quote by Descartes hits the nail on the head. Indeed, it is perhaps the best and most concise statement of what may be called AI’s master algorithm problem, namely, the total lack of insight and progress on the part of the computer science community in constructing a master algorithm (or what Descartes calls a “universal instrument”) that can harness the algorithms AI is able to produce and match them with the problem situations to which those algorithms apply.
Good luck with that! I’m not saying it’s impossible. I am saying that there’s no evidence of any progress to date. Until then, there’s no reason to hold our breath or feel threatened by AI. The only worry really should be that we embrace the illusion that we are machines and thereby denigrate our humanity. In other words, the worry is not that we’ll raise machines to our level, but rather that we’ll lower our humanity to the level of machines.
For instance, go to a McDonald’s these days, and you’ll find that orders are taken not by humans but by responsive automated displays. Rather than view this change from human to machine order-takers as a triumph of AI, we should see it as a case of human intelligence being minimally taxed and thus replaced, in a given instance, by a machine. Instead of lamenting that machines are encroaching on our work, the lesson should be to make our work more challenging and to equip ourselves with the skills and education to handle that more challenging work.
The Walter Bradley Center needs to be a place where the best philosophical and scientific arguments against the reduction of human to machine intelligence are aired. I’ve sketched one such argument here in terms of the obstacles to producing a master algorithm of the sort that strong AI seems to demand. But other arguments against this reduction can be made as well, whether from metaphysics, consciousness, language, quantum mechanics, Chinese rooms, Gödel’s theorem, etc. I don’t regard all these arguments as having equal merit, but they are worth exploring, articulating, and bringing into conversation with the supporters of a materialist reduction of mind.
Yet as I’ve emphasized, the Walter Bradley Center needs to do more than simply argue for a qualitative difference between human and machine intelligence. Indeed, if we are right and such a qualitative difference exists, then people already understand at some deep level of the soul that they are more than machines. The bigger challenge for the Walter Bradley Center, then, is to help us live effectively with machines. In that regard, we are not Luddites. Failures of AI give us no reason to celebrate. We want to encourage AI’s full development, albeit within the bounds of a life-affirming ethics. AI, like any technology, can be abused (as when it is put in service of the porn industry).
Even so, at the same time that we want to encourage AI’s full development, we also want to encourage humanity’s full development. Ours is not a cyborg vision, in which humanity and technology meld into an indistinguishable whole. Rather, the point is to maintain our full humanity in the face of technological progress. Machines must be and ever remain our servants. Interestingly, the impulse to make machines our masters comes less from strong AI than from totalitarians who see in a machine takeover a means of social control (a control exercised not by machines but through machines, as in the surveillance state).
The importance of the Walter Bradley Center for Natural and Artificial Intelligence, then, is this: to clarify the limits of machine intelligence, to understand intelligence as it exists in nature (preeminently among humans), and above all to chart fruitful paths for humans to thrive in a world of automation brought on by AI. It’s really this latter aspect of the center that will define its success and impact. It’s one thing to exchange arguments and critiques with the defenders of strong AI. But the real challenge for this center is to help build an educational and social infrastructure conducive to productive human-machine interaction. The point is not simply to talk and critique; it is to do and build.
Accordingly, the Walter Bradley Center will need to emphasize the following themes:
- **digital wellness**: how can we maintain our peace of mind as machines become more and more a part of our lives, an issue already of grave concern as social media compete with face-to-face human interactions?
- **education**: how should we be educated, what topics do we need to study, what skills do we need to attain, and what are the most effective modalities for delivering education so that we can stay ahead of machines, living lives engaged in meaningful full-time work despite the continuing rise of automation?
- **appropriate technologies**: how do we ensure that people have the technologies they need (technologies increasingly affected by AI) that will allow them to thrive individually and in community rather than as cogs in an impersonal mechanized organizational system?
- **entrepreneurship**: how do we harness technologies to build wealth-creating enterprises (businesses) so that people, especially in the developing world, can escape poverty and become self-sufficient?
The impact of the Walter Bradley Center will depend on the degree to which it can effectively advance these themes. Each of these themes, taken individually, is significant, but jointly they define the unique focus of the center and how it can help make the world a better place. I’m optimistic that the center’s impact will be substantial and even groundbreaking. We certainly have a great team in place. But we also have the example of Walter Bradley himself, which brings me to the last topic of this talk.
It’s hard for me to speak of Walter Bradley in less than hagiographic terms. He has been an inspiration to me personally over the years, though I continue to fall short of his example in so many areas. But beyond that, in every area where this center named in his honor promises to make a difference, he has made a signal contribution.
On the question of natural and artificial intelligence, he has for decades argued in print and by lecture that the universe as a whole and life in particular give evidence of intelligence not reducible to the motions and modifications of matter. He spearheaded the most important book on the origin of life in the 1980s, The Mystery of Life’s Origin, laying out the information-theoretic barriers to life arising from stochastic chemistry. And his work as a materials scientist focusing on polymer chemistry gave him further insights into the distinction between the natural and the artificial.
As a long-time engineering professor, first at Texas A&M and then at Baylor, he understands the importance of education to the full growth and flowering of the human person. But he was never merely a professor imparting knowledge of his field to his students. He was always concerned about the wellbeing of his students, taking a personal interest in them, inviting them to conversations about the larger issues of life. While at A&M, he even offered a non-credit minicourse to students on how they could study and learn more effectively by improving their reading skills, memory, etc. When living in Texas, I continually ran into people whose lives had been transformed because they knew Walter.
Even so, Walter’s reach goes beyond research, teaching, and looking out for the people in his backyard. While at Baylor, he helped organize a trip to Africa with students and fellow faculty from the engineering school. There, over the course of two weeks, they built a bridge that saved residents a daily twenty-mile trek around the river. This got Walter thinking not just about the benefits of technology in general, but also about how appropriate technologies might be used to put people in the developing world into business for themselves. And this in turn led him to set up coconut farmers in business, using coconut husks to replace the synthetic fibers used in automotive interiors.
Thus, in Walter Bradley, we find someone who has reflected deeply on the intersection of natural and artificial intelligence, who has done seminal research in this area, who has been an educator, who has ever been concerned about the wellbeing of students and fellow faculty (digital wellness now being part of that), who has advanced appropriate technologies, and who has harnessed these technologies to help people in the developing world make a living.
Add to that that Walter has been fearless and uncompromising in standing against the materialist currents of our age, and you have a worthy namesake for this center. And so, I commend to you the Walter Bradley Center for Natural and Artificial Intelligence!