1. Cards on the Table
In the movie The Dream Team, Michael Keaton plays a psychiatric patient who must feign sanity to save his psychiatrist from being
murdered. In protesting his sanity, Keaton informs two New York City
policemen that he doesn’t wear women’s clothing, that he’s never danced
around Times Square naked, and that he doesn’t talk to Elvis. The two police
officers are much relieved. Likewise, I hope with this essay to reassure our
culture’s guardians of scientific correctness that they have nothing to fear
from intelligent design. I expect to be just as successful as Keaton.
First off, let me come clean about my own views on intelligent design. Am I a
creationist? As a Christian, I am a theist and believe that God created the
world. For hardcore atheists this is enough to classify me as a creationist.
Yet for most people, creationism is not identical with the Christian doctrine
of creation, or for that matter with the doctrine of creation as understood
by Judaism or Islam. By creationism one typically understands what is also
called “young earth creationism,” and what advocates of that position refer
to alternately as “creation science” or “scientific creationism.” According
to this view the opening chapters of Genesis are to be read literally as a
scientifically accurate account of the world’s origin and subsequent
formation. What’s more, it is the creation scientist’s task to harmonize
science with Scripture.
Given this account of creationism, am I a creationist? No. I do not regard
Genesis as a scientific text. I have no vested theological interest in the age
of the earth or the universe. I find the arguments of geologists persuasive
when they argue for an earth that is 4.5 billion years old. What’s more, I
find the arguments of astrophysicists persuasive when they argue for a
universe that is approximately 14 billion years old. I believe they got it
right. Even so, I refuse to be dogmatic here. I’m willing to listen to
arguments to the contrary. Yet to date I’ve found none of the arguments for a
young earth or a young universe convincing. Nature, as far as I’m concerned,
has an integrity that enables it to be understood without recourse to
revelatory texts. That said, I believe that nature points beyond itself to a
transcendent reality, and that that reality is simultaneously reflected in a
different idiom by the Scriptures of the Old and New Testaments.
So far I’m not saying anything different from standard complementarianism,
the view that science and Scripture point to the same reality, albeit from
different vantages. Where I part company with complementarianism is in
arguing that when science points to a transcendent reality, it can do so as
science and not merely as religion. In particular, I argue that design in
nature is empirically detectable and that the claim that natural systems
exhibit design can have empirical content.
I’ll come back to what it means for design in nature to have empirical
content, but I want for the moment to stay with the worry that intelligent
design is but a disguised form of creationism. Ask any leader in the design
movement whether intelligent design is stealth creationism, and they’ll deny
it. All of us agree that intelligent design is a much broader scientific program and intellectual project than creationism. Theists of all stripes are, to be sure, welcome. But the boundaries of intelligent design are not limited to theism.
I personally have found an enthusiastic reception for my ideas not only among
traditional theists like Jews, Christians, and Muslims, but also among
pantheists, New-Agers, and agnostics who don’t hold their agnosticism dogmatically.
Indeed, proponents of intelligent design are willing to sit across the table
from anyone willing to have us.
That willingness, however, means that some of the people at the table with us
will also be young earth creationists. Throughout my brief tenure as director
of Baylor’s Michael Polanyi Center, adversaries as well as supporters of my
work constantly pointed to my unsavory associates. I was treated like a
political figure who is unwilling to renounce ties to organized crime. It was
often put to me: “Dembski, you’ve done some respectable work, but look at the
disreputable company you keep.” Repeatedly I’ve been asked to distance myself
not only from the obstreperous likes of Phillip Johnson but especially from
the even more scandalous young earth creationists.
I’m prepared to do neither. That said, let me stress that loyalty and
friendship are not principally what’s keeping me from dumping my unsavory
associates. Actually, I rather like having unsavory associates, regardless of
friendship or loyalty. The advantage of unsavory associates is that they tend
to be cultural pariahs (Phillip Johnson is a notable exception: he has managed to upset countless people and still move freely among the culture’s
elite). Cultural pariahs can keep you honest in ways that the respectable
elements of society never do (John Stuart Mill would no doubt have approved).
Or as it’s been put, “You’re never so free as when you have nothing to lose.”
Cultural pariahs have nothing to lose.
Even so, there’s a deeper issue underlying my unwillingness to renounce
unsavory associates, and that concerns how one chooses conversation partners
and rejects others as cranks. Throughout my last ten years as a public
advocate for intelligent design, I’ve encountered a pervasive dogmatism in
the academy. In my case, this dogmatism has led fellow academicians (I
hesitate to call them “colleagues” since they’ve made it clear that I’m no
colleague of theirs) to trash my entire academic record and accomplishments
simply because I have doubts about Darwinism, because I don’t think the rules
of science are inviolable, and because I think that there can be good
scientific reasons for thinking that certain natural systems are designed.
These are my academic sins, no more and no less. And the academy has been
merciless in punishing me for these sins.
Now, I resolutely refuse to engage in this same form of dogmatism (or any
other form of dogmatism, God willing). To be sure, I think I am right about
the weaknesses of Darwinism, the provisional nature of the rules of science,
and the detectability of design in nature. But I’m also willing to
acknowledge that I may be wrong. Yet precisely because I’m willing to
acknowledge that I might be wrong, I also want to give other people who I
think are wrong, and thus with whom I disagree, a fair chance--something I’ve
too often been denied. What’s more, just because people are wrong about some
things doesn’t mean they are wrong about other things. Granted, a valid argument from true premises leads to a true conclusion. But a valid argument from false premises can also lead to a true conclusion (consider: all fish are mammals; all mammals live in water; therefore all fish live in water--the premises are false, yet the conclusion is true). Just because people have false beliefs is no reason to dismiss their work.
One of the most insightful philosophers of science I know as well as one of
my best conversation partners over the last decade is Paul Nelson, whose book
On Common Descent is now in press with the University of Chicago’s
Evolutionary Monographs Series. Nelson’s young earth creationism has been a
matter of public record since the mid eighties. I disagree with Nelson about
his views on a young earth. But I refuse to let that disagreement cast a pall
over his scholarly work. A person’s presuppositions are far less important
than what he or she does with them. Indeed, a person is not a crank for
holding crazy ideas (I suspect all of us hold crazy ideas), but because his
or her best scholarly efforts are themselves crazy.
If someone can prove the Goldbach conjecture (i.e., that every even number
greater than two is the sum of two primes), then it doesn’t matter how many
crazy ideas and harebrained schemes he or she entertains--that person will
win a Fields Medal, the mathematical equivalent of the Nobel Prize. On the
other hand, if someone claims to have proven that pi is a rational number (it’s
been known for over a century that pi is not only an irrational number but
also a transcendental number, thus satisfying no polynomial equation with
integer coefficients), then that person is a crank regardless of how mainstream
he or she is otherwise. Kepler had a lot of crazy ideas about embedding the
solar system within nested regular geometric solids. A full half of Newton’s
writings were devoted to theology and alchemy. Yesterday’s geniuses in almost
every instance become today’s cranks if we refuse to separate their best work
from their presuppositions.
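(To make the Goldbach example above concrete: the conjecture is trivial to check by brute force for small cases, even though a general proof has resisted every effort. The following sketch is my own illustration, not anything from the mathematical literature; it verifies the conjecture for even numbers below an arbitrarily chosen small bound.)

```python
def is_prime(n: int) -> bool:
    """Trial division; adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q == n, or None if no pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Check the conjecture for every even number in a small range.
for n in range(4, 1000, 2):
    assert goldbach_pair(n) is not None, f"counterexample at {n}!"
print("Goldbach holds for all even n in [4, 1000)")
```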
I challenge anyone to read Paul Nelson’s On Common Descent, which
critiques Darwin’s idea of common descent from the vantage of developmental
biology, and show why it alone among all the volumes in the University of
Chicago’s Evolutionary Monographs Series does not belong there (of course I’m
refusing here to countenance an ad hominem argument, which rejects the book
simply because of Nelson’s creationist views). I don’t distance myself from
creationists because I’ve learned much from them. So too, I don’t distance
myself from Darwinists because I’ve learned much from them as well. I commend
Darwinists like Michael Ruse, Will Provine, and Elliott Sober for their
willingness to engage the intelligent design community and challenge us to
make our arguments better.
Unlike Stephen Jay Gould’s NOMA (“Non-Overlapping Magisteria”) principle, which separates science and religion into tight compartments and which Todd Moody has rightly called a gag order masquerading as a principle of tolerance, intelligent design is committed to genuine tolerance. Now the
problem with genuine tolerance is that it requires being willing to engage
the views of people with whom we disagree and whom in some cases we find
repugnant. Unfortunately, the only alternative to the classical liberalism of
John Stuart Mill, which advocates genuine tolerance, is the hypocritical
liberalism of today’s political correctness.
In place of Gould’s NOMA, design theorists advocate a very different principle
of interdisciplinary dialogue, namely, COMA: Completely Open Magisteria. It
is not the business of magisteria to assert authority by drawing disciplinary
boundaries. Rather, it is their business to open up inquiry so that knowledge
may grow and life may be enriched (which, by the way, is the motto of the
University of Chicago). Within the culture of rational discourse, authority
derives from one source and one source alone--excellence. Within the culture
of rational discourse, authority never needs to be asserted, much less
legislated.
But is intelligent design properly part of the culture of rational discourse?
At every turn opponents of design want to deny its place at the table. For
instance, Eugenie Scott, director of the National Center for Science
Education, claims intelligent design is even less reputable than young earth
creationism because at least the creationists are up front about who the
designer is and what they are trying to accomplish. Howard Van Till for the
last several years has been claiming that design theorists have not defined
what they mean by design with sufficient clarity so that their views can be
properly critiqued. And most recently Larry Arnhart, writing in the current
issue of First Things (Nov. 2000, p. 31), complains: “Do they [i.e.,
design theorists] believe that the ‘intelligent designer’ must miraculously
intervene to separately create every species of life and every ‘irreducibly
complex’ mechanism in the living world? If so, exactly when and how does that
happen? By what observable causal mechanisms does the ‘intelligent designer’
execute these miraculous acts? How would one formulate falsifiable tests for
such a theory? Proponents of ‘intelligent design theory’ refuse to answer
such questions, because it is rhetorically advantageous for them to take a
purely negative position in which they criticize Darwinian theory without
defending a positive theory of their own. That is why they are not taken
seriously in the scientific community.”
2. Situating Intelligent Design in the Contemporary Debate
Let me now respond to these concerns. I’ll start with Eugenie Scott. Design
theorists have hardly been reticent about their program. I’ve certainly laid
it out as I see it both in the introduction to Mere Creation and in
chapter four of Intelligent Design. What Scott is complaining about
has less to do with the forthrightness of design theorists about their
intellectual program than with the increased challenge that intelligent
design presents to defenders of Darwinism as compared with creationism.
Creationism offers critics like Eugenie Scott a huge fixed target.
Creationism takes the Bible literally and makes the debate over Darwinism
into a Bible-science controversy. In a culture where the Bible has been
almost universally rejected by the cultural elite, creationism is therefore a
non-starter.
But isn’t it true that design theorists are largely Bible-believers and that
their reason for not casting intelligent design as a Bible-science
controversy is pure expedience and not principle? In other words, isn’t it
just the case that we realize creationism hasn’t been working, and so we
decided to recast it and salvage as much of it as we can? This criticism
seems to me completely backwards. For one thing, most of the leaders in the
intelligent design movement did not start out as creationists and then turn
to design. Rather, we started squarely in the Darwinian camp and then had to
work our way out of it. The intellectual journey of most design theorists is
therefore quite different from the intellectual journey of many erstwhile
creationists, who in getting educated renounced their creationism (cf. Ronald Numbers’s The Creationists, in which Numbers argues that the correlation
between increased education and loss of confidence in creationism is near
perfect).
In my own case, I was raised in a home where my father had a D.Sc. in biology
(from the University of Erlangen in Germany), taught evolutionary biology at
the college level, and never questioned Darwinian orthodoxy during my years
growing up. My story is not atypical. Biologists Michael Behe, Jonathan
Wells, and Dean Kenyon all started out adhering to Darwinism and felt no
religious pull to renounce it. In Behe’s case, as a Roman Catholic, there was
simply no religious reason to question Darwin. In so many of our cases, what
led us out of Darwinism was its inadequacies as a scientific theory as well
as the prospect of making design scientifically tractable.
It’s worth noting that the effort to make the design of natural systems
scientifically tractable has at best been a peripheral concern of young earth
creationists historically. There have been exceptions, like A. E.
Wilder-Smith, who sought to identify the information in biological systems
and connect it with a designer/creator. But the principal texts of the
Institute for Creation Research, for instance, typically took a very
different line from trying to make design a program of scientific research.
Instead of admitting that Darwinian theory properly belonged to science and
then trying to formulate design as a replacement theory, young earth
creationists typically claimed that neither Darwinism nor design could
properly be regarded as scientific (after all, so the argument went, no one
was there to observe what either natural selection or a designer did in
natural history).
Intelligent design’s historical roots do not ramify through young earth
creationism. Rather, our roots go back to the tradition of British natural
theology (which took design to have actual scientific content), to the
tradition of Scottish common sense realism (notably the work of Thomas Reid),
and to the informed critiques of Darwinism that have consistently appeared
ever since Darwin published his Origin (e.g., Louis Agassiz, St.
George Mivart, Richard Goldschmidt, Pierre Grassé, Gerald Kerkut, Michael
Polanyi, Marcel Schützenberger, and Michael Denton).
Why then are so many of us in the intelligent design movement Christians? I
don’t think it is because intelligent design is intrinsically Christian or
even theistic. Rather, I think it has to do with the Christian evangelical
community for now providing the safest haven for intelligent design--which is
not to say that the haven is particularly safe by any absolute standard.
Anyone who has followed the recent events of Baylor’s Michael Polanyi Center,
the first intelligent design think-tank at a research university, will
realize just how intense the opposition to intelligent design is even among
Christians. Baylor is a Baptist institution that prides itself on being the
flagship of evangelical colleges and universities (which includes schools
like Wheaton College and Valparaiso University). Although an independent peer
review committee validated intelligent design as a legitimate form of
academic inquiry, the committee changed the center’s name and took the
center’s focus off intelligent design. What’s more, after months of
censorship by the Baylor administration and vilification by Baylor faculty, I
was finally removed as director of the center.
Now my treatment at Baylor is hardly unique among my compatriots in the
design movement. Dean Kenyon, despite being a world leader in the study of
chemical evolution, was barred by the biology department at San Francisco
State University from critiquing the very ideas that earlier he had
formulated and that subsequently he found defective. Refusing to have his
academic freedom abridged, he was then removed from teaching introductory
biology courses, despite being a very senior and well-published member of the
department. Only after the Wall Street Journal exposed San Francisco State
University’s blatant violation of Kenyon’s academic freedom was the biology
department forced to back down. I am frequently asked about the latest research that supports intelligent design, and I find myself having to be reticent about who is doing what, precisely because of the enormous pressure that
opponents of design employ to discredit these researchers, undermine their
position, and cause them to lose their funding (upon request, I’m willing to
name names of people and groups that engage in these tactics--though not the
names of researchers likely to be on the receiving end of these tactics).
To sum up, intelligent design faces tremendous opposition from our culture’s
elite, who in many instances are desperate to discredit it. What’s more,
within the United States the Christian evangelical world has thus far been the
most hospitable place for intelligent design (and this despite opposition
like at Baylor). Also relevant is that Christianity remains the majority
worldview for Americans. Thus on purely statistical grounds one would expect
most proponents of intelligent design to be Christians. But not all of them.
David Berlinski is a notable counterexample. I could name others,
but to spare them from harassment by opponents of design, I won’t. (By the
way, if you think I’m being paranoid, please pick up a copy of the November
issue of the American Spectator, which has an article about Baylor’s
Michael Polanyi Center and my then imminent removal as its director; I think
you’ll find that my suspicions are justified and that it’s the dogmatic
opponents of design who are paranoid.)
Well, what then is this intelligent design research program that Eugenie
Scott regards as even more disreputable than that of the young earth
creationists? Because intelligent design is a fledgling science, it is still
growing and developing and thus cannot be characterized in complete detail.
Nonetheless, its broad outlines are clear enough. I place the start of the
intelligent design movement with the publication in 1984 of The Mystery of
Life’s Origin by Charles Thaxton, Walter Bradley, and Roger Olsen. The
volume is significant in two ways. First, though written by three Christians
and critiquing origin-of-life scenarios, it focused purely on the scientific
case for and against abiogenesis. Thus it consciously avoided casting its
critique as part of a Bible-science controversy. Second, though highly
critical of non-telic naturalistic origin-of-life scenarios and thus a ready
target for anti-creationists, the book managed to get published with a
secular publisher. It took well over 100 manuscript submissions to get it
published. MIT Press, for instance, had accepted it but, after a shake-up of its editorial board, turned it down. The book was
finally published by Philosophical Library, which had published books by
eight Nobel laureates.
The next key texts in the design movement were Michael Denton’s Evolution:
A Theory in Crisis, Dean Kenyon and Percival Davis’s Of Pandas and
People, and Phillip Johnson’s Darwin on Trial, which appeared over
the next seven years. Like The Mystery of Life’s Origin, these were
principally critiques of naturalistic evolutionary theories, though each of
them also raised the possibility of intelligent design. The critiques took
two forms, one a scientific critique focusing on weaknesses of naturalistic
theories, the other a philosophical critique examining the role of naturalism
as both a metaphysical and methodological principle in propping up the
naturalistic theories, and especially neo-Darwinism.
Except for The Mystery of Life’s Origin, which in some ways was a
research monograph, the strength of these texts lay not in their novelty. Many
of the criticisms had been raised before. A. E. Wilder-Smith had raised such
criticisms within the creationist context, though in a correspondence I had
with him in the late 80s he lamented that the Institute for Creation Research
would no longer publish his works. Michael Polanyi had raised questions about
the sufficiency of natural laws to account for biological complexity in the
late 60s, and I know from conversations with Charles Thaxton that this work
greatly influenced his thinking and made its way into The Mystery of
Life’s Origin. About a decade earlier, Gerald Kerkut had asked one of his students in England for the evidence in favor of Darwinian evolution and received a ready answer; but when he asked for the evidence against Darwinian evolution, all he met was silence. This exchange prompted his 1960 text Implications
of Evolution, whose criticisms also influenced the early design
theorists.
Nonetheless, compared to previous critics of Darwinism, the early design
theorists had a significant advantage: Unlike previous critics, who were
either isolated (cf. Marcel Schützenberger, who, although a world-class mathematician, was ostracized in the European community for his
anti-Darwinian views) or confined to a ghetto subculture (cf. the young earth
creationists with their in-house publishing companies), the early design
theorists were united, organized, and fully cognizant of the necessary means
for engaging both mass and high culture. As a consequence, criticism of
Darwinism and scientific naturalism could at last reach a critical mass. In
the past, criticism had been too sporadic and isolated, and thus could
readily be ignored. Not any longer.
3. Intelligent Design as a Positive Research Program
Criticism, however, is never enough. I’m fond of quoting the statement by
Napoleon III that one never destroys a thing until one has replaced it.
Although it is not a requirement of logic that scientific theories can only
be rejected once a better alternative has been found, this does seem to be a
fact about the sociology of science--to wit, scientific theories give way not
to criticism but to new, improved theories. Concerted criticism of Darwinism
within the growing community of design theorists was therefore only the first
step. To be sure, it was a necessary first step since confidence in Darwinism
and especially the power of natural selection needed first to be undermined
before people could take seriously the need for an alternative theory (this
is entirely in line with Thomas Kuhn’s stages in a scientific revolution).
Once that confidence was undermined, the next step was to develop a positive
scientific research program as an alternative to Darwinism and more generally
to naturalistic approaches to the origin and subsequent development of life.
In broad strokes, the positive research program of the intelligent design
movement looks as follows (here I’m going to do a conceptual rather than a
historical reconstruction):
(1) Much as Darwin began with the commonsense recognition that artificial
selection in animal and plant breeding experiments is capable of directing
organismal variation (which he then bootstrapped into a general mechanism to
account for all organismal variation), so too the intelligent design research
program begins with the commonsense recognition that humans draw design
inferences routinely in ordinary life, explaining some things in terms of
purely natural causes and other things in terms of intelligence or design
(cf. archeologists attributing rock formations in one case to erosion and in
another to design--as with the megaliths at Stonehenge).
(2) Just as Darwin formalized and extended our commonsense understanding of
artificial selection to natural selection, the intelligent design research
program next attempts to formalize and extend our commonsense understanding
of design inferences so that they can be rigorously applied in scientific
investigation. At present, my codification of design inferences as an
extension of Fisherian hypothesis testing has attracted the most attention. It
is now being vigorously debated whether my approach is valid and sustainable
(the only alternative on the table at this point is a likelihood approach, which I argue in forthcoming publications is utterly inadequate).
Interestingly, my most severe critics have been philosophers (e.g., Elliott
Sober and Robin Collins). Mathematicians and statisticians have been far more
receptive to my codification of design inferences (cf. the positive notice of
my book The Design Inference in the May 1999 issue of the American
Mathematical Monthly as well as mathematician Keith Devlin’s appreciative
remarks about my work in the July/August 2000 issue of The Scientist:
“Dembski’s theory has made an important contribution to the understanding of
randomness--if only by highlighting how hard it can be to differentiate the
fingerprints of design from the whorls of chance”). My most obnoxious critics
have been Internet stalkers (e.g., Wesley Elsberry and Richard Wein), who
seem to monitor my every move and as a service to the Internet community make
sure that every aspect of my work receives their bad housekeeping seal of
disapproval. As a rule I don’t respond to them over the Internet since it
seems to me that the Internet is an unreliable forum for settling technical issues
in statistics and the philosophy of science. Consequently, I have now
responded to critics in the following three forums: Philosophy of Science
(under submission), Christian Scholar’s Review (accepted for
publication), and Books & Culture (accepted for publication). I shall
also be responding to critics at length in my forthcoming book No Free
Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence
(Rowman & Littlefield) as well as offering there a simplification of my
concept of specification. Yet regardless of how things fall out with my
codification of design inferences, the question whether design is discernible
in nature is now squarely on the table for discussion. This itself is
significant progress.
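To give a rough sense of the Fisherian logic being extended here, consider a toy illustration (my own sketch, not the formalism of The Design Inference; the pattern, data, and probability bound are all invented for the purpose). In Fisherian fashion, a pattern and a rejection bound are fixed independently of the data, and the chance hypothesis is eliminated when the observed outcome matches the pattern and has sufficiently small probability under chance:

```python
# Toy illustration of Fisherian elimination: a pattern (specification)
# is given independently of the data, and the chance hypothesis is
# rejected when the specified outcome's probability under chance falls
# below a pre-set bound. All particulars here are invented.

SPECIFICATION = "0101010101010101"  # independently given pattern
ALPHA = 1e-4                        # rejection bound (arbitrary choice)

def prob_under_fair_coin(pattern: str) -> float:
    """Probability of one particular bit string under i.i.d. fair flips."""
    return 0.5 ** len(pattern)

observed = "0101010101010101"       # the sequence actually observed

if observed == SPECIFICATION and prob_under_fair_coin(observed) < ALPHA:
    print("Chance (fair coin) rejected for this specified outcome.")
else:
    print("No warrant to reject chance.")
```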
(3) At the heart of my codification of design inferences is the notion of
specified complexity, which is a statistical and complexity-theoretic
concept. Provided this concept is well-defined and can effectively be applied
in practice, the next question is whether specified complexity is exhibited
in actual physical systems where no evolved, reified, or embodied
intelligence was involved. In other words, the next step is to apply the
codification of design inferences in (2) to natural systems and see whether
it properly leads us to infer design. The most exciting area of application
is of course biology, with Michael Behe’s irreducibly complex biochemical
systems, like the bacterial flagellum, having thus far attracted the most
attention. In my view, however, the most promising research in this area is
now being done at the level of individual proteins (i.e., certain enzymes) to determine just how sparsely populated the islands of a given functional enzyme type are within the greater sea of non-functional polypeptides. Preliminary
indications are that they are very sparsely populated indeed, making them an
instance of specified complexity. I expect this work to be published in the
next two years. I am withholding name(s) of the researcher(s) for their own
protection.
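To see why the sparseness question matters, it helps to keep in mind how vast protein sequence space is: with 20 standard amino acids, a chain of length L admits 20^L sequences, so even a modest 100-residue protein sits in a space of roughly 10^130 possibilities (the length 100 is merely an illustrative choice of mine):

```python
from math import log10

L = 100                   # residues in a modest protein (illustrative choice)
num_sequences = 20 ** L   # 20 standard amino acids per position
print(f"20^{L} is about 10^{log10(num_sequences):.0f}")  # about 10^130
```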
(4) Once it is settled that certain biological systems are designed, the door
is open to a new set of research problems. Here are some of the key problems:
*****Detectability Problem--Is an object designed?
An affirmative answer to this question is needed before we can answer the
remaining questions. The whole point of (2) and (3) was to make an
affirmative answer possible.
*****Functionality Problem--What is the designed object’s function?
This problem is separate from the detectability problem. For instance, archeologists have discovered many artifacts that they recognize as tools but whose function they don't understand.
*****Transmission Problem--What is the causal history of a designed object?
Just as with Darwinism, intelligent design seeks historical narratives (though
not the just-so stories of Darwinists).
*****Construction Problem--How was the designed object constructed?
Given enough information about the causal history of an object, this question
may admit an answer.
*****Reverse-Engineering Problem--In the absence of a reasonably detailed
causal history, how could the object have come about?
*****Constraints Problem--What are the constraints within which the designed
object functions optimally?
*****Perturbation Problem--How has the original design been modified and what
factors have modified it?
This requires an account of both the natural and the intelligent causes that
have modified the object over its causal history.
*****Variability Problem--What degree of perturbation allows continued
functioning? Alternatively, what is the range of variability within which the
designed object functions and outside of which it breaks down?
*****Restoration Problem--Once perturbed, how can the original design be
recovered?
Art restorers, textual critics, and archeologists know all about this.
*****Optimality Problem--In what sense is the designed object optimal?
*****Separation of Causes Problem--How does one tease apart the effects of
intelligent causes from natural causes, both of which could have affected the
object in question?
For instance, a rusted old Cadillac exhibits the effects of both design and weathering.
*****Ethical Problem--Is the design morally right?
*****Aesthetics Problem--Is the design beautiful?
*****Intentionality Problem--What was the intention of the designer in
producing a given designed object?
*****Identity Problem--Who is the designer?
To be sure, the last four questions are not questions of science, but they
arise very quickly once design is back on the table for serious discussion.
As for the other questions, they are strictly scientific (indeed, many
special sciences, like archeology or SETI, already raise them). Now it’s true
that some of these questions have analogues within a naturalistic framework
(e.g., the functionality problem). But others clearly do not. For instance,
in the separation of causes problem, teasing apart the effects of intelligent
causes from natural causes has no analogue within a naturalistic framework.
4. Nature’s Formational Economy
Now from the design theorist’s perspective, there is plenty here to work on,
and certainly enough to turn intelligent design into a fruitful and exciting
scientific research program. Even so, many disagree. I want next to address
some of their worries. Let me begin with the concerns of Howard Van Till. Van
Till and I have known each other since the mid 90s, and have been
corresponding about the coherence of intelligent design as an intellectual
project for about the last three years. Van Till’s unchanging refrain has
been to ask for clarification about what design theorists mean by the term
“design.”
The point at issue for him is this: Design is unproblematic when it refers to something being conceptualized by a mind to accomplish a purpose; but when
one attempts to attribute design to natural objects that could not have been
formed by an embodied intelligence, design must imply not just
conceptualization but also extra-natural assembly. It’s the possibility that
intelligent design requires extra-natural assembly that Van Till regards as
especially problematic (most recently he has even turned the tables on design
theorists, charging them with “punctuated naturalism”--the idea being that
for the most part natural processes rule the day, but then intermittently need
to be “punctuated” by interventions from a designing intelligence). Van Till
likes to put his concern to the intelligent design community this way: Design
can have two senses, a “mind-like” sense (referring merely to
conceptualization) and a “hand-like” sense (referring also to the mode of
assembly); is intelligent design using design strictly in the mind-like sense
or also in the hand-like sense? And if the latter, are design theorists
willing to come clean and openly admit that their position commits them to
extra-natural assembly?
Although Van Till purports to ask these questions simply as an aid to
clarity, it is important to understand how Van Till’s own theological and
philosophical presuppositions condition the way he poses these questions.
Indeed, these presuppositions must themselves be clarified. For instance,
what is “extra-natural assembly” (the term is Van Till’s)? It is not what is
customarily meant by miracle or supernatural intervention. Miracles typically
connote a violation or suspension or overriding of natural laws. To attribute
a miracle is to say that a natural cause was all set to make X happen, but
instead Y happened. As I’ve argued throughout my work, design doesn’t require
this sort of counterfactual substitution (cf. chapters 2 and 3 of my book Intelligent
Design). When humans, for instance, act as intelligent agents, there is
no reason to think that any natural law is broken. Likewise, should a
designer, who for both Van Till and me is God, act to bring about a bacterial
flagellum, there is no reason prima facie to suppose that this designer did
not act consistently with natural laws. It is, for instance, a logical
possibility that the design in the bacterial flagellum was front-loaded into
the universe at the Big Bang and subsequently expressed itself in the course
of natural history as a miniature outboard motor on the back of E. coli.
Whether this is what actually happened is another question (more on this
later), but it is certainly a live possibility and one that gets around the
usual charge of miracles.
Nonetheless, even though intelligent design requires no contradiction of natural laws, it does impose a limitation on natural laws, namely, it asserts that they are incomplete. Think of it this way. There are lots and
lots of things that happen in the world. For many of these things we can find
causal antecedents that account for them in terms of natural laws.
Specifically, the account can be given in the form of a set of natural laws
(typically supplemented by some auxiliary hypotheses) that relates causal
antecedents to some consequent (i.e., the thing we’re trying to explain). Now
why should it be that everything that happens in the world should submit to
this sort of causal analysis? It’s certainly a logical possibility that we
live in such a world. But it’s hardly self-evident that we do. For instance,
we have no evidence whatsoever that there is a set of natural laws, auxiliary
hypotheses, and antecedent conditions that account for the writing of this
essay. If we did have such an account, we would be well on the way to
reducing mind to body. But no such reduction is in the offing, and cognitive
science is to this day treading water when it comes to the really big
question of how brain enables mind.
Intelligent design regards intelligence as an irreducible feature of reality.
Consequently it regards any attempt to subsume intelligent agency within
natural causes as fundamentally misguided and regards the natural laws that
characterize natural causes as fundamentally incomplete. This is not to deny
derived intentionality, in which artifacts, though functioning according to
natural laws and operating by natural causes, nonetheless accomplish the aims
of their designers and thus exhibit design. Yet whenever anything exhibits
design in this way, the chain of natural causes leading up to it is
incomplete and must presuppose the activity of a designing intelligence.
I’ll come back to what it means for a designing intelligence to act in the
physical world, but for now I want to focus on the claim by design theorists
that natural causes and the natural laws that characterize them are
incomplete. It’s precisely here that Van Till objects most strenuously to
intelligent design and that his own theological and philosophical interests
come to light. “Extra-natural assembly” for Howard Van Till does not mean a
miracle in the customary sense, but rather that natural causes were
insufficient to account for the assembly in question. Van Till holds to what
he calls a Robust Formational Economy Principle (RFEP--“formational economy”
refers to the capacities or causal powers in nature for bringing about the
events that occur in nature). This is a theological and metaphysical
principle. According to this principle God endowed nature with all the
(natural) causal powers it ever needs to accomplish all the things that
happen in nature. Thus in Van Till’s manner of speaking, it is within
nature’s formational economy for water to freeze when its temperature is
lowered sufficiently. Natural causal powers are completely sufficient to
account for liquid water turning to ice. What makes Van Till’s formational
economy robust is that everything that happens in nature is like
this--even the origin and subsequent history of life. In other words, the
formational economy is complete.
But how does Van Till know that the formational economy is complete? Van Till
was kind enough to speak at a seminar I conducted this summer (2000) at
Calvin College in which he made clear that he holds this principle for
theological reasons. According to him, for natural causes to lack the power
to effect some aspect of nature would mean that the creator had not fully
gifted the creation. Conversely, a creator or designer who must act in
addition to natural causes to produce certain effects has denied the creation
benefits it might otherwise possess. Van Till portrays his God as supremely
generous whereas the God of the design theorists comes off looking like a
miser. Van Till even refers to intelligent design as a “celebration of gifts
withheld.”
Though rhetorically shrewd, Van Till’s criticism is hardly the only way to
spin intelligent design theologically. Granted, if the universe is like a
clockwork (cf. the design arguments of the British natural theologians), then
it would be inappropriate for God, who presumably is a consummate designer,
to intervene periodically to adjust the clock. Instead of periodically giving
the universe the gift of “clock-winding and clock-setting,” God should simply
have created a universe that never needed winding or setting. But what if
instead the universe is like a musical instrument (cf. the design arguments
of the Church Fathers, like Gregory of Nazianzus, who compared the universe
to a lute--in this respect I much prefer the design arguments of the early
Church to the design arguments of the British natural theologians)? Then it
is entirely appropriate for God to interact with the universe by introducing
design (or in this analogy, by skillfully playing a musical instrument).
Change the metaphor from a clockwork to a musical instrument, and the charge
of “withholding gifts” dissolves. So long as there are consummate pianists
and composers, player-pianos will always remain inferior to real pianos. The
incompleteness of the real piano taken by itself is therefore irrelevant
here. Musical instruments require a musician to complete them. Thus, if the
universe is more like a musical instrument than a clock, it is appropriate
for a designer to interact with it in ways that affect its physical state.
Leaving aside which metaphor best captures our universe (a clockwork
mechanism or a musical instrument), I want next to examine Van Till’s charge
that intelligent design commits one to a designer who withholds gifts. This
charge is itself highly problematic. Consider, for instance, what it would
mean for me to withhold gifts from my baby daughter. Now it’s certainly true
that I withhold things from my baby daughter, but when I do it is for her
benefit because at this stage in her life she is unable to appreciate them
and might actually come to harm if I gave them to her now. The things I am
withholding from her are not properly even called gifts at this time. They
become gifts when it is appropriate to give them. Nor is it the case that if
I am a good father, I must have all the gifts I might ever give my daughter
potentially available or in some sense in reserve now (thus making the
economy of my gift giving robust in Van Till’s sense). It’s not yet clear
what gifts are going to be appropriate for my daughter--indeed, deciding which gifts are appropriate to give her will be situation-specific. So
too, Judeo-Christian theism has traditionally regarded many of God’s actions
in the world (though certainly not all--there’s also general providence) as carefully
adapted to specific situations at particular times and places.
Van Till’s Robust Formational Economy Principle is entirely consistent with
the methodological naturalism embraced by most scientists (the view that the
natural sciences must limit themselves to naturalistic explanations and must
scrupulously avoid assigning any scientific meaning to intelligence,
teleology, or actual design). What is unclear is whether Van Till’s Robust
Formational Economy Principle is consistent with traditional Christian views
of divine providence, especially in regard to salvation history. Van Till
claims to hold to the RFEP on theological grounds, thinking it theologically
preferable for God to endow creation with natural causal powers fully
sufficient to account for every occurrence in the natural world. Let’s
therefore grant that it’s an open question for generic theism whether for God
to deliver gifts all at once is in some way preferable to God delivering them
over time. The question remains whether this is an open question for
specifically Christian theism. Van Till after all is not merely a generic
theist but, at least until his recent retirement from Calvin College, was
required to belong to the Christian Reformed Church (or some other
denomination squarely in the Reformed tradition). Consequently, Van Till was
required to subscribe to confessional standards that reflect a traditional
Christian view of divine providence.
Now it’s not at all clear how the RFEP can be squared with traditional
Christian theology. Please understand that I’m not saying it can’t. But it
seems that Van Till needs to be more forthcoming about how it can. In his
older writings (those from the mid 80s where he attempted to defend the
integrity of science against attacks by young earth creationists--unfortunately,
Van Till was himself brutally attacked by creationists for his efforts), Van
Till seemed content to distinguish between natural history and salvation
history. Within salvation history God could act miraculously to procure humanity's redemption. On the other hand, within natural history God acted only through
natural causes. I no longer see this distinction in Van Till’s writings and I
would like to know why. Does Van Till still subscribe to this distinction? If
so, it severely undercuts his RFEP.
The RFEP casts God as the supreme gift giver who never withholds from nature
any capacity it might eventually need. According to Van Till, nature has all
the causal powers it needs to account for the events, objects, and structures
scientists confront in their investigations. Why shouldn’t God also endow
nature with sufficient causal powers to accomplish humanity’s redemption?
Human beings after all belong to nature. Throughout the Scriptures we find
God answering specific prayers of individuals, performing miracles like the
resurrection of Jesus, and speaking directly to individuals about their
specific situations. These are all instances of what theologians call particular
providence. The problem with the RFEP from the vantage of Christian theology
is that it seems to allow no room whatsoever for particular providence. Yes,
it can account for God sending the rain on the just and the unjust, or what
is known as general providence. But the RFEP carried to its logical
conclusion ends in a thorough-going Pelagianism in which redemption is built
directly into nature, in which Jesus is but an exemplar, and in which humans
have a natural capacity to procure their own salvation. I’m not saying that
Van Till has taken the RFEP to this conclusion, but if not, Van Till needs to
make clear why he stops short of assimilating the redemption in Jesus Christ
to his robust formational economy.
Van Till’s Robust Formational Economy Principle provides a theological
justification for science to stay committed to naturalism. Indeed, the RFEP
encourages science to continue business as usual by restricting itself solely
to natural causes and the natural laws that describe them. But this
immediately raises the question why we should want science to continue business
as usual. Indeed, how do we know that the formational economy of the world is
robust in Van Till’s sense? How do we know that natural causes (whether
instituted by God as Van Till holds or self-subsistent as the atheist holds)
can account for everything that happens in nature? Clearly the only way to
answer this question scientifically is to go to nature and see whether nature
exhibits things that natural causes could not have produced.
5. Can Specified Complexity Even Have a Mechanism?
What are the candidates here for something in nature that is nonetheless
beyond nature? In my view the most promising candidate is specified
complexity. The term “specified complexity” has been in use for about 30
years. The first reference to it with which I’m familiar is from Leslie
Orgel’s 1973 book The Origins of Life, where specified complexity is
treated as a feature of biological systems distinct from inorganic systems.
Richard Dawkins also employs the notion in The Blind Watchmaker,
though he doesn’t use the actual term (he refers to complex systems that are
independently specified). In his most recent book, The Fifth Miracle,
Paul Davies (p. 112) claims that life isn’t mysterious because of its
complexity per se but because of its “tightly specified complexity.” Stuart
Kauffman in his just published Investigations (October 2000) proposes
a “fourth law” of thermodynamics to account for specified complexity.
Specified complexity is a form of information, though one richer than Shannon
information, which focuses exclusively on the complexity of information
without reference to its specification. A repetitive sequence of bits is
specified without being complex. A random sequence of bits is complex without
being specified. A sequence of bits representing, say, a progression of prime
numbers will be both complex and specified. In The Design Inference I
show how inferring design is equivalent to identifying specified complexity
(significantly, this means that intelligent design can be conceived as a
branch of information theory).
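As a crude illustration of the three cases just mentioned, consider the following sketch. It is entirely my own toy proxy, not the formal apparatus of The Design Inference: compressed length stands in, very loosely, for whether a short independent pattern fits a sequence, while length under a uniform chance hypothesis stands in for complexity; only the relative comparison among the three sequences is meant to be suggestive.

```python
import random
import zlib

def compressed_len(bits: str) -> int:
    """zlib-compressed size in bytes: a rough stand-in for how short
    a description of the sequence can be."""
    return len(zlib.compress(bits.encode()))

def is_prime(k: int) -> bool:
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

n = 512
random.seed(0)
# Specified (a trivial pattern fits) but the sort of thing simple necessity produces:
repetitive = "01" * (n // 2)
# Improbable under uniform chance, but fitting no short independent pattern:
randomseq = "".join(random.choice("01") for _ in range(n))
# Fits an independently given pattern (primality) AND is vastly
# improbable under uniform chance (probability 2**-512):
primeseq = "".join("1" if is_prime(i) else "0" for i in range(n))

for name, seq in [("repetitive", repetitive),
                  ("random", randomseq),
                  ("primes", primeseq)]:
    print(f"{name:10s} length={len(seq)} compressed={compressed_len(seq)} bytes")
```

On a typical run the repetitive string compresses drastically, the random string compresses least, and the prime-indicator string compresses well while matching a pattern (primality) given independently of the sequence itself.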
Most scientists familiar with specified complexity think that the Darwinian
mechanism is adequate to account for it once one has differential
reproduction and survival (in No Free Lunch I’ll show that the
Darwinian mechanism has no such power, though for now let’s let it ride). But
outside a context that includes replicators, no one has a clue how specified
complexity occurs by naturalistic means. This is not to say there hasn’t been
plenty of speculation (e.g., clay templates, hydrothermal vents, and
hypercycles), but none of this speculation has come close to solving the
problem. Unfortunately for naturalistic origin-of-life researchers, this
problem seems not to be eliminable since the simplest replicators we know
require specified complexity. Consequently Paul Davies suggests that the
explanation of specified complexity will require some fundamentally new kinds
of natural laws. But so far these laws are completely unknown. Kauffman’s
reference to a “fourth law,” for instance, merely cloaks the scientific
community’s ignorance about the naturalistic mechanisms supposedly
responsible for the specified complexity in nature.
Van Till agrees that specified complexity is an open problem for science. At
a recent symposium on intelligent design at the University of New Brunswick
sponsored by the Center for Theology and the Natural Sciences (15-16
September 2000), Van Till and I took part in a panel discussion. When I asked
him how he accounts for specified complexity in nature, he called it a
mystery that he hopes further scientific inquiry will resolve. But resolve in
what sense? On Van Till’s Robust Formational Economy Principle, there must be
some causal mechanism in nature that accounts for any instance of specified
complexity. We may not know it and we may never know it, but surely it is
there. For the design theorist to invoke a non-natural intelligence is
therefore out of bounds. But what happens once some causal mechanism is found
that accounts for a given instance of specified complexity? Something that’s
specified and complex is by definition highly improbable with respect to all
causal mechanisms currently known. Consequently, for a causal mechanism to
come along and explain something that previously was regarded as specified
and complex means that the item in question is in fact no longer specified
and complex with respect to the newly found causal mechanism. The task of
causal mechanisms is to render probable what otherwise seems highly
improbable. Consequently, the way naturalism explains specified complexity is
by dissolving it. Intelligent design makes specified complexity a starting
point for inquiry. Naturalism regards it as a problem to be eliminated.
(That’s why, for instance, Richard Dawkins wrote Climbing Mount Improbable.
To climb Mount Improbable one needs to find a gradual route that breaks a
horrendous improbability into a sequence of manageable probabilities, each one of
which is easily bridged by a natural mechanism.)
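Dawkins’s strategy can be made concrete with a sketch in the spirit of his well-known “weasel” illustration from The Blind Watchmaker (the code below is my paraphrase of that idea, not Dawkins’s own program): hitting a 28-character target phrase in one blind draw has probability 27^-28, or about 10^-40, yet a process that keeps the best partial match at each step and mutates it reaches the target in on the order of a hundred generations.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
random.seed(42)

def mutate(parent: str, rate: float = 0.05) -> str:
    """Copy the parent, changing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def score(s: str) -> int:
    """Number of positions agreeing with the target."""
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while current != TARGET:
    generation += 1
    # Cumulative, not single-step, selection: keep the best of 100 offspring.
    current = max((mutate(current) for _ in range(100)), key=score)
print(f"Reached target in {generation} generations; a single blind draw "
      f"succeeds with probability 27**-28, about 8e-41.")
```

The point of contention, of course, is not whether such cumulative routes work when a target and a fitness measure are supplied in advance, but whether nature supplies them; the sketch only illustrates the probabilistic bookkeeping.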
Lord Kelvin once remarked, “If I can make a mechanical model, then I can
understand; if I cannot make one, I do not understand.” Repeatedly, critics
of design have asked design theorists to provide a causal mechanism whereby a
non-natural designer inputs specified complexity into the world. This
question presupposes a self-defeating conception of design and tries to force
design onto a Procrustean bed sure to kill it. Intelligent design is not a
mechanistic theory! Intelligent design regards Lord Kelvin’s dictum about
mechanical models not as a sound regulative principle for science but as a
straitjacket that artificially constricts science. SETI researchers are not
invoking a mechanism when they explain a radio transmission from outer space
as the result of an extraterrestrial intelligence. To ask for a mechanism to
explain the effect of an intelligence (leaving aside derived intentionality)
is like Aristotelians asking Newton what it is that keeps bodies in
rectilinear motion at a constant velocity (for Aristotle the crucial
distinction was between motion and rest; for Newton it was between
accelerated and unaccelerated motion). This is simply not a question that
arises within Newtonian mechanics. Newtonian mechanics proposes an entirely
different problematic from Aristotelian physics. Similarly, intelligent
design proposes a far richer problematic than science committed to
naturalism. Intelligent design is fully capable of accommodating mechanistic
explanations. Intelligent design has no interest in dismissing mechanistic
explanations. Such explanations are wonderful as far as they go. But they
only go so far, and they are incapable of accounting for specified
complexity.
In rejecting mechanical accounts of specified complexity, design theorists
are not arguing from ignorance. Arguments from ignorance have the form “Not
X, therefore Y.” Design theorists are not saying that for a given natural
object exhibiting specified complexity, all the natural causal mechanisms so
far considered have failed to account for it and therefore it had to be
designed. Rather they are saying that the specified complexity exhibited by a
natural object can be such that there are compelling reasons to think that no
natural causal mechanism is capable of producing it. Usually these
“compelling reasons” take the form of an argument from contingency in which
the object exhibiting specified complexity is compatible with but in no way
determined by the natural laws relevant to its occurrence. For instance, for
polynucleotides and polypeptides there are no physical laws that account for
why one nucleotide base is next to another or one amino acid is next to
another. The laws of chemistry allow any possible sequence of nucleotide
bases (joined along a sugar-phosphate backbone) as well as any possible
sequence of L-amino acids (joined by peptide bonds).
Design theorists are attempting to make the same sort of argument against
mechanistic accounts of specified complexity that modern chemistry makes
against alchemy. Alchemy sought to transform base metals into precious metals using
very limited means like furnaces and potions (though not particle
accelerators). Now we rightly do not regard the contemporary rejection of
alchemy as an argument from ignorance. For instance, we don’t charge the
National Science Foundation with committing an argument from ignorance for refusing
to fund alchemical research. Now it’s evident that not every combination of
furnaces and potions has been tried to transform lead into gold. But that’s
no reason to think that some combination of furnaces and potions might still
constitute a promising avenue for effecting the desired transformation. We
now know enough about atomic physics to preclude this transformation. So too, we are fast approaching the point where we can show that transforming a biological system that doesn't exhibit a given instance of specified complexity (say a bacterium without a flagellum) into one that does (say a bacterium with a flagellum) cannot be accomplished by purely natural means but also requires intelligence.
There are a lot of details to be filled in, and design theorists are working
overtime to fill them in. What I’m offering here is not the details but an
overview of the design research program as it tries to justify the inability
of natural mechanisms to account for specified complexity. This part of its
program is properly viewed as belonging to science. Science is in the
business of establishing not only the causal mechanisms capable of accounting
for an object having certain characteristics but also the inability of causal
mechanisms to account for such an object, or what Stephen Meyer calls
“proscriptive generalizations.” There are no causal mechanisms that can
account for perpetual motion machines. This is a proscriptive generalization.
Perpetual motion machines violate the laws of thermodynamics (the first or the second, depending on the kind of machine) and can thus on theoretical grounds be eliminated. Design theorists are likewise
offering in principle theoretical objections for why the specified complexity
in biological systems cannot be accounted for in terms of purely natural
causal mechanisms. They are seeking to establish proscriptive
generalizations. Proscriptive generalizations are not arguments from
ignorance.
Assuming such an in-principle argument can be made (and for the sequel I will
assume it can), the design theorist’s inference to design can no longer be
considered an argument from ignorance. With such an in-principle argument in
hand, not only has the design theorist excluded all natural causal mechanisms
that might account for the specified complexity of a natural object, but the
design theorist has also excluded all explanations that might in turn exclude
design. The design inference is therefore not purely an eliminative argument,
as is so frequently charged. Specified complexity presupposes that the entire
set of relevant chance hypotheses has first been identified. This takes
considerable background knowledge. What’s more, it takes considerable
background knowledge to come up with the right pattern (i.e., specification)
for eliminating all those chance hypotheses and thus for inferring design.
Design inferences that infer design by identifying specified complexity are
therefore not purely eliminative. They do not merely exclude, but they
exclude from an exhaustive set of hypotheses in which design is all that
remains once the inference has done its work (this is not to say that the set
is logically exhaustive; rather it is exhaustive with respect to the inquiry
in question--that’s all we can ever do in science).
It follows that, contrary to the frequently leveled charge that design is untestable, design is in fact eminently testable. Indeed, specified
complexity tests for design. Specified complexity is a well-defined
statistical notion. The only question is whether an object in the real world
exhibits specified complexity. Does it correspond to an independently given
pattern and is the event delimited by that pattern highly improbable (i.e.,
complex)? These questions admit a rigorous mathematical formulation and are
readily applicable in practice. Not only is design eminently testable, but to
deny that design is testable commits the fallacy of petitio principii,
that is, begging the question or arguing in a circle (Robert Larmer developed
this criticism effectively at the New Brunswick symposium adverted to
earlier). It may well be that the evidence to justify the claim that a designer acted to bring about a given natural structure is insufficient. But to claim
that there could never be enough evidence to justify that a designer acted to
bring about a given natural structure is insupportable. The only way to justify
the latter claim is by imposing on science a methodological principle that
deliberately excludes design from natural systems, to wit, methodological
naturalism. But to say that design is not testable because we’ve defined it
out of existence is hardly satisfying or legitimate. Darwin claimed to have
tested for design in biology and found it wanting. Design theorists are now
testing for design in biology afresh and finding that biology is chock-full
of design.
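Schematically, and compressing the full apparatus of The Design Inference, the test for design has two prongs. Given an event $E$ and the set $H_1, \ldots, H_k$ of relevant chance hypotheses:

\[
\text{(i)}\ E \text{ conforms to a pattern } T \text{ given independently of } E \text{ (a specification)}; \qquad
\text{(ii)}\ P(E \mid H_j) < \beta \text{ for each } j,
\]

where $\beta$ is a suitably small probability bound ($10^{-150}$ in the universal case). Design is inferred only when both prongs are satisfied; an object failing either prong escapes the inference, which is precisely what makes the test applicable, case by case, to real objects.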
Specified complexity is only a mystery so long as it must be explained
mechanistically. But the fact is that we attribute specified complexity to
intelligences (and therefore to entities that are not mechanisms) all the
time. The reason that attributing specified complexity to intelligence for
biological systems is regarded as problematic is because such an intelligence
would in all likelihood have to be unembodied (though strictly speaking this
is not required of intelligent design--the designer could in principle be an
embodied intelligence, as with the panspermia theories). But how does an unembodied intelligence interact with natural objects and get them to exhibit specified complexity? We are back to Van Till's problem of extra-natural
assembly.
6. How Can an Unembodied Intelligence Interact with the Natural World?
There is in fact no conceptual difficulty for an unembodied intelligence to
interact coherently with the natural world. We are not in the situation of
Descartes seeking a point of contact between the material and the spiritual
at the pineal gland. For Descartes the physical world consisted of extended
bodies that interacted only via direct contact. Thus for a spiritual
dimension to interact with the physical world could only mean that the
spiritual caused the physical to move. In arguing for a substance dualism in
which human beings consist of both spirit and matter, Descartes therefore had
to argue for a point of contact between spirit and matter. He settled on the
pineal gland because it was the one place in the brain where symmetry was
broken and where everything seemed to converge (most parts of the brain have
right and left counterparts).
Although Descartes’s argument doesn’t work, the problem it tries to solve is
still with us. When I attended a Santa Fe symposium sponsored by the
Templeton Foundation in October 1999, Paul Davies expressed his doubts about
intelligent design this way: “At some point God has to move the particles.”
The physical world consists of physical stuff, and for a designer to
influence the arrangement of physical stuff seems to require that the
designer intervene in, meddle with, or in some way coerce this physical
stuff. What’s wrong with this picture of supernatural action by a designer?
The problem is not a flat contradiction with the results of modern science.
Take for instance the law of conservation of energy. Although the law is
often stated in the form “energy can neither be created nor destroyed,” in
fact all we have empirical evidence for is the much weaker claim that “in an
isolated system energy remains constant.” Thus a supernatural action that
moves particles or creates new ones is beyond the power of science to
disprove because one can always claim that the system under consideration was
not isolated.
There is no logical contradiction here. Nor is there necessarily a
god-of-the-gaps problem here. It’s certainly conceivable that a supernatural
agent could act in the world by moving particles so that the resulting
discontinuity in the chain of physical causality could never be removed by
appealing to purely physical forces. The “gaps” in the god-of-the-gaps
objection are meant to denote gaps of ignorance about underlying physical
mechanisms. But there’s no reason to think that all gaps must give way to
ordinary physical explanations once we know enough about the underlying
physical mechanisms. The mechanisms may simply not exist. Some gaps might
constitute ontic discontinuities in the chain of physical causes and thus
remain forever beyond the capacity of physical mechanisms.
Although a non-physical designer who “moves particles” is not logically
incoherent, such a designer nonetheless remains problematic for science. The
problem is that natural causes are fully capable of moving particles. Thus
for a designer also to move particles can only seem like an arbitrary
intrusion. The designer is merely doing something that nature is already
doing, and even if the designer is doing it better, why didn’t the designer
make nature better in the first place so that it can move the particles
better? We are back to Van Till’s Robust Formational Economy Principle.
But what if the designer is not in the business of moving particles but of
imparting information? In that case nature moves its own particles, but an
intelligence nonetheless guides the arrangement which those particles take. A
designer in the business of moving particles accords with the following world
picture: The world is a giant billiard table with balls in motion, and the
designer arbitrarily alters the motion of those balls, or even creates new
balls and then interposes them among the balls already present. On the other
hand, a designer in the business of imparting information accords with a very
different world picture: In that case the world becomes an information
processing system that is responsive to novel information. Now the
interesting thing about information is that it can lead to massive effects
even though the energy needed to represent and impart the information can
become infinitesimal (Frank Tipler and Freeman Dyson have made precisely such
arguments, namely, that arbitrarily small amounts of energy are capable of
information processing--in fact capable of sustaining information processing
indefinitely). For instance, the energy requirements to store and transmit a
launch code are minuscule, though getting the right code can make the
difference between starting World War III and maintaining peace.
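Thermodynamics puts numbers to this disproportion. By Landauer's principle (a standard result I invoke here only for illustration), the one unavoidable energy cost in handling information is $kT \ln 2$ per bit erased, roughly $3 \times 10^{-21}$ joules at room temperature, while storage and transmission can in principle approach zero cost. A hypothetical 40-character alphanumeric launch code encodes on the order of 240 bits, so

\[
E_{\min} \approx 240 \times kT \ln 2 \approx 240 \times (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693 \approx 7 \times 10^{-19}\,\mathrm{J},
\]

an energy utterly incommensurate with the effects the code can unleash.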
When a system is responsive to information, the dynamics of that system will
vary sharply with the information imparted and will largely be immune to
purely physical factors (e.g., mass, charge, or kinetic energy). A medical
doctor who utters the words “Your son is going to die” might trigger a heart
attack in a troubled father whereas uttering the words “Your son is going to
live” might prevent it. Moreover, it doesn’t much matter how loudly the
doctor utters one sentence or the other or what bodily gestures accompany the
utterance. Such physical factors are largely irrelevant. Consider another
example. After killing the Minotaur on Crete and setting sail back for Athens, Theseus forgot to substitute a white sail for a black sail. Theseus and his father Aegeus had agreed that a black sail would signify that Theseus had been killed by the Minotaur whereas a white sail would signify his success in destroying it. Seeing the black sail hoisted on the ship at a distance, Aegeus committed suicide. Or consider yet another nautical example, in this
case a steersman who guides a ship by controlling its rudder. The energy
imparted to the rudder is minuscule compared to the energy inherent in the
ship’s motion, and yet the rudder guides its motion. It was this analogy that
prompted Norbert Wiener to introduce the term "cybernetics," derived from the Greek kybernetes, meaning steersman. It is no coincidence that
in his text on cybernetics, Wiener writes about information as follows (Cybernetics,
2nd ed., p. 132): “Information is information, not matter or energy. No
materialism which does not admit this can survive at the present day.”
How much energy is required to impart information? We have sensors that can
detect quantum events and amplify them to the macroscopic level. What’s more,
the energy in quantum events is proportional to frequency or inversely
proportional to wavelength. And since there is no upper limit to the
wavelength of, for instance, electromagnetic radiation, there is no lower
limit to the energy required to impart information. In the limit, a designer
could therefore impart information into the universe without inputting any
energy at all. Whether the designer works through quantum mechanical effects
is not ultimately the issue here. Certainly quantum mechanics is much more
hospitable to an information processing view of the universe than the older
mechanical models. All that’s needed, however, is a universe whose
constitution and dynamics are not reducible to deterministic natural laws.
Such a universe will produce random events and thus have the possibility of
producing events that exhibit specified complexity (i.e., events that stand
out against the backdrop of randomness). Now as I’ve already noted, specified
complexity is a form of information, albeit a richer form than Shannon
information, which trades purely in complexity (cf. chapter 6 of my book Intelligent
Design as well as my forthcoming No Free Lunch). What’s more, as
I’ve argued in The Design Inference, specified complexity (or
specified improbability as I call it there--the concepts are the same) is a
reliable empirical marker of actual design. Now the beauty is that we live in
a non-deterministic universe that is open to novel information, that exhibits
specified complexity, and that therefore gives clear evidence of a designer
who has imparted it with information.
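Two quantitative glosses on the foregoing. First, the claim that there is no lower limit to the energy required to impart information follows from the Planck relation for a photon,

\[
E = h\nu = \frac{hc}{\lambda},
\]

where $h \approx 6.63 \times 10^{-34}$ joule-seconds is Planck's constant, $\nu$ the frequency, $c$ the speed of light, and $\lambda$ the wavelength; with no upper bound on $\lambda$, the energy $E$ can be made arbitrarily small. Second, when I say that Shannon information trades purely in complexity, I mean that Shannon's measure assigns an event of probability $P$ the information content

\[
I = -\log_2 P \ \text{bits},
\]

regardless of whether the event answers to any independently given pattern; specified complexity requires both a high value of $I$ and such a pattern.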
It’s at this point that critics of design throw up their hands in disgust and
charge that design theorists are merely evading the issue of how a designer
introduces design into the world. From the design theorist's perspective,
however, there is no evasion here. Rather there is a failure of imagination
on the part of the critic (and this is not meant as a compliment). In asking
for a mechanistic account of how the designer imparts information and thereby
introduces design, the critic of design is like a physicist trained only in
Newtonian mechanics and desperately looking for a mechanical account of how a
single particle like an electron can go through two slits simultaneously to
produce a diffraction pattern on a screen (cf. the famous double-slit
experiment). On a classical Newtonian view of physics, only a mechanical
account in terms of sharply localized and individuated particles makes sense.
And yet nature is unwilling to oblige any such mechanical account of the
double slit experiment (note that the Bohmian approach to quantum mechanics
merely shifts what’s problematic in the classical view to Bohm’s quantum
potential). Richard Feynman was right when he remarked that no one
understands quantum mechanics. The “mechanics” in “quantum mechanics” is
nothing like the “mechanics” in “Newtonian mechanics.” There are no analogies
that carry over from the dynamics of macroscopic objects to the quantum
level. In place of understanding we must content ourselves with knowledge. We
don’t understand how quantum mechanics works, but we know that
it works. So too, we don’t understand how a designer imparts
information into the world, but we know that a designer imparts
information.
It follows that Howard Van Till’s riddle to design theorists is ill-posed.
Van Till asks whether the design that design theorists claim to find in
natural systems is strictly mind-like (i.e., conceptualized by a mind to
accomplish a purpose) or also hand-like (i.e., involving a coercive
extra-natural mode of assembly). As with many forced choices, Van Till has
ignored a tertium quid, namely, that design can also be word-like
(i.e., imparting information to a receptive medium). In the liturgies of most
Christian churches, the faithful pray that God keep them from sinning in
“thought, word, and deed.” Each element of this tripartite distinction is
significant. Thoughts left to themselves are inert and never accomplish
anything outside the mind of the individual who thinks them. Deeds, on the
other hand, are coercive, forcing physical stuff to move now this way and now
that way (it’s no accident that the concept of force plays such a
crucial role in the rise of modern science). But between thoughts and deeds
are words. Words mediate between thoughts and deeds. Words give expression to
thoughts and thus bring the self in contact with the other. On the other
hand, words by themselves are never coercive (without deeds to back up words,
words lose their power to threaten). Nonetheless, words have the power to
engender deeds not by coercion but by persuasion. Process and openness-of-God
theologians will no doubt find these observations congenial. Nonetheless,
Christian theologians of a more traditional bent can readily sign off on them
as well.
7. Must All the Design in the Natural World Be Front-Loaded?
But simply to allow that a designer has imparted information into the natural
world is not enough. There are many thinkers who are sympathetic to design
but who prefer that all the design in the world be front-loaded. The
advantage of putting all the design in the world at, say, the initial moment
of the Big Bang is that it minimizes the conflict between design and science
as currently practiced. A designer who front-loads the design of the world
imparts all the world’s information before natural causes become operational
and express that information in the course of natural history. In effect,
there’s no need to think of the world as an informationally open system.
Rather, we can still think of it mechanistically--like the outworking of a
complicated differential equation, albeit with the initial and boundary
conditions designed. The impulse to front-load design is deistic, and I expect theories of front-loaded design to fare no better than deism did historically, serving as an unsatisfactory halfway house between theism (with its informationally open universe) and naturalism (which insists that the universe remain informationally closed).
There are no good reasons to require that the design of the universe must be
front-loaded. Certainly maintaining peace with an outdated mechanistic view
of science is not a good reason. Nor is the theological preference for a
hands-off designer, even if it is couched as a Robust Formational Economy
Principle. To be sure, front-loaded design is a logical possibility. But so
is interactive design (i.e., the design that a designer introduces by
imparting information over the course of natural history). The only
legitimate reason to limit all design to front-loaded design is if there
could be no empirical grounds for preferring interactive design to
front-loaded design. Michael Murray in his recent paper “Natural Providence”
for the Wheaton Philosophy Conference (October 2000,
www.wheaton.edu/philosophy/conference.html) attempts to make such an
argument. Specifically, he argues that for a non-natural designer front-loaded design and interactive design will be empirically equivalent. Murray's
argument hinges on a toy example in which a deck of cards has been stacked by
the manufacturer before it gets wrapped in cellophane and distributed to
card-players. Should a card-player now insist on using the deck as it left
the manufacturer and repeatedly win outstanding hands at poker, even if there
were no evidence whatsoever of cheating, then the arrangement of the deck by
the manufacturer would have to be attributed to design. Murray implies that
all non-natural design is like this, requiring no novel design in the course
of natural history but only at the very beginning when the deck was stacked.
But can all non-natural design be dismissed in this way?
Take the Cambrian explosion in biology, for instance. David Jablonski, James
Valentine, and even Stephen Jay Gould (when he’s not fending off the charge
of aiding creationists) admit that the basic metazoan body-plans arose in a
remarkably short span of geological time (5 to 10 million years) and for the
most part without any evident precursors (there are some annelid tracks as
well as evidence of sponges leading up to the Cambrian, but that’s about it
with regard to metazoans; single-celled organisms abound in the Precambrian).
Assuming that the animals fossilized in the Cambrian exhibit design, where
did that design come from? To be committed to front-loaded design means that
all these body-plans that first appeared in the Cambrian were in fact already
built in at the Big Bang (or whenever that information was front-loaded),
that the information for these body-plans was expressed in the subsequent
history of the universe, and that if we could but uncover enough about the
history of life, we would see how the information expressed in the Cambrian
fossils merely exploits information that was already in the world prior to
the Cambrian period. Now that may be, but there is no evidence for it. All we
know is that information needed to build the animals of the Cambrian period
was suddenly expressed at that time and with no evident informational
precursors.
To see what’s at stake here, consider the transmission of a manuscript by an
anonymous author, say the New Testament book of Hebrews. There’s a manuscript
tradition that allows us to trace this book (and specifically the information
in it) back to at least the second century A.D. More conservative scholars
think the book was written sometime in the first century by a colleague of
the Apostle Paul. One way or another we cannot be certain of the author’s
identity. What’s more, the manuscript trail goes dead in the first century
A.D. Consequently, it makes no sense to talk about the information in this
book being in some sense front-loaded at any time prior to the first century
A.D. (much less at the Big Bang).
Now Murray would certainly agree (for instance, he cites the design of the
pyramids as not being front-loaded). In the case of the transmission of
biblical texts, we are dealing with human agents whose actions in history are
reasonably well understood. But the distinction he would draw between this
example, involving the transmission of texts, and the previous biological
example, involving the origin of body-plans, cannot be sustained. Just
because we don’t have direct experience of how non-natural designers impart
information into the world does not mean we can’t say where that information
was initially imparted and where the information trail goes dead. The key
evidential question is not whether a certain type of designer (mundane or
transcendent) produced the information in question, but how far that
information can be traced back. With the Cambrian explosion the information
trail goes dead in the Cambrian. So too with the book of Hebrews it goes dead
in the first century A.D. Now it might be that with the Cambrian explosion,
science may progress to the point where it can trace the information back
even further--say to the Precambrian or possibly even to the Big Bang. But
there’s no evidence for it and there’s no reason--other than a commitment to
methodological naturalism--to think that all naturally occurring information
must be traceable back in this way. What’s more, as a general rule,
information tends to appear discretely at particular times and places. To
require that the information in natural systems (and throughout this
discussion the type of information I have in mind is specified complexity)
must in principle be traceable back to some repository of front-loaded
information is, in the absence of evidence, an entirely ad hoc restriction.
It’s also important to see that there’s more to theory choice in science than
empirical equivalence. The ancient Greeks knew all about the need for a
scientific theory to “save the phenomena” (Pierre Duhem even wrote a
delightful book about it with that title). A scientific theory must save or
be faithful to the phenomena it is trying to characterize. That is certainly
a necessary condition for an empirically adequate scientific theory. What’s
more, scientific theories that save the phenomena equally well are by
definition empirically equivalent. But there are broader coherence issues
that always arise in theory choice so that merely saving phenomena is not
sufficient for choosing one theory over another. Empirically equivalent to
the theory that the universe is 14 billion years old is the theory that it is
only five minutes old and that it was created with all the marks of being 14
billion years old. Nonetheless, no one takes seriously a five-minute-old universe. Also empirically equivalent to a 14-billion-year-old universe is a 6,000-year-old universe in which the speed of light has been slowing down and
enough ad hoc assumptions are introduced to account for the evidence from
geology and archeology that is normally interpreted as indicating a much
older earth. In fact, the scientific community takes young earth creationists
to task precisely for making too many ad hoc assumptions that favor a young
earth. Provided that there are good reasons to think that novel design was
introduced into the world subsequent to its origin (as for instance with the
Cambrian explosion, where all information trails go dead in the Precambrian),
it would be entirely artificial to require that science nonetheless treat all
design in the world as front-loaded just because methodological naturalism
requires it or because it remains a bare possibility that the design was
front-loaded after all.
Please note that I’m not offering a theory about the frequency or
intermittency with which a non-natural designer imparts information into the
world. I wouldn’t be surprised if most of the information imparted by such a
designer will elude us, not conforming to any patterns that might enable us
to detect it (just as we might right now be living in a swirl of radio
transmissions by extraterrestrial intelligences, though our inability to interpret these transmissions leaves us without any evidence that embodied intelligences on other planets exist at this time). The proper question for
science is not the schedule according to which a non-natural designer imparts
information into the world, but the evidence for that information in the
world, and the times and locations where that information first becomes
evident. That’s all empirical investigation can reveal to us. What’s more,
short of tracing the information back to the Big Bang (or wherever else we
may want to locate the origin of the universe), we have no good reason to
think that the information exhibited in some physical system was in fact
front-loaded.
8. The Distinction Between Natural and Non-Natural Designers
But isn’t there an evidentially significant difference between natural and
non-natural designers? It seems that this worry is really what’s behind the
desire to front-load all the design in nature. We all have experience with
designers that are embodied in physical stuff, notably other human beings.
But what experience do we have of non-natural designers? With respect to
intelligent design in biology, for instance, Elliott Sober wants to know what
sorts of biological systems should be expected from a non-natural designer.
What’s more, Sober claims that if the design theorist cannot answer this
question (i.e., cannot predict the sorts of biological systems that might be
expected on a design hypothesis), then intelligent design is untestable and
therefore unfruitful for science.
Yet to place this demand on design hypotheses is ill-conceived. We infer
design regularly and reliably without knowing characteristics of the designer
or being able to assess what the designer is likely to do. In his 1999
presidential address for the American Philosophical Association Sober himself
admits as much in a footnote that deserves to be part of his main text
(“Testability,” Proceedings and Addresses of the APA, 1999, p. 73, n.
20): “To infer watchmaker from watch, you needn’t know exactly what the
watchmaker had in mind; indeed, you don’t even have to know that the watch is
a device for measuring time. Archaeologists sometimes unearth tools of
unknown function, but still reasonably draw the inference that these things
are, in fact, tools.”
Sober is wedded to a Humean inductive tradition in which all our knowledge of
the world is an extrapolation from past experience. Thus for design to be
explanatory, it must fit our preconceptions, and if it doesn’t, it must lack
epistemic value. For Sober, to predict what a designer would do requires
first looking to past experience and determining what designers in the past
have actually done. A little thought, however, should convince us that any
such requirement fundamentally misconstrues design. Sober’s inductive
approach puts designers in the same boat as natural laws, locating their
explanatory power in an extrapolation from past experience. To be sure,
designers, like natural laws, can behave predictably. Yet unlike natural
laws, which are universal and uniform, designers are also innovators.
Innovation, the emergence of true novelty, eschews predictability. It follows
that design cannot be subsumed under a Humean inductive framework. Designers
are inventors. We cannot predict what an inventor would do short of becoming
that inventor.
But the problem goes deeper. Not only can’t Humean induction tame the
unpredictability inherent in design; it can’t account for how we recognize
design in the first place. Sober, for instance, regards the intelligent
design hypothesis as fruitless and untestable for biology because it fails to
confer sufficient probability on biologically interesting propositions. But
take a different example, say from archeology, in which a design hypothesis
about certain aborigines confers a large probability on certain artifacts,
say arrowheads. Such a design hypothesis would on Sober’s account be testable
and thus acceptable to science. But what sort of archeological background
knowledge had to go into that design hypothesis for Sober’s inductive
analysis to be successful? At the very least, we would have had to have past
experience with arrowheads. But how did we recognize that the arrowheads in
our past experience were designed? Did we see humans actually manufacture
those arrowheads? If so, how did we recognize that these humans were acting
deliberately as designing agents and not just chipping away randomly at chunks of rock? (Carpentry and sculpting entail design; whittling and chipping, though performed by intelligent agents, do not.) As is evident
from this line of reasoning, the induction needed to recognize design can
never get started.
My argument then is this: Design is always inferred, never a direct
intuition. We don’t get into the mind of designers and thereby attribute
design. Rather we look at effects in the physical world that exhibit the
features of design and from those features infer to a designing intelligence.
The philosopher Thomas Reid made this same argument over 200 years ago (Lectures
on Natural Theology, 1780): “No man ever saw wisdom [read “design”], and
if he does not [infer wisdom] from the marks of it, he can form no
conclusions respecting anything of his fellow creatures.... But says Hume,
unless you know it by experience, you know nothing of it. If this is the
case, I never could know it at all. Hence it appears that whoever maintains
that there is no force in the [general rule that from marks of intelligence
and wisdom in effects a wise and intelligent cause may be inferred], denies
the existence of any intelligent being but himself.” The virtue of my work is
to formalize and make precise those features that reliably signal design,
casting them in the idiom of modern information theory.
Larry Arnhart remains unconvinced. In the most recent issue of First
Things (November 2000) he claims that our knowledge of design arises not
from any inference but from introspection of our own human intelligence; thus
we have no empirical basis for inferring design whose source is non-natural.
Though at first blush plausible, this argument collapses quickly when probed.
Piaget, for instance, would have rejected it on developmental grounds: Babies
do not make sense of intelligence by introspecting their own intelligence but
by coming to terms with the effects of intelligence in their external
environment. For example, they see the ball in front of them and then see it taken away, and learn that Daddy is moving the ball--thus reasoning directly from
effect to intelligence. Introspection (always a questionable psychological
category) plays at best a secondary role in how initially we make sense of
intelligence.
Even later in life, however, when we’ve attained full self-consciousness and
when introspection can be performed with varying degrees of reliability, I
would argue that even then intelligence is inferred. Indeed, introspection
must always remain inadequate for assessing intelligence (by intelligence I
mean the power and facility to choose between options--this coincides with
the Latin etymology of “intelligence,” namely, “to choose between”). For
instance, I cannot by introspection assess my intelligence at proving
theorems in differential geometry, choosing the right sequence of steps, say,
in the proof of the Nash embedding theorem. It’s been over a decade since
I’ve proven any theorems in differential geometry. I need to get out paper
and pencil and actually try to prove some theorems in that field. How I do--and not my memory of how well I did in the past--will determine whether and to what degree intelligence can be attributed to my theorem
proving.
I therefore continue to maintain that intelligence is always inferred, that
we infer it through well-established methods, and that there is no principled
way to distinguish natural and non-natural design so that the one is
empirically accessible but the other is empirically inaccessible. This is the
rub. And this is why intelligent design is such an intriguing intellectual possibility--it
threatens to make the ultimate questions real. Convinced Darwinists like
Arnhart therefore need to block the design inference whenever it threatens to
implicate a non-natural designer. Once this line of defense is breached,
Darwinism quickly becomes indefensible.
9. The Question of Motives
Actually, there is still one remaining line of defense, and that is to
question the motives of design theorists. According to Larry Arnhart (First
Things, November 2000), “Most of the opposition to Darwinian theory ...
is motivated not by a purely intellectual concern for the truth or falsity of
the theory, but by a deep fear that Darwinism denies the foundations of
traditional morality by denying any appeal to the transcendent norms of God’s
moral law.” In a forthcoming response to an article of mine in American
Outlook (November 2000), Michael Shermer takes an identical line: “It is
no coincidence that almost all of the evolution deniers are Christians who
believe that if God did not personally intervene in the development of life
on earth, then they have no basis for their belief; indeed, that there can be
no basis to any morality or meaning of life.”
For critics of intelligent design like Arnhart and Shermer, it is
inconceivable that someone once properly exposed to Darwin’s theory could
doubt it. It is as though Darwin’s theory were one of Descartes’s clear and
distinct ideas that immediately impels assent. Thus for design theorists to
oppose Darwin’s theory requires some hidden motivation, like wanting to shore
up traditional morality or being a closet fundamentalist. For the record,
therefore, let me reassert that our opposition to Darwinism rests in the
first instance on scientific grounds. Yes, my colleagues and I are interested
in and frequently write about the cultural and theological implications of
intelligent design. But let’s be clear that the only reason we take seriously
such implications is that we are convinced that Darwinism is on its own terms an oversold and overreaching scientific theory and that even at this early stage in the game intelligent design surpasses it.
Critics who think they can defeat intelligent design merely by assigning
disreputable motives to its proponents need to examine their own motives.
Consider Shermer’s motives for taking such a hard line against intelligent
design. Shermer, trained in psychology and the social sciences, endlessly
psychologizes those who challenge his naturalistic worldview. But is he
willing to psychologize himself? Look at his popular books (e.g., Why
People Believe Weird Things and How We Believe), and you’ll notice
on the inside dustjacket a smiling Shermer with a bust of Darwin behind him
as well as several books by and about Darwin. Shermer’s devotion to Darwin
and naturalism is no less fervent than mine is to Christianity. If there is a
difference in our devotion, it is this: Shermer is a dogmatist and I am not.
I am willing to admit that intelligent design might be wrong (despite
significant progress I believe design theorists still have their work cut out
for them). Also, I am eager to examine and take seriously any arguments and
evidence favorable to Darwinism. But Shermer cannot make similar concessions.
He can’t admit that Darwinism might be wrong. He is unwilling to take
seriously any positive evidence for intelligent design. But this is hardly
surprising. Shermer has a vested interest in taking a hard line against
intelligent design. Indeed, his base of support among fellow skeptics (who
rank among the most authoritarian and dogmatic people in contemporary
culture) would vanish the moment he allows intelligent design as a live
possibility.
The success of intelligent design neither stands nor falls with the motives
of its practitioners but with the quality of the research it inspires. That
said, design theorists do have an extra-scientific motive for wanting to see
intelligent design succeed. This motive derives not from a religious agenda
but from a very human impulse, namely the desire to overcome artificial,
tyrannical, or self-imposed limitations and thereby to open oneself and
others to new possibilities--in a word, freedom. This desire was beautifully
expressed in Bernard Malamud’s novel The Fixer (Penguin, 1966). Yakov
Bok, a handyman in pre-revolutionary Russia, leaves his small town and heads
off to the big city (Kiev). As it turns out, misfortune upon misfortune
awaits him there. Why does he go? He senses the risks. But he asks himself,
“What choice has a man who doesn’t know what his choices are?” (pp. 33-34)
The desire to open himself to new possibilities impels him to go to the big
city. Later in the novel, when he has been imprisoned and humiliated, so that
choice after choice has been removed and his one remaining choice is to
maintain his integrity, refuse to confess a crime he did not commit, and
thereby prevent a pogrom; after all this, he is reminded that “the purpose of
freedom is to create it for others.” (p. 286)
Design theorists want to free science from arbitrary constraints that stifle
inquiry, undermine education, turn scientists into a secular priesthood, and
in the end prevent intelligent design from receiving a fair hearing. The
subtitle of Richard Dawkins’s The Blind Watchmaker reads Why the
Evidence of Evolution Reveals a Universe Without Design. Dawkins may be
right that design is absent from the universe. But design theorists insist
that science address not only the evidence that reveals the universe to be
without design but also the evidence that reveals the universe to be with
design. Evidence is a two-edged sword: Claims capable of being refuted by
evidence are also capable of being supported by evidence. Even if design ends
up being rejected as an unfruitful explanation in science, such a negative
outcome for design needs to result from the evidence for and against design
being fairly considered. On the other hand, the rejection of design must not
result from imposing regulative principles like methodological naturalism
that rule out design prior to any consideration of evidence. Whether design
is ultimately rejected or accepted must be the conclusion of a scientific
argument, not a deduction from an arbitrary regulative principle.
What choice does science have if it doesn’t know what its choices are? It can
choose to stop arbitrarily limiting its choices.