The Fantasy Life of Richard Wein:

A Response to a Response

 

By William A. Dembski


Talk.origins has officially archived Richard Wein's critique of my book No Free Lunch at http://www.talkorigins.org/design/faqs/nfl. I responded on the ISCID website at http://www.iscid.org/papers/Dembski_ObsessivelyCriticized_050902.pdf. Wein has now responded to that response at http://www.talkorigins.org/design/faqs/nfl/replynfl.html. This is my response to Wein's latest. My response here is copyright © 2002 and may be reprinted only for personal use.


I expected my response to Richard Wein's "Not a Free Lunch ..." to be the last thing I would write in reply to him. Despite his critique's girth, Wein's level of argumentation was shallow and invariably sidestepped the key places where my ideas gain traction against the anti-teleological claims of evolutionary biology. Wein, true to form, makes a similar charge against me. Let the reader judge. At any rate, my aim has never been to win over Wein or his confreres at talk.origins but rather those listening in on our conversation who might still have an open mind.

 

I am continuing the conversation with Wein (if it may be called that) not because I have much to add to what I've written thus far in reply to him or because I think there's anything I need to retract. Rather, I'm writing because my initial response to Wein attracted an enormous amount of attention on the ISCID website, at least by the standards of that site (many hundreds of unique downloads). Also, this further response allows me to reinforce certain points I made earlier. The bottom line is that Richard Wein inhabits a fantasy world and leads a fantasy life that has no more connection to biological reality than Naugahyde has to cowhide.

 

Francis Bacon advised, "Read not to contradict but to weigh and consider." A quick perusal of Wein's responses to my work indicates that he has little use for Bacon's advice. According to Wein, I am a pseudoscientist who must not only be put in his proper place but also be granted no concessions whatsoever. Nothing less than total repudiation of Dembski's project will do. There's no point at which Wein does not contradict.

 

To see the unreasonableness of this stance, I want to start by reviewing my project. It starts by asking a straightforward question: "If an intelligence were involved in the occurrence of some event or the formation of some object, and if we had no direct evidence of such an intelligence's activity, how could we know that an intelligence was involved at all?" The question thus posed is quite general but arises in numerous contexts, including archeology, SETI, and data falsification in science.

 

I want here to focus on data falsification because it will help point up the legitimacy of my project as well as tie it in to a matter of real urgency facing the scientific community. On May 23rd of this year the New York Times reported on the work of "J. Hendrik Schön, 31, a Bell Labs physicist in Murray Hill, N.J., who has produced an extraordinary body of work in the last two and a half years, including seven articles each in Science and Nature, two of the most prestigious journals."

 

As it turns out, Schön's career is on the line. Why? According to the New York Times, Schön published "graphs that were nearly identical even though they appeared in different scientific papers and represented data from different devices. In some graphs, even the tiny squiggles that should arise from purely random fluctuations matched exactly." As a consequence, Bell Labs appointed an independent panel to determine whether Schön "improperly manipulat[ed] data in research papers published in prestigious scientific journals."

 

The theoretical issues raised in this case of putative data falsification are precisely those that my work on the design inference seeks to address. The match between the two graphs in Schön's articles constitutes an independently given pattern or specification (more precisely, the first published graph provides a specification for the second). Moreover, the random fluctuations in the graphs are highly improbable and thus "complex" in the sense I define it. The randomness here is well-understood. As a consequence, no unknown mechanism is being sought for how the graphs from independent experiments on independent devices could have exhibited the same pattern of random fluctuations. At issue is the question of data manipulation and design, and we get there by a pure process of elimination.

 

Regardless of whether specified complexity constitutes, as I claim, a sufficient condition for detecting design, it certainly constitutes a necessary condition. Design inferences apply in cases where the evidence is circumstantial and thus where we lack direct evidence of a designing intelligence. In the case of Schön's graphs, under the relevant chance hypotheses characterizing the random fluctuations in question, the match between graphs had better be highly improbable (if the graphs were merely two-bar histograms with only a few possible gradations in height, then a match between the graphs would be reasonably probable and no one would ever have questioned Schön's integrity). Improbability, however, isn't enough. The random fluctuations of each graph taken individually are indeed highly improbable. But it's the match between the graphs that raises suspicions. That match renders one graph a specification for the other, so that in the presence of improbability a design inference is warranted.
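The point about two-bar histograms versus fine-grained noise can be made concrete with a toy calculation. This sketch is my own illustration, not the Bell Labs panel's analysis: it assumes, as a simplification, that each data point's random fluctuation lands independently and uniformly in one of a fixed number of distinguishable levels, so two independent traces agree everywhere with probability (levels)^-(points).

```python
# Toy model of the graph-match intuition: discretize each point's random
# fluctuation into n_levels equally likely values and ask how often two
# independent traces agree at every one of n_points positions.
# (Illustrative assumption only; real instrument noise is continuous.)

def match_probability(n_points: int, n_levels: int) -> float:
    """Probability that two independent discretized traces match exactly."""
    return 1 / n_levels ** n_points

# A two-bar histogram with five gradations in height: a match is unremarkable.
print(match_probability(2, 5))     # 1 chance in 25

# Two hundred points of fine-grained noise: an exact match cries out for explanation.
print(match_probability(200, 10))
```

The exact figures are stand-ins; the moral is only that match probability collapses exponentially with the number of independently fluctuating data points.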

 

By itself the design inference does not implicate any particular intelligence. A design inference would show that the data in Schön's papers were improperly manipulated. It could not, however, show that Schön was the actual culprit (though as first author on these papers he, like the captain on a proverbial sinking ship, would be in deep trouble nonetheless). To identify the actual intelligence would require a more thorough causal analysis (an analysis that in the Schön case is being conducted by Bell Labs' independent panel).

 

Although this example seems straightforward, there are subtleties that need to be addressed in any rational reconstruction of how we draw design inferences. I attempt to do that in my books The Design Inference (TDI) and No Free Lunch (NFL). Perhaps the most important subtlety is the one on which Wein continually founders, namely, the issue of tractability. Wein inadequately rehearses the tractability condition in his first response (without calling it that) and then merely dismisses it by saying that the notion of patterned complexity that it employs is "subjective" and "not well-defined."

 

Let's look at these charges. Consider first the charge of subjectivity. Design inferences involve intelligences drawing on background knowledge to implicate other intelligences. The idea that we can abolish subjectivity in drawing design inferences is therefore absurd. The question is not whether subjectivity can be eliminated but whether it can be adequately disciplined so that we can have confidence in our judgments. That's the point of the tractability condition. Wein works with a remarkable set of double standards. In place of my approach to detecting design, he would substitute a Bayesian approach, which is chock-full not merely of subjectivity but of a subjectivity that admits no discipline (for instance, the assignment of prior probabilities reflects a subject's prior beliefs even if those beliefs are wrong or unfounded).

 

As I point out in NFL (ch. 2), specifications and the tractability condition on which they depend are, to use John Searle's distinction, ontologically subjective but epistemically objective (by contrast, the prior probabilities that come up in Bayesian decision theory are usually both ontologically and epistemically subjective). What this means is that specifications have no existence independent of subjects. Nonetheless, relative to a subject or community of subjects, whether something is a specification and whether the tractability condition is fulfilled is objective.

 

Except in the simplest statistical cases, where a rejection region is set up in advance of an experiment, design inferences work as follows. A subject S comes across some event E. The causal story behind E is occluded. Thus, simply from presently available features of E, S must decide whether an intelligence was involved in E's causal history. S therefore tries to match E to some pattern, call it F. As a pattern, F corresponds to some event within which E falls (think of E as a die roll that lands six and F as the pattern characterized by "an even-numbered die roll"). Provided the event to which F corresponds has small probability, what additionally must be true of F if it is to implicate a designing intelligence in the production of E? Implicit in this question are controls on F, so that S cannot artificially concoct F and then illegitimately claim that the match between E and F is no mere coincidence and therefore requires an intelligence behind E.

 

Given an event E, a subject S's task in attempting to draw a design inference, therefore, is to propose some pattern F (usually one that strikes S as significant or salient) and then establish that it wasn't artificially contrived. There is a two-part process here, and Wein is unhappy about both parts. On the matter of proposing a pattern, Wein is unhappy that there is no well-defined procedure for getting from background knowledge to pattern. But the desire for such a procedure is misconceived. We're dealing with intelligences and not with natural laws, regularities, or algorithms. It takes creativity, for instance, for a detective to see a pattern that incriminates a clever villain, and that creativity cannot be captured by "well-defined methods."

 

Whether there are well-defined methods for generating patterns qua specifications from background information is of no consequence. The issue is not how we got the patterns, but rather once we have the patterns whether, in the presence of improbability, they implicate a designing intelligence. Philosophers of science distinguish between the context of discovery and the context of justification. The context of discovery, which resides in the world of creativity and design, has thus far resisted any adequate analytic account by philosophers of science. By contrast, the context of justification has proven much more amenable to analytic methods.

 

The main focus of the design inference is on the context of justification and in particular justifying that the patterns we use to detect design are suitable to the task (however they were arrived at). Where and how we got the patterns is a function of the creativity and background knowledge of the subject drawing the design inference. It would be an interesting psychological study to examine a subject's ability to find patterns that successfully implicate design. But the validity of the design inference does not stand or fall with such psychological studies. The issue is not where we got the patterns but whether we can establish their bona fides once we have them.

 

Wein is unhappy with this second aspect of the design inference as well. Here again he raises the charges of subjectivity and lack of well-definedness. From the vantage of the design inference, what's crucial for a pattern to count as a specification is that it be conditionally independent of the event in question and that it satisfy a tractability condition. Conditional independence guarantees that explicit knowledge of the event didn't induce the pattern. But it doesn't control for rummaging around our background knowledge until we happen to hit on a pattern that matches the event. The point of the tractability condition is to limit this rummaging around.

 

Consider the following case from cryptography. Imagine you are confronted with the following cryptographic text:

 

nfuijolt ju jt mjlf b xfbtfm

 

Is this a random sequence of letters interspersed with spaces or do the spaces separate encrypted words? Suppose someone comes along and tells you that this sequence really stands for the sentence

 

progress is an idea i esteem

 

The letters and spaces match up, but it's not at all clear how to get from one to the other via some cryptographic scheme. To be sure, it could be that the ciphertext was gotten from the plaintext via a one-time pad (i.e., by taking a randomly generated string of alphabetic characters and then adding them modularly to the plaintext). But without independent evidence of this one-time pad, there would be no reason to treat PROGRESS IS AN IDEA I ESTEEM as the translation of NFUIJOLT JU JT MJLF B XFBTFM.

 

But consider next the following plaintext translation of NFUIJOLT JU JT MJLF B XFBTFM, namely:

 

methinks it is like a weasel

 

In this case, there's a simple and straightforward way to get from the plaintext to the ciphertext, namely, by moving each letter of the alphabet up one letter (we're dealing here with a Caesar cipher).
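The cipher really is this simple. A minimal sketch (the function name is mine; lowercase letters are assumed) that carries the plaintext to the ciphertext and back:

```python
# Caesar cipher with shift 1: each letter moves forward one place in the
# alphabet, 'z' wraps around to 'a', and spaces pass through untouched.
# Assumes lowercase input, as in the example.

def caesar_shift(text: str, shift: int = 1) -> str:
    shifted = []
    for ch in text:
        if ch.isalpha():
            shifted.append(chr(ord('a') + (ord(ch) - ord('a') + shift) % 26))
        else:
            shifted.append(ch)
    return ''.join(shifted)

print(caesar_shift("methinks it is like a weasel"))
# -> nfuijolt ju jt mjlf b xfbtfm

# Decryption is the same operation with the shift reversed:
print(caesar_shift("nfuijolt ju jt mjlf b xfbtfm", shift=-1))
# -> methinks it is like a weasel
```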

 

Question: Why do we think METHINKS IT IS LIKE A WEASEL is the proper translation of NFUIJOLT JU JT MJLF B XFBTFM and not PROGRESS IS AN IDEA I ESTEEM? Answer: Because the pattern that matches the two up is so much simpler in the one case than in the other.

 

The point of the tractability condition is to grade the level of complexity that a subject assigns to patterns so that the simpler patterns incur less cost than the more complex patterns, with the cost having to be paid in specificational resources. Such a condition must be in place for otherwise one could draw a design inference for NFUIJOLT JU JT MJLF B XFBTFM by claiming PROGRESS IS AN IDEA I ESTEEM as its translation (which would clearly be absurd).
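The asymmetry in cost between the two candidate translations can be exhibited directly. In this sketch of mine (an illustration in a minimum-description-length spirit, not a formula from TDI or NFL), we derive, for each candidate plaintext, the per-letter shifts that would carry it to the ciphertext. A genuine pattern collapses to a short description; an artificial one remains as long as the message itself.

```python
# For each candidate plaintext, compute the per-letter shift (mod 26) needed
# to turn it into the ciphertext.  The Caesar translation needs one shift
# value; the contrived translation needs, in effect, a one-time pad.

def derive_shifts(plain: str, cipher: str) -> list:
    """Per-letter shifts carrying plain to cipher (spaces skipped)."""
    return [(ord(c) - ord(p)) % 26
            for p, c in zip(plain, cipher) if p != ' ']

cipher = "nfuijolt ju jt mjlf b xfbtfm"

print(set(derive_shifts("methinks it is like a weasel", cipher)))
# a single value: the whole pattern is "shift by 1"

print(derive_shifts("progress is an idea i esteem", cipher))
# 23 unrelated shifts: a key as long as the message itself
```

The first pattern costs one small integer's worth of specificational resources; the second costs as much as simply writing out the ciphertext, which is why it warrants no design inference.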

 

What are we to make of the complexity measure that characterizes the degree of complexity of a pattern? According to Wein it is subjective and not well-defined. I agree that it is subjective. Indeed, it had better be subjective if it is to model how subjects draw design inferences. For such measures of patterned complexity must characterize how subjects actually do grade the complexity of patterns in light of their cognitive powers and background knowledge. My characterization of such measures of patterned complexity is in the same spirit and at the same level of detail as Amos Tversky's similarity measures in his celebrated article "Features of Similarity" (Psych. Rev., 1977) and the enterprise of assigning truth conditions to counterfactual conditionals in terms of similarity measures on spaces of possibilities (cf. the work of David Lewis and Robert Stalnaker). Nor are these complexity measures any worse as cognitive maps than the subjective probabilities that come up in Bayesian decision theory.

 

Wein's other concern is that these measures of patterned complexity are not well-defined. Wein confuses something being well-defined with something being explicitly exhibited. The solutions of certain differential equations are well-defined (there are existence and uniqueness proofs that guarantee their solution). But the solutions themselves may be notoriously hard to calculate and exhibit explicitly. The measures of patterned complexity that I describe are, except in the simplest cases (e.g., Caputo), hard to evaluate explicitly. But that is not to say they are not well-defined (if Wein wants to take exception here, then he had better be prepared to take exception to the whole Bayesian approach to probability in scientific reasoning, for which probabilities are rarely explicitly given either). Patterns come with degrees of complexity. There are mathematical theories that deal with the complexity of many patterns. To say that subjects ascribe complexity to patterns is hardly controversial.

 

There is an issue that remains, however. Let's grant that subjects make individual assessments of complexity for patterns. If so, design inferences could be compelling to individual subjects without being compelling to other subjects. The issue here is one of intersubjective agreement and thus epistemic objectivity. My background knowledge and cognitive powers may privilege certain patterns and lead me to draw a design inference for them. But if my assessment of complexity for patterns is vastly different from that of other subjects, then how can I convince them of the force of a design inference?

 

There is an important distinction here between rational reconstruction and objective justification. I argue that my characterization of design inferences rationally reconstructs how we in fact do persuade ourselves individually of the reality of design across a variety of contexts. Design inferences work, and I submit that they work substantially as I describe them (if they didn't work and if I were not on to an important mode of inference, there would be no controversy surrounding my work). Often our assessments of probability are quick and dirty, and the probabilities aren't really all that small. Often our assessments of the complexity of patterns are not only imprecise but merely try to make sure that these patterns aren't too artificial.

 

Such a fast and loose approach to design inferences, however, is not going to be adequate for science. For design inferences to apply in science, the bar needs to be raised so that design inferences are sufficiently stringent to underwrite intersubjective agreement. This is where rational reconstruction must give way to objective justification. We draw design inferences in many contexts and my codification of them gives an accurate account of how we do it in practice (this I contend in opposition to the Bayesian approach -- in the Schön case, for instance, the decisive probabilistic consideration will be the probability that the graphs would coincide on the assumption that random fluctuations operated in both experiments; the decisive probabilistic consideration won't be any subjective assessment of probability that the graphs would coincide on the assumption that Schön was motivated to cheat).

 

Given that my codification of design inferences provides a sound rational reconstruction of the way we draw design inferences in practice, the question remains whether what we do in practice should be normative and can be objectively justified. Now my way of addressing this concern is with the universal probability bound. The universal probability bound factors in all specifications (specificational resources) that subjects embodied in the known physical universe might ever employ in assessing the complexity of patterns. In consequence, the need actually to evaluate the complexity of patterns disappears (just as on a package-deal vacation there is no need to keep track of what every item costs -- it's all included).
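For concreteness, the bound can be reconstructed as the product of generous upper bounds on the resources of the known physical universe. The constants below follow the derivation in NFL as I recall it; treat the sketch as a reader's reconstruction rather than a quotation.

```python
# Reconstruction of the universal probability bound: multiply generous
# upper bounds on the computational resources of the known universe.
# Constants are reproduced from memory of NFL's derivation.

elementary_particles = 10 ** 80   # particles in the observable universe
transitions_per_sec  = 10 ** 45   # state changes per second (Planck-time scale)
seconds_available    = 10 ** 25   # comfortably exceeds the universe's age

specificational_resources = (elementary_particles
                             * transitions_per_sec
                             * seconds_available)

print(specificational_resources == 10 ** 150)
# Any specified event of probability below 1 in 10^150 remains improbable
# even after all these resources are factored in.
```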

 

With all conceivable specifications factored into the universal probability bound, it remains improbable that any event will ever conform to specifications that are explicitly identified by such subjects and whose associated probability is less than the universal probability bound. The only way to counter the conclusion of design in this case, then, is to argue that the probability was in fact miscalculated and that it actually is much larger than it appears (perhaps because some unknown mechanism wasn't considered -- this is the possibility on which Wein pins all his hopes). The problem with most evolutionary biologists, however, is that they don't offer such an argument. They merely assert that some unknown evolutionary pathway renders the probability large.

 

The design inference has all the controls and safeguards one could want to guard against drawing design inferences incorrectly. All the counterexamples Wein considers, like Kepler claiming the craters on the moon were designed (an example Wein got from me), arise in cases where the relevant probability distributions are not well-understood. Where they are well-understood, there is no problem. In the Schön case, for instance, because the graphs derive from separate experiments on separate devices, it follows that probabilistic independence is at work, that the probabilities can be reliably determined, and therefore that no unknown material mechanisms will need to be invoked to explain the coincidence.

 

But are the relevant probability distributions well-understood in biology? That's going to depend on the biological system in question. The most important work on the probability of specified biological systems at the moment is, in my view, not being done on irreducibly complex molecular machines but on individual enzymes, whose probabilities are far more manageable.

 

But even if we haven't nailed down the probability of a biological system to everyone's satisfaction, it is unreasonable to give a mechanistic anti-teleological biology pride of place. The fact is that for systems like the bacterial flagellum, biologists don't have a clue how such a system originated (feel free to email me detailed testable proposals). Hyper-Darwinists like Richard Dawkins excoriate intelligent design theorists like Michael Behe for attributing the flagellum to design and not doing the hard biochemical research to figure out how such systems might have arisen naturalistically. But in fact what Behe is doing with his concept of irreducible complexity is closing off possible avenues by which such systems might have evolved naturalistically. My own tightening of Behe's concept of irreducible complexity closes further avenues. In fact the biological community should be grateful to Behe for pointing up deep unresolved problems. At the very least, Behe is showing that such systems are harder to evolve than was previously thought (though I argue in ch. 5 of NFL that he is in fact doing much more).

 

Now, if one is a committed naturalist, one is entirely within one's rights to continue to try to formulate a detailed testable naturalistic account of how a system like the bacterial flagellum might have evolved. But that hardly gives one the right to trash design and refuse to entertain it as a live hypothesis for the origination of the flagellum. If the flagellum is designed, how could we know it? Does the flagellum have to have inscribed on it in Roman characters "made by God"? Our best probabilistic estimates show it to be wildly improbable. Darwinists offer no counter-calculation but merely gesture at co-optation scenarios in which functions and structures co-evolve.

 

Wein thinks there's more to it than co-optation, but there isn't. Evolution takes pre-existing materials and reworks them. That's all co-optation asserts. The problem is that when systems evolve without intelligent guidance, co-optation must work by baby steps (multiple simultaneous coordinated reworkings of pieces targeted for different uses is thoroughly teleological). Such baby steps must be reasonably probable. Yet for systems like the flagellum, no detailed testable sequence of such baby steps is ever offered.

 

Evolutionary naturalists like Wein have no model for how the bacterial flagellum was produced. They resort to making up bogus stories for how such systems could have arisen (stories that are neither detailed nor testable) and then try to pass these stories off as science. They sputter and rage when these stories are criticized and design is proposed as an alternative. They are living in a fantasy world, and intelligent design is providing some much needed reality therapy. If design theorists have the burden of showing how the detection of design for biological systems like the flagellum can be fruitful for biology (and we do), evolutionary naturalists committed to a mechanistic understanding of life have the far greater burden of showing why anyone should think that material mechanisms are sufficient to account for all of biological complexity, especially when such systems have all the hallmarks of high-tech nano-engineering, exhibit irreducible and specified complexity, and show no sign of submitting to naturalistic explanation.

 

Richard Wein is much taken with inference to the best explanation, but he gives little evidence of understanding it. No doubt he would agree that it is a method of reasoning employed in the sciences in which scientists elect that hypothesis which would, if true, best explain the relevant evidence. But what he fails to appreciate is that those hypotheses that qualify as "best" must above all provide causally adequate explanations of the evidence or phenomena in question. Wein has no causally adequate explanation for systems like the flagellum. He conjectures that some unknown material mechanism is adequate to account for it, but that is not an explanation. It is a hope that a certain type of explanation will pan out, and a vain hope at that.

 

Consider his latest response: "Dembski is not required to establish a universal negative. He just needs to show that a design hypothesis is better, given the available evidence, than the hypothesis of purely natural evolution. But he rejects inferences to the best explanation, insisting on a purely eliminative mode of inference, and that puts him in the unenviable position of either establishing a "universal negative" or admitting there is a category of possibilities he has not eliminated. Since he cannot do the first and does not wish to do the second, he equivocates, first claiming that he has ruled out all Darwinian possibilities (his proscriptive generalization) and then, when it is shown he has not done so, complaining that the expectation was unreasonable. In short, he wants to have his lunch and eat it too!"

 

First off, a purely eliminative mode of inference can be entirely compatible with inference to the best explanation. Consider the following evidence E: "I've looked in every room and closet of my house and there wasn't an elephant in any of them." Now consider the hypotheses H1 and H2. H1: There's no elephant in my house. H2: There's an elephant in my house. Given these hypotheses, H1 is the best explanation of E, and I've established it by pure elimination.

 

Now consider the bacterial flagellum as evidence E, and let H1 and H2 be respectively the design and naturalistic hypotheses. Now factor in some auxiliary considerations. Aux1: H2 includes no detailed testable model to account for E. Aux2: We've ruled out all known mechanisms operating in known ways (yes we have in the case of the flagellum because there's no known Darwinian or other materialistic pathway by which it was attained). Aux3: The bacterial flagellum is a high-tech nano-engineered machine. Aux4: Many thousands of papers have been written about the flagellum; it has been intensely studied, and still no model has been put forward for its naturalistic construction. Aux5: Intelligent design is known to produce highly-integrated high-tech systems like the flagellum; undirected causes are not. Aux6: H2 shows no sign of being causally adequate to account for E (this is a corollary of Aux5).

 

Given all these considerations, H1 comes out a clear winner over H2 in any inference to the best explanation. This is the sort of argument Stephen Meyer makes to great effect. Wein is right that I'm going after something stronger in the form of a proscriptive generalization that establishes a universal negative. But if all one wants is inference to the best explanation, design is already the clear winner. The way Wein tries to counter this is by assimilating inference to the best explanation to a Bayesian scheme in which hypotheses must confer probabilities on evidence, thereby requiring that the design hypothesis confer some probability on, say, the flagellum (which in turn requires getting into the mind of the designer, attributing motives and purposes, and thus making for a self-defeating conception of design for biology -- design inferences do not require fathoming a designer's mind or purposes). As I argue in chapter 2 of NFL, there's no reason to grant this concession to Bayesianism. The Bayesian scheme has loads of problems on its own. Certainly Charles Peirce wasn't limiting inference to the best explanation (or abduction as he called it) to Bayesian decision theory when he formulated his thoughts on the matter over a hundred years ago.

 

I am indeed after more than inference to the best explanation. To really nail down specified complexity, there have to be good reasons for thinking we've exhausted the probability distributions attached to all the material mechanisms that might be operating in a given circumstance. The fact is that we do it in many cases (the case of J. Hendrik Schön may well prove to be a paradigm case down the road). I argue that we do this by exhausting the known material mechanisms and then on grounds of contingency, symmetry, geometry, and degrees of freedom rule out the rest. Wein never adequately addresses this argument, and it's here that his argument from ignorance objection crashes and burns.

 

I'm afraid this response has already become longer than I should wish, so I'll close with some bullet point replies to some of Wein's more particular points:


PROPER ENDORSEMENT

Wein continues to sputter about the lack, as he sees it, of proper endorsement for my work: "The overwhelming weight of scientific authority is against him." Darwinists seem to love the word "overwhelming," especially the phrase "overwhelming evidence," which happens to be singularly lacking for the power of natural selection to engender biological complexity. The inflated rhetoric is really quite remarkable. "Darwin's theory of evolution is as well-established as Einstein's theory of general relativity." I've seen that. Why don't we ever see physicists say, "Einstein's theory of general relativity is as well established as Darwin's theory"? By the way, I own the domain names www.overwhelmingevidence.com and www.underwhelmingevidence.com. But I digress.

 

Let Wein sputter. The fact is that I maintain a healthy correspondence with Nobel laureates, members of the National Academy of Sciences, and many other notable scientists about my work on intelligent design. In most instances the treatment is respectful and there is an acknowledgment that I'm on to something important (though, granted, few want to go as far with these ideas as I do). Here's a brief sampling of responses to my work:

 

FROM A NOBEL LAUREATE:

Dear Dr. Dembski,

I am delighted by your plans [for the Michael Polanyi Center]. I hope they succeed.... I enjoyed your MSS on the net [referring to some of my articles on intelligent design; I sent this individual a copy of TDI]

With every good wish,

Xxx

 

FROM A SENIOR MEMBER OF THE NATIONAL ACADEMY OF SCIENCES

[commenting on chapter 4 of NFL]

Dear Bill:

...

I see four alternatives for biology:

        (a) Intelligent design

        (b) Some natural biological process, as yet undiscovered, that yields the

organisms we have without relying solely on conventional natural selection

operating on random variation. 

        (c) Existing evolutionary algorithms do not accurately mimic the process

of natural selection.  That is, incompetent transfer of population biology

to computer code.

        (d) Existing evolutionary algorithms are OK but are not given enough time

(iterations) to do the job.

Of these, I sort of favor (b).  If (b) is true, then Darwin et al. have

found a mechanism that works in simple cases (which it certainly does!) but

misses more important mechanisms of evolutionary change and adaptation.

The search for the missing mechanisms can only be helped by people like you

asking tough questions.  Keep at it!

Cheers,

Xxx

 

FROM AN INTERVIEW WITH A WELL-KNOWN SCIENCE WRITER

http://www.christianitytoday.com/bc/2002/002/14.28.html

[Karl W. Giberson] "Are you impressed with William Dembski's attempts to give a philosophical and mathematical framework for the detection of design?"

[Paul Davies] "Yes, I think that Dembski has done a good job in providing a way of mathematizing design. That is really what we need because otherwise, if we are just talking about subjective impressions, we can argue till the cows come home. It has got to be possible, or it should be, to quantify the degree of "surprise" that one would bring to bear if something turned out to be the result of pure chance. I think that that is a very useful step. Of course I don't exactly endorse Dembski's interpretation or his application of those design arguments to biology. We all recognize that biological organisms have the appearance of design. Where I would part company from him is in the matter of irreducible complexity at the level of cells. That's another issue, but I think that he has made a useful contribution by trying to mathematize the design idea."

 

I've presented my ideas on intelligent design now at many universities around the world not only in popular forums but also in seminars at departments of physics (e.g., Univ. of Waterloo), statistics (e.g., Univ. of South Carolina), mathematics (e.g., University of Illinois at Chicago), and philosophy (e.g., Univ. of Texas, Austin). I've found my ideas about design inferences eminently defensible against quite hostile audiences and in contexts (like seminars) where these ideas were under intense scrutiny. Perhaps I will be debunked someday, but it won't be at the hands of Richard Wein.

 

One final point. Wein remains concerned that no notable statistician or information theorist has endorsed my work. The Templeton Foundation shared Wein's concern and had TDI vetted before it appeared in print by a number of scholars, including Paul Davies and David Bartholomew (an emeritus professor of statistical and mathematical science at the London School of Economics). Bartholomew gave it a positive review, which is one of the main reasons the Templeton Foundation warmed to my work. I still have the review somewhere (it was an in-house Templeton review, so it never appeared in print). The fact is that statisticians and information theorists have come out neither for nor against my work in print because I offer no new mathematical techniques or tools for them to use in their research. Rather, I'm taking their techniques and tools and framing them within a logical apparatus for detecting design. The people getting hot under the collar about my work are the ones who should be, namely, philosophers and biologists. Wein wants endorsements of my work from statisticians and information theorists. He should instead be looking for refutations of my work from them. The fact that he hasn't found any is evidence that whatever problems my work may have lie not in its mathematics or statistics.

 

ELIMINATIVE INFERENCE

Wein remains stuck on design inferences being eliminative. When you eliminate from an exhaustive class of hypotheses, what remains is the complement. Thus when you eliminate chance (broadly construed, as I construe it), what remains is design. Where Wein and I differ is over whether it is possible to get a handle on all relevant chance hypotheses that could characterize an event and thus have good reason to think we are eliminating from an exhaustive class of chance hypotheses. I say we can. It's happening in the Schön case (no search for an unknown mechanism there). What does Wein have to counter the design inference? He needs to invoke unknown material mechanisms. Now I argue that symmetry conditions and degrees of freedom can provide compelling reasons to think that we have an adequate handle on the probability distributions in question. To deny this, Wein offers one contrived example after another. For instance, he considers a combination lock (one of my examples) whose tumblers are biased in ways that allow natural processes to open it. But combination locks are constructed precisely to preclude this possibility. Yes, as a bare possibility, a lock might be poorly constructed and thus permit opening apart from a designing intelligence. And yes, as a bare possibility, bacterial flagella might evolve without intelligent guidance. But if we are to take such possibilities seriously, then positive evidence for the sufficiency of material mechanisms to produce such effects needs to be supplied. Darwinists do not provide such evidence in the case of irreducibly complex systems but merely confess their faith in mechanism.

 

EVOLUTION OF EVOLVABILITY

In Wein's fantasy world, evolution unguided by intelligence does it all. Is a straightforward evolutionary pathway unable to accomplish a certain biological feat? Let's invoke a co-evolving pathway, in which functions co-evolve with structures (no evidence or actual pathway needed). Is that not enough? Let's evolve evolvability. For those stuck in the bog of Darwin's theory, this piling of epicycles upon epicycles will seem perfectly reasonable. The disinterested outsider, however, will rightly be skeptical.

 

The evolution of evolvability makes perfect sense if we have a rational basis for evolution in the first place. The evolution of evolvability is akin to praying that one's prayers be answered. If we have a rational basis for prayer, then a prayer that one's prayers be answered might have some basis too (though even then it might be theologically suspect). So what is Wein's basis for evolution? In the case of systems like the flagellum it is an assertion without a justification. The evolution of evolvability actually makes perfectly good sense within a design-theoretic context, where organisms are designed to be flexible and to adapt to their environments. But without a basis for evolution, Wein has no basis for the evolution of evolvability.

 

"SPECIFIED COMPLEXITY"

Wein objects to counterintuitive instances of specified complexity that arise in some contexts. For instance, in the context of coin tossing, it is the highly compressible (rather than incompressible) bit strings that end up exhibiting specified complexity. Even though I addressed this point at length in my first response, Wein remains unhappy. According to him, applying specified complexity to such instances is an abuse of terminology, especially since scientists like Paul Davies and Leslie Orgel intended the concept to apply to aperiodic random sequences.
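The compressibility point can be made concrete with a short sketch (an illustration of my own for this response, not anything drawn from Wein's critique or from NFL). It uses zlib compressed length as a crude, practical stand-in for Kolmogorov complexity: a periodic sequence of coin tosses compresses far better than a pseudorandom one, and it is precisely such short-description sequences that count as specified in the coin-tossing context.

```python
import random
import zlib

def compressed_len(bits: str) -> int:
    """Length in bytes of the zlib-compressed bit string -- a rough
    upper bound on the string's Kolmogorov complexity."""
    return len(zlib.compress(bits.encode("ascii"), level=9))

periodic = "01" * 500  # 1000 tosses with a trivially short description

random.seed(0)
noisy = "".join(random.choice("01") for _ in range(1000))  # 1000 pseudorandom tosses

# The periodic string compresses far better than the pseudorandom one;
# in the coin-tossing context it is the compressible string that is specified.
assert compressed_len(periodic) < compressed_len(noisy)
```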

 

In response I would just note that when intuitive, commonsense notions get formalized (and I submit that specified complexity has a profound basis in common sense, though the actual term is not in most people's vocabulary), counterintuitive instances can come up. Counterintuitiveness, however, is not the same as contradiction. The faces on Mount Rushmore exhibit specified complexity. The rock formations of the Grand Canyon do not. Both are complex according to our intuitive understanding. The monolith in Stanley Kubrick's 2001: A Space Odyssey (a homogeneous rectangular solid) exhibits specified complexity. The sphericity of the stars does not. Both are simple according to our intuitive understanding. Yet natural processes spontaneously give rise to spheres but not to homogeneous rectangular solids.

 

TRIZ AND TECHNOLOGICAL EVOLUTION

Wein writes: "Dembski argues that, because engineers do not use Darwinian methods to solve "inventive" problems, biological evolution cannot do so. The argument is an absurd non sequitur. Biological evolution can make billions of trials, thanks to large populations and unimaginable periods of time. Human engineers do not have such vast resources available. Furthermore, the premise of Dembski's argument is false."

 

Do engineers fail to use Darwinian methods to solve problems because they lack billions of trials and unimaginable periods of time (actually, one can imagine Wein's unimaginable periods quite nicely and readily characterize them mathematically), or because the sorts of problems they are solving are intrinsically beyond the remit of Darwinian methods? I've argued the latter. There's no non sequitur on my part here. It is a real and open question whether all problems of biological complexity submit to Darwinian methods. I argue that the TRIZ distinction between routine and inventive problems carries over to biology. All Wein offers is the typical Darwinian gesture at "billions of trials ... and unimaginable periods of time." With computer power what it is now, we can run through many more than a billion trials easily and quickly (cf. the MESA program on the ISCID website). And what have evolutionary algorithms given us? They've tweaked existing designs. They've not invented new, highly integrated, multi-part functional systems. Wein has done nothing to refute the relevance of TRIZ and technological evolution to biological evolution.
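For reference, here is the kind of evolutionary algorithm at issue, reduced to a minimal sketch of my own: random variation plus selection on OneMax (maximizing the number of 1-bits in a string), a textbook example of what TRIZ would classify as a routine optimization problem. The function name and parameters are illustrative, not taken from MESA or any published code.

```python
import random

def one_plus_one_ea(n=64, max_iters=100_000, seed=1):
    """(1+1) evolutionary algorithm on OneMax: flip each bit of the
    parent with probability 1/n, keep the child if it is at least as
    fit (fitness = count of 1-bits). Returns the best fitness found."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(parent)
    for _ in range(max_iters):
        # Random variation: per-bit mutation at rate 1/n.
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        # Selection: retain the child only if it is no worse.
        if sum(child) >= fitness:
            parent, fitness = child, sum(child)
        if fitness == n:
            break
    return fitness

# On this routine problem, variation plus selection succeeds quickly;
# whether such methods scale to inventive problems is the point at issue.
assert one_plus_one_ea() == 64
```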

 

NITPICKING

In his original response, Wein betrays his lack of philosophical knowledge by defining explanatory power in terms of retrodiction when in fact it refers to consilience (the ability of a theory to encompass a wide range of phenomena). In my response I quoted Ernest Sosa and Jonathan Dancy's definition of explanatory power in their Blackwell Companion to Epistemology. Rather than simply admit that he didn't know what explanatory power means as a philosophical term of art, Wein attacks Sosa and Dancy's definition, calling it "trivial and question-begging" and accusing me of nitpicking. Sosa and Dancy are premier epistemologists in the Anglo-American world. They, if anybody, are in a position to define what explanatory power means. But since their definition helps my cause, Wein, knee-jerk fashion, needs to attack it. The absurdity here is palpable. This is not a matter of nitpicking on my part. Wein's reaction betrays an obstinacy and willfulness that ought to raise questions about his motivation for attacking my work.

 

+++++

 

I close with a word of advice for Richard Wein. Since he has expended close to 50,000 words on responding to me just on NFL (he's also written at length on TDI), I suggest he collect all his writings against me in book form. Let me also suggest a publisher -- MIT Press. I expect Rob Pennock will help smooth the way (though please, no more reprints of my work without my knowledge or permission). If MIT Press wants an anthology, Wein should offer to co-edit it with Wesley Elsberry and possibly Jeffrey Shallit. I would enjoy a title like William Dembski -- Scourge of Science or Intelligent Design Creationism's Great White Hope or perhaps even Neo-Creationism's Lysenko. But I suppose I'd be content with Pseudoscientist of the Century. I'd even be willing to write the afterword.