The Success of Mathematics in Advancing Intelligent Design: A Guide to Reading Jason Rosenhouse

Book review by William A. Dembski of Jason Rosenhouse, The Failures of Mathematical Anti-Evolutionism (New York: Cambridge University Press, 2022), 310 pages. 

1 Why a Guide?

Intelligent design can credit at least some of its success to mathematics. Math has helped to advance its program for scientifically understanding the role of intelligent causes in nature. Math has also helped its efforts to unseat neo-Darwinism as the reigning paradigm for biological evolution. Success here, however, can be interpreted in two ways. On the one hand, it can mean that design theorists have developed rigorous mathematical ideas that both advance intelligent design and solidly refute neo-Darwinism. On the other hand, it can mean that design theorists have developed a mathematical song and dance that convinces the naive and gullible, but that falls apart upon informed scrutiny.

Mathematician Jason Rosenhouse, in his new book The Failures of Mathematical Anti-Evolutionism (Cambridge University Press, May 2022), takes the more cynical view, urging that the intelligent design community’s use of mathematics is, without exception, a sham. Nothing of mathematical merit to see here—move on. Rosenhouse writes as the vanquisher of intelligent design’s mathematical pretensions. He gives the impression that his critique is so devastating that intelligent design can no longer be considered a viable research program or school of thought. The back-cover endorsements by Richard Dawkins and Steven Pinker encourage the belief that Rosenhouse has decisively finished off intelligent design.

The intelligent design figure to receive the brunt of Rosenhouse’s criticism is mathematician William A. Dembski. Rosenhouse devotes as much of his book’s index to Dembski as to the other intelligent design figures cited in it. As one reads Rosenhouse’s painstaking dissection of Dembski’s ideas, one marvels how Dembski could ever have achieved his position of prominence in the intelligent design community. Indeed, as I was reading this book, I kept saying to myself “What an idiot this Dembski character is.” And then I remembered, “Wait a second, I am Dembski …”

But seriously. Rosenhouse has been a critic of intelligent design since 2000, when he received his PhD for work in algebraic graph theory. I suspect many readers of his book will have no sense of the prior debate and discussions about the applicability of mathematics for assessing, both pro and con, the evolvability of biological systems by Darwinian means. Unsuspecting readers who jump into this book without that background are in danger of embracing his critique of “mathematical anti-evolutionism.” And that would be a mistake, because if there’s one thing that Rosenhouse does well in this book, it is to misrepresent intelligent design and its use of mathematics. In this review, I will justify that charge in detail. It is why I subtitled this review “A Guide to Reading Jason Rosenhouse.”

In this review, I’ll focus on Rosenhouse’s criticisms of my work and that of my colleagues in the intelligent design movement. Rosenhouse also criticizes young-earth creationists as well as thinkers from the past (such as Lecomte du Noüy) who used math to criticize evolutionary theory. Moreover, he criticizes attempts to use entropy and the Second Law of Thermodynamics to undermine evolution. But these criticisms by Rosenhouse are peripheral to his main task, so I’ll largely bypass them in this review. Rosenhouse has set his sights on intelligent design, and me in particular. That’s where he directs his heavy artillery, and so it will help to see why that artillery, as deployed by him, in fact leaves intelligent design unscathed. 

2 The Book That Launched a Thousand Barbs

The publication of Rosenhouse’s book comes at an opportune time for me in that I’m currently working on a second edition of The Design Inference (co-authored with my colleague Winston Ewert, and due out in 2023 as a 25th anniversary edition; the first edition appeared in 1998). Like Rosenhouse’s book, The Design Inference was published with Cambridge University Press. Unlike it, The Design Inference appeared in a statistical monograph series (Cambridge Studies in Probability, Induction, and Decision Theory), and thus constituted a full-scale technical treatise rather than a popular exposition, as with Rosenhouse’s book. At the risk of immodesty, I’ll venture that without The Design Inference, Rosenhouse’s book would never have been written.

So, what is the upshot of The Design Inference, and why is Rosenhouse so dead-set against not only it but also the subsequent mathematical ideas and research that it inspired? The Design Inference purports to provide a reliable statistical method for uncovering the effects of intelligent causes and teasing them apart from unintelligent causes (i.e., chance, necessity, and their combination, as exemplified in the Darwinian mechanism of natural selection acting on random variations). If this method could legitimately be applied to biological systems, it could potentially undercut the credibility of Darwinian processes to produce biological innovation. The mere possibility that The Design Inference could pose a threat to Darwinism is, however, too much for Rosenhouse and fellow Darwinists.

What is that method? Briefly, the design inference (the method rather than the book) identifies two features as essential for eliminating chance: improbability and specification. If something is not improbable, then it could readily happen by chance (think of tossing three heads in a row, which has a probability of ⅛ and is thus quite likely—no one would think this result beyond the reach of chance). Even so, highly improbable things happen. In fact, just about anything that happens is highly improbable. Toss a coin a thousand times, and you’ll witness an event of probability less than 1 in 10^300. Ordinarily, you’ll attribute it to chance.

But you won’t attribute that observed sequence to chance if it exhibits a salient pattern. It might be all heads, or it might correspond to the expansion of π, or it might spell out in Unicode (treating tails as 0 and heads as 1) the first lines of Shakespeare’s Hamlet. Such salient patterns are called specifications. Their defining characteristic is that they have short descriptions (more on this later). Specifications together with improbability eliminate chance, and in some cases actually sweep the field clear of chance hypotheses, in which case they warrant a design inference. That’s the design inferential method in a nutshell. I’ll expand on it later in this review in the discussion of specified complexity.
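To make the distinction concrete, here is a minimal sketch of my own (not anything from Rosenhouse’s book) that uses compressed length as a crude stand-in for the shortness of a description. Both sequences below are equally improbable; only the patterned one admits a short description and is thus a candidate specification.

```python
import random
import zlib

# Any particular sequence of 1,000 fair coin tosses has probability 2^-1000,
# i.e., less than 1 in 10^300, so we work with the log2 probability directly.
N = 1000
log2_probability = -N

def proxy_description_bits(bits):
    """Compressed length in bits, used here only as a rough stand-in for
    descriptive complexity (how short a description of the sequence can be)."""
    return 8 * len(zlib.compress(bytes(bits)))

random_sequence = [random.randint(0, 1) for _ in range(N)]  # a typical chance outcome
all_heads = [1] * N                                          # a salient, patterned outcome

print("log2 probability of either sequence:", log2_probability)
print("proxy description bits, random sequence:", proxy_description_bits(random_sequence))
print("proxy description bits, all heads:", proxy_description_bits(all_heads))
```

Both sequences carry 1,000 bits of probabilistic complexity, but only the all-heads sequence compresses to a description far shorter than the sequence itself.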

In any case, the Darwinist logic for dismissing The Design Inference and its design inferential method is instructive. Methods are what they are. Methods don’t care where they’re applied. If something is a bona fide method, it does not bias or prejudge the outcome of applying the method. When The Design Inference was first published, it received enthusiastic endorsement from a wide cross-section of scientists and scholars. That initial enthusiasm, however, abated among evolutionary naturalists once my views on intelligent design became clear. Take, as an example of early enthusiasm, the following endorsement of The Design Inference by Bill Wimsatt, an evolutionary naturalist and philosopher of biology at the University of Chicago (an endorsement removed by the publisher, for whatever reason, from the back cover of the paperback edition):

Dembski has written a sparklingly original book. Not since David Hume’s Dialogues Concerning Natural Religion has someone taken such a close look at the design argument, but it is done now in a much broader post-Darwinian context. Now we proceed with modern characterizations of probability and complexity, and the results bear fundamentally on notions of randomness and on strategies for dealing with the explanation of radically improbable events. We almost forget that design arguments are implicit in criminal arguments “beyond a reasonable doubt,” plagiarism, phylogenetic inference, cryptography, and a host of other modern contexts. Dembski’s analysis of randomness is the most sophisticated to be found in the literature, and his discussions are an important contribution to the theory of explanation, and a timely discussion of a neglected and unanticipatedly important topic.

Wimsatt admits that the method is widely applicable. In consequence, if this method could legitimately be applied to Darwinian evolutionary processes, that could pose a threat to Darwinism itself. Maybe the method would give results acceptable to Darwinists: “We’ve applied the design inferential method to Darwinian evolutionary processes, and invariably found that design could not be convincingly inferred.” Fair enough, that could be one outcome. But what if the method instead yielded: “We’ve applied the design inferential method to Darwinian evolutionary processes, and found that at least in some instances design could be convincingly inferred.” Even that possibility was a bridge too far for Darwinists. Note that to show design in biology, it’s not necessary to show that every aspect of biological systems is designed. Even one unequivocal case of design in biology would be enough. Darwinists maintain that all biological systems give no evidence of design. To refute this claim, logic only requires showing that some biological system gives evidence of design.

The bottom line is that the very method developed in The Design Inference needed to be invalidated. Critics got busy on that work of invalidation shortly after The Design Inference was published (e.g., Elliott Sober). And a new generation of critics continues these efforts to this day (e.g., Josh Swamidass). I’ve even seen a book review of The Design Inference, written well over a decade after its publication, disparaging it (see James Bradley’s 2010 review for BioLogos). If the very method described in the book is misconceived, then there’s no need to worry about its application to biology, or to anything else for that matter. Rosenhouse’s aspiration for The Failures of Mathematical Anti-Evolutionism is that it will definitively invalidate The Design Inference and subsequent work inspired by it.

3 The Rosenhouse Challenge

To show readers that he means business and that he is a bold, brave thinker, Rosenhouse lays down the gauntlet: “Anti-evolutionists play well in front of friendly audiences because in that environment the speakers never pay the price of being wrong. The response would be a lot chillier if they tried the same arguments in front of audiences with the relevant expertise. Try telling a roomful of mathematicians that you can refute evolutionary theory with a few back-of-the-envelope probability calculations, and see how far you get.” (Epilogue, pp. 270-271)

I’m happy to take up Rosenhouse’s gauntlet. In fact, I already have. I’ve presented my ideas and arguments to roomfuls of not just mathematicians but also biologists and the whole range of scientists on whose disciplines my work impinges. A case in point is a 2014 talk I gave on conservation of information at the University of Chicago, a talk sponsored by my old physics advisor Leo Kadanoff (the entire talk, including Q&A, is available on YouTube here). In such talks, I present quite a bit more detail than a mere back-of-the-envelope probability calculation, though full details, in a single talk (as opposed to a multi-week seminar), require referring listeners to my work in the peer-reviewed literature (none of which Rosenhouse cites in his book).

If I receive a chilly reception in giving such talks, it’s not for any lack of merit in my ideas or work. Rather, it’s the prejudicial contempt evident in Rosenhouse’s challenge above, which is widely shared among Darwinists, who are widespread in the academy. For instance, Rosenhouse’s comrade in arms, evolutionary biologist Jerry Coyne, who is at the University of Chicago, tried to harass Leo into canceling my 2014 talk, but Leo was not a guy to be intimidated—the talk proceeded as planned (Leo sent me copies of the barrage of emails he received from Coyne to persuade him to uninvite me). For the record, I’m happy to debate Rosenhouse, or any mathematicians, engineers, biologists, or whatever, who think they can refute my work. 

4 A Convinced Darwinist

For Rosenhouse, Darwin can do no wrong and Darwin’s critics can do no right. As a fellow mathematician, I would have liked to see from Rosenhouse a vigorous and insightful discussion of my ideas, especially where there’s room for improvement, as well as some honest admission of why neo-Darwinism falls short as a compelling theory of biological evolution and why mathematical criticisms of it could at least have some traction. Instead, Rosenhouse assumes no burden of proof, treating Darwin’s theory as a slam dunk and treating all mathematical criticisms of Darwin’s theory as laughable. Indeed, he has a fondness for the word “silly,” which he uses repeatedly, and according to him mathematicians who use math to advance intelligent design are as silly as they come.

In using the phrase “mathematical anti-evolutionism,” Rosenhouse mistitled his book. Given its aim and arguments, it should have been titled The Failures of Mathematical Anti-Darwinism. Although design theorists exist who reject the transformationism inherent in evolutionism (I happen to be one of them), intelligent design’s beef is not with evolution per se but with the supposed naturalistic mechanisms driving evolution. And when it comes to naturalistic mechanisms driving evolution, there’s only one game in town, namely, neo-Darwinism, which I’ll refer to simply as Darwinism. In any case, my colleague Michael Behe, who also comes in for criticism from Rosenhouse, is an evolutionist. Behe accepts common descent, the universal common ancestry of all living things on planet earth. And yet Behe is not a Darwinist — he sees Darwin’s mechanism of natural selection acting on random variations as having at best very limited power to explain biological innovation. 

Rosenhouse is a Darwinist, and a crude reflexive one at that. For instance, he will write: “Evolution only cares about brute survival. A successful animal is one that inserts many copies of its genes into the next generation, and one can do that while being not very bright at all.” (p. 14) By contrast, more nuanced Darwinists (like Robert Wright) will stress how Darwinian processes can enhance cooperation. Others (like Geoffrey Miller) will stress how sexual selection can put a premium on intelligence (and thus on “being bright”). But Rosenhouse’s Darwinism plays to the lowest common denominator. Throughout the book, he hammers on the primacy of natural selection and random variation, entirely omitting such factors as symbiosis, gene transfer, genetic drift, and the action of regulatory genes in development, to say nothing of self-organizational processes.

Rosenhouse’s Darwinism commits him to Darwinian gradualism: Every adaptation of organisms is the result of a gradual step-by-step evolutionary process with natural selection ensuring the avoidance of missteps along the way. Writing about the evolution of “complex biological adaptations,” he notes: “Either the adaptation can be broken down into small mutational steps or it cannot. Evolutionists say that all adaptations studied to date can be so broken down while anti-evolutionists deny this…” (p. 178) At the same time, Rosenhouse denies that adaptations ever require multiple coordinated mutational steps: “[E]volution will not move a population from point A to point B if multiple, simultaneous mutations are required. No one disagrees with this, but in practice there is no way of showing that multiple, simultaneous mutations are actually required.” (pp. 159–160) 

And why are multiple simultaneous mutations strictly verboten? Because they would render life’s evolution too improbable, making it effectively impossible for evolution to climb Mount Improbable (which is both a metaphor and the title of a book by Richard Dawkins). Simultaneous mutations throw a wrench in the Darwinian gearbox. If they played a significant role in evolution, Darwinian gradualism would become untenable. Accordingly, Rosenhouse maintains that such large scale mutational changes never happen and are undemonstrable even if they do happen. Rosenhouse presents this point of view not with a compelling argument, but as an apologist intent on neutralizing intelligent design’s threat to Darwinism. 

5 Evolutionary Discontinuity

The Darwinian community has been strikingly unsuccessful in showing how complex biological adaptations evolved, or even how they might have evolved, in terms of detailed step-by-step pathways between different structures performing different functions (pathways that must exist if Darwinian evolution holds). Rosenhouse admits the problem when he says that Darwinians lack “direct evidence” of evolution and must instead depend on “circumstantial evidence.” (pp. 47–48) He elaborates: “As compelling as the circumstantial evidence for evolution is, it would be better to have direct experimental confirmation. Sadly, that is impossible. We have only the one run of evolution on this planet to study, and most of the really cool stuff happened long ago.” (p. 208) How very convenient.

Design theorists see the lack of direct evidence for Darwinian processes creating all that “cool stuff”—in the ancient past no less—as a problem for Darwinism. Moreover, they are unimpressed with the circumstantial evidence that convinces Darwinists that Darwin got it right. Rosenhouse, for instance, smugly informs his readers that “eye evolution is no longer considered to be especially mysterious.” (p. 54) Yet the human eye, and the visual cortex with which it is integrated, are not even remotely well enough understood to underwrite a realistic model of how the human eye might have evolved. The details of eye evolution, if such details even exist, remain utterly mysterious.

Instead, Rosenhouse does the only thing that Darwinists can do when confronted with the eye: point out that eyes of many different complexities exist in nature, relate them according to some crude similarity metric (whether structurally or genetically), and then simply posit that gradual step-by-step evolutionary paths connecting them exist (perhaps by drawing arrows to connect similar eyes). Sure, Darwinists can produce endearing computer models of eye evolution (what two virtual objects can’t be made to evolve into each other on a computer?). And they can look for homologous genes and proteins among differing eyes (big surprise that similar structures may use similar proteins). But eyes have to be built in embryological development, and eyes evolving by Darwinian means need a step-by-step path to get from one to the other. No such details are ever forthcoming. Credulity is the sin of Darwinists.

Intelligent design’s scientific program can thus, at least in part, be viewed as an attempt to unmask Darwinist credulity. The task, accordingly, is to find complex biological systems that convincingly resist a gradual step-by-step evolution. Alternatively, it is to find systems that strongly implicate evolutionary discontinuity with respect to the Darwinian mechanism because their evolution can be seen to require multiple coordinated mutations that cannot be reduced to small mutational steps. Michael Behe’s irreducibly complex molecular machines, such as the bacterial flagellum, described in his 1996 book Darwin’s Black Box, provided a rich set of examples for such evolutionary discontinuity. By definition, a system is irreducibly complex if it has core components for which the removal of any of them causes it to lose its original function.

Interestingly, in the two and a half decades since Behe published that book, no convincing, or even plausible, detailed Darwinian pathways have been put forward to explain the evolution of these irreducibly complex systems. The silence of evolutionary biologists in laying out such pathways is complete. That is not to say they are silent on this topic generally: Darwinian biologists continue to proclaim that irreducibly complex biochemical systems like the bacterial flagellum have evolved and that intelligent design is wrong to regard them as designed. But such talk lacks scientific substance.

6 A Shift in Tone

Unfortunately for Darwinists, irreducible complexity raises real doubts about Darwinism in people’s minds. Something must be done. Rising to the challenge, Darwinists are doing what must be done to control the damage. Take the bacterial flagellum, the poster child of irreducibly complex biochemical machines. Whatever biologists may have thought of its ultimate origins, they tended to regard it with awe. Harvard’s Howard Berg, who discovered that flagellar filaments rotate to propel bacteria through their watery environments, would in public lectures refer to the flagellum as “the most efficient machine in the universe.” (And yes, I realize there are many different bacteria sporting many different variants of the flagellum, including the souped-up hyperdrive magnetotactic bacteria, which swim ten times faster than E. coli. E. coli’s flagellum, however, seems to be the one most studied.)

In 1998, writing for a special issue of Cell, the National Academy of Sciences president at the time, Bruce Alberts, remarked:

We have always underestimated cells… The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines… Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts.

A few years later, in 2003, Adam Watkins, introducing a special issue on nanomachines for BioEssays, wrote: 

The articles included in this issue demonstrate some striking parallels between artifactual and biological/molecular machines. In the first place, molecular machines, like man-made machines, perform highly specific functions. Second, the macromolecular machine complexes feature multiple parts that interact in distinct and precise ways, with defined inputs and outputs. Third, many of these machines have parts that can be used in other molecular machines (at least, with slight modification), comparable to the interchangeable parts of artificial machines. Finally, and not least, they have the cardinal attribute of machines: they all convert energy into some form of ‘work’.

Neither of these special issues offered detailed step-by-step Darwinian pathways for how these machine-like biological systems might have evolved, but they did talk up their design characteristics. I belabor these systems and the special treatment they received in these journals because none of the mystery surrounding their origin has in the intervening years been dispelled. Nonetheless, the admiration that they used to inspire has diminished. Consider the following quote about the flagellum from Beeby et al.’s 2020 article on propulsive nanomachines. Rosenhouse cites it approvingly, prefacing the quote by claiming that the flagellum is “not the handiwork of a master engineer, but is more like a cobbled-together mess of kludges” (pp. 151–152):

Many functions of the three propulsive nanomachines are precarious, over-engineered contraptions, such as the flagellar switch to filament assembly when the hook reaches a pre-determined length, requiring secretion of proteins that inhibit transcription of filament components. Other examples of absurd complexity include crude attachment of part of an ancestral ATPase for secretion gate maturation, and the assembly of flagellar filaments at their distal end. All cases are absurd, and yet it is challenging to (intelligently) imagine another solution given the tools (proteins) to hand. Indeed, absurd (or irrational) design appears a hallmark of the evolutionary process of co-option and exaptation that drove evolution of the three propulsive nanomachines, where successive steps into the adjacent possible function space cannot anticipate the subsequent adaptations and exaptations that would then become possible. 

The shift in tone from then to now is remarkable. What happened to the awe these systems used to inspire? Have investigators really learned so much in the intervening years to say, with any confidence, that these systems are indeed over-engineered? To say that something is over-engineered is to say that it could be simplified without loss of function (like a Rube Goldberg device). And what justifies that claim here? Have scientists invented simpler systems that in all potential environments perform as well as or better than the systems in question? Are they able to go into existing flagellar systems, for instance, and swap out the over-engineered parts with these more efficient (sub)systems? Have they in the intervening years gained any real insight into the step-by-step evolution of these systems? Or are they merely engaged in rhetoric to make flagellar motors seem less impressive and thus less plausibly the product of design? To pose these questions is to answer them.

Rosenhouse even offers a quasi-Humean anti-design argument. Humans are able to build things like automobiles, but not things like organisms. Accordingly, ascribing design to organisms is an “extravagant extrapolation” from “causes now in operation.” Rosenhouse’s punchline: “Based on our experience, or on comparisons of human engineering to the natural world, the obvious conclusion is that intelligence cannot at all do what they [i.e., ID proponents] claim it can do. Not even close. Their argument is no better than saying that since moles are seen to make molehills, mountains must be evidence for giant moles.” (p. 273) 

Seriously?! As Richard Dawkins has been wont to say, “This is a transparently feeble argument.” So, primitive humans living with stone-age technology, if they were suddenly transported to Dubai, would be unable to get up to speed and recognize design in the technologies on display there? Likewise, we, confronted with space aliens whose technologies can build organisms using ultra-advanced 3D printers, would be unable to recognize that they were building designed objects? I intend these statements as rhetorical questions whose answer is obvious. What underwrites our causal explanations is our exposure to and understanding of the types of causes now in operation, not the idiosyncrasies of their operation. Because we are designers, we can appreciate design even if we are unable to replicate the design ourselves. Lost arts are lost because we are unable to replicate the design, not because we are unable to recognize the design. Rosenhouse’s quasi-Humean anti-design argument is ridiculous.

7 Track 1 and Track 2

Rosenhouse weighs the mathematical merits of intelligent design as a sophist intent on destroying it, not as a serious thinker intent on gaining real insight. Nor does he show any inclination to understand evolution as it really is. It therefore helps to lay out his evolutionary presuppositions and the lengths to which he will go to defend them. That has now been accomplished. With the stage thus set, let’s next turn to the mathematical details of Rosenhouse’s critique.

In the interest of helping his case, Rosenhouse appoints himself as math cop, stipulating the rules by which design proponents may use mathematics against Darwinism and for intelligent design. But there’s a problem: What’s good for the goose isn’t good for the gander. Rosenhouse insists that intelligent design proponents obey his rules, but happily flouts them himself. The device he uses to play math cop is to separate mathematical usage into two tracks: track 1 and track 2. Track 1 refers to math used intuitively, with few if any details or formalism. Track 2 refers to math used with full rigor, filling in all the details and explicitly identifying the underlying formalism. According to Rosenhouse, math that’s taken seriously needs to operate on track 2.

Rosenhouse’s distinction is artificial because most mathematics happens somewhere between tracks 1 and 2, not totally informal but not obsessively rigorous. The fact is that mathematicians, especially when working in their areas of specialization, can assume a lot of background knowledge in common with their fellow mathematicians. I remember taking an algebraic topology course and thinking “where are the proofs?”—the justification of theorems in the course seemed so visual, so evocative, so abbreviated. But this level of rigor (or lack thereof) did not diminish the intellectual vitality of the course. Thus, mathematicians can seem to be merely on track 1 when in fact they are tacitly filling in the details needed to satisfy track 2.

No matter, to play math cop effectively, Rosenhouse needs a sharp distinction between track 1 and track 2. Specifically, to debunk intelligent design’s use of mathematics, he levels the following accusation: You say you’re on track 2, but really you’re on track 1, and so you haven’t made your case or established anything. Over and over again he makes this accusation against my work as well as that of my colleagues in the ID community. He makes the accusation even when I have operated on track 2 by his standards. And he makes the accusation at other times even when I’ve supplied enough details so that track 2 can be readily achieved. 

As it is, Rosenhouse doesn’t meet his own exacting standards. With probability, for instance, he insists that track 2 requires fully specifying the underlying probability space as well as the probability distribution over it, and also any pertinent geometry of the space. Frankly, that can be overkill when the spaces and probabilities are given empirically, or when all the interesting probabilistic action is in some corner of the space not requiring full details for the entire space. What’s more, estimates of probabilities are often easy and suffice to make an argument even when exact probabilities may be difficult to calculate. It may, for instance, be enough to see that a probability is less than 1 in 10^100, and thus suitably “small,” without doing any further work to show that it really is the much smaller 1 in 10^243. 

Even so, having set the standard, Rosenhouse should meet it. But he doesn’t. For instance, when he describes a standard statistical mechanical setup of gas molecules in a box, he remarks: “We are far more likely than not to find the molecules evenly distributed.” (p. 234) I would agree, but what exactly is the probability space and probability distribution here? And what level of probability does he mean by “far more likely”? Rosenhouse doesn’t say. His entire treatment of the topic here, even in context, resides on track 1 rather than on track 2 (assuming we’re forced to play his game of choosing tracks). Rosenhouse might reply that he intended the argument to reside on track 1. But given the weight he puts on statistical mechanics in refuting appeals by creationists to the Second Law of Thermodynamics, one could argue that he had no business confining himself to track 1. Note that I don’t fault Rosenhouse for the substance of what he’s saying here. I fault him for the double standard.

Or consider his claim that evolution faces no obstacle from the sparsity or improbability of viable biological systems so long as there are both gradualistic pathways that connect these systems and local areas around these systems (neighborhoods) that can readily be explored by evolution to find new steps along the pathways. Rosenhouse illustrates this claim with a two-dimensional diagram showing dots with circular neighborhoods around them, where overlapping neighborhoods suggest an evolutionary path. (p. 128) He even identifies one of the dots as “origin of life” and captions the diagram with “searching protein space.”

The point of Rosenhouse’s “searching protein space” example is that new proteins can evolve by Darwinian means irrespective of how improbable proteins may be when considered in isolation; instead, the important thing for Darwinian processes to evolve new proteins is that proteins be connected by gradual evolutionary paths. Accordingly, what’s needed is for protein space to contain highly interconnected gradualistic evolutionary pathways (they can be but the merest tendrils) that from the vantage of the large-scale structure of the protein space may be highly improbable. Darwinian processes can then still traverse such pathways.

I’m largely in agreement with the mathematical point Rosenhouse is making in this example. Even so, it does seem that the sparsity or improbability of proteins with respect to the large-scale structure of the protein space may contribute to a lack of interconnectivity among proteins, making it difficult for evolutionary pathways to access far-flung proteins. Moreover, in Rosenhouse’s model, it is necessary to get on an evolutionary path in the first place. So brute improbability may become a challenge for getting the evolutionary process started. It’s therefore ironic that he starts his model from the origin of life, for which no widely accepted naturalistic theory exists and for which the absence of causal details is even worse than for Darwinism.

Whatever the merits of Rosenhouse’s argument in proposing this model, it is nonetheless the case that if forced to choose between his two tracks, we’d need to assign what he’s doing here to track 1 rather than track 2. And unlike the statistical mechanics example, this example is central to Rosenhouse’s defense of Darwinism and to his attack on “mathematical anti-evolutionism.” So there’s really no excuse for him to develop this model without the full specificity of track 2.

8 Discrete Hypercube Evolution

Because of the centrality of the “searching protein space” model to Rosenhouse’s argument, it’s instructive to illustrate it with the full rigor of track 2. Let me therefore lay out such a model in detail. Consider a 100-dimensional discrete hypercube of 100-tuples of the form (a_1, a_2, …, a_100), where the a_i’s are all natural numbers between 0 and 100. Consider now the following path in the hypercube starting at (0, 0, …, 0) and ending at (100, 100, …, 100). New path elements are defined by adding 1 to one position at a time of the current path element, starting at the left and moving to the right, and then starting over at the left again. Thus the entire path takes the form

0:   (0, 0, …, 0)

1:   (1, 0, …, 0)

2:   (1, 1, …, 0)

100:   (1, 1, …, 1)

101:   (2, 1, …, 1)

102:   (2, 2, …, 1)

200:   (2, 2, …, 2)

300:   (3, 3, …, 3)

1,000:   (10, 10, …, 10)

2,000:   (20, 20, …, 20)

10,000:   (100, 100, …, 100)

The hypercube consists of 101^100, or about 2.7 x 10^200 elements, but the path itself has only 10,001 path elements and 10,000 implicit path edges connecting the elements. For simplicity, let’s put this discrete hypercube under a uniform probability distribution (we don’t have to, but it’s convenient for purposes of illustration; Rosenhouse mistakenly claims that intelligent design mathematics automatically defaults to uniform probability or equiprobability, which, as we will see, is not the case, though there are often good reasons to begin an analysis there). Given a uniform probability on the discrete hypercube, the path elements, all 10,001 of them considered together, have probability roughly 1 in 2.7 x 10^196 (10,001 divided by the total number of elements making up the hypercube). That’s very small, indeed smaller than the probability of winning 23 Powerball jackpots in a row (the probability of winning one Powerball jackpot is 1 in 292,201,338).
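For readers who want to verify these figures, a few lines of Python (my own check, using only the numbers stated above) reproduce them:

```python
from math import log10

dims, side, path_elements = 100, 101, 10_001  # dimensions, values per coordinate, path elements

log10_hypercube_size = dims * log10(side)                              # ~200.4, i.e., ~2.7 x 10^200 elements
log10_path_probability = log10(path_elements) - log10_hypercube_size  # ~ -196.4, i.e., ~1 in 2.7 x 10^196

# For comparison: winning 23 consecutive Powerball jackpots at 1 in 292,201,338 each.
log10_23_jackpots = -23 * log10(292_201_338)                           # ~ -194.7

print(f"hypercube size:        about 10^{log10_hypercube_size:.1f}")
print(f"path probability:      about 10^{log10_path_probability:.1f}")
print(f"23 straight jackpots:  about 10^{log10_23_jackpots:.1f}")
```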

Each path element in the hypercube has 200 immediate neighbors. Note that in one dimension there would be two neighbors, left and right; in two dimensions there would be four neighbors, left and right as well as up and down; in three dimensions there would be six neighbors, left and right, up and down, forward and backward; etc. Note also that for path elements on the boundary of the hypercube, we can simply extend the hypercube into the ambient discrete hyperspace, finding there neighbors that never actually end up getting used (alternatively, the boundaries can be treated as reflecting barriers, a device commonly used by probabilists).

Next, let’s define a fitness function F that is zero off the path and assigns to path elements of the form (a_1, a_2, …, a_100) the sum a_1 + a_2 + … + a_100. The starting point (0, 0, …, 0) then has minimal fitness and the end point (100, 100, …, 100) has maximal fitness. Moreover, each successive path element, as illustrated above, has higher fitness, by 1, than its immediate predecessor. If we now stay with a uniform probability, and thus sample uniformly from the adjoining 200 neighbors, then the probability p of getting to the next element on the path, as judged by the fitness function F, is 1 in 200 for any given sample query, which we can think of and describe as a mutational step.

The underlying probability distribution for moving between adjacent path elements is the geometric distribution. Traversing the entire path from starting point to end point can thus be represented by a sum of independent and identically distributed (with geometric distribution) random variables. Thus, on average, it takes 200 evolutionary sample queries, or mutational steps, to move from one path element to the next, and it therefore takes on average 2,000,000 (= 200 x 10,000) evolutionary sample queries, or mutational steps, to move from the starting to the end point. Probabilists call these numbers waiting times. Thus, the waiting time for getting from one path element to the next is, on average, 200; and for getting from the starting to the end point is, on average, 2,000,000. 
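For completeness, the track 2 bookkeeping behind those waiting times is just the standard formula for the mean of a geometric distribution, restated here with the numbers above:

```latex
E[T_{\text{step}}] = \frac{1}{p} = \frac{1}{1/200} = 200,
\qquad
E[T_{\text{path}}] = \sum_{i=1}^{10{,}000} E[T_i] = 10{,}000 \times 200 = 2{,}000{,}000.
```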

As it is, the geometric distribution is easy to work with and illustrates nicely Rosenhouse’s point about evolution not depending on brute improbability. But suppose I didn’t see that I was dealing with a geometric distribution or suppose the problem was much more difficult probabilistically, allowing no closed-form solution as here. In that case, I could have written a simulation to estimate the waiting times: just evolve across the path from all zeros to all one-hundreds over and over on a computer and see what it averages to. Would it be veering from Rosenhouse’s track 2 to do a simulation to estimate the probabilities and waiting times? Throughout his book, he insists on an exact and explicit identification of the probability space, its geometry, and the relevant probability distributions. But that’s unnecessary and excessive. 
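For what it’s worth, such a simulation needs only a few lines. The sketch below is my own, with the trial counts kept modest so it runs quickly; by the symmetry of the model, exactly one of the 200 neighbor moves advances the path at each step, so on average it should report roughly 200 queries per step and roughly 2,000,000 queries for the full path.

```python
import random

DIMS = 100      # dimensions of the hypercube (so 200 immediate neighbors per element)
STEPS = 10_000  # path length from (0, ..., 0) to (100, ..., 100)

def queries_for_one_step():
    """Sample uniformly among the 200 neighbors (pick a coordinate and a +/-1 move)
    until the proposal is the unique neighbor of higher fitness, i.e., the next
    path element. By symmetry we may treat 'coordinate 0, +1' as the correct move."""
    queries = 0
    while True:
        queries += 1
        coordinate = random.randrange(DIMS)
        direction = random.choice((+1, -1))
        if coordinate == 0 and direction == +1:
            return queries

def queries_for_full_path():
    return sum(queries_for_one_step() for _ in range(STEPS))

random.seed(0)
step_trials, path_trials = 10_000, 5
avg_step = sum(queries_for_one_step() for _ in range(step_trials)) / step_trials
avg_path = sum(queries_for_full_path() for _ in range(path_trials)) / path_trials
print("average queries per path step:", round(avg_step))   # expect ~200
print("average queries for full path:", round(avg_path))   # expect ~2,000,000
```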

In many practical situations, we have no way of assigning exact theoretical probabilities. Instead, we must estimate them by sampling real physical systems or by running computer simulations of them. Even in poker, where all the moving parts are clearly identified, the probabilities can get so out of hand that only simulations can give us a grasp of the underlying probabilities. And what’s true for poker is even more true for biology. The level of specificity I’ve given in this hypercube example is way more than Rosenhouse gives in his “searching protein space” example. The hypercube makes explicit what he leaves implicit, namely, it distinguishes mathematically the entire search space from the evolutionary paths through it from the neighborhoods around points on the path. It thus captures a necessary feature of Darwinian evolution. But it does so at the cost of vast oversimplification, rendering the connection between Darwinian and real-world evolution tenuous at best.

Why have I just gone through this exercise with the 100-dimensional discrete hypercube, giving it the full track 2 monty? Two reasons. One, it is to rebut Rosenhouse’s insistence on Darwinian gradualism in the face of intelligent design (more on this later in the review). Two, it is to show Darwinist critics like Rosenhouse that we in the intelligent design community know exactly what they are talking about when they stress that rather than brute improbability, the real issue for evolvability is the improbability of traversing evolutionary pathways subject to fitness. I’ve known this for decades, as have my intelligent design colleagues Mike Behe, Steve Meyer, and Doug Axe. Rosenhouse continually suggests that my colleagues and I are probabilistically naive, failing to appreciate the nuances and subtleties of Darwinism. We’re not. I’ll be returning to the hypercube example because it also illustrates why Rosenhouse’s Darwinism is so implacably committed to sequential mutations and must disallow simultaneous mutations at all costs. But first …

9 “Mathematical Proof”

A common rhetorical ploy is to overstate an opponent’s position so much that it becomes untenable and even ridiculous. Rosenhouse deploys this tactic repeatedly throughout his book. Design theorists, for instance, argue that there’s good evidence to think that the bacterial flagellum is designed, and they see mathematics as relevant to making such an evidential case. Yet with reference to the flagellum, Rosenhouse writes, “Anti-evolutionists make bold, sweeping claims that some complex system [here, the flagellum] could not have arisen through evolution. They tell the world they have conclusive mathematical proof of this.” (p. 152) I am among those who have made a mathematical argument for the design of the flagellum. And so, Rosenhouse levels that charge specifically against me: “Dembski claims his methods allow him to prove mathematically that evolution has been refuted …” (p. 136)

Rosenhouse, as a mathematician, must at some level realize that he’s prevaricating. It’s one thing to use mathematics in an argument. It’s quite another to say that one is offering a mathematical proof. The latter is much, much stronger than the former, and Rosenhouse knows the difference. I’ve never said that I’m offering a mathematical proof that systems like the flagellum are designed. Mathematical proofs leave no room for fallibility or error. Intelligent design arguments use mathematics, but like all empirical arguments they fall short of the deductive certainty of mathematical proof. I can prove mathematically that 6 is a composite number by pointing to 2 and 3 as factors. I can prove mathematically that 7 is a prime number by running through all the numbers greater than 1 and less than 7, showing that none of them divide it. But no mathematical proof that the flagellum is designed exists, and no design theorist that I know has ever suggested otherwise.

So, how did Rosenhouse arrive at the conclusion that I’m offering a mathematical proof of the flagellum’s design? I suspect the problem is Rosenhouse’s agenda, which is to discredit my work on intelligent design irrespective of its merit. Rosenhouse has no incentive to read my work carefully or to portray it accurately. For instance, Rosenhouse seizes on a probabilistic argument that I make for the flagellum’s design in my 2002 book No Free Lunch, characterizing it as a mathematical proof, and a failed one at that. But he has no possible justification for calling what I do there a mathematical proof. Note how I wrap up that argument—the very language used is as far from a mathematical proof as one can find (and I’ve proved my share of mathematical theorems, so I know):

Although it may seem as though I have cooked these numbers, in fact I have tried to be conservative with all my estimates. To be sure, there is plenty of biological work here to be done. The big challenge is to firm up these numbers and make sure they do not cheat in anybody’s favor. Getting solid, well-confirmed estimates for perturbation tolerance and perturbation identity factors [used to estimate probabilities gauging evolvability] will require careful scientific investigation. Such estimates, however, are not intractable. Perturbation tolerance factors can be assessed empirically by random substitution experiments where one, two, or a few substitutions are made. (NFL, pp. 301–302)

Obviously, I’ve used mathematics here to make an argument. But equally obviously, I’m not claiming to have provided a mathematical proof. In the section where this quote appears, I’m laying out various mathematical and probabilistic techniques that can be used to make an evidential case for the flagellum’s design. It’s not a mathematical proof but an evidential argument, and not even a full-fledged evidential argument so much as a template for such an argument. In other words, I’m laying out what such an argument would look like if one filled in the biological and probabilistic details. 

As such, the argument falls short of deductive certainty. Mathematical proof is all or nothing. Evidential support comes in degrees. The point of evidential arguments is to increase the degree of support for a claim, in this case for the claim that the flagellum is intelligently designed. A dispassionate reader would regard my conclusion here as measured and modest. Rosenhouse’s refutation, by contrast, is to set up a strawman, so overstating the argument that it can’t have any merit.

The reference to perturbation tolerance and perturbation identity factors here refers to the types of neighborhoods that are relevant to evolutionary pathways. Such neighborhoods and pathways were the subject of the two previous sections of this review. These perturbation factors are probabilistic tools for investigating the evolvability of systems like the flagellum. They presuppose some technical sophistication, but their point is to try honestly to come to terms with the probabilities that are actually involved with real biological systems. 

At this point, Rosenhouse might feign shock, suggesting that I give the impression of presenting a bulletproof argument for the design of the flagellum but am now backpedaling, admitting that the probabilistic evidence for the flagellum’s design is tentative. But here’s what’s actually happening. Mike Behe, in defining irreducible complexity, has identified a class of biological systems (those that are irreducibly complex) that resist Darwinian explanations and that implicate design. At the same time, there’s also this method for inferring design developed by Dembski. What happens if that method is applied to irreducibly complex systems? Can it infer design for such systems? That’s the question I’m trying to answer, and specifically for the flagellum.

Since the design inference, as a method, infers design by identifying what’s called specified complexity (more on this shortly), Rosenhouse claims that my argument begs the question. Thus, I’m supposed to be presupposing that irreducible complexity makes it impossible for a system to evolve by Darwinian means. And from there I’m supposed to conclude that it must be highly improbable that it could evolve by Darwinian means (if it’s impossible, then it’s improbable). But that’s not what I’m doing. Instead, I’m using irreducible complexity as a signpost of where to look for biological improbability. Specifically, I’m using particular features of an irreducibly complex system like the bacterial flagellum to estimate probabilities related to its evolvability. I conclude, in the case of the flagellum, that those probabilities seem low and warrant a design inference. 

Now I might be wrong (that’s why I say the numbers need to be firmed up and we need to make sure no one is cheating). To this day, I’m not totally happy with the actual numbers in the probability calculation for the bacterial flagellum as presented in my book No Free Lunch. But that’s no reason for Rosenhouse and his fellow Darwinists to celebrate. The fact is that they have no probability estimates at all for the evolution of these systems. Worse yet, because they are so convinced that these systems evolved by Darwinian means, they know in advance, simply from their armchairs, that the probabilities must be high. The point of that section in No Free Lunch was less to do a definitive calculation for the flagellum than to lay out the techniques for calculating probabilities in such cases (such as the perturbation probabilities).

In his book, Rosenhouse claims that I have “only once tried to apply [my] method to an actual biological system” (p. 137), that being to the flagellum in No Free Lunch. And, obviously, he thinks I failed in that regard. But as it is, I have applied the method elsewhere, and with more convincing numbers. See, for instance, my analysis of Doug Axe’s investigation into the evolvability of enzyme folds in my 2008 book The Design of Life (co-authored with Jonathan Wells; see ch. 7). My design inferential method yields much firmer conclusions there than for the flagellum for two reasons: (1) the numbers come from the biology as calculated by biologists (in this case, the biologist is Axe), and (2) the systems in question (small enzymatic proteins with 150 or so amino acids) are much easier to analyze than big molecular machines like the flagellum, which have tens of thousands of protein subunits. 

Darwinists have always hidden behind the complexities of biological systems. Instead of coming to terms with the complexities, they turn the tables and say: “Prove us wrong and show that these systems didn’t evolve by Darwinian means.” As always, they assume no burden of proof. Given the slipperiness of the Darwinian mechanism, in which all interesting evolution happens by co-option and coevolution, where structures and functions must both change in concert and crucial evolutionary intermediates never quite get explicitly identified, Darwinists have essentially insulated their theory from challenge. So the trick for design theorists looking to apply the design inferential method to actual biological systems is to find a Goldilocks zone in which a system is complex enough to yield design if the probabilities can be calculated and yet simple enough for the probabilities actually to be calculated. Doug Axe’s work is, in my view, the best in this respect. We’ll return to it since Axe also comes in for criticism from Rosenhouse.

10 Specified Complexity

The method for inferring design laid out in The Design Inference amounts to determining whether an event, object, or structure exhibits specified complexity or, equivalently, specified improbability. The term specified complexity does not actually appear in The Design Inference, where the focus is on specified improbability. Specified improbability identifies things that are improbable but also suitably patterned, or specified. Specified complexity and specified improbability are the same notion. 

To see the connection between the two terms, imagine tossing a fair coin. If you toss it thirty times, you’ll witness an event of probability 1 in 2^30, or roughly 1 in a billion. At the same time, if you record those coin tosses as bits (0 for tails, 1 for heads), that will require 30 bits. The improbability of 1 in 2^30 thus corresponds precisely to the number of bits required to identify the event. The greater the improbability, the greater the complexity. Specification then refers to the right sort of pattern that, in the presence of improbability, eliminates chance. 

Not all patterns eliminate chance in the presence of improbability. Take an arrow shot at a target. Let’s say the target has a bullseye. If the target is fixed and the arrow is shot at it, and if the bullseye is sufficiently small so that hitting it with the arrow is extremely improbable, then chance may rightly be eliminated as an explanation for the arrow hitting the bullseye. On the other hand, if the arrow is shot at a large wall, where the probability of hitting the wall is large, and the target is then painted around the arrow sticking in the wall so that the arrow is squarely in the bullseye, then no conclusion about whether the arrow was or was not shot by chance is possible. 

Specified improbability, or specified complexity, calls on a number of interrelated concepts. Besides a way of calculating or estimating probability and a criterion for determining whether a pattern is indeed a specification, the notion requires factoring in the number of relevant events that could occur, or what are called probabilistic resources. For example, multiple arrows allowing multiple shots will make it easier to hit the bullseye by chance. Moreover, the notion requires having a coherent rationale for determining what probability bounds may legitimately be counted as small enough to eliminate chance. Also, there’s the question of factoring in other specifications that may compete with the one originally identified, such as having two fixed targets on a wall and trying to determine whether chance could be ruled out if either of them were hit with an arrow. 
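The effect of probabilistic resources can be made precise with a standard formula (nothing here is specific to my method; it is just elementary probability). With n independent attempts, each with single-trial probability p of hitting the target:

```latex
P(\text{at least one hit in } n \text{ attempts}) = 1 - (1 - p)^{n} \le n\,p.
```

So a probability that is negligible for a single arrow may cease to be negligible once enough arrows are in play, which is why small-probability bounds must be calibrated against the available probabilistic resources.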

The basic theory for explaining how specified improbability/complexity is appropriately used to infer design was laid out in The Design Inference, and then refined (in some ways simplified, in some ways extended) over time. The notion was well vetted. It was the basis for my doctoral dissertation in the philosophy of science and the foundations of probability theory—this dissertation was turned into The Design Inference. I did this work in philosophy after I had already done a doctoral dissertation in mathematics focusing on probability and chaos theory (Leo Kadanoff and Patrick Billingsley were the advisors on that dissertation). 

The manuscript for The Design Inference went through a stringent review process with the academic editors at Cambridge University Press, headed by Brian Skyrms, a philosopher of probability at UC Irvine and one of the few philosophers to be in the National Academy of Sciences. When I was a postdoc at Notre Dame in 1996–97, the philosopher Phil Quinn revealed to me that he had been a reviewer, giving Cambridge an enthusiastic thumbs up. He also told me that he had especially liked The Design Inference’s treatment of complexity theory (chapter 4 in the book).

But there’s more. My colleagues Winston Ewert and Robert Marks and I have given specified complexity a rigorous formulation in terms of Kolmogorov complexity/algorithmic information theory:

Winston Ewert, William Dembski, and Robert J. Marks II (2014). “Algorithmic Specified Complexity.” In J. Bartlett, D. Hemser, J. Hall, eds., Engineering and the Ultimate: An Interdisciplinary Investigation of Order and Design in Nature and Craft (Broken Arrow, Okla.: Blyth Institute Press).

Winston Ewert, William Dembski, and Robert J. Marks II (2015). “Algorithmic Specified Complexity in the Game of Life.” IEEE Transactions on Systems, Man, and Cybernetics: Systems 45(4): 584–594.

True to form, critics of the concept refuse to acknowledge that specified complexity is a legitimate well-defined concept. Go to the Wikipedia entry on specified complexity, and you’ll find the notion dismissed as utterly bogus. Publications on specified complexity by colleagues and me, like those just listed, are ignored and left uncited. Rosenhouse is complicit in such efforts to discredit specified complexity. 

But consider: scientists must calculate, or at least estimate, probability all the time, and that’s true even of evolutionary biologists. For instance, John Maynard Smith, back in his 1958 book The Theory of Evolution, concludes that flatworms, annelids, and molluscs, representing three different phyla, must nonetheless descend from a common ancestor because their common cleavage pattern in early development “seems unlikely to have arisen independently more than once.” (Smith, pp. 265–266) “Unlikely” is, of course, a synonym for “improbable.”

Improbability by itself, however, is not enough. The events to which we assign probabilities need to be identified, and that means they must match identifiable patterns (in the Smith example, it’s the common cleavage pattern that he identified). Events exhibiting no identifiable pattern are events over which we can exercise no scientific insight and about which we can draw no scientific conclusion.

Even so, Rosenhouse seems especially hung up on my notion of specification, which he mistakenly defines as “independently describable” (p. 133) or “describable without any reference to the object itself” (p. 141). But nowhere does he give the actual definition of specification. To motivate our understanding of specification, I’ve used such language as “independently given” or “independently identifiable.” But these are intuitive ways of setting out the concept. Specification has a precise technical definition, of which Rosenhouse seems oblivious.

In The Design Inference, I characterized specification precisely in terms of a complexity measure that “estimates the difficulty of formulating patterns.” This measure then needs to work in tandem with a complexity bound that “fixes the level of complexity at which formulating such patterns is feasible.” (TDI, p. 144) That was in 1998. By 2005, the core idea remained unchanged, but I had come to prefer the language of descriptive complexity and minimum description length to characterize specification (see my 2005 article on Specification, published in Philosophia Christi, which Rosenhouse cites but without, again, giving the actual definition of the term specification). 

So, what’s the upshot of specification according to this definition? Essentially, specified complexity or specified improbability involves two notions of complexity, one probabilistic, the other linguistic or descriptive. Thus we can speak of probabilistic complexity and descriptive complexity. Events become probabilistically more complex as they become more improbable (this is consistent with the earlier point that longer, more improbable sequences of coin tosses require longer bit strings to record). Descriptive complexity, by contrast, applies to the patterns that identify events via a descriptive language, and it measures the length of the shortest description that identifies the event. The specification in specified complexity thus refers to patterns with short descriptions, and specified complexity refers to events that have high probabilistic complexity but whose identifying patterns have low descriptive complexity. 

To appreciate how probabilistic and descriptive complexity play off each other in specified complexity, consider the following example from poker. Take the hands corresponding to “royal flush” and “any hand.” These descriptions are roughly the same length and very short. Yet “royal flush” refers to 4 hands among the 2,598,960 possible poker hands and thus describes an event of probability 4/2,598,960 = 1/649,740. “Any hand,” by contrast, allows for any of the 2,598,960 poker hands, and thus describes an event of probability 1. Clearly, if we witnessed a royal flush, we’d be inclined, on the basis of its short description and the low-probability event to which it corresponds, to refuse to attribute it to chance. Now granted, with all the poker that’s played worldwide, the probability of 1/649,740 is not small enough to decisively rule out its chance occurrence (in the history of poker, royal flushes have appeared by chance). But certainly we’d be less inclined to ascribe a royal flush to chance than we would any hand at all.

The general principle illustrated in this example is that large probabilistic complexity (or low probability) and small descriptive complexity combine to yield specified complexity. Specifications are then those patterns that have small descriptive complexity. Note that it can be computationally intractable to calculate minimum description length exactly, but often we can produce an effective estimate for it by finding a short description, which, by definition, will then constitute an upper bound for the absolute minimum. As it is, actual measures of specified complexity take the form of a negative logarithm applied to the product of a descriptive complexity measure times a probability. Because a negative logarithm makes small things big and big things small, high specified complexity corresponds to small probability multiplied by small descriptive complexity. This is how I find it easiest to keep straight how to measure specified complexity. 
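
To make the poker arithmetic and the negative-log measure concrete, here is a minimal sketch in Python. It follows the verbal formulation just given (a negative logarithm applied to descriptive complexity times probability); using raw character counts as a stand-in for descriptive complexity is a simplification for illustration only, not the formal measure from the published work.

    from math import comb, log2

    total_hands = comb(52, 5)        # 2,598,960 five-card poker hands

    # "Royal flush" picks out 4 hands; "any hand" picks out all of them.
    p_royal = 4 / total_hands        # = 1/649,740
    p_any = 1.0

    # Stand-in for descriptive complexity: the length in characters of the
    # short description we happen to have (an upper bound on the true
    # minimum description length).
    d_royal = len("royal flush")     # 11
    d_any = len("any hand")          # 8

    def specified_complexity(descr_complexity, prob):
        # Negative log of (descriptive complexity times probability): small
        # probability together with a short description yields a high value.
        return -log2(descr_complexity * prob)

    print(round(specified_complexity(d_royal, p_royal), 1))  # about 15.9 bits
    print(round(specified_complexity(d_any, p_any), 1))      # -3.0 bits

The royal flush scores high because its probability is tiny while its description stays short; “any hand” scores low because, short description notwithstanding, the corresponding event is certain.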

Rosenhouse, however, gives no evidence of grasping specification or specified complexity in his book (pp. 137–146). For instance, he rejects the claim that the flagellum is specified on the grounds that it is not “describable without any reference to the object itself,” as though that were the definition of specification (see also p. 161). Ultimately, it’s not a question of independent describability, but of short or low-complexity describability. I happen to think that the description “bidirectional motor-driven propeller” is an independent way of describing the flagellum because humans invented bidirectional motor-driven propellers before they found them, in the form of flagella, on the backs of E. coli and other bacteria (if something has been independently identified, then it is independently identifiable). But what specifies the flagellum is that it has a short description, not that the description could or could not be identified independently of the flagellum. By contrast, a random assortment of the protein subunits that make up the flagellum would be much harder to describe. The random assortment would therefore require a much longer description, and would thus not be specified. 

The mathematical, linguistic, and computer science literature is replete with complexity measures that use description length, although the specific terminology for such measures varies with the field of inquiry. For instance, the abbreviation MDL, or minimum description length, has wide currency; it arises in information theory and merits its own Wikipedia entry. Likewise AIT, or algorithmic information theory, has wide currency; there the focus is on the compressibility of strings, so that highly compressible strings are the ones with short generating programs and hence short descriptions. In any case, specification and specified complexity are well-defined mathematical notions. Moreover, the case for specified complexity strongly implicating design when probabilistic complexity is high and descriptive complexity is low is solid. I’m happy to dispute these ideas with anyone. But in such a dispute, it will have to be these actual ideas that are under dispute. Rosenhouse, by contrast, is unengaged with these actual ideas, attributing to me a design inferential apparatus that I do not recognize, and then offering a refutation of it that is misleading and irrelevant. 

As a practical matter, it’s worth noting that most Darwinian thinkers, when confronted with the claim that various biological systems exhibit specified complexity, don’t challenge that the systems in question (like the flagellum) are specified (Dawkins in The Blind Watchmaker, for instance, never challenges specification). In fact, they are typically happy to grant that these systems are specified. The reason they give for not feeling the force of specified complexity in triggering a design inference is that, as far as they’re concerned, the probabilities aren’t small enough. And that’s because natural selection is supposed to wash away any nagging improbabilities. 

In a companion essay to his book for The Skeptical Inquirer, Rosenhouse offers the following coin tossing analogy to illustrate the power of Darwinian processes in overcoming apparent improbabilities:

[Creationists argue that] genes and proteins evolve through a process analogous to tossing a coin multiple times. This is untrue because there is nothing analogous to natural selection when you are tossing coins. Natural selection is a non-random process, and this fundamentally affects the probability of evolving a particular gene. To see why, suppose we toss 100 coins in the hopes of obtaining 100 heads. One approach is to throw all 100 coins at once, repeatedly, until all 100 happen to land heads at the same time. Of course, this is exceedingly unlikely to occur. An alternative approach is to flip all 100 coins, leave the ones that landed heads as they are, and then toss again only those that landed tails. We continue in this manner until all 100 coins show heads, which, under this procedure, will happen before too long. 

The latter approach to coin tossing, which retosses only the coins that landed tails, corresponds, for Rosenhouse, to Darwinian natural selection making probable for evolution what at first blush might seem improbable. Of course, the real issue here is to form reliable estimates of what the actual probabilities are even when natural selection is thrown into the mix. The work of Mike Behe and Doug Axe argues that for some biological systems (such as molecular machines and individual enzymes), natural selection does nothing to mitigate what, without it, are vast improbabilities. Some improbabilities remain extreme despite natural selection. 
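
Rosenhouse’s analogy is easy enough to simulate. The sketch below (my own toy code, with parameters chosen only for illustration) contrasts the two procedures he describes; the point in dispute, of course, is whether biological reality actually supplies the selectable intermediate steps that the second procedure presupposes.

    import random

    def rounds_keeping_heads(n=100, rng=random.Random(0)):
        # Toss n coins; keep the heads and retoss only the tails until all
        # show heads (Rosenhouse's stand-in for cumulative selection).
        tails, rounds = n, 0
        while tails > 0:
            rounds += 1
            tails = sum(1 for _ in range(tails) if rng.random() < 0.5)
        return rounds

    # Cumulative procedure: typically finishes in roughly log2(100) plus a
    # couple of rounds, i.e., on the order of 8 rounds.
    print(rounds_keeping_heads())

    # All-at-once procedure: the chance that 100 fair coins all land heads on
    # a single throw is (1/2)**100, so the expected number of throws is 2**100
    # (about 1.27e30), far too many to simulate.
    print(2 ** 100)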

One final note before leaving specification and specified complexity. Rosenhouse suggests that in defining specified complexity as I did, I took a pre-theoretic notion as developed by origin-of-life researcher Leslie Orgel, Paul Davies and others, and then “claim[ed] to have developed a mathematically rigorous form of the concept.” In other words, he suggests that I took a track 1 notion and claimed to turn it into a track 2 notion. Most of the time, Rosenhouse gives the impression that moving mathematical ideas from track 1 to track 2 is a good thing. But not in this case. Instead, Rosenhouse faults me for claiming that “this work constitutes a genuine contribution to science, and that [ID proponents] can use [this] work to prove that organisms are the result of intelligent design.” For Rosenhouse, “It is these claims that are problematic, to put it politely, for reasons we have already discussed.” (p. 161) 

The irony here is rich. Politeness aside, Rosenhouse’s critique of specified complexity is off the mark because he has mischaracterized its central concept, namely, specification. But what makes this passage particularly cringeworthy is that Leslie Orgel, Paul Davies, Francis Crick, and Richard Dawkins have all enthusiastically endorsed specified complexity, in one form or another, sometimes using the very term, at other times using the terms complexity and specification (or specificity) in the same breath. All of them have stressed the centrality of this concept for biology and, in particular, for understanding biological origins. 

Yet according to Rosenhouse, “These authors were all using ‘specified complexity’ in a track one sense. As a casual saying that living things are not just complex, but also embody independently-specifiable patterns, there is nothing wrong with the concept.” (p. 161) But in fact, there’s plenty wrong if this concept must forever remain merely at a pre-theoretic, or track 1, level. That’s because those who introduced the term “specified complexity” imply that the underlying concept can do a lot of heavy lifting in biology, getting at the heart of biological innovation and origins. So, if specified complexity stays forcibly confined to a pre-theoretic, or track 1, level, it becomes a stillborn concept—suggestive but ultimately fruitless. Yet given its apparent importance, the concept calls for a theoretic, or track 2, level of meaning and development. According to Rosenhouse, however, track 2 has no place for the concept. What a bizarre, unscientific attitude. 

Consider Davies from The Fifth Miracle (1999, p. 112): “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.” Or consider Richard Dawkins in The Blind Watchmaker (1986, pp. 15–16): “We were looking for a precise way to express what we mean when we refer to something as complicated. We were trying to put a finger on what it is that humans and moles and earthworms and airliners and watches have in common with each other, but not with blancmange, or Mont Blanc, or the moon. The answer we have arrived at is that complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone.” How can any scientist who takes such remarks seriously be content to leave specified complexity at a track 1 level?

Frankly, Rosenhouse should thank me for taking specified complexity from a track 1 concept and putting it on solid footing as a track 2 concept, clarifying what was vague and fuzzy in the pronouncements of Orgel and others about specified complexity, thereby empowering specified complexity to become a precise tool for scientific inquiry. But I suspect in waiting for such thanks, I would be waiting for the occurrence of a very small probability event. And who in their right mind does that? Well, Darwinists for one. But I’m not a Darwinist.

11 Evolution With and Without Multiple Simultaneous Changes

Darwinism is committed to evolution happening gradually, one step at a time, by single mutational changes. There’s a sound probabilistic rationale for this view, underwritten by, or one might say in reaction to, specified complexity. The alternative to single mutational changes is multiple simultaneous mutational changes. If simultaneous changes were required of evolution, then the steps along which Darwinian processes move would become improbable, so much so that Darwinian evolution itself would no longer be plausible. Darwin put it this way in the Origin of Species: “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case.”

Of course, Darwin didn’t just mean numerous, successive, slight modifications as such. What, after all, can’t be formed gradually in the absence of any constraints? Any system of parts can, in principle, be built up one part at a time, and thus gradually. For Darwin, the constraint, obviously, was natural selection. The slight modifications acceptable to Darwin were those where each modification confers a selective advantage. Darwin was, after all, in this passage from the Origin defending his theory. Like his modern-day disciples, he was convinced that all evolutionary change happens gradually, with natural selection approving every step of the process. In our day, those changes are seen as mutational, and the most gradual of these is the single mutational change. Rosenhouse buys into this view when he remarks, in a passage already quoted, that all adaptations in biology “can be broken down into small mutational steps.” (p. 178)

To overturn the view that all biological adaptations can occur through single mutational changes, one baby step after another, it is therefore enough to show that some adaptations resist gradual formation in this way and instead require multiple simultaneous changes. Earlier in this review, I pointed to irreducible and specified complexity as providing counterexamples to Darwinian evolution, urging that these pose a convincing barrier to natural selection acting on random variations. I want in this section to probe deeper into this barrier, and specifically how the work of some of my colleagues and me regarding the need for multiple simultaneous changes in evolution rebuts Rosenhouse’s case for Darwinian gradualism. 

To understand the challenge that multiple simultaneous mutational changes pose to Darwinian evolution, let’s revisit evolution on the discrete hypercube (recall section 8). Evolution in this case starts from the state of all zeros, i.e., (0, 0, …, 0), and ends at the state where every coordinate equals 100, i.e., (100, 100, …, 100). Evolution proceeds by going from one path element to the next, querying up to 200 neighbors, with the probability of finding the next path element being 1 in 200 for each query. As noted in section 8, this is a geometric progression, so the average number of queries per successful evolutionary step is 200, and since the total path has 10,000 steps, the average number of queries, or the waiting time, to go from (0, 0, …, 0) to (100, 100, …, 100) is 2,000,000. There are bacteria that replicate every 20 minutes. So with life on earth lasting close to 4 billion years, that puts an upper limit of about 100 trillion generations on any evolutionary lineage on planet earth. So 2,000,000 is doable.

But what if evolution on the hypercube required two simultaneous successful queries—of one neighbor and then the next—for the next successful evolutionary step? Because the queries must be successful simultaneously, they are probabilistically independent, and the probabilities multiply. So, the probability of two simultaneous successful queries is 1/200 x 1/200, or 1 in 40,000. Granted, each step now traverses two neighbors, so the total number of steps needed to get from (0, 0, …, 0) to (100, 100, …, 100) drops in half to 5,000. But the total waiting time to get from (0, 0, …, 0) to (100, 100, …, 100) is now 40,000 x 5,000, or 200,000,000. That’s a hundredfold increase over non-simultaneous queries, but still less than 100 trillion, the maximum number of generations in any evolutionary lineage on earth.

But let’s now ramp up. What if evolution on the hypercube required five simultaneous successful queries—of one neighbor, then another, and another, and another, and still one more—for the next successful evolutionary step? Because the queries must be successful simultaneously, they are probabilistically independent, and the probabilities multiply. So, the probability of five successful simultaneous queries is 1/200 x 1/200 x 1/200 x 1/200 x 1/200, or 1 in 320 billion. Each step now traverses five neighbors, so the total number of steps needed to get from (0, 0, …, 0) to (100, 100, …, 100) drops to 2,000. But the total waiting time to get from (0, 0, …, 0) to (100, 100, …, 100) is now 2,000 times 320 billion, or 640 trillion. That’s a 320 million-fold increase over non-simultaneous queries. Moreover, this number well exceeds 100 trillion, the maximum number of generations in an evolutionary lineage on earth.
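
For readers who want to check the arithmetic, here is the waiting-time calculation from the last three paragraphs in code form (the numbers are exactly those used above; nothing new is assumed):

    def waiting_time(k, neighbors=200, total_steps=10_000):
        # k simultaneous successful queries, each with probability 1/neighbors,
        # succeed together with probability (1/neighbors)**k, so the expected
        # number of query rounds per composite step is neighbors**k (geometric
        # distribution). Each composite step advances k coordinates at once,
        # so only total_steps/k such steps are needed.
        return (neighbors ** k) * (total_steps // k)

    for k in (1, 2, 5):
        print(k, waiting_time(k))
    # 1 -> 2,000,000
    # 2 -> 200,000,000
    # 5 -> 640,000,000,000,000 (640 trillion, which exceeds the roughly
    #      100 trillion generations available to any earthly lineage)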

As is evident from this example, if evolution requires simultaneous successful queries (aka simultaneous mutational changes), then Darwinian evolution is dead in the water. In that case, the waiting times (which correlate precisely with improbabilities) simply become too great for evolution to do anything interesting. Now granted, the hypercube is a toy example. But real-life examples of evolution exist that seem to require not successive, but simultaneous mutational changes. Mike Behe examined one such example in detail in his second book, The Edge of Evolution (2007). There he noted that two particular amino acid changes, which call for associated genetic changes, were implicated in the malarial parasite Plasmodium acquiring chloroquine resistance. These changes are rare, in line with the need for simultaneous mutational changes. In particular, resistance to chloroquine arises in these malarial parasites at a rate of roughly 1 in 10^20. Those numbers are disconcerting for a smooth-sailing Darwinian evolutionary process. 

Even so, Rosenhouse, citing Kenneth Miller, remains unconvinced (see pp. 157–158). But as is Rosenhouse’s habit in The Failures of Mathematical Anti-Evolutionism, he always gives critics of intelligent design the last word. Miller charges Behe with artificially ruling out cumulative selection and thus with failing to nail down the case for simultaneous mutations. But Darwinists, in defending cumulative selection, require critics to identify and rule out every possible evolutionary pathway, an effectively infinite and therefore impossible task. Behe’s response to Miller, which Rosenhouse leaves unmentioned, is instructive. I quote it at length:

In general Darwinists are not used to constraining their speculations with quantitative data. The fundamental message of The Edge of Evolution, however, is that such data are now available. Instead of imagining what the power of random mutation and selection might do, we can look at examples of what it has done. And when we do look at the best, clearest examples, the results are, to say the least, quite modest. Time and again we see that random mutations are incoherent and much more likely to degrade a genome than to add to it — and these are the positively-selected, “beneficial” random mutations.

Miller asserts that I have ruled out cumulative selection and required Plasmodium falciparum to achieve a predetermined result. I’m flattered that he thinks I have such powers. However, the malaria parasite does not take orders from me or anyone else. I had no ability to rule out or require anything. The parasite was free in the wild to come up with any solution that might help it, by any mutational pathway that was available. I simply reported the results of what the parasite achieved. In 10^20 chances, it would be expected to have undergone huge numbers of all types of mutations — substitutions, deletions, insertions, gene duplications, and more. And in that astronomical number of opportunities, at best a handful of mutations were useful to it.

Doug Axe’s research on enzymes evolving new protein folds elicits the same pattern of criticism from Rosenhouse. Axe highlights the need for multiple coordinated mutations in the evolution of TEM-1 beta-lactamase, computing some jaw-droppingly small probabilities (on the order of 1 in 10^77). Rosenhouse cites a critic (p. 187), in this case plant biologist Arthur Hunt, who claims Axe got it all wrong by focusing on the wrong strain of the enzyme (a weakened form rather than the wild type). But then Rosenhouse forgoes citing Axe’s response to Hunt. Here’s a relevant portion of Axe’s response: 

In the work described in the 2004 JMB paper [to which Hunt and Rosenhouse were responding], I chose to apply the lowest reasonable standard of function, knowing this would produce the highest reasonable value for P, which in turn provides the most optimistic assessment of the feasibility of evolving new protein folds. Had I used the wild-type level of function as the standard, the result would have been a much lower P value, which would present an even greater challenge for Darwinism. In other words, … the method I used was deliberately generous toward Darwinism.

Now it may seem that I’m just doing what Rosenhouse does, namely, giving my guys the final word. But if I am, it’s not that I’m trying prematurely to end the discussion—it’s just that I’m unaware of any further replies by Miller and Hunt. On the other hand, Rosenhouse, in writing his “state of the art” book on mathematical anti-evolutionism should, presumably, be up on the latest about where the debate stands. Miller’s review of Behe in Nature was back in 2007. And Hunt’s response to Axe occurred on a blog (pandasthumb.org), also back in 2007. Behe responded to Miller right away, in 2007, and Axe’s response here appeared in Bio-Complexity in 2011. So if Rosenhouse were fair-minded, he could at least have noted that Behe and Axe had responded to the criticisms he cited. The fact that Rosenhouse didn’t suggests that he has a story to tell and that he will tell it regardless of the facts or evidence.

In closing this section, I note that design proponents are not the only ones questioning and rejecting the Darwinian view that selection acting on single mutational changes can drive the evolutionary process. James Shapiro, a biologist at the University of Chicago, represents a group called The Third Way: Evolution in the Era of Genomics and Epigenomics. A decade ago, Shapiro wrote what essentially amounted to a manifesto for The Third Way, namely, Evolution: A View from the 21st Century (2011). There he argued that organisms do their own “natural genetic engineering,” which is teleological and thoroughly non-Darwinian. Granted, Shapiro is not a fan of intelligent design. But in personal conversation I’ve found him more anti-Darwinian, if that’s possible, than my intelligent design colleagues. Specifically, I remarked to him that I thought the Darwinian mechanism offered at least some useful insights. Shapiro responded by saying that Darwin’s effect on biology was wholly negative. This exchange happened in his office during my 2014 visit to the University of Chicago, which was arranged by Leo Kadanoff and described earlier. 

12 Conservation of Information—The Idea

Rosenhouse devotes a section of his book (sec. 6.10) to conservation of information, and prefaces it with a section on artificial life (sec. 6.9). These sections betray such ignorance and confusion that it’s best to clean the slate. I’ll therefore highlight some of the key problems with Rosenhouse’s exposition, but focus mainly on providing a brief history and summary of conservation of information, along with references to the literature, so that readers can determine for themselves who’s blowing smoke and who’s got the beef.

Rosenhouse’s incomprehension of conservation of information becomes evident in his run-up to it with artificial life. Anyone who has understood conservation of information recognizes that artificial life is a fool’s errand. Yet Rosenhouse’s support of artificial life is unqualified. The term artificial life has been around since the late 1980s, when Christopher Langton, working out of the Santa Fe Institute, promoted it and edited a conference proceedings volume on the topic. I was working in chaos theory at the time. I followed the Santa Fe Institute’s research in that area, and thus as a side benefit (if it may be called that) witnessed first-hand the initial wave of enthusiasm over artificial life. 

Artificial life refers to computer simulations that produce life-like virtual entities, often via a form of digital evolution that mimics selection, variation, and heredity. The field has had its ups and downs over the years: it initially generated a lot of enthusiasm, then lost it as people started to ask what it had to do with actual biology, after which those nagging concerns were forgotten, a new generation of researchers got excited about it, and the cycle repeated. Rosenhouse, it seems, represents the latest wave of enthusiasm. As he writes: “[Artificial life experiments] are not so much simulations of evolution as they are instances of it. In observing such an experiment you are watching actual evolution take place, albeit in an environment in which the researchers control all the variables.” (p. 209) 

Conservation of information, as developed by my colleagues and me, arose in reaction to such artificial life simulations. We found, as we analyzed them (see here for several analyses that we did of specific artificial life programs such as Avida, which Rosenhouse lauds), that the information researchers claimed to get out of these programs was never invented from scratch and never amounted to any genuine increase in information, but rather always reflected information that the researchers themselves had put in, often without realizing it. The information was therefore smuggled in rather than created by the algorithm. But if smuggling information is a feature rather than a bug of these simulations (which it is), that undercuts using them to support biological evolution. Any biological evolution worth its salt is supposed to create novel biological information, and not simply redistribute it from existing sources. 

For my colleagues and me at the Evolutionary Informatics Lab (EvoInfo.org), it therefore turned into a game to find where the information supposedly gotten for free in these algorithms had in fact been surreptitiously slipped in (as in a shell game: find the pea). The case of Dave Thomas, a physicist who wrote a program to generate Steiner trees (a type of graph for optimally connecting points in certain ways), is instructive. Challenging our claim that programmers were always putting as much information into these algorithms as they were getting out, he wrote: “If you contend that this algorithm works only by sneaking in the answer into the fitness test, please identify the precise code snippet where this frontloading is being performed.”

We found it. The code snippet, complete with the incriminating comment “over-ride!!!”, reads as follows:

    x = (double)rand() / (double)RAND_MAX;
    num = (int)((double)(m_varbnodes)*x);
    num = m_varbnodes; // over-ride!!!

As we explained in an article about Thomas’s algorithm:

The claim that no design was involved in the production of this algorithm is very hard to maintain given this section of code. The code picks a random count for the number of interchanges; however, immediately afterwards it throws away the randomly calculated value and replaces it with the maximum possible, in this case, 4. The code is marked with the comment “override!!!,” indicating that this was the intent of Thomas. It is the equivalent of saying “go east” and a moment later changing your mind and saying “go west.” The most likely occurrence is that Thomas was unhappy with the initial performance of his algorithm and thus had to tweak it. 

We saw this pattern, in which artificial life programs snuck in information, repeated over and over again. I had first seen it reading The Blind Watchmaker. There Richard Dawkins touted his famous “Weasel algorithm” (which Rosenhouse embraces without reservation as capturing the essence of natural selection; see pp. 192–194). Taking from Shakespeare’s Hamlet the target phrase METHINKS IT IS LIKE A WEASEL, Dawkins found that if he tried to “evolve” it by randomly varying letters while needing them all to spell the target phrase at once (compare simultaneous mutations or tossing all coins at once), the improbability would be enormous and it would take practically forever. But if instead he could vary letters a few at a time and if intermediate phrases sharing more letters with the target phrase were in turn subject to further selection and variation, then the probability of generating the target phrase in a manageable number of steps would be quite high. Thus Dawkins was able on average to generate the target phrase in under 50 steps, which is far less than the 10^40 steps needed on average if the algorithm had to climb Mount Improbable by jumping it in one fell swoop. 

Dawkins, Rosenhouse, and other fans of artificial life regard Dawkins’ WEASEL as a wonderful illustration of Darwinian evolution. But if it illustrates Darwinian evolution, it illustrates that Darwinian evolution is chock-full of prior intelligently inputted information, and so in fact illustrates intelligent design. This should not be controversial. To the degree that it is controversial, to that degree Dawkins’ WEASEL illustrates the delusional power of Darwinism. To see through this example, ask yourself where the fitness function that evolves intermediate phrases toward the target phrase came from. The fitness function in question is one that assigns highest fitness to METHINKS IT IS LIKE A WEASEL and varying fitness to intermediate phrases depending on how many letters they have in common with the target phrase. Clearly, the fitness function was constructed on the basis of the target phrase. All the information about the target phrase was therefore built into—or as computer scientists would say, hard-coded into—the fitness function. And what is hard-coding but intelligent design?
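
To see just how the target gets hard-coded, here is a minimal Weasel-style sketch (my own reconstruction in the spirit of Dawkins’ description, not his actual program; the population size and mutation rate are arbitrary choices made for illustration):

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # 26 letters plus the space
    rng = random.Random(0)

    def fitness(phrase):
        # The target phrase is hard-coded right here: fitness is simply the
        # number of positions at which the phrase matches the target.
        return sum(a == b for a, b in zip(phrase, TARGET))

    def mutate(phrase, rate=0.04):
        return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                       for c in phrase)

    current = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while current != TARGET:
        generations += 1
        # The parent spawns 100 mutant offspring; the offspring scoring highest
        # on the target-derived fitness function becomes the next parent.
        current = max((mutate(current) for _ in range(100)), key=fitness)

    # Converges in on the order of a hundred generations, as against the
    # roughly 27**28 (about 10**40) attempts expected for blind random typing.
    print(generations)

The decisive line is the fitness function: remove TARGET from it and the procedure has nothing to climb toward.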

But there’s more. The fitness function in Dawkins’ example is gradually sloping and unimodal, thereby gradually evolving intermediate phrases into the target. But for any letter sequence of the same length as the target phrase, there’s a fitness function exactly parallel to it that will evolve intermediate phrases to this new letter sequence. Moreover, there are many more fitness functions besides these, including multimodal ones where evolution may get stuck on a local maximum, and some that are less smooth but that still get to the target phrase with a reasonably large probability. The point to appreciate here is that rigging the fitness functions to get to a target sequence is even more complicated than simply going at the target sequence directly. It’s this insight that’s key to conservation of information.

I began using the term conservation of information in the late 1990s. Yet the term itself is not unique to me and my colleagues. Nobel laureate biologist Peter Medawar introduced it in the 1980s. In the mid-1990s, computer scientists were using the term, along with similar language. We may not all have meant exactly the same thing, but we were all in the same ballpark. From 1997 to 2007, I preferred the term displacement to conservation of information. Displacement gets at the problem of explaining one item of information in terms of another, but without doing anything to elucidate the origin of the information in question. For instance, if I explain a Dürer woodcut by reference to an inked woodblock, I haven’t explained the information in the woodcut but merely displaced it to the woodblock. 

Darwinists are in the business of displacing information. Yet when they do, they typically act all innocent and pretend that they have fully accounted for all the information in question. Moreover, they gaslight anyone who suggests that biological evolution faces an information problem. Information follows precise accounting principles, so it cannot magically materialize in the way that Darwinists desire. What my colleagues and I at the Evolutionary Informatics Lab found is that, apart from intelligent causation, attempts to explain information do nothing to alleviate, and may actually intensify, the problem of explaining the information’s origin. It’s like filling one hole by digging another, but where the newly dug hole is at least as deep and wide as the first one (often more so). The only exception is one pointed out by Douglas Robertson, writing for the Santa Fe Institute journal Complexity back in 1999: the creation of new information is an act of free will by intelligence. That’s consistent with intelligent design. But that’s a no-go for Darwinists.

13 Conservation of Information—The Theorems

Until about 2007, conservation of information functioned more like a forensic tool for discovering and analyzing surreptitious insertions of information: So and so says they got information for nothing. Let’s see what they actually did. Oh yeah, here’s where they snuck in the information. Around 2007, however, a fundamental shift occurred in my work on conservation of information. Bob Marks and I began to collaborate in earnest, and then two very bright students of his also came on board, Winston Ewert and George Montañez. Initially we were analyzing some of the artificial life simulations that Rosenhouse mentions in his book, as well as some other simulations (such as Thomas Schneider’s ev). As noted, we found that the information emerging from these systems was always more than adequately accounted for in terms of the information initially inputted. 

Yet around 2007, we started proving theorems that precisely tracked the information in these systems, laying out their information costs, in exact quantitative terms, and showing that the information problem always became quantitatively no better, and often worse, the further one backtracked causally to explain it. Conservation of information therefore doesn’t so much say that information is conserved as that at best it could be conserved and that the amount of information to be accounted for, when causally backtracked, may actually increase. This is in stark contrast to Darwinism, which attempts to explain complexity from simplicity rather than from equal or greater complexity. Essentially, then, conservation of information theorems argue for an information regress. This regress could then be interpreted in one of two ways: (1) the information was always there, front-loaded from the beginning; or (2) the information was put in, exogenously, by an intelligence. 

Rosenhouse feels the force of the first option. True, he dismisses conservation of information theorems as in the end “merely asking why the universe is as it is.” (p. 217) But when discussing artificial life, he admits, in line with the conservation of information theorems, that crucial information is not just in the algorithm but also in the environment. (p. 214) Yet if the crucial information for biological evolution (as opposed to artificial life evolution) is built into the environment, where exactly is it and how exactly is it structured? It does no good to say, as Rosenhouse does, that “natural selection serves as a conduit for transmitting environmental information into the genomes of organisms.” (p. 215) That’s simply an article of faith. Templeton Prize winner Holmes Rolston, who is not an ID guy, rejects this view outright. Writing on the genesis of information in his book Genes, Genesis, and God (pp. 352–353), he responded to the view that the information was always there:

The information (in DNA) is interlocked with an information producer-processor (the organism) that can transcribe, incarnate, metabolize, and reproduce it. All such information once upon a time did not exist but came into place; this is the locus of creativity. Nevertheless, on Earth, there is this result during evolutionary history. The result involves significant achievements in cybernetic creativity, essentially incremental gains in information that have been conserved and elaborated over evolutionary history. The know-how, so to speak, to make salt is already in the sodium and chlorine, but the know-how to make hemoglobin molecules and lemurs is not secretly coded in the carbon, hydrogen, and nitrogen…. 

So no, the information was not always there. And no, Darwinian evolution cannot, according to the conservation of information theorems, create information from scratch. The way out of this predicament for Darwinists (and I’ve seen this move repeatedly from them) is to say that conservation of information may characterize computer simulations of evolution, but that real-life evolution has some features not captured by the simulations. But if so, how can real-life evolution be subject to scientific theory if it resists all attempts to model it as a search? Conservation of information theorems are perfectly general, covering all search. 

Yet ironically, Rosenhouse is in no position to take this way out because, as noted in the last section, he sees these computer programs “not so much simulations of evolution [but as] instances of it.” (p. 209) Nonetheless, when push comes to shove, Rosenhouse has no choice, even at the cost of inconsistency, but to double down on natural selection as the key to creating biological information. The conservation of information theorems, however, show that natural selection, if it’s going to have any scientific basis, merely siphons from existing sources of information, and thus cannot ultimately explain it. 

As with specified complexity, in proving conservation of information theorems, we have taken a largely pre-theoretic notion and turned it into a full-fledged theoretic notion. In the idiom of Rosenhouse, we have moved the concept from track 1 to track 2. A reasonably extensive technical literature on conservation of information theorems now exists. Here are three seminal peer-reviewed articles addressing these theorems on which I’ve collaborated (for more, go here):

William A. Dembski and Robert J. Marks II, “Conservation of Information in Search: Measuring the Cost of Success,” IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 39, no. 5, September 2009, pp. 1051–1061.

William A. Dembski and Robert J. Marks II, “The Search for a Search: Measuring the Information Cost of Higher Level Search,” Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 14, no. 5, 2010, pp. 475–486.

William A. Dembski, Winston Ewert, and Robert J. Marks II, “A General Theory of Information Cost Incurred by Successful Search,” in Biological Information: New Perspectives (Singapore: World Scientific, 2013), pp. 26–63.

Rosenhouse cites none of this literature. In this regard, he follows Wikipedia, whose subentry on conservation of information likewise fails to cite any of this literature. The most recent reference in that Wikipedia subentry is to a 2002 essay by Erik Tellgren, in which he claims that my work on conservation of information is “mathematically unsubstantiated.” That was well before any of the above theorems were ever proved. That’s like writing in the 1940s, when DNA’s role in heredity was unclear, that its role in heredity was “biologically unsubstantiated,” and leaving that statement in place even after the structure of DNA (by 1953) and the genetic code (by 1961) had been elucidated. It’s been two decades since Tellgren made this statement, and it remains in Wikipedia as the authoritative smackdown of conservation of information. 

At least it can be said of Rosenhouse’s criticism of conservation of information that it is more up to date than Wikipedia’s account of it. But Rosenhouse leaves the key literature in this area uncited and unexplained (and if he did cite it, I expect he would misexplain it). Proponents of intelligent design have grown accustomed to this conspiracy of silence, where anything that rigorously undermines Darwinism is firmly ignored (much like our contemporary media is selective in its reporting, focusing exclusively on the party line and sidestepping anything that doesn’t fit the desired narrative). Indeed, I challenge readers of this review to try to get the three above references inserted into this Wikipedia subentry. Good luck getting past the biased editors who control all Wikipedia entries related to intelligent design.

So, what is a conservation of information theorem? Readers of Rosenhouse’s book learn that such theorems exist. But Rosenhouse neither states nor summarizes these theorems. The only relevant theorems he recaps are the no free lunch theorems, which show that no search algorithm outperforms any other when performance is averaged across all possible fitness landscapes. But conservation of information theorems are not no free lunch theorems. Conservation of information picks up where no free lunch leaves off. No free lunch says there’s no universally superior search algorithm. Thus, to the degree a search does well at some tasks, it does poorly at others. No free lunch in effect states that every search involves a zero-sum tradeoff. Conservation of information, by contrast, starts by admitting that for particular searches, some do better than others, and then asks what allows one search to do better than another. It answers that question in terms of active information. Conservation of information theorems characterize active information. 
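
The averaging behind no free lunch can be illustrated with a toy computation (my own example, not Rosenhouse’s and not the theorems themselves): over every possible assignment of fitness values on a tiny space, two quite different search orders perform identically on average.

    from itertools import product

    points = range(5)                               # a tiny search space
    orders = [(0, 1, 2, 3, 4), (4, 2, 0, 3, 1)]     # two different search strategies

    def queries_needed(f, order):
        # Query points in the given order until hitting one where f is 1;
        # if no such point exists, all queries are spent.
        for i, x in enumerate(order, start=1):
            if f[x] == 1:
                return i
        return len(order)

    for order in orders:
        avg = sum(queries_needed(f, order)
                  for f in product((0, 1), repeat=len(points))) / 2 ** len(points)
        print(order, avg)
    # Both orders average exactly 1.9375 queries once every possible fitness
    # assignment is counted: averaged over all landscapes, neither strategy
    # has an edge.

Conservation of information asks the further question this averaging leaves untouched: for the particular landscape where one search does outperform another, what did it cost to obtain that better search?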

To read Rosenhouse, you would think that active information is a bogus notion. But in fact, active information is a useful concept that all of us understand intuitively, even if we haven’t put a name to it. It arises in search. Search is a very general concept, and it encompasses evolution (Rosenhouse, recall, even characterized evolution in terms of “searching protein space”). Most interesting searches are needle-in-the-haystack problems. What this means is that there’s a baseline search that could in principle find the needle (e.g., exhaustive search or uniform random sampling), but that would be highly unlikely to find the needle in any realistic amount of time. What you need, then, is a better search, one that can find the needle with a higher probability so that it is likely, with the time and resources on hand, to actually find the needle. 

We all recognize active information. You’re in a large field. You know an Easter egg is hidden somewhere in it. Your baseline search is hopeless—you stand no realistic chance of finding the Easter egg. But now someone calls out “warm, cold, warm, warmer, hot, you’re burning up.” That’s a better search, and it’s better because you are being given better information. Active information measures the amount of information that needs to be expended to improve on a baseline search to make it a better search. In this example, note that there are many possible sets of directions that Easter egg hunters might receive in order to try to find the egg. Most such directions will not lead to finding the egg. Accordingly, if finding the egg is finding a needle in a haystack, so is finding the right directions among the different possible directions. Active information measures the information cost of finding the right directions.

In the same vein, consider a search for treasure on an island. If the island is large and the treasure is well hidden, the baseline search may be hopeless—way too improbable to stand a reasonable chance of finding the treasure. But suppose you now get a treasure map where X marks the spot of the treasure. You’ve now got a better search. What was the informational cost of procuring that better search? Well, it involved sorting through all possible maps of the island and finding one that would identify the treasure location. But for every map where X marks the right spot, there are many where X marks the wrong spot. According to conservation of information, finding the right map faces an improbability no less, and possibly greater, than finding the treasure via the baseline search. Active information measures the relevant (im)probability.

We’ve seen active information before in the Dawkins Weasel example. The baseline search for METHINKS IT IS LIKE A WEASEL stands no hope of success. It requires a completely random set of keystrokes typing all the right letters and spaces of this phrase without error in one fell swoop. But given a fitness function that assigns higher fitness to phrases where letters match the target phrase METHINKS IT IS LIKE A WEASEL, we’ve now got a better search, one that will converge to the target phrase quickly and with high probability. Most fitness functions, however, don’t take you anywhere near this target phrase. So how did Dawkins find the right fitness function to evolve to the target phrase? For that, he needed active information.
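
Active information can be put to numbers for the Weasel example using the definition from the first paper cited above (active information = log2(q/p), the log-ratio of the better search’s success probability q to the baseline probability p). Taking q to be essentially 1 for Dawkins’ fitness-guided procedure is my simplifying assumption for illustration:

    from math import log2

    # Baseline (blind) search: one random 28-character phrase over a 27-letter
    # alphabet must match the target exactly.
    p = (1 / 27) ** 28                 # about 8.4e-41

    # Better search: Dawkins' fitness-guided procedure finds the target
    # virtually every time it is run, so take q to be about 1 (a simplifying
    # assumption for illustration).
    q = 1.0

    endogenous_information = -log2(p)  # difficulty of the baseline search
    active_information = log2(q / p)   # what the better search must be given

    print(round(endogenous_information, 1))  # about 133.1 bits
    print(round(active_information, 1))      # about 133.1 bits: nearly all of it

In other words, on this accounting virtually the entire difficulty of the original problem reappears as the information that had to be supplied, via the target-derived fitness function, to make the better search work.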

My colleagues and I have proved several conservation of information theorems, which come in different forms depending on the type and structure of information needed to render a search successful. Here’s the most important conservation of information theorem proved to date. It appears in the third article cited above (i.e., “A General Theory of Information Cost Incurred by Successful Search”):

Even though the statement of this theorem is notation-heavy and will appear opaque to most readers, I give it nonetheless because, as unreadable as it may seem, it exhibits certain features that can be widely appreciated, thereby helping to wrap up this discussion of conservation of information, especially as it relates to Rosenhouse’s critique of the concept. Consider therefore the following three points:

  1. The first thing to see in this theorem is that it is an actual mathematical theorem. It rises to Rosenhouse’s track 2. A peer-reviewed literature now surrounds the work. The theorem depends on advanced probability theory, measure theory, and functional analysis. The proof requires vector-valued integration. This is graduate-level real analysis. Rosenhouse does algebraic graph theory, so this is not his field, and he gives no indication of actually understanding these theorems. For him to forgo providing even the merest sketch of the mathematics underlying this work because “it would not further our agenda to do so” (pp. 212–213) and for him to dismiss these theorems as “trivial musings” (p. 269) betrays an inability to grapple with the math and understand its implications, as much as it betrays his agenda to deep-six conservation of information irrespective of its merits.
  2. The Greek letter mu denotes a null search and the Greek letter nu an alternative search. These are respectively the baseline search and the better search described earlier. Active information here, measured as log(r/q), measures the information required in a successful search for a search (usually abbreviated S4S), which is the information to find nu to replace mu. Searches can themselves be subject to search, and it’s these higher level searches that are at the heart of the conservation of information theorems. Another thing to note about mu and nu is that they don’t prejudice the types of searches or the probabilities that represent them. Mu and nu are represented as probability measures. But they can be any probability measures that assign at least as much probability to the target T as uniform probability (the assumption being that any search can at least match the performance of a uniform probability search—this seems totally reasonable). What this means is that conservation of information is not tied to uniform probability or equiprobability. Rosenhouse, by contrast, claims that all mathematical intelligent design arguments follow what he calls the Basic Argument from Improbability, which he abbreviates BAI (p. 126). BAI attributes to design proponents the most simple-minded assignment of probabilities (namely uniform probability or equiprobability). Conservation of information, like specified complexity, by contrast, attempts to come to terms with the probabilities as they actually are. This theorem, in its very statement, shows that it does not fall under Rosenhouse’s BAI. 
  3. The search space Omega (Ω) in this example is finite. Its finiteness, however, in no way undercuts the generality of this theorem. All scientific work, insofar as it measures and gauges physical reality, will use finite numbers and finite spaces. The mathematical models used may involve infinities, but these can in practice always be approximated finitely. This means that these models belong to combinatorics. Rosenhouse, throughout his book, makes out that combinatorics is a dirty word, and that intelligent design, insofar as it looks to combinatorics, is focused on simplistic finite models and limits itself to uniform or equiprobabilities. But this is nonsense. Any object, mathematical or physical, consisting of finitely many parts related to each other in finitely many ways is a combinatorial object. Moreover, combinatorial objects don’t care what probability distributions are placed on them. Protein machines are combinatorial objects. Computer programs (and these include the artificial life simulations with which Rosenhouse is infatuated) are combinatorial objects. The bottom line is that it is no criticism at all of intelligent design to say that it makes extensive use of combinatorics. 

14 Closing Thoughts

Would the world be better off if Jason Rosenhouse had never written The Failures of Mathematical Anti-Evolutionism? I, for one, am happy he did write it. It shows the current state of Darwinist thinking about the mathematical ideas that my colleagues and I in the intelligent design movement have developed over the years. In particular, it shows how little progress Darwinists have made in understanding and engaging with these ideas. It also alerted me to the resurgence of artificial life simulations. Not that artificial life ever went away. But Rosenhouse cites what is essentially a manifesto by 53 authors (including ID critics Christoph Adami, Robert Pennock, and Richard Lenski) claiming that all is well with artificial life: “The Surprising Creativity of Digital Evolution” (2020). In fact, conservation of information shows that artificial life is a hopeless enterprise. But as my colleague Jonathan Wells underscored in his book Zombie Science, some disreputable ideas are just too pleasing and comforting for Darwinists to disown, and artificial life is one of them. So it was helpful to learn from Rosenhouse about the coming zombie apocalypse.

As indicated at the start of this review, I’ve been selective in my criticisms of Rosenhouse’s book, focusing especially on where he addressed my work and on where it impinged on that of some of my close colleagues in the intelligent design movement. I could easily have found more to criticize, but this review is already long. Leaving aside his treatment of young-earth creationists and the Second Law of Thermodynamics, he reflexively repeats Darwinian chestnuts, such as the claim that gene duplication increases information, as though a mere increase in storage capacity could explain biologically useful information (“We’ve doubled the size of your hard drive and you now have twice the information!”). And wherever possible, he tries to paint my colleagues as rubes and ignoramuses. Thus he portrays Stephen Meyer as assuming a simplistic probabilistic model of genetic change, when in the original source (Darwin’s Doubt) Meyer is clearly citing an older understanding (that of the Wistar mathematicians back in the 1960s) and then makes clear that a newer, more powerful understanding is available today. Disinformation is a word in vogue these days, and it characterizes much of Rosenhouse’s book.

In closing, I want to consider an example that appears near the start of The Failures of Mathematical Anti-Evolutionism (p. 32) and reappears at the very end in the “Coda” (pp. 273–274). It’s typical, when driving on a major street, to encounter cross streets where one side of the cross street lies directly across from the other, so that traffic on the cross street passes straight across the major street. Yet it can happen, more often on country roads, that the crossing takes the form of what seem to be two T-intersections close together, so that crossing the major street to stay on the cross street requires a jog in the traffic pattern. Here’s an image of the two types of crossings from Rosenhouse’s book (p. 32):

Rosenhouse is offering a metaphor here, with the first option representing intelligent design, the second Darwinism. According to him, the straight path across the major street represents “a sensible arrangement of roads of the sort a civil engineer would devise” whereas the joggy path represents “an absurd and potentially dangerous arrangement that only makes sense when you understand the historical events leading up to it.” (p. 32) Historical contingencies unguided by intelligence, in which roads are built without coordination, thus explain the second arrangement, and by implication explain biological adaptation.

Rosenhouse grew up near some roads that followed the second arrangement. Recently he learned that in place of two close-by T-intersections, the cross street now goes straight across. He writes:

Apparently, in the years since I left home, that intersection has been completely redesigned. The powers that be got tired of cleaning up after the numerous crashes and human misery resulting from the poor design of the roads. So they shut it all down for several months and completely redid the whole thing. Now the arrangement of roads makes perfect sense, and the number of crashes there has declined dramatically. The anti-evolutionists are right about one thing: we really can distinguish systems that were designed from those that evolved gradually. Unfortunately for them, the anatomy of organisms points overwhelmingly toward evolution and just as overwhelmingly away from design. (pp. 273–274)

The blindness on display in this passage is staggering, putting on full display the delusional world of Darwinists and contrasting it with the real world that is chock-full of design. Does it really need to be pointed out that roads are designed? That where they go is designed? And that even badly laid out roads are nonetheless laid out by design? But as my colleague Winston Ewert pointed out to me, Rosenhouse’s story doesn’t add up even if we ignore the design that’s everywhere. On page 32, he explains that the highway was built first and that towns later arose on either side of it, eventually connecting the crossroads to the highway. But isn’t it obvious, upon the merest reflection, that whoever connected the second road to the highway could have built it opposite the first road that was already there? So why didn’t they do it? The historical timing of the construction of the roads doesn’t explain it. Something else must be going on.

There are in fact numerous such intersections in the US. Typically they are caused by grid corrections due to the earth’s curvature; in other words, they are a consequence of fitting a square grid onto a spherical earth. Further, such intersections can actually be safer, as a report on staggered junctions by the European Road Safety Decision Support System makes clear. So yes, this example is a metaphor, though not for the power of historical contingency to undercut intelligent design. Rather, it is a metaphor for the delusive power of Darwinism to look to historical contingency for explanations that support Darwinism but that fall apart under even the barest scrutiny. 

Enough said. Stay tuned for the second edition of The Design Inference!