Retirement ≠ Repudiation

Some critics of intelligent design seem to be taking my retirement to mean that I’ve repudiated, in whole or in part, my work on intelligent design.

Case in point: in a recent video conversation hosted by Sean McDowell with Doug Axe and Joshua Swamidass, Swamidass claims (at the 56:20 mark) that I’ve “backed off” from the argument in my book The Design Inference (1998).

Here’s the dialogue:

Swamidass: “Dembski himself backed off from his book The Design Inference. He’s actually stated that he had it wrong in the explanatory filter. Are you aware of that?”

Axe: “He’s not backed off from the basic…”

Swamidass: “Yeah, I can show you the quotes later…. He’s even stated, I’ll give you the quote, that there was a gap in his argument. He doesn’t think the explanatory filter is the right way to make the ID case.”

In my book The Design Revolution, which appeared in 2004, six years after The Design Inference, I wrote:

Ultimately, what enables the filter to detect design is specified complexity. The Explanatory Filter provides a user-friendly way to establish specified complexity. For that reason, the only way to refute the Explanatory Filter is to show that specified complexity is an inadequate criterion for detecting design.

My position here hasn’t changed. I’ve beefed up specified complexity and developed it further over the years:

“Specification: The Pattern That Signifies Intelligence” (2005)

“Algorithmic Specified Complexity” (2014, w/ Ewert and Marks)

“Algorithmic Specified Complexity in the Game of Life” (2015, w/ Ewert and Marks)
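
For readers who want the gist of algorithmic specified complexity (ASC) without wading through the papers: it measures, in bits, how improbable an object is under a chance hypothesis minus how simply describable it is. Since Kolmogorov complexity is uncomputable, any computable illustration has to substitute an upper bound for it. Here is a rough Python sketch along those lines; the zlib proxy and the example inputs are illustrative assumptions on my part, not anything taken from the papers themselves.

```python
import math
import zlib

def asc_estimate(x: bytes, p: float) -> float:
    """Rough estimate of algorithmic specified complexity (ASC) in bits.

    ASC is roughly -log2 P(x) - K(x): improbability minus descriptive
    simplicity. Kolmogorov complexity K is uncomputable, so (an assumption
    of this sketch) we upper-bound it with the bit-length of a
    zlib-compressed encoding of x. A patterned (highly compressible)
    object that is also improbable under the chance hypothesis scores high.
    """
    improbability = -math.log2(p)                 # chance-hypothesis bits
    compressed_bits = 8 * len(zlib.compress(x))   # crude upper bound on K(x)
    return improbability - compressed_bits

# Example: a highly patterned 100-character string under a uniform chance
# hypothesis over 26 letters (probability 26**-100). The result is positive:
# improbable under chance, yet simple to describe.
x = b"ab" * 50
print(asc_estimate(x, p=26.0 ** -100))
```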

As I said from the start, the Explanatory Filter was a “rational reconstruction” of how we infer design. But ultimately, the Explanatory Filter depends on specified complexity being a valid criterion for detecting design. And I continue to hold that specified complexity is a legitimate way of detecting design and that the Explanatory Filter is a legitimate way of identifying specified complexity.

In its characterization of specification, The Design Inference included a conditional independence condition that subsequently proved unnecessary (this became clear as early as the book’s sequel, No Free Lunch, published in 2001). So the idea of specified complexity, inherent in The Design Inference as specified improbability, needed some refinement and a fuller theoretical development, which it received over time. (These days, Conservation of Information in search does the work of specified complexity, and with still greater theoretical power.)
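
To indicate what Conservation of Information quantifies: in my work with Robert Marks, the advantage a search strategy enjoys over blind search is measured as active information, log2(q/p), where p is the probability that blind search hits the target and q the probability that the assisted search does. Conservation of Information then says this advantage must itself be paid for in the information that went into constructing the search. A minimal sketch, with probabilities made up purely for illustration:

```python
import math

def active_information(p: float, q: float) -> float:
    """Active information in bits: how much a search strategy improves on
    blind search. p = probability that blind (uniform) search succeeds;
    q = probability that the assisted search succeeds.
    I_plus = log2(q / p) = (-log2 p) - (-log2 q)."""
    return math.log2(q / p)

# Hypothetical numbers: a target that blind search finds with probability
# 1e-9, but that an assisted search finds with probability 1e-3, exhibits
# about 19.9 bits of active information -- bits that, on the conservation
# claim, were paid for in the construction of the search itself.
print(active_information(p=1e-9, q=1e-3))
```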

As I noted in my updated 2019 interview, “I’m happy with the work I’ve done on intelligent design and repudiate none of it.”

P.S. [6/17/20]: Josh Swamidass wants to revisit some well-worn paths: Click here. I’ll pass, except to cite Part II (on detecting design) of The Design Revolution and to add, in light of the latter, that I’m happy with the filter and think it holds up nicely.

P.P.S. [6/18/20]: ID supporters continue to send me emails about Swamidass. The latest hammers on a comment I made in 2008 at Uncommon Descent, namely: “I’ve pretty much dispensed with the EF [Explanatory Filter]. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection.” I would not write that now. In my view the filter is just fine, and it neither conflates nor falsely differentiates the three modes of explanation (chance, necessity, and design). My comment back then should be seen as an unnecessary concession to critics, not as undercutting the filter per se.

To properly use the Explanatory Filter, it is vital to identify what exactly one is trying to explain. Take a rusted automobile. In Jonathan Waldman’s wonderful book Rust: The Longest War, one reads that an average car can lose 10 pounds of weight per year to rust. So, if one is trying to explain why a car after ten years has lost about 90 to 100 pounds, rust is the explanation. It’s a high-probability event. Is it chance or necessity? Rust, as with the kinetic theory of heat, is at base a probabilistic phenomenon, but when averaged over huge numbers of molecular events, the probabilities come so close to one that necessity becomes a natural explanation as well (it’s no criticism of the filter if it allows chance and necessity to bleed into one another for events with probability extremely close to one). What about the sagging of the car’s shocks over time? That can readily be explained by necessity. What about the pattern of rust on the doors? Chance is as good an explanation here as any. What about the structure of the chassis or the differential? Put that into the filter, and you’ll get design. So let me reiterate: I really still like the Explanatory Filter, and any “backing off” from it on my part reflects an unnecessary concession to critics.
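
For the programmatically minded, the filter’s triage can be caricatured as a three-way decision procedure. The sketch below is only a caricature: the probability cutoffs are invented for illustration (my universal probability bound is on the order of 10^-150), and the `specified` flag stands in for the whole apparatus of specification developed in the works cited above.

```python
from enum import Enum

class Verdict(Enum):
    NECESSITY = "necessity"
    CHANCE = "chance"
    DESIGN = "design"

def explanatory_filter(probability: float, specified: bool,
                       high: float = 0.999, low: float = 1e-50) -> Verdict:
    """A bare-bones sketch of the Explanatory Filter as a decision procedure.

    - High-probability events are referred to necessity (law).
    - Intermediate-probability events are referred to chance.
    - Small-probability events are referred to design only if they are also
      specified (match an independently given pattern); otherwise chance.

    The thresholds are placeholders for illustration, not canonical values.
    """
    if probability >= high:
        return Verdict.NECESSITY
    if probability > low:
        return Verdict.CHANCE
    return Verdict.DESIGN if specified else Verdict.CHANCE

# The rusted-car examples from above, with made-up inputs:
print(explanatory_filter(0.9999, specified=False))   # decade of rust -> NECESSITY
print(explanatory_filter(0.3, specified=False))      # rust pattern on doors -> CHANCE
print(explanatory_filter(1e-120, specified=True))    # chassis/differential -> DESIGN
```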