Conservation of Information in Coevolutionary Searches – Winston Ewert and Robert J. Marks II – Sept. 2017

Excerpt: coevolution does not allow an escape from the necessity of exploiting prior information in search processes and remains bounded by conservation of information in general and the No Free Lunch theorem in particular.

Top Ten Questions and Objections to 'Introduction to Evolutionary Informatics' - Robert J. Marks II - June 12, 2017

Excerpt: There exists no model successfully describing undirected Darwinian evolution. Period. By “model,” we mean definitive simulations or foundational mathematics required of a hard science.,,,

We show that no meaningful information can arise from an evolutionary process unless that process is guided. Even when guided, the degree of evolution’s accomplishment is limited by the expertise of the guiding information source — a limit we call Basener’s ceiling. An evolutionary program whose goal is to master chess will never evolve further and offer investment advice.,,,

There exists no model successfully describing undirected Darwinian evolution. Hard sciences are built on foundations of mathematics or definitive simulations. Examples include electromagnetics, Newtonian mechanics, geophysics, relativity, thermodynamics, quantum mechanics, optics, and many areas in biology. Those hoping to establish Darwinian evolution as a hard science with a model have either failed or inadvertently cheated. These models contain guidance mechanisms to land the airplane squarely on the target runway despite stochastic wind gusts. Not only can the guiding assistance be specifically identified in each proposed evolution model, its contribution to the success can be measured, in bits, as active information.,,,

Models of Darwinian evolution, Avida and EV included, are searches with a fixed goal. For EV, the goal is finding specified nucleotide binding sites. Avida’s goal is to generate an EQU logic function. Other evolution models that we examine in Introduction to Evolutionary Informatics likewise seek a prespecified goal.,,,

The most celebrated attempt at an evolution model without a goal of which we’re aware is TIERRA. In an attempt to recreate something like the Cambrian explosion on a computer, the programmer created what was thought to be an information-rich environment where digital organisms would flourish and evolve. According to TIERRA’s ingenious creator, Thomas Ray, the project failed and was abandoned. There has to date been no success in open-ended evolution in the field of artificial life.,,,

We show that the probability resources of the universe and even string theory’s hypothetical multiverse are insufficient to explain the specified complexity surrounding us.,,,

If a successful search requires equaling or exceeding some degree of active information, what is the chance of finding any search with as good or better performance? We call this a search-for-the-search. In Introduction to Evolutionary Informatics, we show that the search-for-the-search is exponentially more difficult than the search itself!,,,

,,,we use information theory to measure meaningful information and show there exists no model successfully describing undirected Darwinian evolution.,,,

,,, if the fitness continues to change, it is argued, the evolved entity can achieve greater and greater specified complexity,,,

,,, We dub the overall search structure 'stair step active information'. Not only is guidance required on each stair, but the next step must be carefully chosen to guide the process to the higher fitness landscape and therefore ever-increasing complexity.,,,

Such fine tuning is the case of any fortuitous shift in fitness landscapes and increases, not decreases, the difficulty of evolution of ever-increasing specified complexity. It supports the case there exists no model successfully describing undirected Darwinian evolution.,,,

Turing’s landmark work has allowed researchers, most notably Roger Penrose, to make the case that certain of man’s attributes, including creativity and understanding, are beyond the capability of the computer.,,,

,,, there exists no model successfully describing undirected Darwinian evolution. According to our current understanding, there never will be.,,, 

Robert Marks on Why Darwinists Can’t Dodge the Modeling Problem - podcast - July 2017 

Listen in as Marks shares about the importance of modeling in science and the problems besetting any evolution model that tries to explain biodiversity.

The Mind Renewed: "Introduction to Evolutionary Informatics" Dr. Robert J. Marks II - 2017 

Robert J. Marks II’s recent podcast conversation with UK interviewer Julian Charles of The Mind Renewed.

The discussion is entertaining and very broad, ranging from the meaning of evolutionary informatics, the limits of artificial intelligence, and computer simulations of evolution, to God in mathematics.

podcast - "Why Digital Cambrian Explosions Fizzle … Or Fake It" - June 2017 

Winston Ewert and host Ray Bohlin dive deeper into the sometimes surreal world of computer evolution simulations, taking a closer look at attempts to simulate the Cambrian Explosion.

The Dark Secret at the Heart of AI

No one really knows how the most advanced algorithms do what they do. That could be a problem.

by Will Knight April 11, 2017 

Why Evolution Simulations Fail: Author of Evolutionary Informatics Book Explains - May 2017

Excerpt: Ewert argues that Richard Dawkins’s “Methinks It Is Like a Weasel” simulation doesn’t prove biological evolution and isn’t even very interesting. Ewert says there are some interesting computer evolution simulations, but he explains that they fail to model anything biologically realistic.

Instead, they set up a straw man version of intelligent design, and simultaneously sneak teleology in, which kind of defeats the purpose. (podcast linked on site),,,

Dr. Bijan Nemati of the Jet Propulsion Laboratory and Caltech states this about the new book:

"With penetrating brilliance, and with a masterful exercise of pedagogy and wit, the authors take on Chaitin’s challenge, that Darwin’s theory should be subjectable to a mathematical assessment and either pass or fail. Surveying over seven decades of development in algorithmics and information theory, they make a compelling case that it fails."

Congratulations, Dr. Ewert, Dr. Marks, and Dr. Dembski! 

“What’s the difference between evolution and adaptation? Darwinian evolution requires creation of specified complexity where there was none. Computer programs, including video games like Darwin’s Demons, are incapable of being creative. Adaptation, on the other hand, uses available pre-programmed resources to improve and ultimately optimize performance.”

- Robert Marks 

Refutation Of Evolutionary Algorithms    

A Flash of Insight About Physics, Reality, and DNA Launched Bruce Buff as a Novelist - December 15, 2016

Excerpt: CNA: How did you develop the science behind the book?

Buff: November 1999, sitting in my father-in-law's office, working on my computer, the question of what connects bits inside a computer into words, or how pixels on the screen are transformed into images in our minds, popped into my mind and got me off and running on consciousness. Eventually, I concluded that if physics exists as scientists believe it does, then the material world alone cannot be the source of perceptions, awareness, cognitive thinking, and feeling. Therefore we have immaterial minds and every moment of our lives is our souls in action. I then realized that the immaterial mind challenges the Darwinian view of a completely naturalistic, unguided process as the complete explanation for human origin. 

For Darwinists to appeal to intelligently designed computer algorithms to try to offer support for unguided Darwinian evolution is the very definition of the non sequitur we find in dictionaries:

Atheist's logic 101 - cartoon

"If I can only create life here in the lab (or in my computer), it will prove that no intelligence was necessary to create life in the beginning"                 

Robert Marks: Some Things Computers Will Never Do: Nonalgorithmic Creativity and Unknowability - video 

"Captcha" Breakthrough by AI (Artificial Intelligence) Illustrates Biomimetic Design - November 26, 2013

Excerpt: Since intelligent design presupposes a mental act directed toward a purpose, AI is a misnomer. It should more properly be described as "artificial execution of human-designed algorithms."

This is really a story about biomimetics -- a form of intelligent-design science. The engineers looked to the way a brain solves a problem and tried to imitate it. It took human intelligent design to design the computer. It took intelligent design to write the software. It took human ID to test it, tweak it and perfect it till it succeeded. It requires human intelligence to see a good design. It takes ID to formulate a purpose. Then it requires human intelligence and will to move things in a preferred direction for that purpose. Nothing is left to unguided processes. Even selection from random trials (falsely called "Darwinian" algorithms) employs human purposeful choice. 

“The computer is not going to generate anything realistic if it uses Darwinian mechanisms.”

Dr. David Berlinski: Accounting for Variations - video 

"Darwin or Design" with Dr. Tom Woodward with guest Dr. Robert J. Marks II - video 

Digital Irreducible Complexity: A Survey of Irreducible Complexity in Computer Simulations - Winston Ewert - April 2014

Abstract: Irreducible complexity is a concept developed by Michael Behe to describe certain biological systems. Behe claims that irreducible complexity poses a challenge to Darwinian evolution. Irreducibly complex systems, he argues, are highly unlikely to evolve because they have no direct series of selectable intermediates. Various computer models have been published that attempt to demonstrate the evolution of irreducibly complex systems and thus falsify this claim. However, closer inspection of these models shows that they fail to meet the definition of irreducible complexity in a number of ways. In this paper we demonstrate how these models fail. In addition, we present another designed digital system that does exhibit designed irreducible complexity, but that has not been shown to be able to evolve. Taken together, these examples indicate that Behe’s concept of irreducible complexity has not been falsified by computer models. – Cite as: Ewert W (2014) Digital irreducible complexity: A survey of irreducible complexity in computer simulations. BIO-Complexity 2014 (1):1–10. doi:10.5048/BIO-C.2014.1. 

Digital Irreducible Complexity: A Survey of Irreducible Complexity in Computer Simulations - Winston Ewert 

podcast - Dr. Winston Ewert: Irreducible Complexity Remains Unrefuted 

Breaking Sticks - Winston Ewert - December 5, 2015

Conclusion: English and Felsenstein have been engaged in knocking down straw men. Felsenstein attacks a version of specified complexity that Dembski never articulated. He misrepresents the actual idea promoted by Dembski as being pointlessly circular. Both critics misrepresent conservation of information as a simplistic argument that only intelligence can produce active information. They misrepresent us as claiming that Darwinian evolution is only as good as a random guess, despite the explicit published demonstration that repeated queries are a source of active information. English misrepresents our reasons for thinking that birds are more probable than a random configuration of matter. Their arguments are valid objections to these straw men, but our actual arguments lie elsewhere.

What, then, would be necessary to demonstrate that we are wrong? As I've argued, conservation of information shows that evolution requires a source of active information. We have not proven that such a source must be teleological. Nevertheless, we've argued that the sources present in available models of evolution are indeed teleological. Our argument would be refuted by the demonstration of a model with a source that is both non-teleological and provides sufficient active information to account for biological complexity. 

Latest BIO-Complexity Paper Finds that on Irreducible Complexity, Michael Behe Has Not Been Refuted - Casey Luskin April 18, 2014

Excerpt: In a new paper in the journal BIO-Complexity, "Complexity in Computer Simulations," computer scientist Winston Ewert reviews much of the literature claiming to show, including via computer simulations, how irreducible complexity might have evolved by undirected means. He finds that "Behe's concept of irreducible complexity has not been falsified by computer models." The models include Avida, Ev, Steiner trees, geometric models, digital ears, and Tierra. Ewert reports that in many cases, the "parts" that compose the irreducibly complex system are "too simple." The programs are designed such that systems that the programs deem "functional" are very likely to evolve. - 

Conservation of Information on steroids:

The Famine of Forte: Few Search Problems Greatly Favor Your Algorithm - George D. Montanez - 28 Sep 2016

No Free Lunch theorems show that the average performance across any closed-under-permutation set of problems is fixed for all algorithms, under appropriate conditions. Extending these results, we demonstrate that the proportion of favorable problems is itself strictly bounded, such that no single algorithm can perform well over a large fraction of possible problems. Our results explain why we must either continue to develop new learning methods year after year or move towards highly parameterized models that are both flexible and sensitive to their hyperparameters. 

Panda’s Thumb Richard Hoppe forgot about Humpty Zombie - April 15, 2014

Excerpt: I discovered if you crank up Avida’s cosmic radiation parameter to maximum and have the Avida genomes utterly scrambled, the Avidian organisms still kept reproducing. If I recall correctly, they died if the radiation was moderate, but just crank it to the max and the creatures come back to life!

This would be like putting dogs in a microwave oven for 3 days, running it at full blast, and then demanding they reproduce. And guess what, the little Avida critters reproduced. This little discovery in Avida 1.6 was unfortunately not reported in Nature. Why? It was a far more stupendous discovery! Do you think it’s too late for Richard Hoppe and I to co-author a submission?

Hoppe eventually capitulated that there was indeed this feature of Avida. To his credit he sent a letter to Dr. Adami to inform him of the discovery. Dr. Adami sent Evan Dorn to the Access Research Network forum, and Evan confirmed the feature by posting a reply there. 

Podcast: Winston Ewert on computer simulation of evolution (AVIDA) that sneaks in information 

When Avida uses realistic biological parameters as its settings, instead of the highly unrealistic default settings it currently uses, it actually supports Genetic Entropy instead of Darwinian evolution:

Biological Information - Mendel's Accountant and Avida 1-31-2015 by Paul Giem 

Biological Information - Tierra 11-8-2014 by Paul Giem - video 

Arrival of the Fittest: Natural Selection as an Incantation - November 17, 2014

Excerpt: "In Arrival of the Fittest, renowned evolutionary biologist Andreas Wagner draws on over fifteen years of research to present the missing piece in Darwin's theory (funny how Darwinists admit that evolution has a 'missing piece' only when they think they can explain that 'missing piece'). Using experimental and computational technologies that were heretofore unimagined, he has found that adaptations are not just driven by chance, but by a set of laws that allow nature to discover new molecules and mechanisms in a fraction of the time that random variation would take."

Once again, as with Avida and all the other computer models, we find that Wagner has snuck extra information into the system. As Dembski showed in No Free Lunch, no evolutionary algorithm is superior to blind search. Without design, there is no shortcut to the treasure (i.e. to new functional complexity/information). 

On Algorithmic Specified Complexity by Robert J. Marks II - video

paraphrase (All Evolutionary Algorithms have failed to generate truly novel information including ‘unexpected, and interesting, emergent behaviors’) - Robert Marks 

LIFE’S CONSERVATION LAW - William Dembski - Robert Marks - Pg. 13

Excerpt: (Computer) Simulations such as Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and  Schneider’s ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them.,,, Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case.

Evolutionary Informatics: Marks, Dembski, and Ewert Demonstrate the Limits of Darwinism - Brian Miller - May 2, 2017

Excerpt: Authors Robert Marks, William Dembski, and Winston Ewert bring decades of experience in search algorithms and information theory to analyzing the capacity of biological evolution to generate diverse forms of life. Their conclusion is that no evolutionary process is capable of yielding different outcomes (e.g., new body plans), being limited instead to a very narrow range of results (e.g., finches with different beak sizes). Rather, producing anything of significant complexity requires that knowledge of the outcomes be programmed into the search routines. Therefore, any claim for the unlimited capacity of unguided evolution to transform life is necessarily implausible,,,

,,, the authors demonstrate the limitations of evolutionary algorithms. The general challenge is that all evolutionary algorithms are limited to converging on a very narrow range of results, a boundary known as Basener’s Ceiling. For instance, a program designed to produce an antenna will at best converge to the solution of an optimal antenna and then remain stuck. It could never generate some completely different result, such as a mousetrap. Alternatively, an algorithm designed to generate a strategy for playing checkers could never generate a strategy for playing backgammon. To change outcomes, the program would have to be deliberately adjusted to achieve a separate predetermined goal. In the context of evolution, no unguided process could converge on one organism, such as a fish, and then later converge on an amphibian.

This principle has been demonstrated both in simulations and in experiments. The program Tierra was created in the hope of simulating large-scale biological evolution. Its results were disappointing. Several simulated organisms emerged, but their variability soon hit Basener’s Ceiling. No true novelty was ever generated but simply limited rearrangements of the initially supplied information. We have seen a similar result in experiments on bacteria by Michigan State biologist Richard Lenski. He tracked the development of 58,000 generations of E. coli. He saw no true innovation but primarily the breaking of nonessential genes to save energy, and the rearrangement of genetic information to access pre-existing capacities, such as the metabolism of citrate, under different environmental stresses. Changes were always narrow in scope and limited in magnitude.

The authors present an even more defining limitation, based on the No Free Lunch Theorems, which is known as the Conservation of Information (COI). Stated simply, no search strategy can on average find a target more quickly than a random search unless some information about that target is incorporated into the search process.                                    

Dembski’s conservation of information states that algorithmic searches do not actually perform any better than random searches (examples which suggest otherwise can be, and have been, analyzed to quantify their “active information”). So for all intents and purposes, it is indeed entirely credible to say that the Darwinian mechanism of natural selection acting on random mutation is “random,” in that it performs no better than random search.
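As a rough sketch of how this “active information” bookkeeping works (using the Dembski–Marks definition, log2(q/p), where p is the success probability of blind search and q that of the assisted search; the concrete numbers below are hypothetical, chosen only to illustrate the arithmetic):

```python
import math

def active_information(p_blind: float, q_assisted: float) -> float:
    """Active information in bits: I+ = log2(q / p).

    p_blind: probability that an unassisted (blind) search hits the target.
    q_assisted: probability that the assisted search hits the target.
    A positive value measures how much target-specific information
    the assistance contributes to the search's success.
    """
    return math.log2(q_assisted / p_blind)

# Hypothetical example: a blind search over 2^40 equally likely outcomes
# (p = 2^-40, i.e. 40 bits of "endogenous" difficulty) versus an assisted
# search that succeeds half the time (q = 0.5).
p = 2.0 ** -40
q = 0.5
print(active_information(p, q))  # 39.0 bits of active information
```

On this accounting, an assisted search that looks impressively better than chance has simply been supplied nearly all of the target’s information up front.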

Conservation of Information Made Simple - William A. Dembski - August 28, 2012

Excerpt: For the target phrase METHINKS IT IS LIKE A WEASEL, Dawkins bypasses the Shakespeare hypothesis -- that would be too obvious and too intelligent-design friendly. Instead of positing Shakespeare, who would be an intelligence or designer responsible for the text in question (designers are a no-go in conventional evolutionary theory), Dawkins asks his readers to suppose an evolutionary algorithm that evolves the target phrase. But such an evolutionary algorithm privileges the target phrase by adapting the fitness landscape so that it assigns greater fitness to phrases that have more corresponding letters in common with the target.

And where did that fitness landscape come from? Such a landscape potentially exists for any phrase whatsoever, and not just for METHINKS IT IS LIKE A WEASEL. Dawkins's evolutionary algorithm could therefore have evolved in any direction, and the only reason it evolved to METHINKS IT IS LIKE A WEASEL is that he carefully selected the fitness landscape to give the desired result. Dawkins therefore got rid of Shakespeare as the author of METHINKS IT IS LIKE A WEASEL, only to reintroduce him as the (co)author of the fitness landscape that facilitates the evolution of METHINKS IT IS LIKE A WEASEL.

The bogusness of this example, with its sleight-of-hand misdirection, has been discussed ad nauseam by me and my colleagues in the ID community. We've spent so much time and ink on this example not because of its intrinsic merit, but because the evolutionary community itself remains so wedded to it and endlessly repeats its underlying fallacy in ever increasingly convoluted guises (AVIDA, Tierra, ev, etc.). For a careful deconstruction of Dawkins's WEASEL, providing a precise simulation under user control, see the "Weasel Ware" project on the Evolutionary Informatics website:

How does conservation of information apply to this example? Straightforwardly. Obtaining METHINKS IT IS LIKE A WEASEL by blind search (e.g., by randomly throwing down Scrabble pieces in a line) is extremely improbable. So Dawkins proposes an evolutionary algorithm, his WEASEL program, to obtain this sequence with higher probability. Yes, this algorithm does a much better job, with much higher probability, of locating the target. But at what cost? At an even greater improbability cost than merely locating the target sequence by blind search.

Dawkins completely sidesteps this question of information cost. Foreswearing any critical examination of the origin of the information that makes his simulation work, he attempts instead, by rhetorical tricks, simply to induce in his readers a stupefied wonder at the power of evolution: "Gee, isn't it amazing how powerful evolutionary processes are given that they can produce sentences like METHINKS IT IS LIKE A WEASEL, which ordinarily require human intelligence." But Dawkins is doing nothing more than advise our hapless borrower with the juice loan to suppose a key to a safety deposit box with the money needed to pay it off. Whence the key? Likewise, whence the fitness landscape that rendered the evolution of METHINKS IT IS LIKE A WEASEL probable? In terms of conservation of information, the necessary information was not internally created but merely smuggled in, in this case, by Dawkins himself. 
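The WEASEL program Dembski describes above can be sketched in a few lines. Dawkins never published his original code, so this is only a common reconstruction (the alphabet, mutation rate, and population size are assumptions); note that the fitness function’s comparison against the fixed target string is precisely the “smuggled” information the excerpt is talking about:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(phrase: str) -> int:
    # Scores a phrase by counting positions that match the fixed target.
    # This comparison against TARGET is the target-specific ("active")
    # information built into the search.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase: str, rate: float = 0.05) -> str:
    # Each character independently mutates to a random letter with prob. rate.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def weasel(population: int = 100, seed: int = 0) -> int:
    """Hill-climb toward TARGET; return the number of generations used."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # Keep the best of the parent and its mutated offspring (elitism),
        # so the fitness score never decreases between generations.
        candidates = [parent] + [mutate(parent) for _ in range(population)]
        parent = max(candidates, key=fitness)
        generations += 1
    return generations

print(weasel())  # converges quickly, versus ~27^28 tries for blind search
```

Swap TARGET for any other phrase and the same program “evolves” it just as readily, which is Dembski’s point: the carefully chosen fitness landscape, not the mutation-and-selection loop, privileges the outcome.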

Despite the stubborn reluctance of some Darwinists to admit the abject failure of Dawkins’ “Weasel” computer program to provide any support whatsoever for Darwinian claims, I am grateful for what the program has personally taught to novices like me. Because of the simplicity of the program, and the rather modest result, i.e. “Methinks it is like a weasel”, that the program was trying to achieve by evolutionary processes, it taught me in fairly short order, in an easy-to-understand way, that,,,

“Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information.”

- William Dembski

In regards to learning about the ‘brick wall’ limitation for unguided material processes ever creating even trivial levels of functional information, I highly recommend Wiker & Witt’s book “A Meaningful World”, in which they show, using the “Methinks it is like a weasel” phrase (a phrase that Richard Dawkins infamously took from Shakespeare’s play Hamlet in order to try to illustrate the feasibility of Evolutionary Algorithms), that the ‘information problem’ is much worse for Darwinists than just finding the “Methinks it is like a weasel” phrase by an unguided search.

Basically, this ‘brick wall’ limitation for unguided material processes ever creating even trivial levels of information arises because the “Methinks it is like a weasel” phrase doesn’t make any sense unless the entire context of the play Hamlet is taken into consideration.

Computers simply ‘can’t do context’! A subjective mind is required in order to take an overall context into consideration.

Moreover, the context in which the weasel phrase finds its meaning is derived from several different levels of the play; i.e. the ENTIRE play, who said it, why it was said, where it was said, and even nuances of Elizabethan culture, etc., are all taken into consideration to provide proper context for the phrase.

Dawkins’ infamous Weasel phrase simply does not make sense without taking its proper context into consideration.

A Meaningful World: How the Arts and Sciences Reveal the Genius of Nature – Book Review

Excerpt: They focus instead on what “Methinks it is like a weasel” really means. In isolation, in fact, it means almost nothing. Who said it? Why? What does the “it” refer to? What does it reveal about the characters? How does it advance the plot? In the context of the entire play, and of Elizabethan culture, this brief line takes on significance of surprising depth. The whole is required to give meaning to the part. 

In fact, it is interesting to note the specific context in which the “Methinks it is like a weasel” phrase is used in the play Hamlet.

Richard Dawkins's Weasel Program Is Bad in Ways You Never Dreamed - Jonathan Witt - September 23, 2016


The whole scene and the wider tension between the two men, in other words, actually involves Polonius's refusal to see intelligent design where it actually exists -- namely, in the designed death, the murder, of old King Hamlet. Polonius attributes the old king's death to purely blind, material causes when in fact the king's death was intelligently designed -- that is, foul play.

Richard Dawkins Is Polonius

One parallel to the origins science debate, then, is that Richard Dawkins is a modern day Polonius: He ignores the evidence of intelligent design that should be abundantly clear to him.

And the moral, if we're willing to draw a line so far afield from the original play to our present context: Don't be Richard Dawkins. Don't mistake an intelligent cause for a natural one. Don't miss the wider context:                                    

Moreover, the context in which the phrase is used also illustrates the spineless nature of one of the characters of the play, i.e. just how easily the spineless Polonius can be led around by the nose to say anything that Hamlet wants him to say:

Ham. Do you see yonder cloud that ’s almost in shape of a camel?

Pol. By the mass, and ’t is like a camel, indeed.

Ham. Methinks it is like a weasel.

Pol. It is backed like a weasel.

Ham. Or like a whale?

Pol. Very like a whale. 

I.e. The phrase, when taken in proper context, reveals deliberate, nuanced deception and manipulation of another person.

After realizing what the actual context of the ‘Methinks it is like a weasel’ phrase was, I remember thinking to myself that it was perhaps the worst possible phrase that Dawkins could have chosen to try to illustrate his point.

I’m sure deception and manipulation of other people is hardly the point that Dawkins was trying to convey with his infamous ‘Weasel’ program.

Of supplemental note as to the brick wall limitation that ‘context’ places on AI:

What Is a Mind? More Hype from Big Data - Erik J. Larson - May 6, 2014

Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, "Understanding Natural Language," about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland's article is one of my all time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required "holistic interpretation." That is, the ambiguities weren't resolvable except by taking a broader context into account. The words by themselves weren't enough.

Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal "test" to see if his claims were still valid today.,,,

,,,Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland's point about machine translation made afresh, in 2014.

Erik J. Larson - Founder and CEO of a software company in Austin, Texas 

Why We Can't Yet Build True Artificial Intelligence, Explained In One Sentence - July 9, 2014

"We don’t yet understand how brains work, so we can’t build one.",,,

[IBM's "Jeopardy!"-winning supercomputer] Watson is basically a text search algorithm connected to a database just like Google search. It doesn't understand what it's reading. In fact, "read" is the wrong word. It's not reading anything because it's not comprehending anything.  Watson  is finding text without having a clue as to what the text means. In that sense, there's no intelligence there. It's clever, it's impressive, but it's absolutely vacuous. 

But is this context dependency that is found in literature also found in life? Yes! Starting with the amino acids of proteins, we find context dependency:

Fred Sanger, Protein Sequences and Evolution Versus Science – Are Proteins Random? Cornelius Hunter – November 2013

Excerpt: Standard tests of randomness show that English text, and protein sequences, are not random.,,, 

(A Reply To PZ Myers) Estimating the Probability of Functional Biological Proteins? Kirk Durston , Ph.D. Biophysics – 2012

Excerpt (Page 4): The Probabilities Get Worse

This measure of functional information (for the RecA protein) is good as a first pass estimate, but the situation is actually far worse for an evolutionary search. In the method described above and as noted in our paper, each site in an amino acid protein sequence is assumed to be independent of all other sites in the sequence. In reality, we know that this is not the case. There are numerous sites in the sequence that are mutually interdependent with other sites somewhere else in the sequence. A more recent paper shows how these interdependencies can be located within multiple sequence alignments.[6] These interdependencies greatly reduce the number of possible functional protein sequences by many orders of magnitude which, in turn, reduce the probabilities by many orders of magnitude as well. In other words, the numbers we obtained for RecA above are exceedingly generous; the actual situation is far worse for an evolutionary search. 

Moreover, context dependency is found on at least three different levels of the protein structure:

“Why Proteins Aren’t Easily Recombined, Part 2″ – Ann Gauger – May 2012

Excerpt: “So we have context-dependent effects on protein function at the level of primary sequence, secondary structure, and tertiary (domain-level) structure. This does not bode well for successful, random recombination of bits of sequence into functional, stable protein folds, or even for domain-level recombinations where significant interaction is required.” 

Moreover, it is interesting to note that many (most?) proteins are now found to be multifunctional depending on the overall context (i.e. position in cell, cell type, tissue type, etc.) that the protein happens to be involved in. Thus, the sheer brick wall that Darwinian processes face in finding ANY novel functional protein to perform any specific single task in a cell in the first place (Axe; Sauer) is only exponentially exacerbated by the fact that many proteins are multifunctional and, serendipitously, perform several different ‘context dependent’ functions within the cell:

Human Genes: Alternative Splicing (For Proteins) Far More Common Than Thought:

Excerpt: two different forms of the same protein, known as isoforms, can have different, even completely opposite functions. For example, one protein may activate cell death pathways while its close relative promotes cell survival. 

The Gene Myth, Part II - August 2010

Excerpt: “It was long believed that a protein molecule’s three-dimensional shape, on which its function depends, is uniquely determined by its amino acid sequence. But we now know that this is not always true – the rate at which a protein is synthesized, which depends on factors internal and external to the cell, affects the order in which its different portions fold. So even with the same sequence a given protein can have different shapes and functions. Furthermore, many proteins have no intrinsic shape (Intrinsically Disordered Proteins), taking on different roles in different molecular contexts. So even though genes specify protein sequences they have only a tenuous (very weak or slight) influence over their functions.

,,,,So, to reiterate, the genes do not uniquely determine what is in the cell, but what is in the cell determines how the genes get used. Only if the pie were to rise up, take hold of the recipe book and rewrite the instructions for its own production, would this popular analogy for the role of genes be pertinent.

Stuart A. Newman, Ph.D. – Professor of Cell Biology and Anatomy

podcast - Dr. Jonathan Wells: Biology’s Quiet Revolution - September 17, 2014

"We are talking about 1/3 of the proteins in our body, (may be Intrinsically Disordered Proteins)" - Jonathan Wells 

Genes Code For Many Layers of Information – They May Have Just Discovered Another – Cornelius Hunter – January 21, 2013

Excerpt: “protein multifunctionality is more the rule than the exception.” In fact, “Perhaps all proteins perform many different functions by employing as many different mechanisms.” 

Context dependency, and the problem it presents for ‘bottom up’ Darwinian evolution, is perhaps most dramatically illustrated by the following examples in which ‘form’ dictates how the parts are used:

An Electric Face: A Rendering Worth a Thousand Falsifications – Cornelius Hunter – September 2011

Excerpt: The video suggests that bioelectric signals presage the morphological development of the face. It also, in an instant, gives a peek at the phenomenal processes at work in biology. As the lead researcher said, “It’s a jaw dropper.” 

The (Electric) Face of a Frog - video 

Timelapse Video Reveals Electric Face in Embryonic Tadpole - July 2011

Excerpt: "When a frog embryo is just developing, before it gets a face, a pattern for that face lights up on the surface of the embryo. We believe this is the first time such patterning has been reported for an entire structure, not just for a single organ. I would never have predicted anything like it. It's a jaw dropper." 

What Do Organisms Mean? Stephen L. Talbott – Winter 2011

Excerpt: Harvard biologist Richard Lewontin once described how you can excise the developing limb bud from an amphibian embryo, shake the cells loose from each other, allow them to reaggregate into a random lump, and then replace the lump in the embryo. A normal leg develops. Somehow the form of the limb as a whole is the ruling factor, redefining the parts according to the larger pattern. Lewontin went on to remark: “Unlike a machine whose totality is created by the juxtaposition of bits and pieces with different functions and properties, the bits and pieces of a developing organism seem to come into existence as a consequence of their spatial position at critical moments in the embryo’s development. Such an object is less like a machine than it is like a language whose elements … take unique meaning from their context.”[3],,, 

Multidimensional Genome – Dr. Robert Carter – video (Notes in video description)   

3-D Structure Of Human Genome: Fractal Globule Architecture Packs Two Meters Of DNA Into Each Cell - Oct. 2009

Excerpt: the information density in the nucleus is trillions of times higher than on a computer chip -- while avoiding the knots and tangles that might interfere with the cell's ability to read its own genome. Moreover, the DNA can easily unfold and refold during gene activation, gene repression, and cell replication. 

Tissue-specific spatial organization of genomes - 2004

Excerpt: Using two-dimensional and three-dimensional fluorescence in situ hybridization we have carried out a systematic analysis of the spatial positioning of a subset of mouse chromosomes in several tissues. We show that chromosomes exhibit tissue-specific organization. Chromosomes are distributed tissue-specifically with respect to their position relative to the center of the nucleus and also relative to each other. Subsets of chromosomes form distinct types of spatial clusters in different tissues and the relative distance between chromosome pairs varies among tissues. Consistent with the notion that nonrandom spatial proximity is functionally relevant in determining the outcome of chromosome translocation events, we find a correlation between tissue-specific spatial proximity and tissue-specific translocation prevalence.

Conclusion: Our results demonstrate that the spatial organization of genomes is tissue-specific and point to a role for tissue-specific spatial genome organization in the formation of recurrent chromosome arrangements among tissues.   

Scientists' 3-D View of Genes-at-Work Is Paradigm Shift in Genetics - Dec. 2009

Excerpt: Highly coordinated chromosomal choreography leads genes and the sequences controlling them, which are often positioned huge distances apart on chromosomes, to these 'hot spots'. Once close together within the same transcription factory, genes get switched on (a process called transcription) at an appropriate level at the right time in a specific cell type. This is the first demonstration that genes encoding proteins with related physiological role visit the same factory. 

Refereed scientific article on DNA argues for irreducible complexity - October 2, 2013

Excerpt: This paper published online this summer is a true mind-blower showing the irreducible organizational complexity (author’s description) of DNA analog and digital information, that genes are not arbitrarily positioned on the chromosome etc.,,

,,,First, the digital information of individual genes (semantics) is dependent on the intergenic regions (as we know) which is like analog information (syntax). Both types of information are co-dependent and self-referential but you can’t get syntax from semantics. As the authors state, “thus the holistic approach assumes self-referentiality (completeness of the contained information and full consistency of the different codes) as an irreducible organizational complexity of the genetic regulation system of any cell”. In short, the linear DNA sequence contains both types of information. Second, the paper links local DNA structure, to domains, to the overall chromosome configuration as a dynamic system keying off the metabolic signals of the cell. This implies that the position and organization of genes on the chromosome is not arbitrary,,, 

I think pastor Joe Boot, although he is talking about the universe as a whole in the following quote, illustrates the insurmountable problem that ‘context dependency’ places on reductive materialism very well:

“If you have no God, then you have no design plan for the universe. You have no preexisting structure to the universe.,, As the ancient Greeks held, like Democritus and others, the universe is flux. It’s just matter in motion. Now on that basis all you are confronted with is innumerable brute facts that are unrelated pieces of data. They have no meaningful connection to each other because there is no overall structure. There’s no design plan. It’s like my kids do ‘join the dots’ puzzles. It’s just dots, but when you join the dots there is a structure, and a picture emerges. Well, the atheist is without that (final picture). There is no pre-established pattern (to connect the facts given atheism).”

Pastor Joe Boot – 13:20 minute mark of the following video

Defending the Christian Faith – Pastor Joe Boot – video 

Rob Sheldon on why a “bottom up” cosmos just doesn’t work - September 2, 2014

Excerpt: “Emergence” is the mantra of the bottom-up school.,, Did it work?

Not really. It stalled out on the same reasons the cosmology stalled out–all the information was encoded in the fine tuning of the starting conditions. The behavior you wanted to see emerge had to be encoded into the start-up sequence. This would be a version of Dembski’s “No Free Lunch” or “Being as Communion” thesis.,,,

,,,You have to know what is going on everywhere, in order to know what is going on locally.

That’s a big deal. You can’t do a random search if you have to have a map before you take your first step. Because that map is teleology, it is purpose, it is global information. And that is the one thing random searches and Darwin exclude. In other words, all those cosmology models, all those Stuart Kauffman “emergent” physics models,,, had to have GLOBAL boundary conditions if any fine-tuned excitement is going to occur. 

Supplemental quote:

‘Now one more problem as far as the generation of information. It turns out that you don’t only need information to build genes and proteins, it turns out to build Body-Plans you need higher levels of information; Higher order assembly instructions. DNA codes for the building of proteins, but proteins must be arranged into distinctive circuitry to form distinctive cell types. Cell types have to be arranged into tissues. Tissues have to be arranged into organs. Organs and tissues must be specifically arranged to generate whole new Body-Plans, distinctive arrangements of those body parts. We now know that DNA alone is not responsible for those higher orders of organization. DNA codes for proteins, but by itself it does not insure that proteins, cell types, tissues, organs, will all be arranged in the body. And what that means is that the Body-Plan morphogenesis, as it is called, depends upon information that is not encoded on DNA. Which means you can mutate DNA indefinitely. 80 million years, 100 million years, til the cows come home. It doesn’t matter, because in the best case you are just going to find a new protein some place out there in that vast combinatorial sequence space. You are not, by mutating DNA alone, going to generate higher order structures that are necessary to building a body plan. So what we can conclude from that is that the neo-Darwinian mechanism is grossly inadequate to explain the origin of information necessary to build new genes and proteins, and it is also grossly inadequate to explain the origination of novel biological form.’ -

Stephen Meyer – (excerpt taken from Meyer/Sternberg vs. Shermer/Prothero debate – 2009)

Stephen Meyer – Functional Proteins And Information For Body Plans – video 

Conservation of Information Made Simple - William A. Dembski - August, 2012

Excerpt: Biological configuration spaces of possible genes and proteins, for instance, are immense, and finding a functional gene or protein in such spaces via blind search can be vastly more improbable than finding an arbitrary electron in the known physical universe. ,,,

,,,Given this background discussion and motivation, we are now in a position to give a reasonably precise formulation of conservation of information, namely: raising the probability of success of a search does nothing to make attaining the target easier, and may in fact make it more difficult, once the informational costs involved in raising the probability of success are taken into account. Search is costly, and the cost must be paid in terms of information. Searches achieve success not by creating information but by taking advantage of existing information. The information that leads to successful search admits no bargains, only apparent bargains that must be paid in full elsewhere. 


Artificial Intelligence is Law/Mechanism - Winston Ewert - November 2013 

The Tragedy of Two CSIs - Winston Ewert - November 25, 2013 

"Captcha" Breakthrough by AI (Artificial Intelligence) Illustrates Biomimetic Design - November 26, 2013

Excerpt: Since intelligent design presupposes a mental act directed toward a purpose, AI is a misnomer. It should more properly be described as "artificial execution of human-designed algorithms."

This is really a story about biomimetics -- a form of intelligent-design science. The engineers looked to the way a brain solves a problem and tried to imitate it. It took human intelligent design to design the computer. It took intelligent design to write the software. It took human ID to test it, tweak it and perfect it till it succeeded. It requires human intelligence to see a good design. It takes ID to formulate a purpose. Then it requires human intelligence and will to move things in a preferred direction for that purpose. Nothing is left to unguided processes. Even selection from random trials (falsely called "Darwinian" algorithms) employs human purposeful choice. 

A.I. Has Grown Up and Left Home - Dec. 19, 2013

Excerpt: “The history of Artificial Intelligence,” said my computer science professor on the first day of class, “is a history of failure.” This harsh judgment summed up 50 years of trying to get computers to think. Sure, they could crunch numbers a billion times faster in 2000 than they could in 1950, but computer science pioneer and genius Alan Turing had predicted in 1950 that machines would be thinking by 2000: Capable of human levels of creativity, problem solving, personality, and adaptive behavior. Maybe they wouldn’t be conscious (that question is for the philosophers), but they would have personalities and motivations, like Robbie the Robot or HAL 9000. Not only did we miss the deadline, but we don’t even seem to be close. And this is a double failure, because it also means that we don’t understand what thinking really is. 

Google co-founder on why our neurons are not like a computer neural network - February 12, 2015

[Caleb Garling] Often people conflate the wiring of our biological brains with that of a computer neural network. Can you explain why that’s not accurate?

[Andrew Ng] A single neuron in the brain is an incredibly complex machine that even today we don’t understand. A single “neuron” in a neural network is an incredibly simple mathematical function that captures a minuscule fraction of the complexity of a biological neuron. So to say neural networks mimic the brain, that is true at the level of loose inspiration, but really artificial neural networks are nothing like what the biological brain does. 

Is Google a Step Away from Developing a Computer that Can "Program Itself"? - Erik J. Larson October 30, 2014

Excerpt: the scientific merit of the purported "breakthroughs" is paltry at best. Notwithstanding the fad and the hype, there's, well, no news here.,,,

*Google is experimenting with Artificial Neural Networks (ANNs) for performing supervised or semi-supervised learning tasks, including those that human programmers undertake when manipulating data or writing code.

*Some variation on the decades-old ANNs showed a slight performance bump on a small, well-defined, carefully chosen and essentially uninteresting problem involving data copy and manipulation.

*Work on learning methods continues, as anyone even moderately active in computer science knows it will, since it's the current dominant paradigm (and has been since the late 1990s).

*Reading between the lines a bit: Nothing much is advancing with supervised learning methods. BetaBeat is reporting on stories that would barely qualify as exciting in university computer labs, where similar research on ANNs and other supervised learning methods continues daily, but without the gee-whiz expectations that go along with the mention of Google and its acquisitions.

*Readers eager to believe that smart, human-like machines are imminent will chatter about and forward and post the story anyway, oblivious to its actual newsworthiness or lack thereof. Hype about AI will go on, unabated. 

Stephen Hawking Overestimates the Evolutionary Future of Smart Machines - May 7, 2014

Excerpt: The methods of Big Data, which I referred to yesterday, all show performance gains for well-defined problems, achieved by adding more and more input data -- right up to saturation. "Model saturation," as it's called, is the eventual flattening of a machine learning curve into an asymptote or a straight line, where there's no further learning, no matter how much more data you provide. Russell (one would hope) knows this, but the problem is not even mentioned in the piece, let alone explained. Instead, front and center is Hawking's ill-defined worry about a future involving "super" intelligence. This is hype, at its best.,,,

Adding more data won't help these learning problems -- performance can even go down. This tells you something about the prospects for the continual "evolution" of smart machines.,,,

Norvig conceded in an article in The Atlantic last year:

"We could draw this curve: as we gain more data, how much better does our system get?" he says. "And the answer is, it's still improving -- but we are getting to the point where we get less benefit than we did in the past."

This doesn't sound like the imminent rise of the machines. 

Yes, "We've Been Wrong About Robots Before," and We Still Are - Erik J. Larson - November 12, 2014

Excerpt: Nothing has happened with IBM's "supercomputer" Watson,,,  Outside of playing Jeopardy -- in an extremely circumscribed only-the-game-of-Jeopardy fashion -- the IBM system is completely, perfectly worthless.,,, IBM, by the way, has a penchant for upping their market cap by coming out with a supercomputer that can perform a carefully circumscribed task with superfast computing techniques. Take Deep Blue beating Kasparov at chess in 1997. Deep Blue, like Watson, is useless outside of the task it was designed for,,,

Self-driving cars are another source of confusion. Heralded as evidence of a coming human-like intelligence, they're actually made possible by brute-force data: full-scale replicas of street grids using massive volumes of location data.,,,

Interestingly, where brute computation and big data fail is in surprisingly routine situations that give humans no difficulty at all. Take this statement, originally from computer scientist Hector Levesque (it also appears in Nicholas Carr's 2014 book about the dangers of automation, The Glass Cage):

"The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam, the large ball or the table?"

Watson would not perform well in answering this question, nor would Deep Blue. In fact there are no extant AI systems that have a shot at getting the right answer here, because it requires a tiny slice of knowledge about the actual world. Not "data" about word frequencies in languages or GPS coordinates or probability scoring of next-best chess moves or canned questions to canned answers in Jeopardy. It requires what AI researchers call "world knowledge" or "common sense knowledge.",,

Having real knowledge about the world and bringing it to bear on our everyday cognitive problems is the hallmark of human intelligence, but it's a mystery to AI scientists, and has been for decades.,,,

Given that minds produce language, and that there are effectively infinite things we can say and talk about and do with language, our robots will seem very, very stupid about commonsense things for a very long time. Maybe forever. 

For all the hoopla surrounding each amazing AI advance, from IBM's Watson to Google's more recent Go-conquering machine, AlphaGo, we forget one critical detail in our amazement: Each of these machines does just one thing. They may do it remarkably well and fast, but that is all they can do. Watson cannot dance, clap, or take a bow. It cannot write a book, play the piano, or sing a song. It cannot drive a car, mow the lawn, or weed the garden. It cannot tell jokes and it does not laugh. It cannot recognize pictures of cats or identify faces. It cannot play Go or Chess. IBM is hoping it can assist in answering medical questions. We know it can win at Jeopardy! Watson does what all AI systems do: It captures and replays just one human ability. 

The Real Threat Is Machine Incompetence, Not Intelligence -

Michael Byrne - Feb 6 2017

Excerpt: there is a real AI (Artificial Intelligence) threat, but it's not human-like machine intelligence gone amok. Quite the opposite: the danger is instead shitty AI. Incompetent, bumbling machines.,,

Bundy notes that most all of our big-deal AI successes in recent years are extremely narrow in scope. We have machines that can play Jeopardy and Go—at tremendous cost in both cases—but that's nothing like general intelligence. ,,,

He goes on to argue that AI will continue to develop in siloed form, where new and impressive machines continue to scare doomsayers for their abilities within relatively narrow task domains while remaining "incredibly dumb" when it comes to everything else.

The risk remains the same as it was in the 1980s, where the public and policy-makers see machines being amazing within these narrow domains, while never seeing how badly they fail when tasks become general and start to approach the edges of human cognition. 

AI’s Language Problem

Machines that truly understand language would be incredibly useful. But we don’t know how to build them.

by Will Knight August 9, 2016

Excerpt: Systems like Siri and IBM’s Watson can follow simple spoken or typed commands and answer basic questions, but they can’t hold a conversation and have no real understanding of the words they use.,,,

“There’s no way you can have an AI system that’s humanlike that doesn’t have language at the heart of it,” ,,,

“It’s one of the most obvious things that set human intelligence apart.”,,,

Basically, Le’s program has no idea what it’s talking about. It understands that certain combinations of symbols go together, but it has no appreciation of the real world. It doesn’t know what a centipede actually looks like, or how it moves. It is still just an illusion of intelligence, without the kind of common sense that humans take for granted.,,,

Cognitive scientists like MIT’s Tenenbaum theorize that important components of the mind are missing from today’s neural networks, no matter how large those networks might be. 


Deep Neural Networks are Easily Fooled

Deep neural networks are easily fooled: High confidence predictions for unrecognizable images - 2015

Artificial Intelligence debunked in one short paragraph:

Your Computer Doesn't Know Anything - Michael Egnor - January 23, 2015

Excerpt: Your computer doesn’t know a binary string from a ham sandwich. Your math book doesn’t know algebra. Your Rolodex doesn’t know your cousin’s address. Your watch doesn’t know what time it is. Your car doesn’t know where you’re driving. Your television doesn’t know who won the football game last night. Your cell phone doesn’t know what you said to your girlfriend this morning.

People know things. Devices like computers and books and Rolodexes and watches and cars and televisions and cell phones don’t know anything. They don’t have minds. They are artifacts — paper and plastic and silicon things designed and manufactured by people — and they provide people with the means to leverage their human knowledge.

Computers (and books and watches and the like) are the means by which people leverage and express knowledge. Computers store and process representations of knowledge. But computers have no knowledge themselves. 

Consciousness provides ‘top down’ context. An exchange:

Excerpt: Mapou: Certainly. However, while importance is not determined bottom-up, there is no question that learning is bottom up and is done by the brain.

Box: Wrong. There is no rational activity that is a bottom-up process. Understanding anything—foundational to any rational activity— is not possible without context. In fact “understanding” is to place something in a context—top-down.

Box: It follows that any bottom-up (non-contextual) process is not rational. To be sure, the brain is chemistry and chemistry can be neither learning nor thinking.

Mapou: "This is how context is built."

Box: Context is not being built, but is foundational to rationality. Consciousness is the ultimate context.

Mapou: "I do agree that the importance of things is not determined by the brain but by the spirit. But that is not thinking, IMO. That is just a selection process by a top-down authority."

Box: It must be thinking. Determining the importance of e.g. an argument is an irrational act if it is not based on reason. So, if the spirit is unthinking (irrational) then it is not capable of selecting / determining / weighing the importance of an argument.

Mapou: Thinking is goal-driven planning and goal-oriented behavior. This is what the brain does under the supervision of the spirit.

Box: Matter has no goals. And an irrational unthinking spirit has no goals either.

Mapou: The spirit is helpless without the brain.

Box: Your unthinking spirit is helpless no matter what. Luckily for us, your theory doesn’t make sense and our spirits are profoundly rational. 

The Closing of the Scientific Mind - David Gelernter - January 1, 2014

Excerpt: The Flaws.

But the master analogy—between mind and software, brain and computer—is fatally flawed. It falls apart once you mull these simple facts:

1. You can transfer a program easily from one computer to another, but you can’t transfer a mind, ever, from one brain to another.

2. You can run an endless series of different programs on any one computer, but only one “program” runs, or ever can run, on any one human brain.

3. Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me.

4. Computers can be erased; minds cannot.

5. Computers can be made to operate precisely as we choose; minds cannot.

There are more. Come up with them yourself. It’s easy. 

Complexity by Subtraction Was Weird Enough; Now, Complexity by Harmful Mutations - Sept. 2, 2013

Excerpt: Naturally, Lenski's lab ignored the critique of Avida published in an IEEE paper in 2009 by Ewert, Dembski and Marks.

How Information Theory Is Taking Intelligent Design Mainstream - William Dembski - Casey Luskin - podcast/video 

The Evolutionary Informatics Lab: Putting Intelligent Design Predictions to the Test - Casey Luskin - February, 2012

Excerpt: The work of the Evolutionary Informatics Lab demonstrates that ID proponents are capable of producing innovative techniques for tackling questions related to intelligent design and evolution. First, the lab developed a methodology for studying the degree to which information is smuggled into evolutionary algorithms. Then, the researchers applied that methodology to various well-known programs like ev, Avida, and Dawkins' "Weasel Simulation," and successfully identified sources of "active information" in each. As the lab's website promised, their research has shown that even the best efforts of ID-critics cannot escape the fact that intelligence is required to generate new information.

Climbing the Steiner Tree--Sources of Active Information in a Genetic Algorithm for Solving the Euclidean Steiner Tree Problem - 2012 - Winston Ewert, William A Dembski, Robert J Marks II 

Applied Darwinism: A New Paper from Bob Marks (W. Dembski) and His Team, in BIO-Complexity - Doug Axe - 2012

Excerpt: Furthermore, if you dig a bit beyond these papers and look at what kinds of problems this technique (Steiner Tree) is being used for in the engineering world, you quickly find that it is of extremely limited applicability. It works for tasks that are easily accomplished in a huge number of specific ways, but where someone would have to do a lot of mindless fiddling to decide which of these ways is best.,, That's helpful in the sense that we commonly find computers helpful -- they do what we tell them to do very efficiently, without complaining. But in biology we see something altogether different. We see elegant solutions to millions of engineering problems that human ingenuity cannot even begin to solve.

Time and Information in Evolution: Winston Ewert, William A. Dembski, Ann K. Gauger, and Robert J. Marks II - December 2012 

Here are all the main publications (which are linked) at evoinfo lab:

Main Publications - Evolutionary Informatics 

Here is the search for a search paper:

The Search for a Search: Measuring the Information Cost of Higher Level Search William A. Dembski and Robert J. Marks II

Abstract: Needle-in-the-haystack problems look for small targets in large spaces. In such cases, blind search stands no hope of success. Conservation of information dictates any search technique will work, on average, as well as blind search. Success requires an assisted search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level “search for a search” is any easier than the original search. We prove two results: (1) The Horizontal No Free Lunch Theorem, which shows that average relative performance of searches never exceeds unassisted or blind searches, and (2) The Vertical No Free Lunch Theorem, which shows that the difficulty of searching for a successful search increases exponentially with respect to the minimum allowable active information being sought. 
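The averaging claim behind the Horizontal No Free Lunch Theorem can be illustrated with a toy calculation. This sketch is my own illustration, not code from the paper: in a space of N items containing one target drawn uniformly, any fixed query order, however "clever" it looks, needs exactly the same expected number of queries as a blindly shuffled order.

```python
import random

# Toy illustration of "averaged over all targets, no fixed search
# outperforms blind search": search a space of N items for one target.
N = 8
space = list(range(N))

def queries_to_find(order, target):
    """Number of queries a fixed search order needs to hit the target."""
    return order.index(target) + 1

clever = [3, 0, 7, 1, 5, 2, 6, 4]  # some fixed "informed" query order
blind = space[:]
random.shuffle(blind)              # a blind (randomly shuffled) order

# Average performance over ALL possible targets (uniform prior):
avg_clever = sum(queries_to_find(clever, t) for t in space) / N
avg_blind = sum(queries_to_find(blind, t) for t in space) / N

print(avg_clever, avg_blind)  # both equal (N + 1) / 2 = 4.5
```

Both averages come out to exactly (N + 1) / 2, because every fixed order is just a permutation of the space: an order can only beat that average on some targets by doing correspondingly worse on others. Outperforming blind search therefore requires information about which targets are more likely, which is the point of the abstract.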

Before They've Even Seen Stephen Meyer's New Book, Darwinists Waste No Time in Criticizing Darwin's Doubt - William A. Dembski - April 4, 2013

Excerpt: In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled "Conservation of Information Made Simple" (go here). ,,,

,,, Here are the two seminal papers on conservation of information that I've written with Robert Marks:

"The Search for a Search: Measuring the Information Cost of Higher-Level Search," Journal of Advanced Computational Intelligence and Intelligent Informatics 14(5) (2010): 475-486

"Conservation of Information in Search: Measuring the Cost of Success," IEEE Transactions on Systems, Man and Cybernetics A, Systems & Humans, 5(5) (September 2009): 1051-1061

For other papers that Marks, his students, and I have done to extend the results in these papers, visit the publications page at   

Dr. Werner Gitt, starting around the 2:00 minute mark of the following video, touches on how the infinite regress of information implies Theism:

Dr. Werner Gitt, Ph.D. – ”In The Beginning was Information” Part 3 of 3 – video

(Before retiring, Dr. Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), where he headed the Department of Information Technology.)

William A. Dembski, "Conservation of Information in Evolutionary Search" - (2014) video 

Conservation of Information and the End of Materialism - William Dembski, PhD - 2014 - video 

Responding to My Talk at the University of Chicago, Joe Felsenstein's Argument by Misdirection - William A. Dembski - October 7, 2014

Excerpt: Because Felsenstein's critique bears no resemblance to what I was actually doing in my University of Chicago talk, let me summarize what I did say there.

Briefly, I started by assuming that if biological evolution is to be an exact science, then it must be possible to model it on search. I then considered search at its most general, laying out its key components. I then presented the key result of Conservation of Information, namely, that for any search space with a target of small probability p, if one wants a search that will find that same target with a probability q (greater than p), the probabilistic cost of finding such a search is at least p/q. What this means is that, at the end of the day, one hasn't gained anything because if finding such a search has probability p/q or less, and then once one has found such a search, one only has q probability of finding the target, then the total probability of finding the target with such a staggered search is still p or less. This result, I argued, holds with perfect generality. It does not assume anything about the nature of the fitness surfaces, or working off a full set of genotypes, or any other such limitation on search as Felsenstein suggests.

Let me urge fair-minded people who have read Felsenstein's criticisms to listen to my actual talk and then read the three papers cited. Alternatively, if such fair-minded individuals lack the technical background to appreciate these papers, let them read the last few chapters of Being as Communion, which summarizes the significance of these papers for evolutionary theory in a more user-friendly way. 
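
The probabilistic accounting in Dembski's summary above can be written in one line (standard probability; the notation, with p the blind-search success probability and q the assisted-search success probability, is mine, not a quotation from the talk):

```latex
\Pr[\text{find target via the two-stage search}]
  \;\le\; \underbrace{\tfrac{p}{q}}_{\text{probability of finding the assisted search}}
  \times \underbrace{q}_{\text{probability that it then succeeds}}
  \;=\; p .
```

That is, a search guaranteed to hit the target with probability q costs at least p/q to find, and the product of the two stages is no better than p, the probability of blind search alone.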

A.I. Has Grown Up and Left Home - Dec. 19, 2013

Excerpt: For example, genetic approaches represent algorithms with varying parameters as chromosomal “strings,” and “breed” successful algorithms with one another. These approaches do not improve through better understanding of the problem. All that matters is the fitness of the algorithm with respect to its environment—in other words, how the algorithm behaves. This black-box approach has yielded successful applications in everything from bioinformatics to economics, yet one can never give a concise explanation of just why the fittest algorithm is the most fit.

Neural networks are another successful subsymbolic technology, and are used for image, facial, and voice recognition applications. No representation of concepts is hardcoded into them, and the factors that they use to identify a particular subclass of images emerge from the operation of the algorithm itself. 

Of related note:

"Our experience-based knowledge of information-flow confirms that systems with large amounts of specified complexity (especially codes and languages) invariably originate from an intelligent source from a mind or personal agent."

(Stephen C. Meyer, "The origin of biological information and the higher taxonomic categories," Proceedings of the Biological Society of Washington, 117(2):213-239 (2004).)

Kurt Gödel - Incompleteness Theorem - video 

Alan Turing & Kurt Godel - Incompleteness Theorem and Human Intuition - video 

"Either mathematics is too big for the human mind or the human mind is more than a machine"

Kurt Gödel -  As quoted in Topoi : The Categorial Analysis of Logic (1979) by Robert Goldblatt, p. 13

Here is what Gregory Chaitin said, in 2011, about the limits of the computer program he was trying to develop to prove that Darwinian evolution was mathematically feasible:

At last, a Darwinist mathematician tells the truth about evolution - VJT - November 2011

Excerpt: In Chaitin’s own words, “You’re allowed to ask God or someone to give you the answer to some question where you can’t compute the answer, and the oracle will immediately give you the answer, and you go on ahead.”

Here is the video where, at the 30:00 minute mark, you can hear the preceding quote from Chaitin's own mouth in full context:

Life as Evolving Software, Greg Chaitin at PPGC UFRGS 

Moreover, at the 40:00 minute mark of the video Chaitin readily admits that Intelligent Design is the best possible way to get evolution to take place, and at the 43:30 minute mark he tells of a friend pointing out that the evolutionary computer model Chaitin devised does not have enough time to work. Chaitin agreed that his friend had a point, though in the end he merely ‘wants’ his mathematical model to be true, without ever proving it!

Chaitin is quoted by Marks at the 10:00 minute mark of the following video in regard to Darwinism's lack of a mathematical proof. Dr. Marks also comments on Chaitin's honesty in personally admitting that his long-sought mathematical proof for Darwinian evolution failed to deliver the goods he thought it had.

On Algorithmic Specified Complexity by Robert J. Marks II - 2014 - video 

Here is the paper that Marks confronted Chaitin with:

Active Information in Metabiology - Winston Ewert, William A. Dembski,  Robert J. Marks II - 2013

Excerpt: Introduction: Chaitin’s description of metabiology [3] is casual, clear, compelling, and mind-bending. Yet in the end, although the mathematics is beautiful, our analysis shows that the metabiology model parallels other attempts to illustrate undirected Darwinian evolution using computer models [10–13]. All of these models depend on the principle of conservation of information [14–21], and all have been shown to incorporate knowledge about the search derived from their designers; this knowledge is measurable as active information [14,22–25].

Excerpt, page 9: Chaitin states [3], “For many years I have thought that it is a mathematical scandal that we do not have proof that Darwinian evolution works.” In fact, mathematics has consistently demonstrated that undirected Darwinian evolution does not work.
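
The “active information” measured in the paper above has a fixed definition in the Ewert–Dembski–Marks framework: with p the probability that blind search hits the target and q the probability that the assisted search does,

```latex
I_\Omega = -\log_2 p \quad\text{(endogenous information)}, \qquad
I_S = -\log_2 q \quad\text{(exogenous information)}, \qquad
I_+ = I_\Omega - I_S = \log_2\frac{q}{p} \quad\text{(active information, in bits)} .
```

Active information is thus the improvement of the assisted search over blind search, expressed in bits.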

podcast: "Dr. Robert Marks: Active Information in Metabiology" - May 2014 

Dr. Robert Marks: Active Information in Metabiology - video 

Related quotes from Chaitin:  

The Limits Of Reason – Gregory Chaitin – 2006

Excerpt: Unlike Gödel’s approach, mine is based on measuring information and showing that some mathematical facts cannot be compressed into a theory because they are too complicated. This new approach suggests that what Gödel discovered was just the tip of the iceberg: an infinite number of true mathematical theorems exist that cannot be proved from any finite system of axioms. 

“To the skeptic, the proposition that the genetic programmes of higher organisms, consisting of something close to a thousand million bits of information, equivalent to the sequence of letters in a small library of one thousand volumes, containing in encoded form countless thousands of intricate algorithms controlling, specifying and ordering the growth and development of billions and billions of cells into the form of a complex organism, were composed by a purely random process is simply an affront to reason. But to the Darwinist the idea is accepted without a ripple of doubt – the paradigm takes precedence!”

Michael Denton, Evolution: A Theory In Crisis, London: Burnett Books, 1985, p. 351

“For many years I have thought that it is a mathematical scandal that we do not have a proof that Darwinian evolution works.”

Gregory Chaitin

Dennett on Competence without Comprehension – William A. Dembski – June 2012

Excerpt: As it turns out, there are problems in mathematics that can be proved to be beyond resolution by any algorithm (e.g., the halting problem). 


Excerpt: In 1936 Turing proposed a universal mechanism for performing any and all computations, since dubbed a Turing machine. In the last seventy-plus years, many other formal systems have been proposed for performing any and all computations (cellular automata, neural nets, unlimited register machines, etc.), and they've all been shown to perform the same -- no less and no more -- computations as Turing's originally proposed machine.,,,

Something is a Turing machine if it has a "tape" that extends infinitely in both directions, with the tape subdivided into identical adjacent squares, each of which can have written on it one of a finite alphabet of symbols (usually just zero and one). In addition, a Turing machine has a "tape head," that can move to the left or right on the tape and erase and rewrite the symbol that's on a current square. Finally, what guides the tape head is a finite set of "states" that, given one state, looks at the current symbol, keeps or changes it, moves the tape head right or left, and then, on the basis of the symbol that was there, makes active another state. In modern terms, the states constitute the program and the symbols on the tape constitute data.

From this it's obvious that a Turing machine can do nothing unless it is properly programmed to do so.,,,

Once a Turing machine is properly programmed, it will produce the solution to any computational problem. But humans -- read "intelligent designers" -- invariably do the programming. Turing, far from having obviated the "trickle-down theory of intelligence," actually underscores its preeminent role in the field of computation. 
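
The machine Dembski describes can be put in a few lines of code. Below is a minimal sketch of a simulator: a tape (blank squares read as 0), a head that moves left or right, and a finite state table that constitutes the program. The example program, a binary incrementer, is my own illustration, not from Dembski's article.

```python
# Minimal Turing machine simulator: sparse tape, read/write head, state table.
def run(program, tape, state="start", head=0, max_steps=10_000):
    """program maps (state, symbol) -> (symbol_to_write, 'L' or 'R', next_state)."""
    cells = dict(enumerate(tape))      # sparse tape, effectively infinite; blanks read as 0
    for _ in range(max_steps):
        if state == "halt":
            break
        written, move, state = program[(state, cells.get(head, 0))]
        cells[head] = written
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return [cells[i] for i in range(lo, hi + 1)]

# Example program: increment a binary number, head starting on its rightmost bit.
increment = {
    ("start", 1): (0, "L", "start"),   # 1 + carry -> 0, carry continues left
    ("start", 0): (1, "R", "halt"),    # 0 + carry -> 1, done
}
print(run(increment, [1, 0, 1, 1], head=3))   # 1011 + 1 = 1100 -> [1, 1, 0, 0]
```

As the excerpt says, the state table is doing all the work: with an empty or wrong program the machine computes nothing.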

Algorithmic Information Theory, Free Will and the Turing Test – Douglas S. Robertson

Excerpt: For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomenon: the creation of new information.

The Turing Test is Dead. Long Live the Lovelace Test. - Robert J. Marks II - July 3, 2014 

Of related interest, robots learn to lie to each other by 'losing' information:

The evolution of information suppression in communicating robots with conflicting interests - 2009

Excerpt: We can read in the abstract that: Because robots were competing for food, they were quickly selected to conceal this information. However, they never completely ceased to produce information. Detailed analyses revealed that this somewhat surprising result was due to the strength of selection on suppressing information declining concomitantly with the reduction in information content. 

Of related interest:

Princeton Philosophy Prof Dr. Hans Halvorson speaks on “Quantum Mechanics and Mind” – video 

Of note from the preceding video: Introducing quantum information into multiplayer games allows a new type of equilibrium strategy which is not found in traditional (classical) games. The entanglement of players' choices can have the effect of a contract by preventing players from profiting from betrayal.

Robert Marks on Avida and ev - video - 6:00 minute mark

Information. What is it? - Robert Marks - lecture video (With special reference to ev, AVIDA, and WEASEL)

If Darwinism is true then why can’t we design  ‘super’ programs for computers with it?

Why doesn’t software industry use evolution? - niwrad - Oct. 21, 2013

Excerpt: Computer aided evolution speed:

Consider a single 10^15 flops computer and suppose, for the sake of argument, that a program “mutation” needs an equivalent of 1000 floating-point operations. We get a computer aided evolution speed (CAES) = 10^12 mutations / sec.

Since, according to Darwin, unguided biological evolution was able to spontaneously produce all 500 million species on earth (from bacteria to man) in 3 billion years (biological evolution time = BET), computer aided evolution could automatically produce software containing an equivalent overall amount of functional complex specified information in what we call “computer aided evolution time” (CAET). In other words, we state that the product of “speed x time” is equal for biological evolution and for computer evolution:


CAET is then = (BET x BES) / CAES, where BES is the biological evolution speed (in mutations per second)

in numbers:

CAET = (3×10^9 x 1250) / 10^12 = 3.75 years

Evolution applied to software programming would produce software equivalent to the organizational information that present and past organisms contain in less than 4 years. Why, then, don’t software houses save billions of dollars in employee costs by applying Darwinian evolution to software creation?

My short answer: because Darwinian evolution accomplishes exactly nothing when the goal is to create systems. It is in principle incapable of creating even the simplest system. If it were capable of doing so even a little, software producers would use it. To put it differently, if Charles Darwin were right, Bill Gates would be far richer than he is…
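
Niwrad's arithmetic above checks out as arithmetic; the sketch below just reproduces it. Every input is the post's stipulated assumption, not a measured quantity (in particular the BES figure of 1250 mutations/sec, which the post uses in its final formula).

```python
# Back-of-envelope check of niwrad's "computer aided evolution time" (CAET).
flops = 1e15                        # assumed computer speed, floating-point ops/sec
flops_per_mutation = 1_000          # assumed cost of one program "mutation"
CAES = flops / flops_per_mutation   # computer aided evolution speed: 1e12 mutations/sec

BET = 3e9                           # biological evolution time, in years
BES = 1250                          # biological evolution speed, mutations/sec (the post's figure)

# Equating "speed x time" for both processes: BET * BES = CAET * CAES
CAET = BET * BES / CAES
print(CAET)                         # 3.75 (years)
```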

Dr. David Berlinski: Random Mutations (to computer programs?) - video 

Estimating Active Information in Adaptive Mutagenesis

From David Tyler: How do computer simulations of evolution relate to the real world? - October 2011

Excerpt: These programs ONLY work because, as their authors admit, they have pre-designed goals and fitness functions which were breathed into the program by intelligent designers. The only thing truly going on is the misuse and abuse of intelligence itself.

Conservation of Information in Computer Search (COI) - William A. Dembski - Robert J. Marks II - Dec. 2009

Excerpt: COI puts to rest the inflated claims for the information generating power of evolutionary simulations such as Avida and ev.

Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism - Dembski - Marks - Dec. 2009

Excerpt: The effectiveness of a given algorithm can be measured by the active information introduced to the search. We illustrate this by identifying sources of active information in Avida, a software program designed to search for logic functions using nand gates. Avida uses stair step active information by rewarding logic functions using a smaller number of nands to construct functions requiring more. Removing stair steps deteriorates Avida’s performance while removing deleterious instructions improves it.
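
The EQU function at issue can indeed be built from nand alone. The sketch below uses a standard textbook decomposition (five nands, which matches the minimum gate count reported for Avida's EQU task), not Avida's actual instruction sequence:

```python
# EQU (bitwise equality, i.e. XNOR) constructed from nothing but NAND,
# checked against the full truth table.
def nand(a, b):
    return 1 - (a & b)          # 0 only when both inputs are 1

def equ(a, b):
    t = nand(a, b)
    x = nand(nand(a, t), nand(b, t))   # four nands give XOR
    return nand(x, x)                  # a fifth nand (as NOT) gives EQU

for a in (0, 1):
    for b in (0, 1):
        assert equ(a, b) == (1 if a == b else 0)
print("EQU truth table verified")
```

The simpler functions (NOT, AND, OR, XOR) that fall out along the way are exactly the "stair steps" the excerpt says Avida rewards en route to EQU.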

New paper using the Avida “evolution” software shows it doesn’t evolve. - May 2011

No evidence that there is enough time for evolution - Lee Spetner - May 2011

Excerpt: Thus their conclusion that “there’s plenty of time for evolution” is unsubstantiated. The probability calculation to justify evolutionary theory remains unaddressed.

The effects of low-impact mutations in digital organisms (Testing Avida using realistic biological parameters) - Chase W Nelson and John C Sanford

The Problem of Information for the Theory of Evolution – debunking Schneider's ev computer simulation

Excerpt: In several papers genetic binding sites were analyzed using a Shannon information theory approach. It was recently claimed that these regulatory sequences could increase information content through evolutionary processes starting from a random DNA sequence, for which a computer simulation was offered as evidence. However, incorporating neglected cellular realities and using biologically realistic parameter values invalidate this claim. The net effect over time of random mutations spread throughout genomes is an increase in randomness per gene and decreased functional optimality.

The Evolutionary Dynamics of Digital and Nucleotide Codes: A Mutation Protection Perspective

William DeJong and Hans Degens Open Evolution Journal, February 2011,

Abstract: Both digital codes in computers and nucleotide codes in cells are protected against mutations. Here we explore how mutation protection affects the random change and selection of digital and nucleotide codes. We illustrate our findings with a computer simulation of the evolution of a population of self replicating digital amoebae. We show that evolutionary programming of digital codes is a valid model for the evolution of nucleotide codes by random change within the boundaries of mutation protection, not for evolution by unbounded random change. Our mutation protection perspective enhances the understanding of the evolutionary dynamics of digital and nucleotide codes and its limitations, and reveals a paradox between the necessity of dysfunctioning mutation protection for evolution and its disadvantage for survival. Our mutation protection perspective suggests new directions for research into mutational robustness.

The following is a short informative video that accompanies the preceding paper

Contradiction in evolutionary theory - video (The contradiction between extensive DNA repair mechanisms and the necessity of 'random mutations/errors' to DNA for Darwinian evolution to be feasible)

The Capabilities of Chaos and Complexity - David L. Abel

Excerpt: "To stem the growing swell of Intelligent Design intrusions, it is imperative that we provide stand-alone natural process evidence of non trivial self-organization at the edge of chaos. We must demonstrate on sound scientific grounds the formal capabilities of naturally-occurring physicodynamic complexity. Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products. The latter intrusions into natural process clearly violate sound evolution theory."

Constraints vs. Controls - Abel - 2010

Excerpt: Classic examples of the above confusion are found in the faulty-inference conclusions drawn from many so-called “directed evolution,” “evolutionary algorithm,” and computer-programmed “computational evolutionary” experimentation. All of this research is a form of artificial selection, not natural selection. Choice for potential function at decision nodes, prior to the realization of that function, is always artificial, never natural.

Arriving At Intelligence Through The Corridors Of Reason (Part II) - April 2010

Excerpt: Summarizing the status quo, Johnson notes for example how AVIDA uses “an unrealistically small genome, an unrealistically high mutation rate, unrealistic protection of replication instructions, unrealistic energy rewards and no capability for graceful function degradation. It allows for arbitrary experimenter-specified selective advantages”. Not faring any better, the ME THINKS IT IS LIKE A WEASEL algorithm is programmed to direct a sequence of letters towards a pre-specified target. 

What Is a Mind? More Hype from Big Data - Erik J. Larson - May 6, 2014

Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, "Understanding Natural Language," about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland's article is one of my all time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required "holistic interpretation." That is, the ambiguities weren't resolvable except by taking a broader context into account. The words by themselves weren't enough.

Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal "test" to see if his claims were still valid today.,,,

,,,Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland's point about machine translation made afresh, in 2014.

Erik J. Larson - Founder and CEO of a software company in Austin, Texas   

The following site has some easy examples of the types of questions that would trip a computer up in a Turing test:

Artificial Intelligence or intelligent artifices? - June 3, 2013 

Moreover, since a computer has no free will with which to create new information, nor consciousness with which to take the overall context of information into consideration, one simple way of defeating the Turing test is simply to tell, or to invent, a new joke:,,,

“(a computer) lacks the ability to distinguish between language and meta-language.,,,

As is well known, jokes are difficult to understand and even more difficult to invent, given their subtle semantic traps and their complex linguistic squirms. The judge can reliably tell the human (from the computer)”

Per niwrad 

Such as this joke:

Turing Test Extra Credit – Convince The Examiner That He’s The Computer – cartoon 

or perhaps this one:

Turing Test - cartoon 

For Artificial Intelligence, Humor Is a Bridge Too Far - November 13, 2014

Excerpt: Thoughtful reader Paul comments on Erik Larson's post "Yes, 'We've Been Wrong About Robots Before,' and We Still Are":

“The article reminded me of an exercise in one of my first programming books that made me aware of the limits of computers and AI. I've forgotten the author of the book, but the problem was something like the following: "Write a program that takes in a stream of characters that represent a joke, reads the input and decides whether it's funny or not."

It's a perfect illustration of Erik's statement, "Interestingly, where brute computation and big data fail is in surprisingly routine situations that give humans no difficulty at all." Even when my grandchildren were very young I marveled at how they grasped the humor of a joke, even a subtle one.”

Yes, when a computer can identify, tell, or -- even better -- come up with a good joke, I'll look a little less skeptically on claims of machines soon surpassing us other than in, as Erik Larson writes, "brute-force computation of circumscribed tasks." 

Algorithmic Information Theory, Free Will and the Turing Test - Douglas S. Robertson

Excerpt: Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomenon: the creation of new information.

,,,The basic problem concerning the relation between AIT (Algorithmic Information Theory) and free will can be stated succinctly: Since the theorems of mathematics cannot contain more information than is contained in the axioms used to derive those theorems, it follows that no formal operation in mathematics (and equivalently, no operation performed by a computer) can create new information. 

At last, a Darwinist mathematician tells the truth about evolution - November 2011

Excerpt: 7. Chaitin looks at three kinds of evolution in his toy model: exhaustive search (which stupidly performs a search of all possibilities in its search for a mutation that would make the organism fitter, without even looking at what the organism has already accomplished), Darwinian evolution (which is random but also cumulative, building on what has been accomplished to date) and Intelligent Design (where an Intelligent Being selects the best possible mutation at each step in the evolution of life). All of these – even exhaustive search – require a Turing oracle for them to work – in other words, outside direction by an Intelligent Being. In Chaitin’s own words, “You’re allowed to ask God or someone to give you the answer to some question where you can’t compute the answer, and the oracle will immediately give you the answer, and you go on ahead.”

8. Of the three kinds of evolution examined by Chaitin, Intelligent Design is the only one guaranteed to get the job done on time. Darwinian evolution is much better than performing an exhaustive search of all possibilities, but it still seems to take too long to come up with an improved mutation.

Also per Chaitin: the Oracle must possess infinite information for ‘unlimited evolution’ of an evolutionary algorithm; i.e., the Oracle must be God!
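
Chaitin's three regimes can be mimicked on a toy problem. This is a hedged sketch of the ranking he reports, not his metabiology model (that one mutates programs and consults a halting-problem oracle): evolving a 16-bit string to all ones, an oracle-guided "designed" search finishes in exactly N steps, cumulative "Darwinian" search takes roughly N·ln N random trials, and exhaustive search may need up to 2^N evaluations.

```python
# Toy comparison of Chaitin's three search regimes on a 16-bit target.
import random

N = 16
TARGET = (1 << N) - 1                   # the all-ones string

def fitness(g):
    return bin(g).count("1")            # number of correct (one) bits

def darwinian(seed=0, max_evals=10**6):
    """Random but cumulative: keep a mutation only if it improves fitness."""
    rng = random.Random(seed)
    g, evals = 0, 0
    while g != TARGET and evals < max_evals:
        trial = g ^ (1 << rng.randrange(N))    # flip one random bit
        evals += 1
        if fitness(trial) > fitness(g):
            g = trial
    return evals

def designed():
    """Oracle-guided: at each step an oracle hands over the best single mutation."""
    g, steps = 0, 0
    while g != TARGET:
        g = max((g ^ (1 << i) for i in range(N)), key=fitness)  # the oracle's answer
        steps += 1
    return steps

# Exhaustive search would need up to 2**16 = 65,536 evaluations; the designed
# search needs exactly N = 16 steps; Darwinian search falls in between,
# typically around N * ln(N) random trials.
```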

"Computer simulations of Darwinian evolution fail when they are honest and succeed only when they are not."

David Berlinski

GAs (Genetic Algorithms) come in different flavours. Some, like the weasel, are only ill-inspired propaganda. Others are useful and serious computational tools. But no human-made GA says anything about the “spontaneous” GA which is modeled in neo-Darwinism. So, if Darwinists want to show what their model can really do, they should analyze the RV (random variation) + NS (natural selection) algorithm, and not others which are completely different.

In computer science we recognize the algorithmic principle described by Darwin - the linear accumulation of small changes through random variation - as hill climbing, more specifically random mutation hill climbing. However, we also recognize that hill climbing is the simplest possible form of optimization and is known to work well only on a limited class of problems.

Watson R.A. - 2006 - Compositional Evolution - MIT Press - Pg. 272
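
Watson's point, that hill climbing works well only on a limited class of problems, shows up already on two textbook landscapes: "onemax" is smooth and the climber goes straight to the optimum, while the deceptive "trap" function leads it away from the global optimum. These landscapes are standard illustrations, not examples from Watson's book.

```python
# Random mutation hill climbing (RMHC) on two landscapes over 20-bit strings.
import random

N = 20

def onemax(g):                      # smooth landscape: fitness = number of ones
    return sum(g)

def trap(g):                        # deceptive: global optimum at all ones,
    u = sum(g)                      # but the gradient points toward all zeros
    return N if u == N else N - 1 - u

def rmhc(fit, start, steps=5000, seed=1):
    rng = random.Random(seed)
    g = list(start)
    for _ in range(steps):
        trial = list(g)
        trial[rng.randrange(N)] ^= 1       # flip one random bit
        if fit(trial) > fit(g):            # keep strict improvements only
            g = trial
    return fit(g)

print(rmhc(onemax, [0] * N))               # 20: climbs straight to the optimum
print(rmhc(trap, [1] * 10 + [0] * 10))     # 19: stuck at the all-zeros local optimum
```

On the trap landscape every uphill move reduces the number of ones, so the climber converges on the local optimum (fitness 19) and can never accept the mutations needed to reach the global optimum (fitness 20).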

Dennett’s Algorithm: An Exercise in Circularity (What survives survives) - Tom Bethell October 20, 2014 

Evolutionary Computation: A Perpetual Motion Machine for Design Information? By Robert J. Marks II

Final Thoughts: Search spaces require structuring for search algorithms to be viable. This includes evolutionary search for a targeted design goal. The added structure information needs to be implicitly infused into the search space and is used to guide the process to a desired result. The target can be specific, as is the case with a precisely identified phrase; or it can be general, such as meaningful phrases that will pass, say, a spelling and grammar check. In any case, there is yet no perpetual motion machine for the design of information arising from evolutionary computation. 

In the following podcast, Robert Marks gives a very informative talk as to the strict limits we can expect from any evolutionary computer program (evolutionary algorithm):

Darwin as the Pinball Wizard: Talking Probability with Robert J. Marks II - video 

How Information Theory Is Taking Intelligent Design Mainstream - William Dembski PhD 

7:00 minute mark

 "For many years I thought that it is a mathematical scandal that we do not have a proof that Darwinian evolution works."

Gregory Chaitin - Proving Darwin 2012 - Highly Respected Mathematician

Darwin as the Pinball Wizard: Talking Probability with Robert Marks - podcast

Here are a few quotes from Robert Marks from the preceding podcast, as well as link to further quotes by Dr. Marks:

* [Computer] programs to demonstrate Darwinian evolution are akin to a pinball machine. The steel ball bounces around differently every time but eventually falls down the little hole behind the flippers.

* It's a lot easier to play pinball than it is to make a pinball machine.

* Computer programs, including all of the models of Darwinian evolution of which I am aware, perform the way their programmers intended. Doing so requires that the programmer infuse information about the program's goal. You can't write a good program without [doing so].

Robert J. Marks II - Distinguished Professor of Electrical and Computer Engineering at Baylor University

Adaptive Robots: Yet More Evidence for Evolution? - November 2010

Excerpt: It may have been a nifty bit of engineering work, but this is hardly evolution in action. If you randomize aspects of pre-supplied functionality, and select for certain outcomes, then you will end up with those outcomes.

Do Robots Have Feelings? Dr. Rosalind Picard (MIT) at The Veritas Forum at Rice - video

podcast - Dr. Neil Steiner: Comparing Natural and Human-Engineered Systems 

Listen in as Casey Luskin talks with Dr. Neil Steiner, an engineer who works on computer and engineering research with the Information Sciences Institute at the University of Southern California. Dr. Steiner offers his expertise to give unique insight into the debate over intelligent design and evolution, comparing natural biological systems to human-designed technology.

Can a Computer Think? - Michael Egnor - March 31, 2011

Excerpt: The Turing test isn't a test of a computer. Computers can't take tests, because computers can't think. The Turing test is a test of us. If a computer "passes" it, we fail it. We fail because of our hubris, a delusion that seems to be something original in us. The Turing test is a test of whether human beings have succumbed to the astonishingly naive hubris that we can create souls.,,, It's such irony that the first personal computer was an Apple.


Do Computers Store Memories? - December 10, 2014

Excerpt: "A singular consequence of the materialist-mechanical metaphysics that permeates our culture and our sciences is that we commonly hold basic beliefs that are abject nonsense. One such belief is the almost ubiquitous one -- among ordinary folks as well as neuroscientists and surprisingly many philosophers -- that the brain "stores" memories. The fact is that the brain doesn't store memories, and can't store memories.

The reality is that computers store electrons, not memories. Memories are psychological things that pertain only to man (and animals), not to machines.

We use computers -- and books and file cabinets and rolodexes, etc. -- as tools to cue our own memories. There are no memories in a computer, except in a metaphorical sense. Computers are devices made of metal and electrons that we configure to aid our own memories. Computers no more have memories than chessboards play chess or televisions watch sitcoms or cameras look at pictures or CD players listen to music." 

Your Computer Can't Remember a Darned Thing - December 12, 2014

Excerpt: The attribution of memory — in a psychological sense — to a computer is just nonsense, a bizarre confusion of metaphor and reality. A computer no more has memory, in the sense of remembering things, than you can catch a train at your computer terminal.

So a computer itself doesn’t have memories, in the sense of remembering anything. But can a computer store memories? Of course not. Memories are not the kind of things for which the verb "store" has any sense. Nothing — neither we nor a computer — can store a psychological thing. "I can’t store any more memories in my psychology, because I’m already full of propositions" doesn’t even make sense. I can have memories, I can like or dislike memories, I can tell other people about my memories, but I can’t store memories. And of course, neither can my computer store memories. My computer can store electrons, or data understood as patterns of electrons on the hard drive. But memories can’t be stored on computers, because memories can’t be stored at all. The assertion is nonsense.

Now, what is true is that representations of memories can be stored on a computer. 

Your Computer Doesn't Know Anything - January 23, 2015

Excerpt: Your computer doesn’t know a binary string from a ham sandwich. Your math book doesn’t know algebra. Your Rolodex doesn’t know your cousin’s address. Your watch doesn’t know what time it is. Your car doesn’t know where you’re driving. Your television doesn’t know who won the football game last night. Your cell phone doesn’t know what you said to your girlfriend this morning. ¶ People know things. Devices like computers and books and Rolodexes and watches and cars and televisions and cell phones don’t know anything. They don’t have minds. They are artifacts — paper and plastic and silicon things designed and manufactured by people — and they provide people with the means to leverage their human knowledge. ¶ Computers (and books and watches and the like) are the means by which people leverage and express knowledge. Computers store and process representations of knowledge. But computers have no knowledge themselves. 

Modular Biological Complexity - Christof Koch - August 2012

Summary: It has been argued that the technological capability to fully simulate the human brain on digital computers will exist within a decade. This is taken to imply that we will comprehend its functioning, eliminate all diseases, and “upload” ourselves to computers (1). Although such predictions excite the imagination, they are not based on a sound assessment of the complexity of living systems. Such systems are characterized by large numbers of highly heterogeneous components, be they genes, proteins, or cells. These components interact causally in myriad ways across a very large spectrum of space-time, from nanometers to meters and from microseconds to years. A complete understanding of these systems demands that a large fraction of these interactions be experimentally or computationally probed. This is very difficult.,,,

This is bad news. Consider a neuronal synapse -- the presynaptic terminal has an estimated 1000 distinct proteins. Fully analyzing their possible interactions would take about 2000 years. Or consider the task of fully characterizing the visual cortex of the mouse -- about 2 million neurons. Under the extreme assumption that the neurons in these systems can all interact with each other, analyzing the various combinations will take about 10 million years..., even though it is assumed that the underlying technology (in computers used to try to understand the biological interactions) speeds up by an order of magnitude each year. ,,,

Improved technologies for observing and probing biological systems have only led to discoveries of further levels of complexity that need to be dealt with. This process has not yet run its course. We are far away from understanding cell biology, genomes, or brains, and turning this understanding into practical knowledge. 
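Koch's combinatorial point can be made concrete: the number of distinct pairwise interactions among n components is C(n, 2), which grows quadratically. A quick sketch, using the component counts quoted in the excerpt (the variable names are illustrative):

```python
from math import comb

def pairwise(n):
    # Number of distinct pairwise interactions among n components: C(n, 2).
    return comb(n, 2)

synapse_pairs = pairwise(1000)       # ~5.0e5 protein pairs in one presynaptic terminal
cortex_pairs = pairwise(2_000_000)   # ~2.0e12 neuron pairs in mouse visual cortex
```

Even before considering higher-order combinations, the pair count alone jumps by seven orders of magnitude between the two systems.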

Mathematicians Offer Elegant Solution to Evolutionary Conundrum

Excerpt: UBC researchers have proffered a new mathematical model that seeks to unravel a key evolutionary riddle–namely what factors underlie the generation of biological diversity both within and between species.,,, existing mathematical models that incorporate these ‘rare type’ advantages tend to have some serious shortcomings,”,,,

Translation: all the calculus-level math that has been taught to intimidated freshman biology students for decades does not explain the origination of biological information. Thus, massive bandages were applied to the existing evolutionary equations to hide the fact that evolution cannot explain the generation, nor the spread, of novel functional information in biological forms. Worse still, this 'new' mathematical model has not even been rigorously tested in the real world, though it is offered as an 'elegant solution'.

Here are a few computer programmer articles on the absurdity of Darwinism accounting for programming logic (active information):

Darwinism from an informatics point of view - May 2010

The Genius Behind the Ingenious - Evolutionary Computing

Excerpt: The field dedicated to this undertaking is known as evolutionary computing, and the results are not altogether encouraging for evolutionary biology.

Signature In The Cell - Review

Excerpt: There is absolutely nothing surprising about the results of these (evolutionary) algorithms. The computer is programmed from the outset to converge on the solution. The programmer designed it to do that. What would be surprising is if the program didn't converge on the solution. That would reflect badly on the skill of the programmer. Everything interesting in the output of the program came as a result of the programmer's skill, the information input. There are no mysterious outputs.

Software Engineer - quoted to Stephen Meyer

The Fairyland of Evolutionary Modeling - May 7, 2013

Excerpt: Salazar-Ciudad and Marín-Riera have shown that not only are suboptimal dead ends an evolutionary possibility, but they are also exceedingly likely to occur in real, developmentally complex structures when fitness is determined by the exact form of the phenotype. 

A Darwinian Enigma: Defending The Preposterous After Having Been Informed

Excerpt: I’m thoroughly familiar with Monte Carlo methods. Trial and error can be a useful tool in an intelligently designed computer program, given a limited search space, sufficient computational resources, and a goal in mind.

None of this has anything to do with extrapolating Monte Carlo methods in computation to the origin of information in biological systems.

Unsupported extrapolations such as this are the hallmark of Darwinian speculation, which is the antithesis of rigorous scientific investigation. -

Gil Dodgen - Programmer of 'Perfect Play Checkers'

World Championship Checkers - Perfect Play - Gil Dodgen
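Dodgen's point about Monte Carlo methods can be illustrated with the textbook example: trial and error inside a deliberately limited search space (the unit square) with a definite goal in mind (estimating pi from the quarter-circle area ratio). This is a minimal sketch, not from any of the cited programs; the seed is fixed so the run is repeatable.

```python
import random

def monte_carlo_pi(trials, seed=0):
    # Sample random points in the unit square; the fraction landing inside
    # the quarter circle of radius 1 approaches pi/4 as trials grow.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / trials
```

Note that everything doing the work here was designed in advance: the bounded domain, the membership test, and the known target quantity. The random trials only fill in a number the programmer already framed.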

Another reason why the human mind is not like a computer - June 2012

Excerpt: In computer chess, there is something called the “horizon effect”. It is an effect innate in the algorithms that underpin it. Due to the mathematically staggering number of possibilities, a computer by force has to restrict itself, to establish a fixed search depth. Otherwise the calculations would never end. This fixed search depth means that a ‘horizon’ comes into play, a horizon beyond which the software engine cannot peer.

Anand has shown time and again that he can see beyond this algorithm-imposed barrier, to find new ways and methods of changing the game. 
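The horizon effect described above can be sketched with a fixed-depth minimax search over a tiny hand-built game tree (all node names and evaluation values here are illustrative, not from any chess engine): at depth 1 the search prefers branch "a" because the disaster at "a2" lies just beyond its horizon; at depth 3 it sees past the cutoff and switches to "b".

```python
# Tiny game tree: root -> a -> a1 -> a2 (hidden disaster)
#                 root -> b -> b1 -> b2 (modest but safe)
GAME_TREE = {
    "root": ["a", "b"],
    "a": ["a1"], "a1": ["a2"],
    "b": ["b1"], "b1": ["b2"],
}
# Static evaluation from the maximizing player's point of view.
STATIC_EVAL = {"a": 5, "b": 1, "a1": 5, "b1": 1, "a2": -100, "b2": 2}

def minimax(node, depth, maximizing):
    children = GAME_TREE.get(node, [])
    if depth == 0 or not children:
        # The fixed search depth: evaluation stops at the horizon.
        return STATIC_EVAL.get(node, 0)
    values = [minimax(c, depth - 1, not maximizing) for c in children]
    return max(values) if maximizing else min(values)

def best_move(depth):
    # Root is the maximizing player's turn.
    return max(GAME_TREE["root"], key=lambda m: minimax(m, depth - 1, False))
```

With `best_move(1)` the engine chooses "a"; with `best_move(3)` it chooses "b". Real engines mitigate (but cannot eliminate) this with quiescence search and deeper lookahead.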

Evolutionary Algorithms: Are We There Yet? - Ann Gauger

Excerpt: In the recent past, several papers have been published that claim to demonstrate that biological evolution can readily produce new genetic information, using as their evidence the ability of various evolutionary algorithms to find a specific target. This is a rather large claim.,,, As perhaps should be no surprise, the authors found that ev uses sources of active information (meaning information added to the search to improve its chances of success compared to a blind search) to help it find its target. Indeed, the algorithm is predisposed toward success because information about the search is built into its very structure.

These same authors have previously reported on the hidden sources of information that allowed another evolutionary algorithm, AVIDA [3-5], to find its target. Once again, active information introduced by the structure of the algorithm was what allowed it to be successful.

These results confirm that there is no free lunch for evolutionary algorithms. Active information is needed to guide any search that does better than a random walk.
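Active information, as Dembski and Marks define it, is simply the log-ratio of an assisted search's success probability to that of blind search. A minimal sketch of the measure, in bits (the example probabilities are illustrative, not taken from the ev or Avida papers):

```python
import math

def active_information(p_blind, q_assisted):
    # I+ = log2(q / p): the bits by which an assisted search outperforms
    # a blind (uniform) search over the same space.
    return math.log2(q_assisted / p_blind)

# A blind search over 2**20 equally likely states has p = 2**-20; if the
# program's built-in structure raises the success probability to 1/16,
# the search has been handed 16 bits of active information.
bits = active_information(2 ** -20, 1 / 16)
```

On this accounting, a search that merely matches blind search gets zero bits; anything better must have had information built in somewhere.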

Here is a far more accurate computer simulation of what we find in life than these severely misleading evolutionary algorithms:

To Model the Simplest Microbe in the World, You Need 128 Computers - July 23, 2012

Excerpt: Mycoplasma genitalium has one of the smallest genomes of any free-living organism in the world, clocking in at a mere 525 genes. That's a fraction of the size of even another bacterium like E. coli, which has 4,288 genes.,,,

The bioengineers, led by Stanford's Markus Covert, succeeded in modeling the bacterium, and published their work last week in the journal Cell. What's fascinating is how much horsepower they needed to partially simulate this simple organism. It took a cluster of 128 computers running for 9 to 10 hours to actually generate the data on the 25 categories of molecules that are involved in the cell's lifecycle processes.,,,

,,the depth and breadth of cellular complexity has turned out to be nearly unbelievable, and difficult to manage, even given Moore's Law. The M. genitalium model required 28 subsystems to be individually modeled and integrated, and many critics of the work have been complaining on Twitter that's only a fraction of what will eventually be required to consider the simulation realistic.,,, 

Here is the image, from the preceding article, on the integrated processes of M. genitalium 

Of related interest: although I consider this particular computer simulation to be a far more accurate reflection of reality than Dawkins' weasel program (and other such evolutionary algorithms), Gil Dodgen, who builds accurate computer models/simulations for a living, recently posted on the inherent limits, and reliability, of computer simulations:

All Claims Made as the Result of a Computer Simulation Should be Considered BS, Until Proven Otherwise - July 20, 2012 - GilDodgen

Excerpt from comment section: I’ve written software of all kinds for almost 40 years, I’ve taught a range of undergraduate CS and CIS courses, and consulted in many areas including software quality assurance. No non-trivial program is bug-free; no, not one. Two things cause people to earnestly believe that their simulations are reliable – hubris and agreeable results. 

The reason why computers were long overwhelmed by the game of Go is that the search space of the 19x19 board vastly exceeds the computational capacity of brute-force calculation:

Epicycling Through The Materialist Meta-Paradigm Of Consciousness - May 2010

GilDodgen: One of my AI (artificial intelligence) specialties is games of perfect knowledge.

See here:

In both checkers and chess humans are no longer competitive against computer programs, because tree-searching techniques have been developed to the point where a human cannot overlook even a single tactical mistake when playing against a state-of-the-art computer program in these games. On the other hand, in the game of Go, played on a 19×19 board (compared to 8x8 for chess and checkers), with a nominal search space of 19×19 factorial (1.4e+768), the best computer programs are utterly incompetent when playing against even an amateur Go player.,,, 
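The "nominal search space" figure quoted above can be checked directly: (19x19)! is an integer with 769 decimal digits, i.e. about 1.4e+768, and by the same rough measure an 8x8 board gives only (8x8)! ~ 1.3e+89.

```python
import math

# Nominal search-space sizes as cells-factorial, the measure used above.
go_space = math.factorial(19 * 19)   # 361! ~ 1.4e+768
chess_space = math.factorial(8 * 8)  # 64!  ~ 1.3e+89

go_digits = len(str(go_space))       # number of decimal digits in 361!
```

This is why Go resisted exhaustive tree search long after chess and checkers fell, and why AlphaGo had to combine search with a learned evaluation function rather than calculate the tree outright.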

Of note, a computer beat the top Go player in March 2016


Krauss v. Meyer: Computer Algorithms as a Fair Model of Darwinian Processes? - David Klinghoffer - March 21, 2016

Excerpt: The only way that AlphaGo trained itself was to, relying on the rules of Go, repeatedly play itself (that is, move pieces about using its own evaluation function) and then use those results to further tune the network. It did not change its rules. It did not learn new rules. It did not learn strategy. It only got better and better and better at recognizing high-quality board positions and high-value moves.

To play Go, AlphaGo continues to use classic AI, but augmented with a useful evaluation function of Go board positions. That's it. It is impressive programming. But it is a universe away from any kind of knowledgeable system. Further, their approach works only for what are termed "perfect knowledge" systems, such as games. The real world is not a perfect-knowledge affair.

AlphaGo, apparently, during the match made similarly goofy moves, just like IBM's Watson can do, because it lacks intelligence and insight. 

Nature "Learns" to Create Complexity? Think Again - Brendan Dixon - March 28, 2016

Excerpt: Neural networks were invented in the 1940s, with the roots of the more modern form dating to the late 1980s. But those early attempts failed. Why? Because they lacked data sufficient to train the network. AlphaGo succeeded by training its single network with millions and millions of Go boards (more or less). Anything less than that and AlphaGo would have failed to win. Since GRNs (Genetic Regulatory Networks) are distributed across time and space, no one network can receive the data necessary to succeed. No matter how similar they might be to neural networks, without concentrated training they'll learn and remember nothing on their own. 

In the Phys.org write-up of this computer simulation study they stated:

Researchers produce first complete computer model of an organism - July 20, 2012

Excerpt: Most biological experiments, however, still take a reductionist approach to this vast array of data: knocking out a single gene and seeing what happens. "Many of the issues we're interested in aren't single-gene problems," said Covert. "They're the complex result of hundreds or thousands of genes interacting.",,,

To integrate these disparate data points into a unified machine, the researchers modeled individual biological processes as 28 separate "modules," each governed by its own algorithm. These modules then communicated to each other after every time step, making for a unified whole that closely matched M. genitalium's real-world behavior.,,,

Consulting the model, the researchers hypothesized that the overall cell cycle's lack of variation was the result of a built-in negative feedback mechanism. 
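The architecture described above, independent process "modules" updating a shared state once per time step, with negative feedback damping the cell cycle, can be sketched in miniature. This is a toy illustration only: the module names, rates, and set point are invented for the sketch, not taken from the Cell paper, and the real model integrates 28 such modules.

```python
def metabolism(state):
    # One process module: mass grows by the current nutrient uptake rate.
    state["mass"] += state["nutrient_rate"]

def feedback(state):
    # Negative-feedback module: uptake is proportional to the shortfall
    # below the set point, so growth slows and halts as mass approaches it.
    state["nutrient_rate"] = max(0.0, 0.1 * (state["set_point"] - state["mass"]))

MODULES = [metabolism, feedback]  # the real model couples 28 modules

def run(steps):
    state = {"mass": 0.0, "nutrient_rate": 1.0, "set_point": 10.0}
    for _ in range(steps):
        for module in MODULES:  # modules communicate via the shared state
            module(state)
    return state
```

Run long enough, the mass converges on the set point and the uptake rate falls toward zero: the variation-suppressing behavior the researchers attributed to built-in negative feedback.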

And indeed, the 'negative feedback' in their computer model of Mycoplasma, with its 28 separate "modules," each governed by its own algorithm, is due to what can be termed 'the poly-constraint of poly-functionality' they are dealing with in their model of Mycoplasma:

The primary problem that poly-functional complexity presents for neo-Darwinism is this:

To put it plainly, the finding of a severely poly-functional/poly-constrained genome has taken the odds of finding a single gene, which were already astronomically impossible, to what can only be termed fantastically astronomically impossible. To illustrate the monumental brick wall any evolutionary scenario must face when I say genomes are poly-constrained by poly-functionality, I will use a puzzle:

Instead of searching for a single gene/protein, we would actually be encountering something more akin to this illustration, found on page 141 of Genetic Entropy by Dr. Sanford:

SATOR
AREPO
TENET
OPERA
ROTAS

This ancient puzzle, which dates back to 79 AD, reads the same four different ways. Thus, if we change (mutate) any letter, we may get a new meaning for a single reading read any one way, as in Dawkins' weasel program, but we will consistently destroy the other three readings of the message with the new mutation.

This is what is meant when it is said a poly-functional genome is poly-constrained with respect to random mutations. And that is why there is an inherent 'negative feedback' in Mycoplasma, as well as, by default, in their model.
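Assuming the puzzle in question is the well-known Sator word square from Pompeii (destroyed 79 AD), the poly-constraint can be checked mechanically: the pristine square yields the same string under all four readings, and a single-letter "mutation" makes the four readings disagree, because every cell participates in every reading.

```python
SATOR = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]

def readings(grid):
    n = len(grid)
    rows_fwd = "".join(grid)  # rows left-to-right, top to bottom
    cols_down = "".join(grid[i][j] for j in range(n) for i in range(n))
    rows_rev = "".join(grid[i][j] for i in reversed(range(n))
                                  for j in reversed(range(n)))
    cols_up = "".join(grid[i][j] for j in reversed(range(n))
                                 for i in reversed(range(n)))
    return [rows_fwd, cols_down, rows_rev, cols_up]

# Pristine square: all four readings coincide (poly-functional).
pristine_agree = len(set(readings(SATOR))) == 1

# Mutate one letter: the readings no longer agree (poly-constrained).
mutated = ["SXTOR"] + SATOR[1:]
mutated_agree = len(set(readings(mutated))) == 1
```

The same cell serving four readings at once is the analogue of one nucleotide serving several overlapping functional contexts in a genome.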

Here is a brutally honest admission that neo-Darwinism has no mathematical foundation, found in a job description from Oxford University seeking a mathematician to 'fix' the 'mathematical problems' of neo-Darwinism:

Oxford University Seeks Mathemagician — May 5th, 2011 by Douglas Axe

Excerpt: Grand theories in physics are usually expressed in mathematics. Newton’s mechanics and Einstein’s theory of special relativity are essentially equations. Words are needed only to interpret the terms. Darwin’s theory of evolution by natural selection has obstinately remained in words since 1859. …

Of related interest:

Since neo-Darwinists believe that Evolutionary Algorithms, programmed by brilliant engineers, are fully capable of mimicking evolutionary processes, and even eventually reaching the point of ‘self-evolving’ to greater and greater heights of undreamed computational power, then, according to their reasoning, it is entirely plausible that we are now living in some type of gigantic Evolutionary Algorithm computer simulation that was programmed by some future humans???

For a clear example of the absurdity that neo-Darwinism leads to, the following philosophical argument closely parallels what we should expect to see, in future computational evolutionary algorithms, if evolutionary processes were truly unbounded in their information-generating capacity, as neo-Darwinists hold:


Are You Living in a Computer Simulation? - Nick Bostrom - Department of Philosophy, Oxford University


A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contains any relatively wealthy individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3).
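The trilemma above rests on a simple fraction. Roughly following the paper's argument (the notation here is paraphrased, so treat the exact symbols as an assumption), the fraction of human-type observers living in simulations is

$$ f_{\mathrm{sim}} \;=\; \frac{f_p \,\bar{N}\, H}{f_p \,\bar{N}\, H + H} \;=\; \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1} $$

where $f_p$ is the fraction of human-level civilizations that reach posthumanity, $\bar{N}$ the average number of ancestor-simulations run by a posthuman civilization, and $H$ the average number of individuals who live in a civilization before it reaches posthumanity. Since a posthuman civilization's computing power would make $\bar{N}$ astronomically large if simulations are run at all, $f_{\mathrm{sim}} \to 1$ unless $f_p \approx 0$ (proposition 1) or $\bar{N} \approx 0$ (proposition 2); otherwise proposition 3 follows.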

Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation.

Who'd Have Thought A Man Talking About His Arm Would Be So Interesting? - video 

Refutation of ‘We Are Living In A Computer Simulation’ Argument 

Thus, according to the neo-Darwinian reasoning of virtually unlimited computational power for future evolutionary algorithms, either we are currently living in a computer simulation, or future humanity becomes extinct before ever running such a simulation!,,, Or, an option that was not mentioned in the above philosophical argument, evolutionary algorithms are, in reality, extremely limited in their ability to optimize computer programs beyond what man has currently programmed them to achieve.

My bet is on the latter,,, Of related note on the absurdities that neo-Darwinism leads to:

Anthropic Principle – God Created The Universe – Michael Strauss PhD. – video

In the preceding video, at the 6:48 minute mark, Dr. Strauss states:

‘So what are the theological implications of all this? Well Barrow and Tipler wrote this book, ‘The Anthropic Cosmological Principle’, and they saw the design of the universe. But they are atheists basically, there’s no god. And they go through some long arguments to describe why humans are the only intelligent life in the universe. That’s what they believe. And, so they got a problem. If the universe is clearly the product of design, but humans are the only intelligent life in the universe, who creates the universe? So you know what Barrow and Tipler’s solution is? Heh, it makes perfect sense. Humans evolve to a point, someday, where they reach back in time and they create the universe for themselves. (audience laughs) Hey, these guys are respected scientists. So what brings them to that conclusion? It is because the evidence for design is so overwhelming that if you don’t have God, you have humans creating the universe, back in time, for themselves.’

- Michael Strauss PhD. Particle Physics

William Dembski comments on Elizabeth Liddle’s probabilistic objection - Excerpt: Elizabeth Liddle seems stuck with where the discussion over CSI was ten years ago when I published my book NO FREE LUNCH.

She characterizes the Bayesian approach to probabilistic rationality as though that’s what science universally accepts when in fact the scientific community overwhelmingly adopts a Fisherian approach (which, by the way, is not wedded to a frequentist approach — epistemic probabilities, Popper’s propensities, and Bernoulli’s indifference principle all work quite well with it).

Liddle makes out that CSI is a fuzzy concept, when her notion of prior probabilities, as applied to design inferences, is nothing but an exercise in fuzzification.



Excerpt: we cannot construct an ontology that makes God dispensable. Secularists can dismiss this as a mere exercise within predefined rules of the game of mathematical logic, but that is sour grapes, for it was the secular side that hoped to substitute logic for God in the first place. Gödel’s critique of the continuum hypothesis has the same implication as his incompleteness theorems: Mathematics never will create the sort of closed system that sorts reality into neat boxes.