Vindiciae contra tyrannos

Evolution And Probability

F. Michael Zimmerman

© F. Michael Zimmerman, 2005, 2007, 2008

The following is a critique of contemporary evolutionary theory.  This author’s understanding of Genesis is capable of accommodating evolution as a secondary theory (not doctrine) of process if the facts warrant it.  However, as the following will make clear, there seems to be less and less of that warrant when you compare theory with the fossil record.  Nevertheless, these criticisms are primarily scientific, mathematical, and statistical; not theological.  The most important debt that this treatise owes to so-called “scientific creationism” is a cultivation of the ability to think outside the box.  As will become clear, it is not necessary to step outside it very far to uncover some glaring deficiencies in accepted evolutionary theory.

It becomes more and more amazing that so much theorizing on evolution could ignore one of the most basic concepts of statistics as applied to random phenomena.  That concept is the frequency distribution.


When evolutionists defend the theory of evolution, they most commonly point to clear examples of natural selection.  This makes a degree of rhetorical sense.  Natural selection is the strongest portion of the case for evolution.  Natural selection has been proven so many times in so many situations that it meets all the requirements to be regarded as scientific law, not mere theory.  If the entire case for the theory of evolution rested on the proof of natural selection, that case would be open and shut.  But sadly for evolutionists, this is not the case.  In logical terms, proof of the law of natural selection is necessary but not sufficient to prove the theory of evolution.

Problems begin to arise when we examine natural selection and compare it with genetics.  By itself, natural selection cannot bring new genetic traits into existence.  It can only weed out undesirable traits.  And once a trait has been weeded out of a species, it is no longer available to subsequent generations, even if changing conditions should render it favorable.  Take Darwin's well-known observations in the Galapagos Islands.  He theorized that species that are similar, but adapted to specialized conditions, have descended from common ancestors.  Those common ancestors must have had a gene pool that encompassed all of the characteristics found in their more specialized descendants.  Evolutionary pressures then eliminated from each strain those traits that were not conducive to the survival of that strain in its more specialized surroundings.  So from a species X we get strains A, B, C, D, and E.  Each of these has a gene pool that is narrower than that of the original X.  And X may have contained characteristics not found in any of these strains.  Now let us suppose that some catastrophe wipes out strain C.  Its habitat is then colonized by the remaining strains.  Can any of them adapt to that habitat the way C did?  Probably not!  The genes that enabled C to adapt have probably been weeded out of the remaining strains.  If they face competition within that ecological niche, none of them will survive there.

This is but one example of a problem with pure natural selection.  As it weeds out traits undesirable for current conditions, it limits the ability of subsequent generations to evolve further.  The reader can illustrate this principle with a deck of cards.  Let us let clubs represent defective genes whose characteristics are not favored by natural selection.  So as to prolong the process, let us stipulate that these genes are Mendelian recessives.  They will harm their carriers, but only if doubled.  Shuffle the deck and deal out two cards.  If both of them are clubs, discard both.  If at least one is not a club, deal the next hand.  When all the cards have been dealt (or when only one remains for subsequent deals), this represents one generation.  Take the cards not discarded and shuffle.  They will have a diminished number of clubs.  Repeat the process.  As time goes on, there will be fewer and fewer clubs.  Eventually, there will be only one.  And that one will always be paired with a non-club and will not appear.  The "species" represented will now be club-free!  It will be unable to display characteristics associated with clubs even if clubs later become a favorable trait!
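The card experiment just described can be run as a quick simulation.  Here is a minimal Python sketch of the procedure (the random seed and the number of generations are arbitrary choices, not part of the original illustration):

```python
import random

def weed_clubs(generations=30, seed=1):
    """Model the card experiment: clubs stand for recessive defective genes,
    discarded only when a dealt pair is two clubs (a doubled recessive)."""
    random.seed(seed)
    deck = ["club"] * 13 + ["other"] * 39   # a standard 52-card deck
    history = []
    for _ in range(generations):
        random.shuffle(deck)
        survivors = []
        for i in range(0, len(deck) - 1, 2):   # deal the deck out in pairs
            pair = deck[i:i + 2]
            if pair != ["club", "club"]:       # double clubs are discarded
                survivors.extend(pair)
        if len(deck) % 2:                      # an odd last card carries over
            survivors.append(deck[-1])
        deck = survivors
        history.append(deck.count("club"))
    return history

counts = weed_clubs()
print(counts[0], "->", counts[-1])   # clubs dwindle, generation by generation
```

Note that nothing in the procedure can ever increase the number of clubs -- once they are gone, they are gone, which is the point of the illustration.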

Natural selection makes far more sense when viewed as a preserver of existing species than as an originator of new ones.  By itself, the phenomenon commonly known as natural selection or "the survival of the fittest" cannot bring any new species into existence.  The most it can do is to weed out the unfit within a species.  In this manner it can retard the genetic deterioration of species already in existence.  However, natural selection does explain why so many really nasty congenital defects come from Mendelian recessive genes and why those defects that are not recessives (such as Huntington's chorea) allow those afflicted with them to live long enough to procreate before they start to kill.  These characteristics permit genetic defects to slip in "under the radar" of natural selection and to spread through the gene pool.

By itself, natural selection cannot expand the gene pool of a species.  It can only shrink it.  For evolution to proceed, the process requires a source of new genetic traits.  Without new genes the entire process will peter out eventually.  Furthermore, these new traits must enter the gene pool at a rate of speed fast enough to enable a species to keep up with changing conditions.  That source is commonly conceived of as random genetic mutation.  But this has some serious problems.

Only a minuscule fraction of mutations will enhance survival probability for those that carry them.  Most will be harmful or lethal to their carriers.  But it is commonly hypothesized that the harmful and lethal genes will be weeded out of the gene pool in short order by natural selection, leaving only the genetic improvements.  However, this concept ignores some well-known and easily verifiable characteristics of random phenomena.


Evolutionary theory that repudiates any idea of a "designer" finds itself totally dependent on random mutations as a source of new traits.  But there are certain laws that pertain to random phenomena, phenomena in which a single causal sequence can result in any of a number of outcomes, each one occurring unpredictably.  All such random phenomena are governed by the concept of a frequency distribution.  The bell curve is a well-known frequency distribution.  Basically, the frequency distribution dictates that when random phenomena occur, we should observe all possible outcomes, in frequencies corresponding to the probability of each outcome.  From the viewpoint of the frequency distribution, the only thing "random" about such phenomena is the sequence of these events.

Their frequencies are knowable and accurately predictable.  The most elementary concept in statistics and probability is that events that may be individually unpredictable become highly predictable when taken in the aggregate.  This is the principle that keeps casinos and life insurance companies in business.  Any life insurance agent can predict accurately how many people of a demographic profile (such as that of a person reading this essay) will die in the coming twelve months.  But only God knows whether the reader of this essay will be one of them.  Any player in a casino has a theoretical chance to become rich.  But if the number of players is known, the casino management knows how many players will become rich and how many will make the casino rich!  Events that may be completely unpredictable individually become highly predictable in the aggregate!  The greater the number of these events becomes, the more accurate the predictions of the aggregate become.  And the aggregate that evolutionists must discuss is the entire history of our planet’s entire biosphere.


Take the craps table at any casino.  The probabilities of the different outcomes of the roll of two dice are well known.  The probability that a player will win on 7 or 11 is 2/9, or 2 in 9.  The other probabilities are also well known.  So if we know the number of times a player rolled 7/11, we also have reliable estimates of every other possible event in a crap game.  Multiply the number of 7/11 rolls by 9/2 and we know the number of rounds of craps that were rolled over that same period of time.  From this we can compute a reliable estimate of the number of times a player lost on 2, 3, or 12, the number of times he/she had to make a point, the number of times a player made the point, and the number of times he/she did not.  By keeping track of the amount paid out on 7/11 rolls, we can compute the average bet, and from that we know what should have been the house's take at that table.  Comparison of this with the reported take will tell us if any employees are skimming from the house at that table.  We can do all this from only three pieces of information:


1. The number of times a player rolled 7/11 over the time period (say a week) in question.

2. The total amount paid out by the house on 7/11 rolls during that same period.

3. The reported take by the house during the same period in question.
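The arithmetic of such an audit can be sketched in a few lines.  All of the counts and dollar figures below are hypothetical illustration values; the dice probabilities and the commonly cited pass-line house edge of roughly 1.41% are the only standard numbers used:

```python
def craps_audit(natural_wins, natural_payout):
    """Back out table activity from the observables described above.
    P(7 or 11 on the come-out roll) = 8/36 = 2/9; P(2, 3, or 12) = 4/36."""
    total_rounds = natural_wins * 9 / 2      # invert P(natural) = 2/9
    craps_losses = total_rounds * 4 / 36     # losses on 2, 3, or 12
    point_rounds = total_rounds * 24 / 36    # all remaining rounds set a point
    avg_bet = natural_payout / natural_wins  # naturals pay even money
    expected_take = total_rounds * avg_bet * 0.0141   # ~1.41% pass-line edge
    return total_rounds, craps_losses, point_rounds, avg_bet, expected_take

# Hypothetical week: 2000 naturals rolled, $30,000 paid out on them,
# and a reported take of $1,850 from the house's books.
rounds, losses, points, bet, expected = craps_audit(2000, 30000.0)
reported_take = 1850.0
print(rounds, losses, points, bet)         # 9000.0 1000.0 6000.0 15.0
print(expected, reported_take - expected)  # 1903.5 and a shortfall of -53.5
```

A reported take falling persistently short of the expected figure is the signal that someone may be skimming at that table.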


Table 1 gives a brief summary of the hands in poker, along with their probabilities.  These probabilities are percentages.  To compute the actual probability, divide the numbers by 100.  The probabilities are valid on the initial deal in the game of draw poker, and are the actual values in 5-card stud poker, where what you have is what you hold.  Note the probability of a Royal Flush.  The percentage given translates to an actual probability of 0.000001539.  Although this is enormous compared to the chances of a favorable mutation, it is comprehensible and will serve our purposes in a demonstration.

Evolutionists rely on the cumulative probability of events that are highly improbable in an individual trial.  To demonstrate this concept, the reader may use a deck of cards, as above.  Deal a poker hand of 5 cards.  Note its value according to the table.  Then replace the cards in the deck, reshuffle, and deal another hand.  The replacement and reshuffle guarantees that the trials are independent of one another, that one outcome will have no effect on subsequent outcomes.  This greatly simplifies our calculations.  And it affords us an adequate model of random mutation, since a random mutation in one zygote has no known effect on a random mutation in any other zygote.
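For readers who would rather not shuffle by hand, here is a minimal Python sketch of the deal-and-replace experiment.  It counts only hands containing a repeated spot (a pair or better by rank), which is a simplification: flushes and straights made of five distinct spots are not detected by this check:

```python
import random

RANKS = "23456789TJQKA"
DECK = [rank + suit for rank in RANKS for suit in "cdhs"]

def deal_hand():
    """Sampling 5 fresh cards each call models replacing and reshuffling,
    so every trial is independent of the ones before it."""
    return random.sample(DECK, 5)

def has_repeated_rank(hand):
    """True when at least two cards share a spot (rank)."""
    return len({card[0] for card in hand}) < 5

random.seed(7)                 # arbitrary seed, for a repeatable run
trials = 20_000
hits = sum(has_repeated_rank(deal_hand()) for _ in range(trials))
# Exact P(at least one repeated rank) = 1 - C(13,5)*4**5/C(52,5), about 0.4929
print(hits / trials)
```

With enough trials, the observed frequency settles close to the exact probability -- the individually unpredictable deals become predictable in the aggregate.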

Table 1 -- A list of hands in poker, with probabilities on the deal.

Hand                     Description                                                  Number of ways    % Probability
Royal Flush              A-K-Q-J-10, all of one suit                                              4        0.0001539
Straight Flush           Any other consecutive spots, all of one suit                            36        0.001385
Four Of A Kind           All four suits of a spot                                               624        0.02401
Full House               Three of one spot and a pair of another                              3,744        0.1441
Flush                    All cards of one suit -- excepting straights counted above           5,108        0.1965
Straight                 Any consecutive spots, any combination of suits --
                         excepting flushes counted above                                     10,200        0.3925
Three Of A Kind          Three suits of a spot                                               54,912        2.1128
Two Pair                 Two pairs of two different spots                                   123,552        4.7539
One Pair                 One pair of a spot                                               1,098,240       42.2569
High Card aka "garbage"  Any other combination                                            1,302,540       50.1177

Total: 2,598,960 possible five-card hands.

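The standard counts behind Table 1 can be verified combinatorially.  A Python sketch using the usual counting arguments:

```python
from math import comb

total          = comb(52, 5)            # 2,598,960 possible five-card hands
royal_flush    = 4                      # one per suit
straight_flush = 10 * 4 - 4             # ten runs per suit, minus the royals
four_kind      = 13 * 48                # pick the spot, then any fifth card
full_house     = 13 * comb(4, 3) * 12 * comb(4, 2)
flush          = 4 * comb(13, 5) - 40   # same-suit hands minus all straight flushes
straight       = 10 * 4 ** 5 - 40       # runs in any suits minus all straight flushes
three_kind     = 13 * comb(4, 3) * comb(12, 2) * 4 * 4
two_pair       = comb(13, 2) * comb(4, 2) ** 2 * 44
one_pair       = 13 * comb(4, 2) * comb(12, 3) * 4 ** 3
high_card      = total - (royal_flush + straight_flush + four_kind + full_house
                          + flush + straight + three_kind + two_pair + one_pair)

print(royal_flush / total)   # about 1.539e-06, matching the text
print(high_card / total)     # about 0.501: over half of all hands are "garbage"
```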
Although a Royal Flush is highly improbable, it becomes far more likely with huge numbers of hands dealt in this manner.  For practical purposes, the reader who wishes to try these events might be well advised to use a computer simulation.  The chances of a Royal Flush are only 1 in 649740.  Does this mean that if we deal out that many hands we will surely get a Royal Flush?  No.  No matter how many independent trials we perform, the probability of any given event never becomes a certainty!  Repeated trials do increase the likelihood that even the most improbable events will occur at least once.  But certainty (P = 1) is what is known as a mathematical limit.  Our repeated trials can bring us as close to it as we please.  But they will never reach it.  We may find the cumulative probability of any event according to the algebraic formula:

P = 1 - (1 - p^n)^t


P = cumulative probability of an event,

p = probability of that event in an individual independent trial,

n = the number of occurrences of that event we are investigating,

t = the number of trials.

For our look into the odds of a Royal Flush:

p ≈ 0.000001539,

n = 1.

So for t = 649740,

P ≈ 0.6321,

or 63.21%. 

If we raise t to 1000000 or 1 million trials, our cumulative probability rises to 78.54%.

How many trials does it take to get a cumulative probability of 50%?

To answer this question, we must take a couple of logarithms.  Any scientific calculator will allow us to compute them.  We perform a few elementary algebraic manipulations on our initial equation:

1 - P = (1 - p^n)^t

Taking the logarithms gives us:

ln(1 - P) = t ln(1 - p^n)


t = ln(1 - P) / ln(1 - p^n)

so when P = 0.5 with p and n as above,

t ≈ 450365.
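These figures are easy to verify numerically.  A short Python sketch of the formula (with n = 1) and its logarithmic inversion:

```python
from math import log

def cumulative_prob(p, t):
    """P = 1 - (1 - p)**t with n = 1: the chance of at least one success
    in t independent trials of individual probability p."""
    return 1 - (1 - p) ** t

def trials_needed(p, target):
    """Invert the formula: t = ln(1 - P) / ln(1 - p)."""
    return log(1 - target) / log(1 - p)

p = 1 / 649740                                  # one Royal Flush per 649,740 deals
print(round(cumulative_prob(p, 649740), 4))     # 0.6321
print(round(cumulative_prob(p, 1_000_000), 4))  # 0.7854
print(round(trials_needed(p, 0.5)))             # 450365
```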


These illustrations make the point that for improbable events to occur purely by chance, there must be huge numbers of failures.

It may be possible for a crew of monkeys banging on typewriters to turn out the complete works of Shakespeare.  But it should also be possible to compute the number of monkey-hours necessary for this task.  Additionally, we can calculate the amount of landfill necessary to dispose of failed attempts, dead monkeys, worn-out typewriters, and other associated detritus. 
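As a toy version of that first computation, consider the expected number of attempts to type one short word, assuming (purely for arithmetic's sake) a monkey striking one of 26 lowercase keys uniformly at random:

```python
def expected_attempts(target, alphabet_size=26):
    """Expected number of independent typing attempts before the target
    appears: the mean of a geometric distribution, 1/p."""
    p = (1 / alphabet_size) ** len(target)   # chance one attempt matches
    return 1 / p

# Purely illustrative: a six-letter word, 26 keys, no spaces or capitals.
print(expected_attempts("hamlet"))   # 26**6, roughly 3.1e8 attempts
```

Six letters already cost hundreds of millions of expected attempts; the exponent grows with every added character, which is what fills the landfill.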

These gedanken experiments all have their analogues in an evolutionary theory based on pure chance.  Chance implies probability.  Probability implies a frequency distribution.  And a frequency distribution implies a huge number of probable events for every highly improbable event that happens even once!  As applied to random mutation, these principles lead to expectations about what should have happened if genetic innovation took place purely by random chance.  And if the fossil record is at all a reliable guide to the past, we have every right to expect that it will contain evidence of these events, if they in fact took place.


The classical theory of random mutation as the driving force behind evolutionary change acknowledges that favorable mutations are rare.  Most mutations are either harmful or lethal, and the individuals having these mutations do not live long enough to pass them on.  This is consistent with the "winnowing" effect of natural selection, which we may readily concede.  But how frequent does that make the favorable mutations?  And what about the rest of the mutations that occurred during that same period?

Conceptually, the more complex the DNA of an organism, the less likely it becomes that a random change will be favorable.  We can illustrate this principle with a gedanken experiment.

If you fire a .22 slug into an old-fashioned (crank style) telephone, there is a certain (low) probability that this treatment will improve its functioning.  But the probability drops if you try this with a crank-style Victrola.  And it drops still further if you try it with an old-fashioned tube-style radio.  We can go on up to modern-day digital electronic equipment, but the pattern is clear.  The more complex the system, the less likely it becomes that a random change will be for the better.

Now when we transfer this concept to evolution, we get a distribution curve in which the favorable mutations needed to drive the evolutionary process are concentrated at the far end of the distribution curve of all mutations.  The probability of such a mutation would make your local state lottery look like a blue-chip stock investment!  We can sum up these relative likelihoods in a retort to store clerks who try to sell lottery tickets.  "I have never met anybody who won the lottery, but I HAVE met people who have seen UFO's!  YOU figure out which event is more likely!"

But if we know the frequency of such favorable mutations that must have been taking place according to evolutionary theory, we can form a pretty good estimate of the number of harmful mutations that must have been turning up in that species during that same period.  The predominant event taking place during that epoch would have been some agent playing havoc with the DNA.  Likewise, we should be able to estimate the population of the species that would have been necessary for it to be able to sustain the genetic carnage that would have accompanied the appearance of the favored few.  With a smaller population, there would not have been enough individuals for the favorable mutations to occur at all.  Instead, the species would have simply become extinct as a result of the carnage being wrought to its DNA.  The only way to avoid this would have been for the mutations to have occurred over many more generations, much more slowly.
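The shape of that estimate can be sketched in a few lines of arithmetic.  Every input below is a hypothetical placeholder -- the favorable-mutation count, the harmful-to-favorable ratio, and the mutant survival rate are illustrative assumptions, not measured values:

```python
def mutation_load(favorable, harmful_per_favorable, mutant_survival):
    """Estimate the harmful mutations, and the extra deaths, implied by a
    given count of favorable mutations.  All inputs are hypothetical."""
    harmful = favorable * harmful_per_favorable
    extra_deaths = harmful * (1 - mutant_survival)
    return harmful, extra_deaths

# Illustrative assumptions only: 10 favorable mutations per generation, one
# favorable per million mutations overall, 1% of harmful mutants surviving.
harmful, deaths = mutation_load(10, 1_000_000, 0.01)
print(harmful, deaths)   # 10000000 9900000.0
```

Whatever the true ratio, the structure of the calculation is the point: the harmful count scales linearly with the favorable count, so a species must be populous enough to absorb the losses.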

The above should allow us to get some idea of the rate at which evolution could proceed if its sole source of genetic innovation is random mutation.  The model predicted would be one of diminishing returns.  As the genetic base of an evolutionary line becomes more complex, evolution should proceed more and more slowly.  Eventually, the complexity of a species will prevent it from being able to evolve fast enough to keep up with changing conditions.  At that point it will simply become extinct.


When we consider the rarity of favorable mutations compared with the overall frequency distribution, one fact becomes apparent.  The losers will outnumber the winners by such a huge margin that their presence should be the predominant evidence that this process is taking place.  Very few of us have ever won a major prize ($1 million or more) in a state lottery.  A larger, but still minuscule, percentage of the populace may have some personal acquaintance with a lottery winner.  So if we ignore reports in the media and other sources of input other than eyewitness testimony, very few of us have any evidence that anybody has ever won a lottery.  But to find evidence of the losers, all that anyone has to do is to go to the nearest convenience store and take a look in the wastebasket!  So the most highly visible evidence that a lottery is taking place comes not from the winners but from the losers.  And the same kind of evidence is what we should expect in the fossil record.  Verification that genetic innovation results solely from random mutation should come not from the appearance of new traits, but from evidence of massive genetic carnage that must have accompanied their appearance!


Consideration of the implications of the frequency distribution on random mutation also means that Hitler, and other apologists and prophets of and for a “master race”, are dead wrong!  According to these considerations, the most important factor governing evolutionary advancement is the big litter.  At any stage, evolution will favor not the most advanced races or species, but the most prolific breeders and the biggest populations!  A “superman” is far more likely to appear in China than in Germany, among Asians than among Aryans!  More likely still, such a superbeing would not be human at all.  It might not even be hominid but would evolve from rabbits, rats, or perhaps cockroaches!  Evolutionary processes dependent solely on random mutation will of necessity favor big litters rather than the most advanced results of previous evolution.  If a species’ reproductive rate slows, or its population shrinks, this will impair its ability to evolve further.  Random mutation acting on a species with such diminished population and reproduction will not cause its further evolution, but its extinction.


This shows that the evolution of bacteria to resist antibiotics is really not a valid comparison with more complex life forms.  Bacteria have simpler DNA, and they have population sizes large enough to sustain the wholesale genetic carnage that must accompany the appearance of the favored few.  The same is true of the evolution of strains of insects with immunities to pesticides.  With simpler DNA and enormous populations, germs and insects could well have the numbers to make evolution work.  But that does not mean that the same is true for larger organisms, with more complex DNA and relatively smaller populations.


The current interpretation seems to show evolutionary change occurring far too fast with populations of hominids that are far too small to sustain the genetic carnage that would have accompanied the appearance of new and favorable traits at such a rate.  To evolve at the rate claimed by evolutionists, Homo erectus must have had a global population that would have dwarfed today's figures for Homo sapiens!  And that would have been a population of hunter-gatherers, without agriculture to support such huge numbers!  It doesn't seem to add up.

There is also another factor that does not add up.  Where in the fossil record is the evidence of the genetic carnage that must have accompanied the appearance of new and favorable traits?  Remember that the large number of unfavorable mutations accompanying the few favorable ones means that this carnage would have been the dominant event of that time.  The evidence of it should be all over the place!  Granted, most of the unfortunates would have died shortly after birth or hatching.  But there should be some evidence of what was going on!  We might expect to see fossilized eggs containing deformed embryos, or pregnant females with deformed young.  At the very least, we might expect to see mysterious drops in the population of the species in question!

Some might point to mass extinctions.  But unless they can show that a killer asteroid will raise the rate of mutations, this misses the point.  Of all the causes of mass extinctions so far hypothesized, virtually none of them have any known mutagenic properties.  And if this is so, they would actually retard genetic innovation since the random mutations that drive evolutionary advancement must be restricted to the survivors of the mass extinctions.


There are at least five explanations for this hole in evolutionary theory.  Four of these are scientifically testable:

1.     The appearance of new species was not by chance.  This is the logical conclusion to draw when we have shown that the probability of a purely random event is sufficiently low.  This conclusion includes creationism, but is not limited to it.  Remember that Christians consider it of primary importance that it is God Himself Who authored creation.  There are those who would deny this, while claiming that the process was driven by UFO's, aliens, or other entities equally antipathetic to Biblical doctrine.

2.     The evidence of the carnage that accompanied the emergence of the favored few has been edited out of the fossil record.  But since this carnage would have been the dominant event of those times, this would amount to censorship of the fossil record!  That would force us to call into question the validity of the fossil record as proof of anything!  This would be like a book that purported to be a history of the 20th century, but carefully omitted any reference to World War I, World War II, the Cold War, and all events that would lead the reader to wonder if any of these events took place!  We would say that this book had been so highly censored that we cannot rely on anything it does say!  Evolutionists would probably rather not go there.  This author concurs with them on this, unless forced to do so by other evidence.

3.     The evidence of the carnage is there, but we have missed it because nobody was asking the question.  This is remotely plausible.  The evidence would probably be subtle since most of the victims of the genetic havoc would die young.  However, I would like to see what kind of explanations emerge for how scientists missed the evidence of the predominant event taking place at the time new traits appeared!  Watching to see how evolutionists treat this one ought to be fun!  The hypothesis of mass extinctions holds some promise here.  However, none of the hypothesized causes of mass extinctions has known mutagenic properties.  And deaths from harmful mutations would be in addition to deaths from other causes.

4.     Neo-Lamarckism.  This has only arisen recently.  Jean-Baptiste Pierre Antoine de Monet, Chevalier de Lamarck was a French biologist who put forth a theory of evolution prior to Darwin.  Unlike Darwin, Lamarck suggested that an organism could pass on acquired traits to its offspring.[1]  This theory, known as Lamarckism, was once influential, but ran afoul of the facts of genetics.[2]  However, it has enjoyed a recent revival from research into factors governing the expression of different genes.[3]  The modern revival, which this author calls Neo-Lamarckism, seems to make some intuitive sense.  An organism facing challenges to its survival might search its genetic archives for material that once enabled its ancestors to survive similar challenges.  However, we simply know too little about the factors that switch particular genes on or off to know whether evolutionary pressures have anything at all to do with it.  It is far from self-evident that Darwinian forces have any impact on gene expression.  The Neo-Lamarckian hypothesis would require rigorous scientific testing before it could be taken seriously by anyone!  However, it does have one important scientific virtue.  It is scientifically testable.

5.     (Not scientifically testable) "Out of all the billions of galaxies, with their stars capable of supporting planets like ours, out of all the uncountable planets capable of supporting life as we know it, one was bound to beat the odds.  Guess what?  We're it."  This author actually saw an evolutionist post this one.  The trouble, of course, is that we have no way to tabulate the number of those galaxies, stars, and planets.  We have only the roughest idea how common or rare these planets are.  So this hypothesis is untestable, and we might expect that it will remain so for the foreseeable future.

None of this proves that God is the Author of creation.  We should not be surprised if we never do manage to prove that point (Hebrews 11:3).  But ruling out random chance as the sole factor is another matter.  That is relatively easy.





Article posted by Jack Kettler, Owner of the Undergroundnotes web site.