Merlin Jetton wrote:
In post #91 I made three points against Grammarian's assumptions. About the first, which exposed one (implicitly, a second) of his patently false claims, he granted its validity, and followed with a parody.
Here was your 1st point:
For a given sized packet, more randomness in it implies less information. However, this does not imply that for a given packet, a small random addition or change to it decreases the information. It may increase it (or stay the same).
Your 2nd and 3rd sentences contradict your 1st sentence. You give no example of what you mean by a small random change adding information to a packet (a bit-string), or leaving the information measure of the packet unchanged. Give a concrete example.
Take the following sentence (a "bit string"):
"I love the color of her hair."
We have 29 characters (letters + spaces + period). I omit the quotes from the count.
Through a copying error by a typist, a "u" was accidentally inserted into the word "color," so that we have "I love the colour of her hair." We have a random error that left the meaning of the sentence, its functionality, unchanged. Obviously, there's some (not much) leeway in the construction of some (but not most) words. I agree that this is a happy example of a random addition NOT destroying the amount of information in the string. Now, repeat the process billions of times, over billions of copying errors, and try to show me how, over time, proofreading errors -- whether random additions, random transpositions, or random deletions -- can lead to either an increase in, or stasis of, the information in this sentence. If we change "i" to "r" and "r" to "e" in "hair," we get "hare," which destroys the original meaning. If we rearrange the words, as in "of I color hair love the," we get the text equivalent of the organic "tar" that so often precipitates out in test tubes in origin-of-life experiments: syntactic junk. If we make random changes to the spelling, we will get orthographic junk.
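The degradation described above can be illustrated with a small sketch (Python; the `mutate` helper, the fixed seed, and the choice of substitution alphabet are mine, purely for illustration -- one random substitution may leave a word legible, but accumulated substitutions turn the string into orthographic junk):

```python
import random
import string

def mutate(text, n_changes, seed=0):
    """Apply n_changes random single-character substitutions to text."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    chars = list(text)
    for _ in range(n_changes):
        pos = rng.randrange(len(chars))
        chars[pos] = rng.choice(string.ascii_lowercase + " ")
    return "".join(chars)

sentence = "I love the color of her hair."
print(mutate(sentence, 1))    # a single change may leave the meaning intact
print(mutate(sentence, 100))  # many accumulated changes yield gibberish
```

Note that the string's length (its Shannon "capacity") is preserved throughout; it is the functional meaning that erodes.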
Which of the following sentences is more specific:
"I love the color of her hair." "I love the color of her hare." or, "I love the color of her [hair or hare]."
The first two are highly specific, though they mean radically different things. The third, because it could be either "hair" or "hare," is less specific in meaning than the first two, and therefore contains less information. It obviously contains less information, because we can't determine the exact meaning intended. This is a simple example of the relation between specificity and information.
Obviously, substitutions of the first kind don't add information, though they don't subtract it either. But how many words are really susceptible to those sorts of changes? Not many. We can replace "judgment" with "judgement" and "supersede" with "supercede" (the latter being a practice I despise, by the way), but the majority of words are pretty invariant in their spelling. We have some leeway with word order, but not much. If you believe that through successive random additions to a "packet" of information -- a bit string -- you can continuously, over time, add more and more information, then you're saying (whether you realize it or not) that Atlas Shrugged -- which is a long, intelligible bit string -- could have been produced by small random changes to a page of already existing text (say, the front page of the New York Times). Sorry if you found my example to be a "parody." It's a reductio ad absurdum of your argument.
Finally, I should add that, while randomness will always tend to degrade information, it isn't the only thing that will degrade information. Much of this has to do with how we define "information" to begin with. Consider the following:
You crack open a fortune cookie. The fortune says "Your name is Merlin Jetton." You crack open a fortune cookie. The fortune says "&^%$#jh$r." You crack open a fortune cookie. The fortune says "It will rain when you leave this restaurant; or it will not rain when you leave this restaurant."
The information content of each of these messages is zero, despite the fact that the only truly random message was the second one. Shannon defined (and was able to quantify) information as a reduction in the receiver's uncertainty; a message that halves the uncertainty carries one bit. If a message gives you gibberish, it obviously tells you nothing, so there's no information content. If a message tells you what you already know, it also has no information content. If a message simply enumerates all the possibilities of an event ("it will rain or it won't rain"), it, again, tells you what you already know, and has no information content.
Just FYI, Shannon defined the unit of information to be the binary digit, or bit. Any message that halves the receiver's uncertainty about an event carries, by definition, 1 bit of information. Dawkins, a Darwinist True Believer (and hero to some on this board), has a good example of this. An expectant father is watching the birth of his first child through a hospital window. He has pre-arranged with the OR nurse the following code: if it's a boy, she'll hold up a blue card in front of the window; if it's a girl, a pink card. All things being equal, there's a 50% chance of getting a boy and a 50% chance of getting a girl. The moment of truth arrives and the nurse holds up a pink card: "It's a girl!" Two equally likely possibilities have been cut to one; the uncertainty has been halved. Information content of the message = 1 bit. Obviously, if the nurse held up a sign that said "It's either a boy or a girl," the message hasn't reduced the uncertainty, and no information was sent in the message.
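Shannon's rule can be stated in one line: a message reporting an outcome of prior probability p carries -log2(p) bits. A minimal sketch (the function name is mine):

```python
import math

def surprisal_bits(p):
    """Bits of information in learning an outcome whose prior probability is p."""
    return -math.log2(p)

print(surprisal_bits(0.5))  # 1.0 -- the pink-card message
print(surprisal_bits(1.0))  # 0.0 -- "it's either a boy or a girl"
```

A certain outcome (p = 1) yields zero bits, which is exactly why the "boy or girl" sign, the tautological fortune, and the statement of your own name all carry no information.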
The second he granted its truth, called it irrelevant and followed with another parody.
The example of the archers was an analogy, and I think a good one, for your position. Sorry if you took it as parody. I used exactly the same example with Teresa Isanhart, who responded favorably to it; she didn't take it as parody. Perhaps you're overly sensitive. You brought up a well known issue in probability, and proceeded to draw the wrong conclusions from it. Here's another example, analogy or parody, as you wish:
It's New Year's Eve. A million people are crowded into Times Square. The mayor and the cops have advance warning from the Dept. of Homeland Security that there is definitely a bomb that will go off at midnight. So as not to cause panic, they don't release the news to the public, but you (being well connected) get wind of it. Your son is one of the spectators in Times Square, and you enlist the help of a sympathetic MTA worker to find your son and get him out of there. We'll skip subtleties of dividing the group according to sex: there's a 1/1,000,000 chance of randomly finding your son in the crowd. The MTA worker grabs the first person he sees and brings him to your home. "Here he is," says the friendly man from the MTA. "That's not him," you say. He replies, "What difference does it make? He's probably someone's son, and, besides, the chances of finding this guy were exactly the same as the chances of finding your son. I beat 1/1,000,000 odds by bringing you someone." You retort, "I don't need someone; I need a specific person."
It's the specificity that's important, and not just the "odds."
So you enlist the help of a friendly FBI man, who uses sophisticated image-recognition technology from a secret bunker. He grabs your son and brings him to you. (1) If you're honest and good (I was about to say "God-fearin'," but I won't), you say "THANK YOU, SIR!! How did you do it?"
The word "how," in this context is a request for an explanation of intelligent design: what form of intelligence did you bring to bear on this problem in order to target a particular goal; i.e., finding one specific target, your son, out of all of the other possibilities, none of which had any purpose for you.
(2) If you're an intellectually dishonest Darwinian, you say "The MTA worker before you told me that finding my son had exactly the same odds as finding any other target, so don't expect me to be grateful or say 'thank you' or anything like that. And don't walk across my lawn when you go back to your car, OK?"
Sorry if you don't understand specificity and how it relates to probability. The analogy/parody is the best I can do (for now, anyway).
So in reply I return the favor to his response -- irrelevant. To the third he did not respond.
Your analogy does not mention a key difference. Homeostasis, metabolism, and reproduction are essential properties of all organisms. (First two individual level, 3rd species level.) These are *internal* purposes, and there is no evidence they have been put there by an external, intelligent being. These are rather broad strokes. Some things about metabolism might have been intelligently designed; some things might be a product of contingency. ID can (and does) accommodate both; it's only Darwinism that's intellectually bigoted in favor of only undirected processes (because other processes conflict with its a priori materialist worldview). I don't know what you are willing to accept as "evidence." If certain systems (like ATP synthesis) appear to be complex in a way that resists logically breaking them down into component parts that appeared sequentially over time, then that is, at the very least, prima facie evidence that the system bears marks of design -- because other things that are designed (like mouse traps) cannot be so broken down either. The purpose of an outboard motor is clearly *externally* placed by an intelligent being.
The purpose of an outboard motor is locomotion. The construction of the outboard motor clearly bears marks of intelligence. The fact that, in addition to its purpose, it so happily found its way onto a boat, in just the right spot, is another mark of intelligence. Both of these apply to the bacterial flagellum. Its purpose is locomotion. It requires 50 specific proteins to function, or it doesn't function at all. The notion that it could have come about through random mutation + selection is absurd: if the odds of constructing one specific protein bit string of residue length 100 are 1/20^100, then the odds of constructing 50 different specific proteins of the same length are (1/20^100)^50. Mathematicians don't even have names for numbers this big. Consider, too, that each protein must be precisely regulated in order for the whole system to function.
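The arithmetic above can be sanity-checked in log space (this takes the paragraph's own assumption -- that every residue site must be one specific amino acid out of 20, and that the 50 sequences are independent -- as given; it is not a claim about real protein sequence tolerance):

```python
import math

residue_length = 100  # residue length assumed in the text
num_proteins = 50     # number of distinct proteins assumed in the text

# log10 of the odds against one specific 100-residue sequence: 20^100
log10_one = residue_length * math.log10(20)
# log10 of the odds against 50 such independent sequences: (20^100)^50
log10_all = num_proteins * log10_one

print(round(log10_one))  # about 130, i.e. 1 in 10^130
print(round(log10_all))  # about 6505, i.e. 1 in 10^6505
```

Working with log10 avoids overflow and makes the magnitudes directly readable as "number of decimal digits."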
What ID is willing to do, of course, is to consider that pre-formed instructions for flagella found their way into pre-formed instructions for bacteria (or into an actual bacterium, for that matter), through some process of lateral gene transfer. Motorboats are built exactly by such processes: a pre-formed design for an outboard motor gets inserted into a pre-formed design for a boat; or an already built outboard motor is physically brought and attached to an already existing rowboat.
Grammarian has heavily used the arguments of William Dembski here. So I post these links: http://www.csicop.org/si/2002-11/dembski.html http://www.csicop.org/sb/2000-12/reality-check.html
I'll deal only with the first link on this post.
Mark Perakh is, presumably, a physicist who likes to haunt the "letters to the editor" pages of "Commentary" magazine after it publishes articles by David Berlinski. The latter has to take precious time and space in his replies to the letters in order to debunk the many errors that Perakh unfailingly makes in regard to Berlinski's articles re Darwinism, information theory, etc. Perakh is, quite simply, a moron and a pest (though not necessarily in that order; in that regard, he shows a great lack of specificity). I used to subscribe to the Skeptical Inquirer myself many years ago; I'm sorry to see that its standards have dropped so miserably low. Here are some of the errors Mark Perakh makes:
Continuing in the same vein, Dembski repeats his often-stated thesis that what he calls "specified complexity" is a necessary indicator of design. True. The fallacy of that statement has been demonstrated more than once (for example, Edis 2001, Wilkins and Elsberry 2001, Perakh 2001 and 2002, Wein 2001 and 2002, Fitelson et al. 1999, Pennock 2000, Elsberry 2002, and others). Indeed, consider an example discussed several times before (Perakh 2001): Uh, oh. He's going to quote himself . . . Imagine a pile of pebbles found on a river shore. Usually each of them has an irregular shape, its color varying over its surface, and often its density also varying over its volume. There are no two pebbles which are identical in shape, color, and density distribution. I guess even Dembski would not argue that the irregular shape, color, and density distribution of a particular pebble resulted from intelligent design, regardless of how complex these shapes and distributions may happen to be. Each pebble formed by chance. Probably true, but not necessarily true. Dembski readily admits that his "explanatory filter," which is meant to filter out chance in order to perceive design, might yield a "false negative"; i.e., it might accidentally attribute chance to something that was in fact designed. For example, we might accidentally infer that a painting with drips and blobs on it was the product of some stochastic process, such as throwing paint at a canvas and letting the drops settle where they may. If we later learn that the drips and blobs were lovingly put precisely where they are by a Jackson Pollock, then we were wrong: what appeared to be chance was actually a product of intention. However, for the sake of argument we'll assume that we take the shape of the pebbles, and their distribution, to be a product of chance. Now, what if among the pebbles we find one that has a perfectly spherical shape, with an ideally uniform distribution of color and density?
Not too many people would deny that this piece in all likelihood is a product of design. True. However, it is much simpler than any other pebble, if, of course, complexity is defined in a logically consistent manner rather than in Dembski’s idiosyncratic way. A logically consistent definition of complexity is given, for example, in the algorithmic theory of randomness-probability-complexity (and is often referred to as Kolmogorov complexity). Dembski takes ample account of Kolmogorov-Chaitin complexity. I was hoping to post on it later, but I'll try to explain it (briefly) now. Kolmogorov and Chaitin were two computer scientists studying recursion theory. They tried to explain the notion of "complexity" by reference to the so-called "compressibility" or "incompressibility" of a computer algorithm. What does it mean?
Suppose you flip a coin 100 times and on each flip that is not heads, you turn the coin so that it reads "heads." Your sequence is HHH . . . (n = 100). Suppose you wanted to instruct a computer to reproduce that sequence (and it is, after all, a highly specific sequence, whose odds of appearing are 1/2^100). What would the instruction consist of? Probably this:
"Print 'H' 100 times."
You've taken a sequence that is 100 characters long and "compressed" it into a single short instruction of about 20 characters. If the sequence were 1,000 "H's," we'd have relatively more compression, the algorithm being merely "Print 'H' 1,000 times." This characteristic of "compressibility" is taken as indicating an inherent simplicity in the sequence.
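A toy version of this "description length" idea can be sketched in a few lines (the helper is my illustration, not Kolmogorov's formalism): emit a short run-length "program" when the string is one repeated character, and fall back to quoting the string verbatim otherwise.

```python
def shortest_description(s):
    """Toy description length: a short 'program' for uniform strings,
    a verbatim quotation otherwise."""
    if len(set(s)) == 1:
        return f"print {s[0]!r} * {len(s)}"  # e.g. print 'H' * 100
    return f"print {s!r}"                    # no compression available

uniform = "H" * 100
mixed = "HHTHTTTHTHHH"
print(len(shortest_description(uniform)))  # 15 characters describe a 100-character string
print(len(shortest_description(mixed)))    # the description is longer than the string itself
```

The uniform string's description stays 15 characters long no matter how large the run grows (aside from extra digits in the count), while the mixed string's description can only grow with the string.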
Conversely, if you genuinely toss the coin 100 times, you'll get a random sequence of H's and T's, perhaps something like HHTHTTTHTHHHHTHTHHTHHTHTHTTTTTTHTHHH, etc. Suppose you wanted to instruct a computer to reproduce that sequence (and, again, it is also highly specific; it's one unique sequence out of all possible sequences of H's and T's in 100 tosses, whose odds are exactly what we have with the first example, i.e., 1/2^100). Is there any way of compressing that pattern into a single algorithmic step? Apparently, the answer is no. In order to get a computer to reproduce that specific pattern, you have to instruct it with all the original characters of the pattern:
1. print H
2. print H
3. print T
4. print H
5. print T
6. print T
etc., etc., until the entire original pattern is given. You could, of course, also have one uncompressed step, such as:
1. Print H, then H, then T, then H, then T, then T . . . until you've given it the entire original sequence to reproduce.
This characteristic of "incompressibility" is taken as indicating an inherent complexity to the sequence.
Kolmogorov complexity is not concerned with questions about design. We all know that a painter could paint like Vermeer or he could paint like Jackson Pollock, and that the works of both are products of design. A person could honestly flip a random sequence, or, by design, purposely turn over a coin so that the first letter is H, then the second is H, then the 3rd is T, etc., until the entire 100-bit sequence looks random but is not. Kolmogorov complexity is good at showing the structural differences between two sequences that have the same odds of appearing (such as HHHH or HTHH) but which differ in terms of relative simplicity and complexity. It has nothing to say about the issue of goals, purposes, design, or intelligence.
The Kolmogorov complexity of a perfectly spherical piece of stone is much lower than it is for any other pebble having irregular shape and non-uniform distribution of density and color. True. Indeed, to describe the perfectly spherical piece one needs a very simple program (or algorithm), actually limited to just one number for the sphere's diameter, one number for density, and a brief indication of color. For a piece of irregular shape, the program necessarily must be much longer, as it requires many numbers to reproduce the complex shape and the distributions of density and of color. True. This is a very simple example of the fallacy of Dembski's thesis according to which design is indicated by "specified complexity." Actually, in this example (as well as in an endless number of other situations) it is simplicity which seems to point to design while complexity seems to indicate chance as the antecedent cause of the item's characteristics. What this indicates is that Kolmogorov complexity is unsuited for distinguishing things that were designed from things that were undesigned. It wasn't meant for that. It was meant to distinguish simple things from complex things, irrespective of the source of the simplicity or the complexity. We all know that a perfect sphere amongst pebbles would probably indicate design, and so does Dembski. No ID person, and certainly not Dembski, has ever said that intelligence only produces complicated things; it can also produce simple things. It can also produce things that look as if they were produced randomly. The gravamen of Dembski's argument is that there is a peculiar species of complexity called "specified complexity" -- in his nomenclature, "Complex Specified Information" or "CSI" -- which is only found in designed things, even designed things that are rather simple (such as mousetraps). Dembski has never said that all designed things exhibit CSI; rather, all instances of CSI point to design.
This supposed refutation of Dembski by Perakh in the Skeptical Inquirer simply shows how far the latter will go to misunderstand, misstate, and mischaracterize the issues involved in the ID/ND debate.
(Edited by The Grammarian on 8/21, 8:57am)