Charlie Hatchett
Guest
- Thread starter
- #61
The formation of new alleles gets observed all the time; there are tons of references about this on the web.
Explain to me how this is new information?
The formation of new alleles gets observed all the time; there are tons of references about this on the web.
The Barbarian just did that. What part of his explanation do you disagree with?
Alleles are not a different term for mutations. Moreover, they are used in the context of population genetics, not individual genetics. The size of the genetic package (overall genome of the population) actually increases, as the new allele doesn't replace something else that gets lost, but reduces redundancy. The old allele remains extant in the genomes of other individuals of the population after all, unless the new one is more suitable for the population's niche and takes over.

The part about a new "allele" (aka a mutation) containing 20% of the information in the genetic package.
Let's see your math then. Just saying so doesn't make it so.

The mutation (aka a new allele) is actually a decrease in information.
Barbarian:
The information for a given gene in a population is:
−∑ p(i) log(p(i))
where p(i) is the frequency of the ith allele.
So, if there are four alleles, each 25%, then the information for that gene is about 0.6 (−0.25 × log(0.25) × 4).
Suppose a new allele appears and eventually five alleles comprise 20% each.
The information for that gene will then be just a bit less than 0.7, a significant increase in information.
His calculation says otherwise, and it seems watertight.
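For reference, here is a minimal sketch of that calculation (mine, not from the thread). The one assumption is that Barbarian is using log base 10, which is what reproduces his 0.6 and 0.7 figures; with log base 2 the same frequencies give 2.0 and about 2.32 bits, and the direction of the change is the same either way.

[code]
# Minimal sketch: Shannon measure -sum(p * log p) over allele frequencies.
# Assumes log base 10 to match the 0.6 / 0.7 figures quoted above.
import math

def gene_information(freqs, base=10):
    return -sum(p * math.log(p, base) for p in freqs if p > 0)

print(gene_information([0.25] * 4))  # four alleles at 25% each  -> ~0.602
print(gene_information([0.20] * 5))  # five alleles at 20% each  -> ~0.699
[/code]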
Which of these two blocks has more information?
abcdefg
abcdefg
abcdefg
abcdefg
abcdefg
abcdefg
abmdefg
abcdefg
Let's see your math then. Just saying so doesn't make it so.
Completed in 2003, the Human Genome Project (HGP) was a 13-year project coordinated by the U.S. Department of Energy and the National Institutes of Health. During the early years of the HGP, the Wellcome Trust (U.K.) became a major partner; additional contributions came from Japan, France, Germany, China, and others. See our history page for more information.
Project goals were to
· identify all the approximately 20,000-25,000 genes in human DNA,
· determine the sequences of the 3 billion chemical base pairs that make up human DNA,
· store this information in databases,
· improve tools for data analysis,
· transfer related technologies to the private sector, and
· address the ethical, legal, and social issues (ELSI) that may arise from the project
http://www.ornl.gov/sci/techresources/H ... home.shtml
In the example, you're saying you know with relative certainty that a new allele possesses 20% of the total genetic information?
Remember the saying: output from an equation is only as good as the input (aka trash in, trash out).
Again, wand-waving. You have no way of knowing that the "extra" "allele" added to the population contains 20% of the information for the total population genetic package.
The % was just arbitrarily assigned.
And how do you know that "allele" is new?
It very easily could be (and most likely is) a trait that has always been possessed in the species but rarely expressed, or a new mutation.
Information does not increase, just because there is more code.
It's impossible to know without knowing the "language".
The whole batch could represent only 1 bit of information...or none at all. Again, I think you're confusing "code" with information.
Quote:
In the example, you're saying you know with relative certainty that a new allele possesses 20% of the total genetic information?
Yep. All it has to be is different from the other four. Before we go any further, perhaps you should tell us what you think "information" is, so we can clear up any misunderstandings.
Quote:
Remember the saying: output from an equation is only as good as the input (aka- trash in, trash out).
So, you're saying that a new allele can't appear in a population? That's demonstrably false. Would you like some examples?
Alleles are not a different term for mutations. Moreover, they are used in the context of population genetics, not individual genetics. The size of the genetic package (overall genome of the population) actually increases, as the new allele doesn't replace something else that gets lost, but reduces redundancy. The old allele remains extant in the genomes of other individuals of the population after all, unless the new one is more suitable for the population's niche and takes over.
Quote:
Again, wand-waving. You have no way of knowing that the "extra" "allele" added to the population contains 20% of the information for the total population genetic package.
Yes, he does. Again, you probably don't know what "information" means in population genetics and information science.
Quote:
The % was just arbitrarily assigned.
Feel free to use other frequencies. I'll be glad to calculate those, and show you that there will still be an increase in information.
Quote:
And how do you know that "allele" is new?
We can sometimes know, by tracing down the individual in which the mutation occurred.
Quote:
It very easily could be (and most likely is) a trait that has always been possessed in the species but rarely expressed, or a new mutation.
We have quite a few of those identified. Would you like some examples?
Quote:
Information does not increase, just because there is more code.
Not more. Different code. That increases information.
Which of these two blocks has more information?
abcdefg
abcdefg
abcdefg
abcdefg
abcdefg
abcdefg
abmdefg
abcdefg
Quote:
It's impossible to know without knowing the "language".
No. The second one has more information, because the uncertainty is increased.
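To make "the uncertainty is increased" concrete, here is a minimal sketch (mine, not from the thread). It assumes the listing above is read as two blocks: eight identical "abcdefg" lines versus the same eight lines with one "abmdefg", and it sums the per-position Shannon uncertainty over the columns.

[code]
# Minimal sketch: per-column Shannon uncertainty (bits), summed over positions.
import math
from collections import Counter

def block_uncertainty(lines):
    total = 0.0
    for column in zip(*lines):          # walk the character positions
        counts = Counter(column)
        n = len(column)
        total += -sum((c / n) * math.log2(c / n) for c in counts.values())
    return total

block_a = ["abcdefg"] * 8                                   # all lines identical
block_b = ["abcdefg"] * 6 + ["abmdefg"] + ["abcdefg"]       # one variant line

print(block_uncertainty(block_a))  # 0.0 bits: every column is fully predictable
print(block_uncertainty(block_b))  # ~0.54 bits: the c/m column adds uncertainty
[/code]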
Quote:
The whole batch could represent only 1 bit of information...or none at all. Again, I think you're confusing "code" with information.
Again, you should give us your definition of "information." It appears you don't know what the word means.
Yep. All it has to be is different from the other four. Before we go any further, perhaps you should tell us what you think "information" is, so we can clear up any misunderstandings.
So, you're saying that a new allele can't appear in a population? That's demonstrably false. Would you like some examples?
Not more. Different code. That increases information.
Alleles are not a different term for mutations.
Quote:
And how do you know that "allele" is new?
We can sometimes know, by tracing down the individual in which the mutation occurred.
Yes, he does. Again, you probably don't know what "information" means in population genetics and information science.
No. The second one has more information, because the uncertainty is increased.
R = Hbefore - Hafter.
where H is the Shannon uncertainty:
H = - sum (from i = 1 to number of symbols) Pi log2 Pi (bits per symbol)
We already went through all this a month ago
Information does not = entropy.
Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine).
You're equating randomness with information...that's wrong.
I never claimed that new "alleles" don't appear in populations, but I maintain they're mutations, which degrade the original information content.
Again, you're confusing randomness with information. That's wrong.
And how do you know that "allele" is new?
Right...a mutation.
Information does not = entropy.
By stating information = randomness, you are demonstrating that you don't have a grip on information theory.
jwu:
Then apply this to genetics - just like you asked us to do, "applying it to a real world scenario" - and give us an actual value for the information content of the genome before and after a new allele has appeared, as well as the way how you calculated it.
Show us with cold hard math how this is a decrease of information, so that we finally know how you want to calculate the information content of a genome.
And then show us a process that is proposed by the theory of evolution which requires an increase of information according to your understanding of it. I think i've asked this before...you didn't identify any such process yet. I wonder why.
Quote:
Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine).
...and surprising things constitute a greater decrease of uncertainty than expected ones. Hence the second block has more information, as in the first one the "c" already was more or less expected to come next.
(Barbarian demonstrates how a new allele increases information by increasing uncertainty)
Yep, but you need to be reminded, it seems. Let's get you a definition of "information" that can actually be tested...
The quantity which uniquely meets the natural requirements that one sets up for information turns out to be exactly that which is known in thermodynamics as entropy. It is expressed in terms of the various probabilities involved--those of getting to certain stages in the process of forming messages, and the probabilities that, when in those stages, certain symbols be chosen next. The formula, moreover, involves the logarithm of probabilities, so that it is a natural generalization of the logarithmic measure spoken of above in connection with simple cases.
To those who have studied the physical sciences, it is most significant that an entropy-like expression appears in the theory as a measure of information. Introduced by Clausius nearly one hundred years ago, closely associated with the name of Boltzmann, and given deep meaning by Gibbs in his classic work on statistical mechanics, entropy has become so basic and pervasive a concept that Eddington remarks "The law that entropy always increases - the second law of thermodynamics - holds, (Charlie: This is true) I think, the supreme position among the laws of Nature."
http://www.uoregon.edu/~felsing/virtual_asia/info.html
Now, you may not like that definition, but it turns out to be the right one. It works. This understanding of information allows us to measure information in things like genetics, data transmission, and so on, and it tells us how to make low-powered transmitters that work over millions of miles, and how to stuff the maximum amount of reliable messages into a data channel.
Given that Shannon's definition works, and yours does not, I don't think you're going to have much luck in convincing engineers and scientists to go with yours.
Barbarian on why a new allele produces new information:
Yep. All it has to be is different from the other four. Before we go any further, perhaps you should tell us what you think "information" is, so we can clear up any misunderstandings.
Charlie:
Information does not = entropy.
Barbarian:
Yep, it does. See above.
Charlie:
Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine).
Barbarian:
The quantity which uniquely meets the natural requirements that one sets up for information turns out to be exactly that which is known in thermodynamics as entropy.
Quote:
You're equating randomness with information...that's wrong.
You're confusing entropy with randomness. That's wrong. A system at maximum entropy cannot have any randomness whatever, since that would decrease entropy. Remember, in a physical sense, entropy is the lack of any useful heat to do work. And if the thermal energy in a system is randomly distributed, it is not evenly distributed, and therefore heat continues to flow.
Barbarian asks:
So, you're saying that a new allele can't appear in a population? That's demonstrably false. Would you like some examples?
Quote:
I never claimed that new "alleles" don't appear in populations, but I maintain they're mutations, which degrade the original information content.
Barbarian:
As you just learned, a new allele increases information. Always does. This is what Shannon was pointing out. Information is the measure of uncertainty in a message. The entropy, in other words.
Charlie:
Again, you're confusing randomness with information. That's wrong.
Barbarian:
No. Rather, you've conflated entropy and randomness.
Quote:
And how do you know that "allele" is new?
Barbarian observes:
We can sometimes know, by tracing down the individual in which the mutation occurred.
Quote:
Right...a mutation.
Remember, a mutation is not an allele.
Barbarian observes:
Yes, he does. Again, you probably don't know what "information" means in population genetics and information science.
No. The second one has more information, because the uncertainty is increased.
Charlie:
Information does not = entropy.
Barbarian:
If that were so, you wouldn't be communicating over this line. Engineers built that system, with the understanding that information = entropy. That works. Yours doesn't.
Charlie:
By stating information = randomness, you are demonstrating that you don't have a grip on information theory.
Barbarian:
See above. Entropy is not randomness.
Barbarian:
One more time, just so you remember:
The quantity which uniquely meets the natural requirements that one sets up for information turns out to be exactly that which is known in thermodynamics as entropy.
Information Is Not Entropy,
Information Is Not Uncertainty!
Dr. Thomas D. Schneider
National Institutes of Health
National Cancer Institute
Center for Cancer Research Nanobiology Program
Molecular Information Theory Group
Frederick, Maryland 21702-1201
toms@ncifcrf.gov
http://www.ccrnp.ncifcrf.gov/~toms/
There are many many statements in the literature which say that information is the same as entropy. The reason for this was told by Tribus. The story goes that Shannon didn't know what to call his measure so he asked von Neumann, who said `You should call it entropy ... [since] ... no one knows what entropy really is, so in a debate you will always have the advantage' (Tribus1971).
Shannon called his measure not only the entropy but also the "uncertainty". I prefer this term because it does not have physical units associated with it. If you correlate information with uncertainty, then you get into deep trouble. Suppose that:
information ~ uncertainty
but since they have almost identical formulae:
uncertainty ~ physical entropy
so
information ~ physical entropy
BUT as a system gets more random, its entropy goes up:
randomness ~ physical entropy
so
information ~ physical randomness
How could that be? Information is the very opposite of randomness!
The confusion comes from neglecting to do a subtraction:
Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine).
If you use this definition, it will clarify all the confusion in the literature.
Note: Shannon understood this distinction and called the uncertainty which is subtracted the 'equivocation'. Shannon (1948) said on page 20:
R = H(x) - Hy(x)
"The conditional entropy Hy(x) will, for convenience, be called the equivocation. It measures the average ambiguity of the received signal."
The mistake is almost always made by people who are not actually trying to use the measure.
http://www.lecb.ncifcrf.gov/~toms/infor ... ainty.html
I'm Confused: How Could Information Equal Entropy?
If someone says that information = uncertainty = entropy, then they are confused, or something was not stated that should have been. Those equalities lead to a contradiction, since entropy of a system increases as the system becomes more disordered. So information corresponds to disorder according to this confusion.
If you always take information to be a decrease in uncertainty at the receiver and you will get straightened out:
R = Hbefore - Hafter.
where H is the Shannon uncertainty:
H = - sum (from i = 1 to number of symbols) Pi log2 Pi (bits per symbol)
and Pi is the probability of the ith symbol. If you don't understand this, please refer to "Is There a Quick Introduction to Information Theory Somewhere?".
Imagine that we are in communication and that we have agreed on an alphabet. Before I send you a bunch of characters, you are uncertain (Hbefore) as to what I'm about to send. After you receive a character, your uncertainty goes down (to Hafter). Hafter is never zero because of noise in the communication system. Your decrease in uncertainty is the information (R) that you gain.
Since Hbefore and Hafter are state functions, this makes R a function of state. It allows you to lose information (it's called forgetting). You can put information into a computer and then remove it in a cycle.
Many of the statements in the early literature assumed a noiseless channel, so the uncertainty after receipt is zero (Hafter=0). This leads to the SPECIAL CASE where R = Hbefore. But Hbefore is NOT "the uncertainty", it is the uncertainty of the receiver BEFORE RECEIVING THE MESSAGE.
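Here is a minimal sketch of that before/after bookkeeping (mine, not Schneider's; the probability numbers are made up purely for illustration). It uses a four-symbol alphabet and a noisy "after" distribution, so Hafter stays above zero as the FAQ describes.

[code]
# Minimal sketch: R = Hbefore - Hafter with illustrative receiver probabilities.
import math

def uncertainty(probs):
    # Shannon uncertainty H = -sum(p * log2 p), in bits per symbol.
    return -sum(p * math.log2(p) for p in probs if p > 0)

h_before = uncertainty([0.25, 0.25, 0.25, 0.25])  # four symbols equally likely: 2 bits
h_after = uncertainty([0.85, 0.05, 0.05, 0.05])   # residual noise keeps this above zero
print(h_before - h_after)                          # R: about 1.15 bits gained
[/code]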
A way to see this is to work out the information in a bunch of DNA binding sites.
Definition of "binding": many proteins stick to certain special spots on DNA to control genes by turning them on or off. The only thing that distinguishes one spot from another spot is the pattern of letters (nucleotide bases) there. How much information is required to define this pattern?
Here is an aligned listing of the binding sites for the cI and cro proteins of the bacteriophage (i.e., virus) named lambda:
alist 5.66 aligned listing of:
* 96/10/08 19:47:44, 96/10/08 19:31:56, lambda cI/cro sites
piece names from:
* 96/10/08 19:47:44, 96/10/08 19:31:56, lambda cI/cro sites
The alignment is by delila instructions
The book is from: -101 to 100
This alist list is from: -15 to 15
------ ++++++
111111--------- +++++++++111111
5432109876543210123456789012345
...............................
OL1 J02459 35599 + 1 tgctcagtatcaccgccagtggtatttatgt
J02459 35599 - 2 acataaataccactggcggtgatactgagca
OL2 J02459 35623 + 3 tttatgtcaacaccgccagagataatttatc
J02459 35623 - 4 gataaattatctctggcggtgttgacataaa
OL3 J02459 35643 + 5 gataatttatcaccgcagatggttatctgta
J02459 35643 - 6 tacagataaccatctgcggtgataaattatc
OR3 J02459 37959 + 7 ttaaatctatcaccgcaagggataaatatct
J02459 37959 - 8 agatatttatcccttgcggtgatagatttaa
OR2 J02459 37982 + 9 aaatatctaacaccgtgcgtgttgactattt
J02459 37982 - 10 aaatagtcaacacgcacggtgttagatattt
OR1 J02459 38006 + 11 actattttacctctggcggtgataatggttg
J02459 38006 - 12 caaccattatcaccgccagaggtaaaatagt
^
Each horizontal line represents a DNA sequence, starting with the 5' end on the left, and proceeding to the 3' end on the right. The first sequence begins with: 5' tgctcag ... and ends with ... tttatgt 3'. Each of these twelve sequences is recognized by the lambda repressor protein (called cI) and also by the lambda cro protein.
What makes these sequences special so that these proteins like to stick to them? Clearly there must be a pattern of some kind.
Read the numbers on the top vertically. This is called a "numbar". Notice that position +7 always has a T (marked with the ^). That is, according to this rather limited data set, one or both of the proteins that bind here always require a T at that spot. Since the frequency of T is 1 and the frequencies of other bases there are 0, H(+7) = 0 bits. But that makes no sense whatsoever! This is a position where the protein requires information to be there.
That is, what is really happening is that the protein has two states. In the BEFORE state, it is somewhere on the DNA, and is able to probe all 4 possible bases. Thus the uncertainty before binding is Hbefore = log2(4) = 2 bits. In the AFTER state, the protein has bound and the uncertainty is lower: Hafter(+7) = 0 bits. The information content, or sequence conservation, of the position is Rsequence(+7) = Hbefore - Hafter = 2 bits. That is a sensible answer. Notice that this gives Rsequence close to zero outside the sites.
If you have uncertainty and information and entropy confused, I don't think you would be able to work through this problem. For one thing, one would get high information OUTSIDE the sites. Some people have published graphs like this.
A nice way to display binding site data so you can see them and grasp their meaning rapidly is by the sequence logo method. The sequence logo for the example above is at http://www.lecb.ncifcrf.gov/~toms/galle ... i.fig1.gif. More information on sequence logos is in the section What are Sequence Logos?
More information about the theory of BEFORE and AFTER states is given in the papers http://www.lecb.ncifcrf.gov/~toms/paper/nano2 , http://www.lecb.ncifcrf.gov/~toms/paper/ccmm and http://www.lecb.ncifcrf.gov/~toms/paper/edmm.
http://www.ccrnp.ncifcrf.gov/~toms/bion ... al.Entropy
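As an illustration of the binding-site arithmetic above, here is a minimal sketch (mine, not from the FAQ, and it ignores the small-sample correction a real analysis would apply). Hbefore is taken as log2(4) = 2 bits, and Hafter is computed from the bases actually observed in one column of the alignment, for example the all-T column at position +7.

[code]
# Minimal sketch: Rsequence for one alignment column = Hbefore - Hafter,
# with Hbefore = log2(4) = 2 bits (any of a/c/g/t possible before binding).
import math
from collections import Counter

def r_sequence(column):
    counts = Counter(column)
    n = len(column)
    h_after = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return 2.0 - h_after

print(r_sequence("tttttttttttt"))  # position +7: all twelve sites have T -> R = 2 bits
print(r_sequence("acgtacgtacgt"))  # hypothetical unconstrained column  -> R = 0 bits
[/code]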
Barbarian:
Charlie apparently doesn't know what "information" is, much less how to calculate it.
I'm intrigued by the idea that increasing information is a "decrease" in information.
Let's turn this one backwards, Charlie. Suppose God magically created five alleles, instead of doing it the way He did. Then suppose one allele disappears in a population, and there isn't any more.
Is that then a gain in information? If not, why is a new allele not an increase in information?
I have to say, Charlie, it looks like you're just chanting phrases they taught you in YE indoctrination.
That doesn't answer the question. It's about binding specificity as a measure of information content, not about the change of information if a new allele appears.

See references below.
That doesn't answer the question. It's about binding specificity as a measure of information content, not about the change of information if a new allele appears.
You claimed that the emergence of a new allele decreased the information content. I asked you to back this up with actual math, that specific example.
Do it or retract that claim.
1. And if the effect has more entropy than the cause (well...technically one cannot say such a thing, but anyway), then i don't see how anyone could possibly argue that there is a conflict with the 2ndLoT either.
2. In Shannon's terms it is just the degree of alteration of the original message. Evolution actually requires an increase of Shannon entropy.
3. Repeated observance shows us that mutations can form new proteins. If that is not an increase of information, then what is?
4. It seems to me that you equate the usefulness of a mutation with its information content. That however has no foundation in information theory.
5. There is no correlation between the effects of mutation on the individual's reproductive success and its change of the information content of the genome.
6. By the way...entropy is a measure of complexity. The higher the level of entropy, the more possible microstates, the more complex an entity.
What are you talking about? This still is about binding specificity and has nothing to do with alleles! An allele is not a binding site.

http://www.ccrnp.ncifcrf.gov/~toms/bionet.info-theory.faq.html#Information.Equal.Entropy
In the example given about midway down the page of the above link, substitute an unknown sequence for a known sequence:
Hbefore = 2 bits.
Hafter = 2 bits (because the arrangement does not represent a known sequence).
R=2-2
R=0
There’s a decrease in info of 2 bits because, before the introduction of the new allele:
Hbefore = 2 bits.
Hafter = 0 bits (because it’s a known sequence)
R=2-0
R=2
Your decrease in uncertainty is the information (R) that you gain.
The information content, or sequence conservation, of the position is Rsequence(+7) = Hbefore - Hafter = 2 bits.
Why should i retract any of them? If i changed my position somewhere, then please point out self-contradictions so that i can correct them.

Are you ready to retract the following claims:
jwu:
What are you talking about? This still is about binding specificity and has nothing to do with alleles! An allele is not a binding site.
jwu:
Show us with cold hard math how this is a decrease of information, so that we finally know how you want to calculate the information content of a genome.
"Information is 'always' a measure of the decrease of uncertainty at a receiver" ...
http://www.lecb.ncifcrf.gov/~toms/infor ... ainty.html
jwu:
Oh, and by the way the website which you referenced does not declare that to be a decrease of information, but an increase:
R= 2 bits
Nothing but a fallacy. I can and do use sources with which I don't 100% agree. The parts on which I'm relying are just the parts concerning Shannon's Information Theory, which the author very clearly presents. But you would have me discard the nice presentation of Shannon's Information Theory, only because I don't agree with the author's conclusions. Again...fallacy.

jwu:
It's not a good idea to use a website as a reference which argues against what you want to demonstrate.
You can keep quoting that website all day, unless you actually show mistakes in its reasoning it is the author's position that you bring into this thread - and he doesn't agree with you. So unless there are mistakes in his works, which you would have to point out, any claim from your side that this website supports your position cannot be anything but a misrepresentation.
Charlie:
Note that not even one bit of info is hypothesized until 300 generations have passed. So this experiment is not grounded in observation and repetition. It’s just conjecture.
Please explain how it makes this experiment conjecture just because that particular run of the simulation took a while to take off? That's handwaving of the worst kind! What particular aspect of the simulation do you disagree with?
jwu:
Why should i retract any of them? If i changed my position somewhere, then please point out self-contradictions so that i can correct them.
1. And if the effect has more entropy than the cause...then i don't see how anyone could possibly argue that there is a conflict with the 2ndLoT either.
2. In Shannon's terms it is just the degree of alteration of the original message. Evolution actually requires an increase of Shannon entropy.
3. Repeated observance shows us that mutations can form new proteins. If that is not an increase of information, then what is?
4. It seems to me that you equate the usefulness of a mutation with its information content. That however has no foundation in information theory.
5. There is no correlation between the effects of mutation on the individual's reproductive success and its change of the information content of the genome.
6. By the way...entropy is a measure of complexity. The higher the level of entropy, the more possible microstates, the more complex an entity.
By measuring its binding specificity...ok. But you cannot take the original binding sequence. It's the whole point that new ones arise.

What I'm talking about is whether a mutant allele adds or decreases info:
In that particular run of the simulation there was a period during which there was no significant increase of information at the beginning...albeit there was some minor up and down which alone already kills your argument. However, are you aware that any run is different? The above graph does not in any way demonstrate that there always is 0 information for 200 generations (and it doesn't even say that).

Even your graph above indicates a mutant allele contains 0 bits of information for the first 200 generations. Are you claiming the wild allele did not contain any information? If the wild allele contained any information, then the result of a mutant allele is a decrease in info, as far as observable, repeatable experiments are concerned.
And we can observe thousands of generations of bacteria within a year. And even if that wasn't the case, what premises of the simulation might become invalid after a few generations? Please be specific. It's just random mutation and selection.

Anything beyond 4-5 generations is just conjecture. We've only been toying with information theory since the mid 1900's.
You're begging the question...you initially presuppose that the new variant cannot bind anything.

The binding sites decode the allele. In the first example above, the mutant allele binds to the bases, and what is decoded is something that doesn't make any sense...so the uncertainty after receipt is the same as before receipt:
It's not a fallacy because i asked you with what of his conclusions you disagree, and why you do so. The way how you are doing it right now is the same as a quote mine, you take the parts that suit you (but they don't even do that) and use them to say something that the original author did not intend to say.

Nothing but a fallacy. I can and do use sources with which I don't 100% agree. The parts on which I'm relying are just the parts concerning Shannon's Information Theory, which the author very clearly presents. But you would have me discard the nice presentation of Shannon's Information Theory, only because I don't agree with the author's conclusions. Again...fallacy.
Why isn't it repeatable? I do it all the time when i run that applet, with similar results. And if you use the same seed for the random number generator, then you can repeat that exact particular run too. And actually the emergence of new binding sites is frequently observed.

The experiment is not based on what observations reveal, and it's certainly not repeatable. So the conclusion is invalid at the very core of The Scientific Method. Again, it's wand waving. Please explain to me how my reasoning is hand waving? My reasoning is based on The Scientific Method...the core of science. Remember, what we're talking about is a "simulation", certainly not based on reality as far as observations go.
I said it would take an increase of Shannon entropy (i.e. a change of the original message) if the original genome is defined to have the highest possible information content.

1. Here you equate evolution with a decrease in information. Can you honestly say it takes less info to replicate, repeatedly, a human compared to a single cell organism?
See 1. From a conservation point of view, the ToE does require a decrease of information as it requires change, and any change is a decrease by definition there.

2. Here you equate an increase in uncertainty with an increase in information, or you're admitting ToE is impossible, which, in that case, I agree. To state that ToE requires less and less information is nonsensical.
There i was giving you the opportunity to explain why these are not an increase of information...i did not assert anything there but merely asked you something, so i cannot retract any assertion. Besides, i do not recall an answer to that question yet.

3. Here you claim mutations are an increase in information and also state increases in entropy equate to more information.
4. Here you state "useful mutations" are not information, and, therefore, claim that ToE requires no new information. How do you form an eye without information, especially repetitively?
That's correct. E.g. some mutation which has a huge new binding specificity of a protein - which i think would constitute an increase of information according to your understanding - is not necessarily beneficial for the organism.

5. Here you state there's no correlation between damaging mutations and genetic information.
Yes, it takes more bits to describe and has higher overall entropy. I don't think you will disagree...

6. Here you state the more disordered a mass is, the more complex it is. This is the equivalent of saying a bunch of gravel in a creek bed is more complex than a human.
By measuring its binding specificity...ok. But you cannot take the original binding sequence. It's the whole point that new ones arise.
If a bacteria is defined to have the highest possible information content and any alteration is considered a decrease by definition, then yes, it would only take loss of information to get from there to a human.
_____________________________________________________________________
See 1. From a conservation point of view, the ToE does require a decrease of information as it requires change, and any change is a decrease by definition there.
There i was giving you the opportunity to explain why these are not an increase of information...i did not assert anything there but merely asked you something, so i cannot retract any assertion. Besides, i do not recall an answer to that question yet.
That's correct. E.g. some mutation which has a huge new binding specificity of a protein - which i think would constitute an increase of information according to your understanding - is not necessarily beneficial for the organism.
Yes, it takes more bits to describe and has higher overall entropy. I don't think you will disagree...
But you presupposed that that would happen...

But, using Shannon's theory, the uncertainty has increased, therefore reducing the amount of information.
From a conservational point of view it does make sense. If the sender did not intend these features to appear, then they are a disturbance of the message and thus a loss of information.

But, that doesn't make any sense. You're equating a decrease in information with the appearance of new, complex, and replicated structures. Surely, to accurately replicate an eye, for example, much information is needed.
But then you contradict yourself! If you argue that something which decreases binding specificity is a loss of information, how can something which increases binding specificity not be a gain then?

I don't agree that the mutation is an increase in information. Uncertainty is increased, therefore it's less information.
And without intelligence, information means nothing.

To describe is an act of intelligence, though.
Again, your own sources (well...their own sources) contradict you.

Higher entropy, in and of itself, is a decrease in information.