[_ Old Earth _] Five Pillars of Evolution Compared with Creation

The formation of new alleles gets observed all the time; there are tons of references about this on the web.

Explain to me how this is new information?
 
The Barbarian just did that. What part of his explanation do you disagree with?

The part about a new "allele" (aka a mutation) containing 20% of the information in the genetic package. That's just wand-waving. The mutation (aka a new allele) is actually a decrease in information.
 
His calculation says otherwise, and it seems watertight.

The part about a new "allele" (aka a mutation) containing 20% of the information in the genetic package.
Alleles are not a different term for mutations. Moreover, they are used in the context of population genetics, not individual genetics. The size of the genetic package (the overall genome of the population) actually increases, as the new allele doesn't replace something else that gets lost, but reduces redundancy. The old allele remains extant in the genomes of other individuals of the population after all, unless the new one is more suitable for the population's niche and takes over.

Which of these two blocks has more information?

abcdefg
abcdefg
abcdefg
abcdefg

abcdefg
abcdefg
abmdefg
abcdefg

The mutation (aka- a new allele) is actually a decrease in information.
Let's see your math then. Just saying so doesn't make it so.
 
Charlie apparently doesn't know what "information" is, much less how to calculate it.

I'm intrigued by the idea that increasing information is a "decrease" in information.

Let's turn this one backwards, Charlie. Suppose God magically created five alleles, instead of doing it the way He did. Then suppose one allele disappears in a population, and there isn't any more.

Is that then a gain in information? If not, why is a new allele not an increase in information?

I have to say, Charlie, it looks like you're just chanting phrases they taught you in YE indoctrination.
 
Barbarian:
The information for a given gene in a population is:

- ∑ p(i) log(p(i))

where p(i) is the frequency of the ith allele.

So, if there are four alleles, each at 25%, then the information for that gene is about 0.6 (-0.25 x log(0.25) x 4, using base-10 logs).

Suppose a new allele appears and eventually five alleles comprise 20% each.

The information for that gene will then be just a bit less than 0.7, a significant increase in information.
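A minimal sketch of that arithmetic, assuming base-10 logs (which is what makes the four-allele case come out near 0.6); the helper name here is just for illustration:

import math

def gene_information(freqs):
    # -sum p(i) * log10 p(i) over the allele frequencies of one gene
    return -sum(p * math.log10(p) for p in freqs if p > 0)

print(gene_information([0.25] * 4))  # ~0.602 -- four alleles at 25% each
print(gene_information([0.20] * 5))  # ~0.699 -- five alleles at 20% each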
His calculation says otherwise, and it seems watertight.

In the example, you're saying you know with relative certainty that a new allele possesses 20% of the total genetic information? Remember the saying: output from an equation is only as good as the input (aka- trash in, trash out).


Alleles are not a different term for mutations. Moreover, they are used in the context of population genetics, not individual genetics. The size of the genetic package (the overall genome of the population) actually increases, as the new allele doesn't replace something else that gets lost, but reduces redundancy. The old allele remains extant in the genomes of other individuals of the population after all, unless the new one is more suitable for the population's niche and takes over.
Again, wand-waving. You have no way of knowing that the "extra" "allele" added to the population contains 20% of the information for the total population genetic package. The % was just arbitrarily assigned. And how do you know that "allele" is new? It very easily could be (and most likely is) a trait that has always been possessed in the species but rarely expressed, or a new mutation. Information does not increase, just because there is more code.

Which of these two blocks has more information?

abcdefg
abcdefg
abcdefg
abcdefg

abcdefg
abcdefg
abmdefg
abcdefg

It's impossible to know without knowing the "language". The whole batch could represent only 1 bit of information... or none at all. Again, I think you're confusing "code" with information.


Let's see your math then. Just saying so doesn't make it so.

It's impossible to quantify, with the limited knowledge we have concerning the particular code with which we're dealing. It's only been fully mapped in the past 3 years. So now we know the characters of the "alphabet", but we're far from understanding the "language" and its intricacies. No one has even come close to understanding the code for even the simplest organism. If so, we would be able to create life. That's why the "trash in, trash out" saying has to be kept in mind when using mathematical constructs.


Completed in 2003, the Human Genome Project (HGP) was a 13-year project coordinated by the U.S. Department of Energy and the National Institutes of Health. During the early years of the HGP, the Wellcome Trust (U.K.) became a major partner; additional contributions came from Japan, France, Germany, China, and others. See our history page for more information.
Project goals were to
· identify all the approximately 20,000-25,000 genes in human DNA,
· determine the sequences of the 3 billion chemical base pairs that make up human DNA,
· store this information in databases,
· improve tools for data analysis,
· transfer related technologies to the private sector, and
· address the ethical, legal, and social issues (ELSI) that may arise from the project
http://www.ornl.gov/sci/techresources/H ... home.shtml
 
In the example, you're saying you know with relative certainty that a new allele possesses 20% of the total genetic information?

Yep. All it has to be is different from the other four. Before we go any further, perhaps you should tell us what you think "information" is, so we can clear up any misunderstandings.

Remember the saying: output from an equation is only as good as the input (aka- trash in, trash out).

So, you're saying that a new allele can't appear in a population? That's demonstrably false. Would you like some examples?

Alleles are not a different term for mutations. Moreover, they are used in the context of population genetics, not individual genetics. The size of the genetic package (the overall genome of the population) actually increases, as the new allele doesn't replace something else that gets lost, but reduces redundancy. The old allele remains extant in the genomes of other individuals of the population after all, unless the new one is more suitable for the population's niche and takes over.

Again, wand-waving. You have no way of knowing that the "extra" "allele" added to the population contains 20% of the information for the total population genetic package.

Yes, he does. Again, you probably don't know what "information" means in population genetics and information science.

The % was just arbitrarily assigned.

Feel free to use other frequencies. I'll be glad to calculate those, and show you that there will still be an increase in information.

And how do you know that "allele" is new?

We can sometimes know, by tracing down the individual in which the mutation occurred.

It very easily could be (and most likely is) a trait that has always been possessed in the species but rarely expressed, or a new mutation.

We have quite a few of those identified. Would you like some examples?

Information does not increase, just because there is more code.

Not more. Different code. That increases information.

Which of these two blocks has more information?

abcdefg
abcdefg
abcdefg
abcdefg

abcdefg
abcdefg
abmdefg
abcdefg


It's impossible to know without knowing the "language".

No. The second one has more information, because the uncertainty is increased.

The whole batch could represent only 1 bit of information... or none at all. Again, I think you're confusing "code" with information.

Again, you should give us your definition of "information." It appears you don't know what the word means.
 

We already went through all this a month ago:

http://www.christianforums.net/viewtopic ... c&start=30



Yep. All it has to be is different from the other four. Before we go any further, perhaps you should tell us what you think "information" is, so we can clear up any misunderstandings.


Information does not = entropy. Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine). You're equating randomness with information... that's wrong.

R = Hbefore - Hafter.

where H is the Shannon uncertainty:

H = - sum (from i = 1 to number of symbols) Pi log2 Pi (bits per symbol)

So, you're saying that a new allele can't appear in a population? That's demonstrably false. Would you like some examples?

I never claimed that new "alleles" don't appear in populations, but I maintain they're mutations, which degrade the original information content.

Not more. Different code. That increases information.

Again, you're confusing randomness with information. That's wrong.

Alleles are not a different term for mutations.


Quote:
And how do you know that "allele" is new?


We can sometimes know, by tracing down the individual in which the mutation occurred.

Right...a mutation.
Yes, he does. Again, you probably don't know what "information" means in population genetics and information science.



No. The second one has more information, because the uncertainty is increased.

Information does not = entropy. Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine). You're equating randomness with information... that's wrong.

R = Hbefore - Hafter.

where H is the Shannon uncertainty:

H = - sum (from i = 1 to number of symbols) Pi log2 Pi (bits per symbol)

By stating information = randomness, you are demonstrating that you don't have a grip on information theory.

All right guys, I'm done for the week...back to work we go. See ya' next weekend.

Peace 8-)
 
Information does not = entropy. Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine). You're equating randomness with information... that's wrong.

R = Hbefore - Hafter.

where H is the Shannon uncertainty:

H = - sum (from i = 1 to number of symbols) Pi log2 Pi (bits per symbol)
Then apply this to genetics - just like you asked us to do, "applying it to a real world scenario" - and give us an actual value for the information content of the genome before and after a new allele has appeared, as well as how you calculated it.
Show us with cold hard math how this is a decrease of information, so that we finally know how you want to calculate the information content of a genome.

And then show us a process that is proposed by the theory of evolution which requires an increase of information according to your understanding of it. I think I've asked this before... you haven't identified any such process yet. I wonder why.


Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine).
...and surprising things constitute a greater decrease of uncertainty than expected ones. Hence the second block has more information, as in the first one the "c" was already more or less expected to come next.
 
(Barbarian demonstrates how a new allele increases information by increasing uncertainty)

We already went through all this a month ago

Yep, but you need to be reminded, it seems. Let's get you a definition of "information" that can actually be tested...

The quantity which uniquely meets the natural requirements that one sets up for information turns out to be exactly that which is known in thermodynamics as entropy. It is expressed in terms of the various probabilities involved--those of getting to certain stages in the process of forming messages, and the probabilities that, when in those stages, certain symbols be chosen next. The formula, moreover, involves the logarithm of probabilities, so that it is a natural generalization of the logarithmic measure spoken of above in connection with simple cases.

To those who have studied the physical sciences, it is most significant that an entropy-like expression appears in the theory as a measure of information. Introduced by Clausius nearly one hundred years ago, closely associated with the name of Boltzmann, and given deep meaning by Gibbs in his classic work on statistical mechanics, entropy has become so basic and pervasive a concept that Eddington remarks "The law that entropy always increases - the second law of thermodynamics - holds, I think, the supreme position among the laws of Nature."

http://www.uoregon.edu/~felsing/virtual_asia/info.html

Now, you may not like that definition, but it turns out to be the right one. It works. This understanding of information allows us to measure information in things like genetics, data transmission, and so on, and it tells us how to make low-powered transmitters that work over millions of miles, and how to stuff the maximum amount of reliable messages into a data channel.

Given that Shannon's definition works, and yours does not, I don't think you're going to have much luck in convincing engineers and scientists to go with yours.

Barbarian on why a new allele produces new information:
Yep. All it has to be is different from the other four. Before we go any further, perhaps you should tell us what you think "information" is, so we can clear up any misunderstandings.

Information does not = entropy.

Yep, it does. See above.

Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine).

From above:
The quantity which uniquely meets the natural requirements that one sets up for information turns out to be exactly that which is known in thermodynamics as entropy.

You're equating randomness with information... that's wrong.

You're confusing entropy with randomness. That's wrong. A system at maximum entropy cannot have any randomness whatever, since that would decrease entropy. Remember, in a physical sense, entropy is the lack of any useful heat to do work. And if the thermal energy in a system is randomly distributed, it is not evenly distributed, and therefore heat continues to flow.

Barbarian asks:
So, you're saying that a new allele can't appear in a population? That's demonstrably false. Would you like some examples?

I never claimed that new "alleles" don't appear in populations, but I maintain they're mutations, which degrade the original information content.

As you just learned, a new allele increases information. Always does. This is what Shannon was pointing out. Information is the measure of uncertainty in a message. The entropy, in other words.

Again, you're confusing randomness with information. That's wrong.

No. Rather, you've conflated entropy and randomness.

And how do you know that "allele" is new?

Barbarian observes:
We can sometimes know, by tracing down the individual in which the mutation occurred.

Right...a mutation.

Remember, a mutation is not an allele.

Barbarian observes:
Yes, he does. Again, you probably don't know what "information" means in population genetics and information science.

No. The second one has more information, because the uncertainty is increased.

Information does not = entropy.

If that were so, you wouldn't be communicating over this line. Engineers built that system, with the understanding that information = entropy. That works. Yours doesn't.

By stating information = randomness, you are demonstrating that you don't have a grip on information theory.

See above. Entropy is not randomness.

One more time, just so you remember:

The quantity which uniquely meets the natural requirements that one sets up for information turns out to be exactly that which is known in thermodynamics as entropy.
 
jwu:

Then apply this to genetics - just like you asked us to do, "applying it to a real world scenario" - and give us an actual value for the information content of the genome before and after a new allele has appeared, as well as how you calculated it.
Show us with cold hard math how this is a decrease of information, so that we finally know how you want to calculate the information content of a genome.

And then show us a process that is proposed by the theory of evolution which requires an increase of information according to your understanding of it. I think I've asked this before... you haven't identified any such process yet. I wonder why.


Quote:
Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine).
...and surprising things constitute a greater decrease of uncertainty than expected ones. Hence the second block has more information, as in the first one the "c" was already more or less expected to come next.

See references below.




Information Is Not Entropy,
Information Is Not Uncertainty!

Dr. Thomas D. Schneider
National Institutes of Health
National Cancer Institute
Center for Cancer Research Nanobiology Program
Molecular Information Theory Group
Frederick, Maryland 21702-1201
toms@ncifcrf.gov
http://www.ccrnp.ncifcrf.gov/~toms/

There are many many statements in the literature which say that information is the same as entropy. The reason for this was told by Tribus. The story goes that Shannon didn't know what to call his measure so he asked von Neumann, who said `You should call it entropy ... [since] ... no one knows what entropy really is, so in a debate you will always have the advantage' (Tribus1971).

Shannon called his measure not only the entropy but also the "uncertainty". I prefer this term because it does not have physical units associated with it. If you correlate information with uncertainty, then you get into deep trouble. Suppose that:

information ~ uncertainty

but since they have almost identical formulae:
uncertainty ~ physical entropy
so

information ~ physical entropy

BUT as a system gets more random, its entropy goes up:

randomness ~ physical entropy

so

information ~ physical randomness

How could that be? Information is the very opposite of randomness!

The confusion comes from neglecting to do a subtraction:

Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine).


If you use this definition, it will clarify all the confusion in the literature.

Note: Shannon understood this distinction and called the uncertainty which is subtracted the 'equivocation'. Shannon (1948) said on page 20:
R = H(x) - Hy(x)

"The conditional entropy Hy(x) will, for convenience, be called the equivocation. It measures the average ambiguity of the received signal."

The mistake is almost always made by people who are not actually trying to use the measure.

http://www.lecb.ncifcrf.gov/~toms/infor ... ainty.html


I'm Confused: How Could Information Equal Entropy?

If someone says that information = uncertainty = entropy, then they are confused, or something was not stated that should have been. Those equalities lead to a contradiction, since entropy of a system increases as the system becomes more disordered. So information corresponds to disorder according to this confusion.

If you always take information to be a decrease in uncertainty at the receiver, you will get straightened out:

R = Hbefore - Hafter.


where H is the Shannon uncertainty:

H = - sum (from i = 1 to number of symbols) Pi log2 Pi (bits per symbol)

and Pi is the probability of the ith symbol. If you don't understand this, please refer to "Is There a Quick Introduction to Information Theory Somewhere?".

Imagine that we are in communication and that we have agreed on an alphabet. Before I send you a bunch of characters, you are uncertain (Hbefore) as to what I'm about to send. After you receive a character, your uncertainty goes down (to Hafter). Hafter is never zero because of noise in the communication system. Your decrease in uncertainty is the information (R) that you gain.

Since Hbefore and Hafter are state functions, this makes R a function of state. It allows you to lose information (it's called forgetting). You can put information into a computer and then remove it in a cycle.

Many of the statements in the early literature assumed a noiseless channel, so the uncertainty after receipt is zero (Hafter=0). This leads to the SPECIAL CASE where R = Hbefore. But Hbefore is NOT "the uncertainty", it is the uncertainty of the receiver BEFORE RECEIVING THE MESSAGE.
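A minimal sketch of that before/after bookkeeping; the symbol probabilities below are made-up illustrative numbers, not values from the FAQ:

import math

def shannon_uncertainty(probs):
    # H = - sum Pi * log2 Pi, in bits per symbol
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before receipt: the receiver considers all four DNA bases equally likely.
h_before = shannon_uncertainty([0.25, 0.25, 0.25, 0.25])  # 2.0 bits

# After receipt: noise leaves a little residual doubt about which symbol was sent.
h_after = shannon_uncertainty([0.97, 0.01, 0.01, 0.01])   # ~0.24 bits

r = h_before - h_after  # information gained: ~1.76 bits
print(h_before, h_after, r)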

A way to see this is to work out the information in a bunch of DNA binding sites.

Definition of "binding": many proteins stick to certain special spots on DNA to control genes by turning them on or off. The only thing that distinguishes one spot from another spot is the pattern of letters (nucleotide bases) there. How much information is required to define this pattern?

Here is an aligned listing of the binding sites for the cI and cro proteins of the bacteriophage (i.e., virus) named lambda:

alist 5.66 aligned listing of:
* 96/10/08 19:47:44, 96/10/08 19:31:56, lambda cI/cro sites
piece names from:
* 96/10/08 19:47:44, 96/10/08 19:31:56, lambda cI/cro sites
The alignment is by delila instructions
The book is from: -101 to 100
This alist list is from: -15 to 15

------ ++++++
111111--------- +++++++++111111
5432109876543210123456789012345
...............................
OL1 J02459 35599 + 1 tgctcagtatcaccgccagtggtatttatgt
J02459 35599 - 2 acataaataccactggcggtgatactgagca
OL2 J02459 35623 + 3 tttatgtcaacaccgccagagataatttatc
J02459 35623 - 4 gataaattatctctggcggtgttgacataaa
OL3 J02459 35643 + 5 gataatttatcaccgcagatggttatctgta
J02459 35643 - 6 tacagataaccatctgcggtgataaattatc
OR3 J02459 37959 + 7 ttaaatctatcaccgcaagggataaatatct
J02459 37959 - 8 agatatttatcccttgcggtgatagatttaa
OR2 J02459 37982 + 9 aaatatctaacaccgtgcgtgttgactattt
J02459 37982 - 10 aaatagtcaacacgcacggtgttagatattt
OR1 J02459 38006 + 11 actattttacctctggcggtgataatggttg
J02459 38006 - 12 caaccattatcaccgccagaggtaaaatagt
^

Each horizontal line represents a DNA sequence, starting with the 5' end on the left, and proceeding to the 3' end on the right. The first sequence begins with: 5' tgctcag ... and ends with ... tttatgt 3'. Each of these twelve sequences is recognized by the lambda repressor protein (called cI) and also by the lambda cro protein.

What makes these sequences special so that these proteins like to stick to them? Clearly there must be a pattern of some kind.

Read the numbers on the top vertically. This is called a "numbar". Notice that position +7 always has a T (marked with the ^). That is, according to this rather limited data set, one or both of the proteins that bind here always require a T at that spot. Since the frequency of T is 1 and the frequencies of other bases there are 0, H(+7) = 0 bits. But that makes no sense whatsoever! This is a position where the protein requires information to be there.

That is, what is really happening is that the protein has two states. In the BEFORE state, it is somewhere on the DNA, and is able to probe all 4 possible bases. Thus the uncertainty before binding is Hbefore = log2(4) = 2 bits. In the AFTER state, the protein has bound and the uncertainty is lower: Hafter(+7) = 0 bits. The information content, or sequence conservation, of the position is Rsequence(+7) = Hbefore - Hafter = 2 bits. That is a sensible answer. Notice that this gives Rsequence close to zero outside the sites.
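A rough sketch of that per-position calculation (illustrative only; the small-sample corrections used in the real delila tools are left out, and the function name is made up):

import math

def rsequence(column):
    # Rsequence = Hbefore - Hafter for one aligned column of binding-site bases
    h_before = 2.0  # log2(4): all four bases are possible before binding
    n = len(column)
    counts = {base: column.count(base) for base in "acgt"}
    h_after = -sum((c / n) * math.log2(c / n) for c in counts.values() if c > 0)
    return h_before - h_after

print(rsequence("tttttttttttt"))  # 2.0 bits: always t, like position +7 above
print(rsequence("acgtacgtacgt"))  # 0.0 bits: no base preferred at this position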

If you have uncertainty and information and entropy confused, I don't think you would be able to work through this problem. For one thing, one would get high information OUTSIDE the sites. Some people have published graphs like this.

A nice way to display binding site data so you can see them and grasp their meaning rapidly is by the sequence logo method. The sequence logo for the example above is at http://www.lecb.ncifcrf.gov/~toms/galle ... i.fig1.gif. More information on sequence logos is in the section What are Sequence Logos?

More information about the theory of BEFORE and AFTER states is given in the papers http://www.lecb.ncifcrf.gov/~toms/paper/nano2 , http://www.lecb.ncifcrf.gov/~toms/paper/ccmm and http://www.lecb.ncifcrf.gov/~toms/paper/edmm.

http://www.ccrnp.ncifcrf.gov/~toms/bion ... al.Entropy



All right guys I'm hitting the sack. I'll catch up with you next weekend.

Peace 8-)
 
You missed that (-) in front of the equation. Bottom line? Shannon's take on information works. Using his "information is entropy" equation, engineers can more efficiently send data over channels.

Likewise, we can accurately measure the amount of information for specific genes in populations.

You've been badly misled by people who don't understand what information is. None of them actually have to make their theories do anything useful. But Shannon did. He worked for Bell Laboratories, and his equation is used today to make the internet (for example) work.

Electrical engineers began using the term to describe data transmission during the first half of the twentieth century. Instead of providing a definition of information, these engineers focused on measuring information, as they attempted to maximize information transmitted or received, or minimize noise, or both. The social science literature of the 1950s and 1960s used ideas about information measurement developed by Shannon and Weaver in the late 1940s crossing back from engineering to the liberal arts. Outside of electrical engineering, Shannon's formal ideas about information are used most profitably today in the computing and cognitive sciences.
http://ils.unc.edu/~losee/b5/node2.html

The amount of information in the output of a process is proportional to the number of different values that the function might return. Given n different output values, the amount of information (I) may be computed as I = log2(n). The amount of information in the output of a process is related to the amount of information that is available about the input to the process combined with the information provided by the process itself.

http://ils.unc.edu/~losee/b5/node7.html

Hence my example of a new allele will return the same increase in information in this formulation.
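A quick check of that comparison, assuming the I = log2(n) reading of the formula quoted above:

import math

print(math.log2(4))  # 2.0 bits with four possible alleles
print(math.log2(5))  # ~2.32 bits with five -- the new allele raises I here as well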

And by now you see that any successful definition of "information" will show that a new allele will increase information for that gene in a population.
 
See references below.
That doesn't answer the question. It's about binding specificity as a measure of information content, not about the change of information when a new allele appears.
You claimed that the emergence of a new allele decreased the information content. I asked you to back this up with actual math, for that specific example.
Do it or retract that claim.

Once again, the very site which you linked continues to explain how information can be formed by mutation and selection. It even has a Java applet which simulates the process, and graphs like this which show the results:
[ev-fig2b.gif: graph from the ev simulation showing information content (bits) rising over the generations]


So by referencing this site you keep shooting yourself in the foot. In fact I bookmarked it myself as a reference for future discussions about this.
 
That doesn't answer the question. It's about binding specificity as a measure of information content, not about the change of information when a new allele appears.
You claimed that the emergence of a new allele decreased the information content. I asked you to back this up with actual math, for that specific example.
Do it or retract that claim.



http://www.ccrnp.ncifcrf.gov/~toms/bion ... al.Entropy

In the example given about midway down the page of the above link, substitute an unknown sequence for a known sequence:



Hbefore = 2 bits.

Hafter = 2 bits (because the arrangement does not represent a known sequence).

R=2-2

R=0

There's a decrease in info of 2 bits because, before the introduction of the new allele:

Hbefore = 2 bits.

Hafter = 0 bits (because it's a known sequence)

R=2-0

R=2

The info was conserved before the original allele was replaced with the new allele. When the new allele was introduced, information was lost.



[ev-fig2b.gif: graph from the ev simulation showing information content (bits) rising over the generations]



Note that not even one bit of info is hypothesized until 300 generations have passed. So this experiment is not grounded in observation. It's just conjecture.

Are you ready to retract the following claims:


1. And if the effect has more entropy than the cause (well... technically one cannot say such a thing, but anyway), then I don't see how anyone could possibly argue that there is a conflict with the 2ndLoT either.
2. In Shannon's terms it is just the degree of alteration of the original message. Evolution actually requires an increase of Shannon entropy.

3. Repeated observance shows us that mutations can form new proteins. If that is not an increase of information, then what is?

4. It seems to me that you equate the usefulness of a mutation with its information content. That however has no foundation in information theory.

5. There is no correlation between the effects of mutation on the individual's reproductive success and its change of the information content of the genome.

6. By the way...entropy is a measure of complexity. The higher the level of entropy, the more possible microstates, the more complex an entity.


I should be back on the forum Saturday or Sunday.

Peace Bro
 
http://www.ccrnp.ncifcrf.gov/~toms/bionet.info-theory.faq.html#Information.Equal.Entropy

In the example given about midway down the page of the above link, substitute an unknown sequence for a known sequence:



Hbefore = 2 bits.

Hafter = 2 bits (because the arrangement does not represent a known sequence).

R=2-2

R=0

There's a decrease in info of 2 bits because, before the introduction of the new allele:

Hbefore = 2 bits.

Hafter = 0 bits (because it's a known sequence)

R=2-0

R=2
What are you talking about? This still is about binding specificity and has nothing to do with alleles! An allele is not a binding site.

Oh, and by the way the website which you referenced does not declare that to be a decrease of information, but an increase:
Your decrease in uncertainty is the information (R) that you gain.

...and R=2 bits.

The information content, or sequence conservation, of the position is Rsequence(+7) = Hbefore - Hafter = 2 bits.

It's not a good idea to use a website as a reference which argues against what you want to demonstrate.
You can keep quoting that website all day, but unless you actually show mistakes in its reasoning, it is the author's position that you bring into this thread - and he doesn't agree with you. So unless there are mistakes in his works, which you would have to point out, any claim from your side that this website supports your position cannot be anything but a misrepresentation.

Note that not even one bit of info is hypothesized until 300 generations have passed. So this experiment is not grounded in observation. It's just conjecture.
Please explain how that particular run of the simulation taking a while to take off makes this experiment conjecture. That's handwaving of the worst kind! What particular aspect of the simulation do you disagree with?

Besides, I have run that applet a few times myself, and it doesn't always take some time to start; I've had significant increases right at the beginning in some runs. Perhaps you should try it yourself.

Are you ready to retract the following claims:
Why should I retract any of them? If I changed my position somewhere, then please point out self-contradictions so that I can correct them.
 
jwu:

What are you talking about? This still is about binding specificity and has nothing to do with alleles! An allele is not a binding site.

jwu:
Show us with cold hard math how this is a decrease of information, so that we finally know how you want to calculate the information content of a genome.

What I'm talking about is whether a mutant allele adds or decreases info:

"Information is 'always' a measure of the decrease of uncertainty at a receiver" ...

http://www.lecb.ncifcrf.gov/~toms/infor ... ainty.html

Even your graph above indicates a mutant allele contains 0 bits of information for the first 200 generations. Are you claiming the wild allele did not contain any information? If the wild allele contained any information, then the result of a mutant allele is a decrease in info, as far as observable, repeatable experiments are concerned. Anything beyond 4-5 generations is just conjecture. We've only been toying with information theory since the mid-1900s.


The binding sites decode the allele. In the first example above, the mutant allele binds to the bases, and what is decoded is something that doesn't make any sense... so the uncertainty after receipt is the same as before receipt:

Hbefore (uncertainty before receipt) = 2 bits.

Hafter (uncertainty after receipt) = 2 bits (because the mutant allele does not represent a known sequence / it's nonsense to the "machine")

R (info)=2-2

R (info)=0

Whereas, before the new allele was introduced, the wild allele was completely understood:

Hbefore(uncertainty before receipt) = 2 bits.

Hafter (uncertainty after receipt) = 0 bits (because it's a known sequence / it makes sense to the "machine")

R (info)=2-0

R (info)=2

So, in this example, the gene started with 2 bits of info (wild allele), but when the mutant allele was introduced, the uncertainty was the same before receipt and after, resulting in a loss of 2 bits of info. That's about as clear cut (hard math as you called it) as you can get Bro.





jwu:

Oh, and by the way the website which you referenced does not declare that to be a decrease of information, but an increase:
R= 2 bits


With the wild allele, but not the mutant allele:

Mutant Allele

Hbefore = 2 bits.

Hafter = 2 bits (because the allele does not represent a known sequence / it's nonsense to the machine)

R=2-2

R=0

One way of making this point very clear is to imagine the wild allele for the formation of the brain. Conservation of the sequence is paramount... without the brain, the whole organism is doomed or severely handicapped (hardly an advantage). Conservation and transmission integrity is crucial to survival. Slip in a mutant allele, and even the slightest difference between the mutant and wild allele is deleterious (i.e., the decrease in info has very profound and negative consequences). Another good example is cancer.

jwu:
It's not a good idea to use a website as a reference which argues against what you want to demonstrate.
You can keep quoting that website all day, but unless you actually show mistakes in its reasoning, it is the author's position that you bring into this thread - and he doesn't agree with you. So unless there are mistakes in his works, which you would have to point out, any claim from your side that this website supports your position cannot be anything but a misrepresentation.
Nothing but a fallacy. I can and do use sources with which I don't 100% agree. The parts on which I'm relying are just the parts concerning Shannon's Information Theory, which the author very clearly presents. But you would have me discard the nice presentation of Shannon's Information Theory, only because I don't agree with the author's conclusions. Again... fallacy.

Charlie:

Note that not even one bit of info is hypothesized until 300 generations have passed. So this experiment is not grounded in observation and repetition. It's just conjecture.

Please explain how that particular run of the simulation taking a while to take off makes this experiment conjecture. That's handwaving of the worst kind! What particular aspect of the simulation do you disagree with?

The experiment is not based on what observations reveal, and it's certainly not repeatable. So the conclusion is invalid at the very core of The Scientific Method. Again, it's wand-waving. Please explain to me how my reasoning is hand-waving. My reasoning is based on The Scientific Method... the core of science. Remember, what we're talking about is a "simulation", certainly not based on reality as far as observations go.



jwu:
Why should I retract any of them? If I changed my position somewhere, then please point out self-contradictions so that I can correct them.

1. And if the effect has more entropy than the cause... then I don't see how anyone could possibly argue that there is a conflict with the 2ndLoT either.

2. In Shannon's terms it is just the degree of alteration of the original message. Evolution actually requires an increase of Shannon entropy.

3. Repeated observance shows us that mutations can form new proteins. If that is not an increase of information, then what is?

4. It seems to me that you equate the usefulness of a mutation with its information content. That however has no foundation in information theory.

5. There is no correlation between the effects of mutation on the individual's reproductive success and its change of the information content of the genome.

6. By the way...entropy is a measure of complexity. The higher the level of entropy, the more possible microstates, the more complex an entity.

You made the following claims above:

1. Here you equate evolution with a decrease in information. Can you honestly say it takes less info to replicate, repeatedly, a human compared to a single cell organism?
2. Here you equate an increase in uncertainty with an increase in information, or you're admitting ToE is impossible, which, in that case, I agree. To state that ToE requires less and less information is nonsensical.
3. Here you claim mutations are an increase in information and also state increases in entropy equate to more information.
4. Here you state "useful mutations" are not information, and, therefore, claim that ToE requires no new information. How do you form an eye without information, especially repetitively?
5. Here you state there's no correlation between damaging mutations and genetic information.
6. Here you state the more disordered a mass is, the more complex it is. This is the equivalent of saying a bunch of gravel in a creek bed is more complex than a human.

All right. I couldn't resist just one more exchange. I love this topic, and the overall topic of human origins.

I'll catch you over the weekend Bro. Have ya'll started getting snow there in Germany yet? It's finally starting to cool off a bit here (80s and 90s instead of 100+).

Peace
 
What I'm talking about is whether a mutant allele adds or decreases info:
By measuring its binding specificity... ok. But you cannot take the original binding sequence. It's the whole point that new ones arise.

Even your graph above indicates a mutant allele contains 0 bits of information for the first 200 generations. Are you claiming the wild allele did not contain any information? If the wild allele contained any information, then the result of a mutant allele is a decrease in info, as far as observable, repeatable experiments are concerned.
In that particular run of the simulation there was a period during which there was no significant increase of information at the beginning... albeit there was some minor up and down, which alone already kills your argument. However, are you aware that every run is different? The above graph does not in any way demonstrate that there always is 0 information for 200 generations (and it doesn't even say that).

Anything beyond 4-5 generations is just conjecture. We've only been toying with information theory since the mid-1900s.
And we can observe thousands of generations of bacteria within a year. And even if that wasn't the case, what premises of the simulation might become invalid after a few generations? Please be specific. It's just random mutation and selection.

The binding sites decode the allele. In the first example above, the mutant allele binds to the bases, and what is decoded is something that doesn't make any sense... so the uncertainty after receipt is the same as before receipt:
You're begging the question...you initially presuppose that the new variant cannot bind anything.

Nothing but a fallacy. I can and do use sources with which I don't 100% agree. The parts on which I'm relying are just the parts concerning Shannon's Information Theory, which the author very clearly presents. But you would have me discard the nice presentation of Shannon's Information Theory, only because I don't agree with the author's conclusions. Again... fallacy.
It's not a fallacy, because I asked you which of his conclusions you disagree with, and why. The way you are doing it right now is the same as a quote mine: you take the parts that suit you (but they don't even do that) and use them to say something that the original author did not intend to say.
Basically, whenever you use something from that website you appeal to the author's authority. But when you don't accept his conclusions, without any explanation of why he is mistaken, then you apparently don't accept his authority on the subject yourself - so why should anyone else?
You indirectly say that there is a mistake in his works somewhere, but you don't point out where. Then what should stop anyone else from saying that it might be in the part which you worked with? If you don't have to point out where it is, then no one else has to either, and we can both handwave each other's arguments away.

Anyway... I take it that you would accept the observed emergence of new binding sites as an increase of information?

The experiment is not based on what observations reveal, and it's certainly not repeatable. So the conclusion is invalid at the very core of The Scientific Method. Again, it's wand-waving. Please explain to me how my reasoning is hand-waving. My reasoning is based on The Scientific Method... the core of science. Remember, what we're talking about is a "simulation", certainly not based on reality as far as observations go.
Why isn't it repeatable? I do it all the time when I run that applet, with similar results. And if you use the same seed for the random number generator, then you can repeat that exact particular run too. And actually, the emergence of new binding sites is frequently observed.
Explain what particular premises of the simulation are unrealistic after a few generations. That was your objection, wasn't it? That anything beyond a few generations means nothing anymore... but why is that so? Without giving reasons, it is nothing but hand-waving away inconvenient data.

And by the way... simulations are perfectly acceptable according to the scientific method. They're used all the time in aero- and fluid dynamics, nuclear science and so on.

1. Here you equate evolution with a decrease in information. Can you honestly say it takes less info to replicate, repeatedly, a human compared to a single cell organism?
I said it would take an increase of Shannon entropy (i.e. a change of the original message) if the original genome is defined to have the highest possible information content.

If a bacterium is defined to have the highest possible information content and any alteration is considered a decrease by definition, then yes, it would only take loss of information to get from there to a human.

2. Here you equate an increase in uncertainty with an increase in information, or you're admitting ToE is impossible, which, in that case, I agree. To state that ToE requires less and less information is nonsensical.
See 1. From a conservation point of view, the ToE does require a decrease of information as it requires change, and any change is a decrease by definition there.

3. Here you claim mutations are an increase in information and also state increases in entropy equate to more information.
There I was giving you the opportunity to explain why these are not an increase of information... I did not assert anything there but merely asked you something, so I cannot retract any assertion. Besides, I do not recall an answer to that question yet.

4. Here you state "useful mutations" are not information, and, therefore, claim that ToE requires no new information. How do you form an eye without information, especially repetitively?

So why are there useful mutations which you said yourself are a decrease of information? E.g. you will agree that the loss of no longer needed features is useful (the organism doesn't waste energy to sustain them anymore) but a loss of information. Hence there is no correlation between the usefulness of a mutation and its effect on the information content of the genome.
Besides, that does not equal claiming that the ToE does not require an increase of information (and depending on what definition one uses, it does... just not of any type of information).

The existence of beneficial changes which depending on the definition may be increases of information is counterbalanced by cases of the exact opposite - hence no correlation.

And besides... the binding specificity definition which we use now has nothing to do with the formation of eyes. A new binding site can destroy eyes just like it can be required to form them. There is no correlation between beneficial features of an organism and its genome's binding sites.

5. Here you state there's no correlation between damaging mutations and genetic information.
That's correct. E.g. some mutation which gives a protein a huge new binding specificity - which I think would constitute an increase of information according to your understanding - is not necessarily beneficial for the organism.


6. Here you state the more disordered a mass is, the more complex it is. This is the equivalent of saying a bunch of gravel in a creek bed is more complex than a human.
Yes, it takes more bits to describe and has higher overall entropy. I don't think you will disagree...
 
By measuring its binding specificity... ok. But you cannot take the original binding sequence. It's the whole point that new ones arise.

But, using Shannon's theory, the uncertainty has increased, therefore reducing the amount of information.

If a bacterium is defined to have the highest possible information content and any alteration is considered a decrease by definition, then yes, it would only take loss of information to get from there to a human.

_____________________________________________________________________

See 1. From a conservation point of view, the ToE does require a decrease of information as it requires change, and any change is a decrease by definition there.

But, that doesn't make any sense. You're equating a decrease in information with the appearance of new, complex, and replicated structures. Surely, to accurately replicate an eye, for example, much information is needed.

There I was giving you the opportunity to explain why these are not an increase of information... I did not assert anything there but merely asked you something, so I cannot retract any assertion. Besides, I do not recall an answer to that question yet.

Because the uncertainty is increased.


That's correct. E.g. some mutation which gives a protein a huge new binding specificity - which I think would constitute an increase of information according to your understanding - is not necessarily beneficial for the organism.

I don't agree that the mutation is an increase in information. Uncertainty is increased, therefore it's less information.

Yes, it takes more bits to describe and has higher overall entropy. I don't think you will disagree...

To describe is an act of intelligence, though. Higher entropy, in and of itself, is a decrease in information.
 
But, using Shannon's theory, the uncertainty has increased, therefore reducing the amount of information.
But you presupposed that that would happen...

But, that doesn't make any sense. You're equating a decrease in information with the appearance of new, complex, and replicated structures. Surely, to accurately replicate an eye, for example, much information is needed.
From a conservational point of view it does make sense. If the sender did not intend these features to appear, then they are a disturbance of the message and thus a loss of information.

If you have a part of an email that you wanted to send out changed to "E=MC²", then you would consider that a loss of information simply because the receiver didn't receive what you wanted him to, would you not? He missed some information which you deemed important in order to receive this instead.

I don't agree that the mutation is an increase in information. Uncertainty is increased, therefore it's less information.
But then you contradict yourself! If you argue that something which decreases binding specificity is a loss of information, how can something which increases binding specificity not be a gain then?

And how would uncertainty be increased there, unless one applies the conservational understanding which I explained above? In which any mutation is a decrease of information by definition, no matter what it does.

If you disagree, then please give me an example of a single mutation which you would accept as having increased the information content of the genome. What would it have to do? How do you measure its change of the information content of the genome? Apparently you don't like binding specificity anymore.

To describe is an act of intelligence, though.
And without intelligence, information means nothing.

Higher entropy, in and of itself, is a decrease in information.
Again, your own sources (well... their own sources) contradict you.

You didn't address my refutation of your objections to that simulation which resulted in an increase of information. Unless I am to take it that binding specificity is not a measure of information...
 