[_ Old Earth _] How's this for logic

So, I'm guessing that means you have no idea how information actually works in genetics. Feel free to use your own numbers. Mine just simplify the calculations. But you have no idea how to do it, do you?



It's been observed to happen many, many times. Would you like to see some examples?

Bottom line: if you don't know what "information" is, or how to determine how much a population has, what makes you think it's a problem for evolution?

Information is the code in the DNA....similar to the code full of information running your computer. Would you like to learn about the code in DNA and what it can do?

Just for the record....Aside from make-believe numbers coupled with wild assertion and speculation...you have nothing.

 
Information is the code in the DNA....similar to the code full of information running your computer. Would you like to learn about the code in DNA and what it can do?

Always willing to learn. Let's start with something simple. Tell me how retroviruses violate the Central Dogma of molecular biology. Or how the Hardy-Weinberg equation can test for natural selection occurring. But don't you want to talk about the way mutations increase information in a population?

And my offer to show you how a new allele increases in a population is still good. Want to learn about it?
 
Always willing to learn. Let's start with something simple. Tell me how retroviruses violate the Central Dogma of molecular biology. Or how the Hardy-Weinberg equation can test for natural selection occurring. But don't you want to talk about the way mutations increase information in a population?

And my offer to show you how a new allele increases in a population is still good. Want to learn about it?
You need to explain how mutations increase information in the DNA...before we move into populations. You're kinda jumping the gun.
...Let's start with the cell. You seem to be avoiding that like the plague.
 
You need to explain how mutations increase information in the DNA...before we move into populations. You're kinda jumping the gun.

Sure. The formula for information (the same one that tells us how to send error-free transmissions from spacecraft billions of kilometers from Earth) is:
H = -Σ Xi log(Xi)   (summed over all alleles, log base 10)

Where Xi is the frequency of the ith allele in the population. So let's take a simple case (if you want a more complex one, I'll do it with your numbers) in which there are at first two alleles with a frequency of 0.5 each. Then a mutation produces a new allele which, over time, increases so that the frequency of each of the three alleles is 0.333 (approximately).

The original information content was about 0.30; with the new allele established, the information content of the gene was about 0.48, a significant increase.
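If anyone wants to check that arithmetic for themselves, here's a short Python sketch of the calculation. The log is base 10, which is what produces the 0.30 and 0.48 figures, and the allele frequencies are just the example values above, not real data:

```python
import math

def shannon_information(frequencies):
    """Shannon information of a set of allele frequencies:
    H = -sum(x_i * log10(x_i)), skipping zero-frequency alleles."""
    return -sum(x * math.log10(x) for x in frequencies if x > 0)

# Two alleles at 0.5 each (before the mutation)
before = shannon_information([0.5, 0.5])

# Three alleles at ~1/3 each (after the new allele spreads)
after = shannon_information([1/3, 1/3, 1/3])

print(round(before, 2))  # 0.3
print(round(after, 2))   # 0.48
```

The new allele raises the information content no matter what base you pick for the logarithm; base 10 just matches the numbers quoted here.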

But you haven't yet answered my questions, including your earlier offer to teach me about DNA. Did you forget? I'll paste it here for you:

Always willing to learn. Let's start with something simple. Tell me how retroviruses violate the Central Dogma of molecular biology. Or how the Hardy-Weinberg equation can test for natural selection occurring. But don't you want to talk about the way mutations increase information in a population?

Let's start with the cell.

You don't want to talk about DNA anymore? You do know that the central dogma and retroviruses have to do with cells, right?

OK. Let's look at the cell. Tell me what you think information says about specific cell properties? Show your numbers.
 
You don't want to talk about DNA anymore? You do know that the central dogma and retroviruses have to do with cells, right?

OK. Let's look at the cell. Tell me what you think information says about specific cell properties? Show your numbers.
Get back to me when you want to discuss how information increases in DNA, allowing it to code for some remarkable organelle.

You seem to be dodging this issue.
 
Apparently, you missed my reply:

Barbarian suggests:
OK. Let's look at the cell. Tell me what you think information says about specific cell properties? Show your numbers.

So how about it?

Meantime...

Let's consider a genome with perhaps 30,000 genes. Tell me how much information it has, and I'll help you see how a new allele in that genome increases information.

Let me know what you think and show your numbers.

You've not yet told me what you think "information" is, in terms of biology, and I'm thinking that you haven't yet fully decided for yourself what it is. If I'm wrong, would you mind giving us a testable definition? I don't want to say you're dodging the issue, but so far, you've been reluctant to answer any questions about your idea of "information."

As you might have guessed by now, "information" is a complicated issue, and is not merely the size of a genome, nor the number of functional genes therein. Would you be interested in discussing some of the ways in which information can be measured in genomes?
 
Apparently, you missed my reply:

Barbarian suggests:
OK. Let's look at the cell. Tell me what you think information says about specific cell properties? Show your numbers.

So how about it?

The amount of information is massive. For instance, the DNA contains information on how to copy itself.

How does that evolve by chance mutation?
 
The amount of information is massive. For instance, the DNA contains information on how to copy itself.

So you still can't give us any numbers or even a testable definition? Why is that?

You've not yet told me what you think "information" is, in terms of biology, and I'm thinking that you haven't yet fully decided for yourself what it is. If I'm wrong, would you mind giving us a testable definition? I don't want to say you're dodging the issue, but so far, you've been reluctant to answer any questions about your idea of "information."
 
So you still can't give us any numbers or even a testable definition? Why is that?

You've not yet told me what you think "information" is, in terms of biology, and I'm thinking that you haven't yet fully decided for yourself what it is. If I'm wrong, would you mind giving us a testable definition? I don't want to say you're dodging the issue, but so far, you've been reluctant to answer any questions about your idea of "information."

I believe I told you...several times...information is the code in the DNA. You keep sweeping that answer aside. Are you saying DNA doesn't contain a code? Information?
 
So if you can't define "information" and you can't tell me how much of it anything has, what makes you think information is a problem for science?

Realistically, you need to find out what "information" means, and get some idea of how to measure it, before you tell us about it.
 
So if you can't define "information" and you can't tell me how much of it anything has, what makes you think information is a problem for science?

Realistically, you need to find out what "information" means, and get some idea of how to measure it, before you tell us about it.

Being the "information" guru...I thought by now you would have been able to explain how the information contained in the DNA code has the ability to make organelles that make other organelles.

Did you watch the video in post 121?
 
Being the "information" guru...I thought by now you would have been able to explain how the information contained in the DNA code has the ability to make organelles that make other organelles.

Not hard. Mutation and natural selection. Would you like to hear about an observed case of a new organelle? Interestingly, it almost perfectly reproduces the way that organelles like chloroplasts and mitochondria were incorporated into the cell. If you want to talk about how to measure it, I've already shown you how that's done.

As I said, you still don't have it clear in your mind what "information" actually is. If you could come up with a testable definition, then it might be easier for you.
 
Not hard. Mutation and natural selection. Would you like to hear about an observed case of a new organelle? Interestingly, it almost perfectly reproduces the way that organelles like chloroplasts and mitochondria were incorporated into the cell. If you want to talk about how to measure it, I've already shown you how that's done.

As I said, you still don't have it clear in your mind what "information" actually is. If you could come up with a testable definition, then it might be easier for you.

I noticed you're still running.
 
I gather that means you're not going to answer the questions. If anyone else cares to discuss information and how it works in the genome, I'll be glad to continue.
 
I gather that means you're not going to answer the questions. If anyone else cares to discuss information and how it works in the genome, I'll be glad to continue.

I've been trying to discuss the information in the DNA code...but you keep avoiding the question.
 
I get it. You'd love to tell us about it, but the Evil Barbarian won't let you. How sad.

So, to get things rolling, here's one perspective on how to measure it:

The information content of DNA is much harder to determine than merely looking at the number of base pairs and multiplying by 2 to get the size in bits (remember that each site can have one of 4 different nucleotides, or 2 bits). However, this approach can provide us with a zeroth-order estimate of the maximum possible information that can be stored in the sequence, which for the human genome with 3 billion base pairs would amount to 6 billion bits, or 750 Mbytes.


However, information theory shows that random sequences have the lowest information content and that well preserved sequences contain the maximum information content. In other words, the actual information content ranges from zero for totally random sequences to 2 bits for conserved sequences.


Another way to look at this is to compress the DNA sequence using a regular archive utility. If the sequence is random, the compression will be minimal, if the sequence is fully regular, the compression will be much higher.


So how does one obtain a better estimate of the information content of DNA? By estimating the entropy per triplet (3 base pairs), which has a maximum of 6 bits; coding regions have a value of 5.6 and non-coding regions 5.82. This means that the information content for coding regions is 0.4 bit per triplet and for non-coding regions 0.18 bit per triplet. For 3 billion base pairs, or 1 billion triplets, this gives us an actual information content of 0.4 billion bits, or 50 Mbytes, assuming the best-case scenario that all DNA is coding, or about 24 Mbytes if all the DNA is non-coding.


Now how does this compare with evolutionary theory? In a 1960 paper, "Natural selection as the process of accumulating genetic information in adaptive evolution", Kimura calculated that the amount of information added per generation was around 0.29 bits; since the Cambrian explosion some 500 million years ago, that comes to on the order of 10^8 bits, or 12.5 Mbytes, assuming that the geometric mean of the duration of one generation is about 1 year.


As a side note, Kimura reasoned that about 10^7 or 10^8 bits of information would be necessary to specify human anatomy. (Source: Adaptation and Natural Selection by George Christopher Williams)


So is this a reliable way to determine the information content of DNA? Perhaps not; a better way is to take a large sample of DNA from different people and determine, for each base pair, how variable it is. A preserved site will have the maximum of 2 bits of information, while a totally random site will have zero bits of information.


The problem is to understand how much information is contained in these 'bits'. For instance, the total number of electrons is about 10^79, and finding one 'preferred' one amongst these translates to about 250 bits. This means that in 1000 generations, natural selection can achieve something far more improbable than this.

http://pandasthumb.org/archives/2008/10/information-con.html

I'd be interested in your analysis of the math involved. Do you think that's an adequate way to measure the information, and if not, show us your system.
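And if anyone wants to try the per-site measure and the compression idea from that article for themselves, here's a toy Python sketch. The sequences are made up for illustration, not real genomic data:

```python
import math
import random
import zlib
from collections import Counter

def site_information(bases):
    """Information at one site across sampled sequences:
    2 bits minus the Shannon entropy (log base 2) of the
    observed base frequencies at that site."""
    counts = Counter(bases)
    total = len(bases)
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return 2.0 - entropy

# A site that reads 'A' in every sampled sequence: fully conserved, 2 bits.
conserved = site_information("A" * 100)

# A site where all four bases are equally common: zero bits.
variable = site_information("ACGT" * 25)

print(conserved, variable)  # 2.0 0.0

# Compression as a rough proxy for regularity: a repetitive sequence
# compresses far more than a random one of the same length.
random.seed(0)
random_seq = "".join(random.choice("ACGT") for _ in range(10_000)).encode()
regular_seq = ("ACGT" * 2_500).encode()
print(len(zlib.compress(random_seq)) > len(zlib.compress(regular_seq)))  # True
```

That's the whole point of the per-site approach: a conserved site carries the full 2 bits, a totally variable one carries none, and compressibility gives a quick sanity check on how regular a sequence is.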
 
I'd be interested in your analysis of the math involved. Do you think that's an adequate way to measure the information, and if not, show us your system.

I really don't care how you measure "information". The problem for you is that it exists in the biological world...way down to the inner cell level...(have you watched the video yet?) and you can't explain how it arrived.
 
I showed you how information works in biological systems. I even showed you a way to measure it in a genome. And yet, you're still dodging the questions:

Barbarian, post 1197111: I'd be interested in your analysis of the math involved. Do you think that's an adequate way to measure the information, and if not, show us your system.

Barbarian, post 1197089: Not hard. Mutation and natural selection. Would you like to hear about an observed case of a new organelle? Interestingly, it almost perfectly reproduces the way that organelles like chloroplasts and mitochondria were incorporated into the cell. If you want to talk about how to measure it, I've already shown you how that's done.

As I said, you still don't have it clear in your mind what "information" actually is. If you could come up with a testable definition, then it might be easier for you.

Barbarian, post 1196992: You've not yet told me what you think "information" is, in terms of biology, and I'm thinking that you haven't yet fully decided for yourself what it is. If I'm wrong, would you mind giving us a testable definition? I don't want to say you're dodging the issue, but so far, you've been reluctant to answer any questions about your idea of "information."

Why are you so reluctant to tell us any of this? If you just don't know, say so. And if you think you know, tell us.
 