
[Old Earth] Evolution 101

  • Thread starter: Oran_Taran
Apologies for taking so long to show up here again... Right now real life is keeping me busy. I have not forgotten this thread, though, and will reply in a couple of days. It's just a matter of time.
 
Totally understood.

I appreciate your prior patience.

Peace
 
armed2010 said:
Heidi said:
armed2010 said:
Heidi said:
ArtGuy said:
armed2010 said:
Heidi, given the abysmal conclusions you come to on what the Theory of Evolution is, I highly doubt that you ever did believe it.

Given what her perception of it apparently is, if she did believe it, that doesn't speak well of her. Unless she was 5 at the time.

When I was in school, evolutionists claimed we came from apes. Now they've changed their story again. They claim we came from a fictitious beast that they still haven't identified yet. :-)

Do you want me to give you a shovel? Because you're still digging a hole for yourself, Heidi. Everyone here wishes that you would actually learn what Evolution is, rather than spouting the straw men that you're known for.

Well, the television show "Ape to Man" also claimed we came from apes. So it appears that evolutionists themselves disagree with each other! Or are you going to claim that all other evolutionists are ignorant of the theory except you? :o

The fact that you don't even know what I criticized you for further illustrates your utter lack of knowledge in regard to the ToE. Scientists haven't arrived at some conundrum where we've come from a "fictitious beast". We've mapped our origins quite well. Please, PLEASE go read up on Evolution, for the sake of everyone who wastes their time arguing with you.

Sorry, but not only have I read about evolution, I studied it for a long time, and it contradicts what you're saying about it. The original theory stated that man came from the ape through a common ancestor, common to both humans and apes. The theory came before the evidence. It was conjured up in the mind of Charles Darwin, and after that, he went about looking for evidence.

It took scientists 50 years to even find a scrap that could possibly even be a skull fragment, much less a skull fragment of this common ancestor. The skull fragment was first identified as "Piltdown Man", even though all they had was a scrap of material that they weren't even sure was a skull, and not the rest of the body itself. (And of course they call that scientific.) The first in a series of fictitious common ancestors. But later testing showed it to be a hoax. Then there were 2 more "common ancestors" found that were also later identified as being hoaxes. So this "missing link" is still missing and still only exists in the imaginations of men, as does the theory of evolution itself. :-)
 
honesty

Heidi said:
Sorry, but not only have I read about evolution, I studied it for a long time, and it contradicts what you're saying about it. The original theory stated that man came from the ape through a common ancestor, common to both humans and apes. The theory came before the evidence. It was conjured up in the mind of Charles Darwin, and after that, he went about looking for evidence.

It took scientists 50 years to even find a scrap that could possibly even be a skull fragment, much less a skull fragment of this common ancestor. The skull fragment was first identified as "Piltdown Man", even though all they had was a scrap of material that they weren't even sure was a skull, and not the rest of the body itself. (And of course they call that scientific.) The first in a series of fictitious common ancestors. But later testing showed it to be a hoax. Then there were 2 more "common ancestors" found that were also later identified as being hoaxes. So this "missing link" is still missing and still only exists in the imaginations of men, as does the theory of evolution itself. :-)
Heidi, would you tell us all where you "studied" evolution? How long are you going to be dishonest with yourself and claim all those supposed falsities you noted? All those examples are REAL examples of what was found and documented. How long are you going to deny the new evidence, or rather the fact, that man is not descended from apes but is in fact an apelike creature who has evolved into what we are today? How long are you going to lean on Darwin when in fact this idea has existed for quite some time? The ancient Greek philosophers had inferred that similar species were descended from common ancestors. You keep proclaiming for everyone to use "common sense", but you conveniently choose to bow out when the evidence overwhelmingly suggests you are wrong in all your ideas. The evidence of evolution is not getting sparser but is growing by leaps and bounds every day; from genetics to biology, the case only gets stronger, not weaker.
 
Re: honesty

reznwerks said:
Heidi said:
Sorry, but not only have I read about evolution, I studied it for a long time, and it contradicts what you're saying about it. The original theory stated that man came from the ape through a common ancestor, common to both humans and apes. The theory came before the evidence. It was conjured up in the mind of Charles Darwin, and after that, he went about looking for evidence.

It took scientists 50 years to even find a scrap that could possibly even be a skull fragment, much less a skull fragment of this common ancestor. The skull fragment was first identified as "Piltdown Man", even though all they had was a scrap of material that they weren't even sure was a skull, and not the rest of the body itself. (And of course they call that scientific.) The first in a series of fictitious common ancestors. But later testing showed it to be a hoax. Then there were 2 more "common ancestors" found that were also later identified as being hoaxes. So this "missing link" is still missing and still only exists in the imaginations of men, as does the theory of evolution itself. :-)
Heidi, would you tell us all where you "studied" evolution? How long are you going to be dishonest with yourself and claim all those supposed falsities you noted? All those examples are REAL examples of what was found and documented. How long are you going to deny the new evidence, or rather the fact, that man is not descended from apes but is in fact an apelike creature who has evolved into what we are today? How long are you going to lean on Darwin when in fact this idea has existed for quite some time? The ancient Greek philosophers had inferred that similar species were descended from common ancestors. You keep proclaiming for everyone to use "common sense", but you conveniently choose to bow out when the evidence overwhelmingly suggests you are wrong in all your ideas. The evidence of evolution is not getting sparser but is growing by leaps and bounds every day; from genetics to biology, the case only gets stronger, not weaker.

In college, and in every science book in elementary school & high school. I have also read Darwin's "On the Origin of Species." But since evolutionists are human beings and not omniscient, they always overlook a myriad of other variables, such as the fact that every scrap of bone they find is analyzed with a preconceived notion of what that bone can suggest, and it never occurs to them what other things that bone could suggest. For example, once finding a fragment of what they decided was a skull, evolutionists then drew, from their imaginations, a picture of what the whole body attached to that skull looked like, even though they had no bones from the rest of the body! Then they drew hair all over it, even though there is no way they can determine that this imaginary body was even covered with hair! Then they presented this imaginary man to the public and declared they had found the "common ancestor!" And they pass it along as the truth.

This kind of brainwashing is hardly scientific. It is tunnel vision, which is looking for things to confirm one's theory instead of acknowledging that their findings could be any number of things. Then they invent an elaborate idea of how evolution could work, forgetting, of course, that descendants have to be able to breed with their ancestors, because mating and breeding are what produce descendants. So too much analysis of the trees obscures the forest completely.

This is also what leads scientists to not know the difference between animals and humans: they don't see the obvious differences, but instead have to look at the genes to try to figure it out. They do not realize that even if there is only one genetic difference between animals and humans, that still doesn't make a human being an animal!

So people erroneously conclude that if scientists have a laboratory, then whatever they say is true. They need to get out of their laboratories and look at reality to see what's true. It's really quite simple. :-)
 
Loss of purpose-fulfilling work capability (useful work) has occurred. Now there might be another pair (or program, in the analogy) that can perform the function... but you've still lost the original programming of the deleted base pair (or program).
Does the claim that information can only decrease not imply that no mutation can happen which inserts that particular base pair again? After all, otherwise that would be an increase of information, as the previously extant information was lost and is being regained from nothing.

I think this example is quite nice, as it works independently of the actual definition of information.

Living organisms require enzymes to do a specific job (i.e. useful work), so their information content is very close to the maximum. Ordinary acids or alkalis hydrolyse many compounds. These have wonderful extended-spectrum catalytic activity, but are not specific, and so would be useless for the precise control required for biological reactions. More is not better for the case where exact control is required. Without this precise control, you get all kinds of unintended and counterproductive reactions. Like the sun's energy (it's enormous), ordinary, extended-spectrum acids and alkalis have much activity, but little or no specificity (i.e. useful work or useful information).
But isn't some specificity better/more information than no specificity at all?

Think about it in terms of noise and entropy. An increase in noise in a communication system (i.e. unintended and counterproductive reactions) is an increase in entropy. Of course an increase in entropy is an increase in disorder.
That's Shannon entropy. One can get from amoeba to man with nothing but an increase of Shannon entropy. It has no bearing on biology.

The concepts of information and entropy have deep links with one another,
although it took many years for the development of the theories of
statistical mechanics and information theory to make this apparent.
...and in the case of K/C complexity, an increase of entropy resembles an increase of information. Order is analogous to redundancy there.

Interesting point. According to a fairly recent study, though, by two proponents of the ToE, there is no evidence to show that gene doubling results in any increase in order:
The texts which you quoted and highlighted afterwards do not talk about "information" or "order" at all. They just say that it is doubtful that gene doubling is responsible for large leaps in evolution. Gene doubling perhaps not being responsible for large leaps does not mean that gene doubling does not constitute an increase of information.

In terms of order... open a zip file in a text editor. Do you see much order there? Nonetheless I think you will agree that it is full of very dense information.
 
jwu wrote:

Does the claim that information can only decrease then not imply that no mutation can happen which inserts that particular base pair again then? After all, otherwise that would be an increase of information, as the previously extant information was lost and it is being regained from nothing.

Back to the computer analogy I was using previously, let's assume the DNA package as a whole is a hard drive or memory stick. Let's say lightning struck and deleted a portion of the hard drive (the base pair). It's possible for lightning to strike again and jumble up some of the remaining information to create the exact program again (or a program with the exact same function)... but the odds against it are astronomical.


Charlie Hatchett wrote:

Living organisms require enzymes to do a specific job (i.e. useful work), so their information content is very close to the maximum. Ordinary acids or alkalis hydrolyse many compounds. These have wonderful extended-spectrum catalytic activity, but are not specific, and so would be useless for the precise control required for biological reactions. More is not better for the case where exact control is required. Without this precise control, you get all kinds of unintended and counterproductive reactions. Like the sun's energy (it's enormous), ordinary, extended-spectrum acids and alkalis have much activity, but little or no specificity (i.e. useful work or useful information).


jwu wrote:

But isn't some specificity better/more information than no specificity at all?

Agreed. But what we're discussing here is a highly specified enzyme being degraded into a less specific compound (loss of info) and two other less specified compounds increasing specificity (increase):

(attached image: debate%202.gif)

(attached image: debate%203.gif)

The net result is a loss in specificity.


Charlie Hatchett wrote:

Think about it in terms of noise and entropy. An increase in noise in a communication system (i.e. unintended and counterproductive reactions) is an increase in entropy. Of course an increase in entropy is an increase in disorder.


jwu wrote:

That's Shannon entropy. One can get from amoeba to man with nothing but an increase of Shannon entropy. It has no bearing on biology.

I disagree. Anytime there's an increase in entropy, there's a decrease in organization by definition.


Charlie Hatchett wrote:

The concepts of information and entropy have deep links with one another,
although it took many years for the development of the theories of
statistical mechanics and information theory to make this apparent.


jwu wrote:

...and in case of K/C, an increase of entropy resembles an increase of information. Order is analogous to redundancy there.

Again, I disagree. Anytime there's an increase in entropy, there's a decrease in organization by definition.


jwu wrote:

The texts which you quoted and highlighted afterwards do not talk about "information" or "order" at all. They just say that it is doubtful that gene doubling is responsible for large leaps in evolution. Gene doubling perhaps not being responsible for large leaps does not mean that gene doubling does not constitute an increase of information.


(attached image: debate%201.png)

As the image above shows, in gene doubling a transcription error occurs when the gene replicates, and a segment of information is transcribed twice. This would be the equivalent of a human transcriber making an error copying the sentence "The cat chased the mouse." / "The cat cat chased the mouse". The resultant error increases noise (disorder).


Peace
 
Back to the computer analogy I was using previously, let's assume the DNA package as a whole is a hard drive or memory stick. Let's say lightning struck and deleted a portion of the hard drive (the base pair). It's possible for lightning to strike again and jumble up some of the remaining information to create the exact program again (or a program with the exact same function)... but the odds against it are astronomical.
That's a straw man. First off, DNA isn't as "dualistic" as a computer program, which either works or does not work. Furthermore, no-one proposes that a large chunk of fully formed and functional DNA popped up at once.

Anyway...you seem to agree that a base pair deletion (or other type of mutation) which previously caused a loss of information can be reversed. That constitutes an increase of information then, no matter how you turn it.

The net result is a loss in specificity.
Only without gene doubling. With a gene doubling, the original activity on ribitol can be retained while the L-arabitol and xylitol activity increases.

Furthermore, what if an organism started with what in Spetner's case is the mutated version of the gene? What mechanism would prevent a mutation that changes it to the version of the gene which currently occurs in the wild?

I disagree. Anytime there's an increase in entropy, there's a decrease in organization by definition.
Then please define "organization".
In Shannon's information theory, an original message is defined to have the highest possible content of information, and any change is considered a loss of information. Hence if an amoeba mutates to human DNA, this would be considered a loss of information by definition again.

Again, I disagree. Anytime there's an increase in entropy, there's a
decrease in organization by definition.
Again, what is "organization"? You seem to be shifting the goalposts.

As the image above shows, in gene doubling a transcription error occurs when the gene replicates, and a segment of information is transcribed twice. This would be the equivalent of a human transcriber making an error copying the sentence "The cat chased the mouse." / "The cat cat chased the mouse". The resultant error increases noise (disorder).
No. Human language is not like DNA at all. There are combinations of letters which do not make sense in human language as they do not constitute a valid word, while there is no such thing in DNA. Furthermore, the syntax of DNA is much simpler, and it only has four letters after all.

If I am allowed to consider t and d equal, to account for the 26 letters of the Latin alphabet in contrast to the 4 of DNA, then a single subsequent mutation after a doubling can add a new meaning:

"The cat chased the mouse."/ "The mad cat chased the mouse" / "The cat had chased the mouse"
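jwu's doubling-then-mutating argument can be mimicked on the sentence itself. This is a toy word-level sketch, not real genetics; the word list and the single-change "mutation" step (with t and d treated as equal, as jwu allows) are illustrative assumptions:

```python
# Toy analogy: a "duplication" copies a word, and a subsequent
# "point mutation" changes the copy, yielding a sentence with a
# genuinely new meaning.
sentence = ["The", "cat", "chased", "the", "mouse"]

# Duplication: the second word is copied in place.
duplicated = sentence[:2] + ["cat"] + sentence[2:]

# Mutation of the duplicate: "cat" -> "mad", one effective letter
# change (c -> m) if t and d are treated as interchangeable.
mutated = duplicated[:]
mutated[1] = "mad"

print(" ".join(duplicated))  # The cat cat chased the mouse
print(" ".join(mutated))     # The mad cat chased the mouse
```

The duplication alone only adds redundancy; it is the cheap follow-up change to the spare copy that produces the new meaning, which is the point of the argument.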

How about compressed files? They have a very high entropy, as in "evenness of distribution of characters", yet they contain plenty of information. As much as possible, in fact.
 
An amoeba wouldn't know WHAT to evolve, let alone HOW

Big Bangs DESTROY everything - they CAN'T create matter

Must go!
 
Your point? Whether one knows how or what to evolve is irrelevant to the existence of descent with modification and differential reproductive success.

Big Bangs DESTROY everything - they CAN'T create matter
...and what does that have to do with the topic of this thread? It's completely off topic, and no one claims that the big bang produced matter in the first instance. If you want to discuss this, then please make a thread for it; let's keep this one on topic.
 
jwu wrote:

That's a straw man. First off, DNA isn't as "dualistic" as a computer program, which either works or does not work. Furthermore, no-one proposes that a large chunk of fully formed and functional DNA popped up at once.

Anyway...you seem to agree that a base pair deletion (or other type of mutation) which previously caused a loss of information can be reversed. That constitutes an increase of information then, no matter how you turn it.

That was the hypothetical situation you presented in the quote below.



jwu wrote:

As a separate line of reasoning... what happens to the information content of a string of DNA if a base pair is deleted and whatever was coded in it doesn't fulfil a purpose anymore? Does the information content decrease because of that, or can this not be determined based on the effect of the encoded enzyme or protein on the organism?


jwu wrote:

That's a straw man. First off, DNA isn't as "dualistic" as a computer program, which either works or does not work. Furthermore, no-one proposes that a large chunk of fully formed and functional DNA popped up at once.

Anyway...you seem to agree that a base pair deletion (or other type of mutation) which previously caused a loss of information can be reversed. That constitutes an increase of information then, no matter how you turn it.

Actually, ID proponents believe that DNA was in its final form from the very beginning.

jwu wrote:

That's a straw man. First off, DNA isn't as "dualistic" as a computer program, which either works or does not work. Furthermore, no-one proposes that a large chunk of fully formed and functional DNA popped up at once.

Anyway...you seem to agree that a base pair deletion (or other type of mutation) which previously caused a loss of information can be reversed. That constitutes an increase of information then, no matter how you turn it.


Actually, I said the odds of the base pair being replaced with a similarly functioning pair are astronomical... implying that I don't think it's possible.


jwu wrote:

Then please define "organization".
In Shannon's information theory, an original message is defined to have the highest possible content of information, and any change is considered a loss of information. Hence if an amoeba mutates to human DNA, this would be considered a loss of information by definition again.


What this is telling you is that evolution doesn't happen. It doesn't make sense to most people, because they make the assumption that evolution is fact.

The American Heritage Dictionary's definition of entropy: "A measure of disorder or randomness in a closed system."



jwu wrote:

No. Human language is not like DNA at all. There are combinations of letters which do not make sense in human language as they do not constitute a valid word, while there is no such thing in DNA. Furthermore, the syntax of DNA is much simpler, and it only has four letters after all.


DNA and human language are both codes. DNA is written in only four "letters", called A, C, T and G. The meaning of this code lies in the sequence of the letters A, T, C and G in the same way that the meaning of a word lies in the sequence of alphabet letters. Different languages use different alphabets to convey meaning. A computer uses only 2 "letters": 0 and 1.


There is a lot of DNA that is never used to make protein: we know what some of this DNA does, but not all. The bits of DNA we don't understand are often called "junk DNA". If we were to try to build a protein using 'junk' DNA sequence, we would find that the protein wouldn't fit together, or that it wasn't stable once we had built it. Not all sequences of amino acids will make useful proteins.

These "junk DNA" sequences are mutations. Sometimes, one of the DNA letters is accidentally swapped for another letter. Sometimes the sequences are duplicated or deleted.


jwu wrote:

How about compressed files? They have a very high entropy, as in "evenness of distribution of characters", yet they contain plenty of information. As much as possible, in fact.


Actually compressed files have very low entropy. It's the compressed file coder and decoder that becomes the "language" in this situation. After the information is decompressed, other "languages" are used to interpret the information.





Peace
 
That was the hypothetical situation you presented in the quote below.
OK, objection withdrawn. I guess the break was too long; I don't remember many of the things that were already discussed here.

Actually, ID proponents believe that DNA was in its final form from the very beginning.
No-one on the evolution side, i meant.

Actually I said the odds of the base pair being replaced with a similarly
functioning pair are astronomical...implying that I don't think it's possible.
Why? There are only four possibilities, so in the case of such a mutation the odds are 1/4.
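jwu's 1/4 figure is easy to check by simulation. This is a minimal sketch assuming the inserted base is drawn uniformly at random from the four bases (a toy model, not a claim about real insertion biases):

```python
import random

random.seed(0)  # deterministic run for reproducibility

BASES = "ACGT"
original = "G"  # hypothetical base that was deleted

# If a deleted base is replaced by one random insertion, the chance
# that the inserted base matches the original is 1 in 4.
trials = 100_000
hits = sum(random.choice(BASES) == original for _ in range(trials))
rate = hits / trials

print(f"restoration rate: {rate:.3f}")  # lands near 0.25
```

Whether the odds are "astronomical" therefore depends on what exactly must be restored: a single base at a known damaged site is a 1-in-4 event, while re-creating a long specific sequence by chance is not.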

What this is telling you is evolution doesn't happen.
It doesn't make sense to most people, because they make the assumption
that evolution is fact.
Not at all!
Let me rephrase it: even if the intelligent designer decided that that amoeba's next offspring should be a human being, this would constitute a loss of information according to Shannon!
Shannon's information theory, when applied to biology, results in nothing but a loss of information compared to its starting point, regardless of whether the development is a natural one or guided by a deity.


The American Heritage Dictionary's definition of entropy: "A
measure of disorder or randomness in a closed system."
That is not a definition of "organization" at all. Furthermore, you are conflating various types of entropy. They are not the same.

DNA and human language are both codes. DNA is written in only four "letters", called A, C, T and G.
The meaning of this code lies in the sequence of the letters A, T, C
and G in the same way that the meaning of a word lies in the sequence
of alphabet letters. Different languages use different alphabets to
convey meaning. A computer uses only 2 "letters": 0 and 1.
DNA, however, is not subject to rigid syntax, nor is it as redundant as a human language. Please show me a string of DNA with start and end marks that violates some sort of syntax rules.


There is a lot of DNA that is never used to make
protein : we know what some of this DNA does, but not all. The bits of DNA
we don't understand are often called "junk DNA".
It is not used to code proteins... so what? There are other uses for DNA as well. Furthermore, that DNA isn't even attempted to be transcribed, because some markings cause it to be skipped; it's not due to a read failure.

These "junk DNA" sequences are mutations. Sometimes, one of the DNA letters is accidentally swapped for another letter. Sometimes the sequences are duplicated or deleted.
Actually creationist organizations are excited every time a piece of previously so called junk DNA is discovered to have some sort of function.

Do you seriously claim that all so called junk DNA is damaged DNA which was taken out of the "active" part of the genome? How about all the mutations which we observe in the active part which cause no such results?

Actually compressed files have very low entropy. It's the compressed file coder and decoder that becomes the "language" in this situation. After the information is decompressed, other "languages" are used to interpret the information.
How do you measure that entropy? A mathematical formula, please. Entropy is a measure of the evenness of the distribution of characters, and exactly that is very high in a compressed text. A random text has a very even distribution and therefore high entropy, but so does a compressed or encrypted text.

The entropy of a discrete message space M is a measure of the amount of uncertainty one has about which message will be chosen. It is defined as the average self-information of a message m from that message space:

H(M) = \mathbb{E}\{I(m)\} = \sum_{m \in M} p(m) I(m) = -\sum_{m \in M} p(m) \log p(m)

The logarithm in the formula is usually taken to base 2, and entropy is measured in bits. An important property of entropy is that it is maximized when all the messages in the message space are equiprobable. In this case H(M) = \log |M|.

You can try for yourself, e.g. using a neat program which you can download there: http://www.cryptool.com/

The entropy of this post's text is 4.16, with 4.7 being the highest possible value (for an even distribution of 26 different characters). After compressing it using winzip and repeating the analysis it had an entropy of 7.86 out of a maximum of 8. Yet the actual information content for me as a reader has not decreased.
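The CrypTool measurement above can be reproduced in a few lines. This is a minimal sketch (the sample text and repetition count are arbitrary; it measures bits per byte rather than the post's per-character figures):

```python
import math
import zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: H = -sum p(x) * log2 p(x)."""
    total = len(data)
    return -sum(n / total * math.log2(n / total)
                for n in Counter(data).values())

# Arbitrary sample text, repeated so there is something to compress.
text = ("How about compressed files? They have a very high entropy, "
        "yet they contain plenty of information. " * 40).encode()

plain = shannon_entropy(text)
packed = shannon_entropy(zlib.compress(text, 9))

# Compression squeezes out redundancy, which evens out the byte
# distribution and raises the measured per-byte entropy (toward the
# 8-bit maximum for large files) without changing the content.
print(f"plain:      {plain:.2f} bits/byte")
print(f"compressed: {packed:.2f} bits/byte")
```

The exact numbers depend on the input, but the compressed stream's per-byte entropy typically ends up well above the plain text's, matching the CrypTool result quoted above.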
 
jwu wrote:
Anyway...you seem to agree that a base pair deletion (or other type of mutation) which previously caused a loss of information can be reversed. That constitutes an increase of information then, no matter how you turn it.

Quote: Charlie
Actually I said the odds of the base pair being replaced with a similarly functioning pair are astronomical...implying that I don't think it's possible.

jwu wrote:


Why? There are only four possibilities. In case of a mutation there the odds therefore are 1/4.

In the type of error you’ve described, correction is impossible.

jwu wrote:

As a separate line of reasoning... what happens to the information content of a string of DNA if a base pair is deleted and whatever was coded in it doesn't fulfil a purpose anymore?

Anyway... you seem to agree that a base pair deletion (or other type of mutation) which previously caused a loss of information can be reversed. That constitutes an increase of information then, no matter how you turn it.


‘Deletion mutations cannot be restored by true reversion.’

http://www.sci.sdsu.edu/~smaloy/Microbi ... e-rev.html


‘Deletions remove one or more nucleotides from the DNA. Like insertions, these mutations can alter the reading frame of the gene. They are irreversible.’

http://en.wikipedia.org/wiki/Mutation


‘Reversion - back mutation
Can be 1) true reversion or 2) pseudoreversion or suppressor mutation
Point mutations have highest reversion frequency; deletion mutations have the lowest (none).'

http://www.science.siu.edu/microbiology ... rsion.html


As to the other types of mutations, DNA repair that's built into the DNA programming kicks in:

DNA repair is a process constantly operating in cells; it is essential to survival because it protects the genome from damage and harmful mutations. In human cells, both normal metabolic activities and environmental factors (such as UV rays) can cause DNA damage, resulting in as many as 500,000 individual molecular lesions per cell per day. These lesions cause structural damage to the DNA molecule, and can dramatically alter the cell's way of reading the information encoded in its genes. Consequently, the DNA repair process must be constantly operating, to correct rapidly any damage in the DNA structure.

As cells age, however, the rate of DNA repair decreases until it can no longer keep up with ongoing DNA damage. The cell then suffers one of three possible fates:

1. an irreversible state of dormancy, known as senescence
2. cell suicide, also known as apoptosis or programmed cell death
3. carcinogenesis, or the formation of cancer.

Most cells in the body first become senescent. Then, after irreparable DNA damage, apoptosis occurs. In this case, apoptosis functions as a "last resort" mechanism to prevent a cell from becoming carcinogenic (able to form a tumor - see cancer) and endangering the organism.

When cells become senescent, alterations in biosynthesis and turnover cause them to function less efficiently, which inevitably causes disease. The DNA repair ability of a cell is vital to the integrity of its genome and thus to its normal functioning and that of the organism. Many genes that were initially shown to influence lifespan have turned out to be involved in DNA damage repair and protection.

http://en.wikipedia.org/wiki/DNA_repair



Quote: Charlie
DNA and human language are both codes. DNA is written in only four "letters", called A, C, T and G. The meaning of this code lies in the sequence of the letters A, T, C and G in the same way that the meaning of a word lies in the sequence of alphabet letters. Different languages use different alphabets to convey meaning. A computer uses only 2 "letters": 0 and 1.


jwu wrote:

DNA however is not subject to rigid syntax, nor is it as redundant as a human language. Please show me a string of DNA with start and end marks that violates some sort of syntax rules.



The genetic code is translated three nucleotide bases (one codon) at a time, with no punctuation between the codons. Addition or deletion of a single base pair in the middle of a coding sequence will result in out-of-frame translation of all of the downstream codons, and thus result in a completely different amino acid sequence, which is often prematurely truncated by stop codons (UAG, UAA, UGA) generated by reading the coding sequence out-of-frame. Such mutations, which are a special subclass of point mutations, are referred to as frameshift mutations. Deletion of a single base pair results in moving ahead one base in all of the downstream codons, and is often referred to as a positive frameshift. Addition of one base pair (or loss of two base pairs) shifts the reading frame behind by one base, and is often referred to as a negative frameshift.
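The frameshift mechanics described above can be sketched in code. This is an illustrative toy, not a real gene: the sequence and the pared-down codon table are made up for the demonstration.

```python
# Toy demonstration of a frameshift: translate a coding sequence codon by
# codon, then delete one base and re-translate.

# Minimal codon table: only the codons this example uses, plus the stops.
CODON_TABLE = {
    "ATG": "Met", "AAA": "Lys", "GTA": "Val", "AAG": "Lys", "CAT": "His",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    """Read three bases (one codon) at a time with no punctuation,
    halting at the first stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "???")
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

original = "ATGAAAGTAAAGCAT"            # ATG AAA GTA AAG CAT
shifted = original[:3] + original[4:]   # delete one base: reading frame shifts

print(translate(original))  # ['Met', 'Lys', 'Val', 'Lys', 'His']
print(translate(shifted))   # ['Met', 'Lys'] - premature out-of-frame stop
```

Deleting the single base turns the downstream codons into AAG TAA ..., and the out-of-frame TAA truncates the product: exactly the premature truncation the text describes.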


Charlie wrote:

There is a lot of DNA that is never used to make protein: we know what some of this DNA does, but not all. The bits of DNA we don't understand are often called "junk DNA".

If we were to try to build a protein using 'junk' DNA sequence, we would find that the protein wouldn't fit together, or that it wasn't stable once we had built it. Not all sequences of amino acids will make useful proteins.

These "junk DNA" sequences are mutations. Sometimes, one of the DNA letters is accidentally swapped for another letter. Sometimes the sequences are duplicated or deleted.


jwu wrote:

Do you seriously claim that all so called junk DNA is damaged DNA which was taken out of the "active" part of the genome? How about all the mutations which we observe in the active part which cause no such results?




Actually my intended point was that there are combinations of letters which do not make sense in human language and in DNA language. My statement “These 'junk DNA' sequences are mutations.” is opinion, but I can't prove it, so I withdraw the statement as it relates to mutations.


My point was made in response to your answer:




jwu wrote:

No. Human language is not like DNA at all. There are combinations of letters which do not make sense in human language as they do not constitute a valid word, while there is no such thing in DNA. Furthermore, the syntax of DNA is much simpler, and it only has four letters after all.




Peace
 
jwu wrote:

How do you measure that entropy? A mathematical formula please. Entropy is a measure for the evenness of distribution of characters, and exactly that is very high in a compressed text. A random text has a very even distribution and therefore high entropy, but so does a compressed or encrypted text.

Let me give you the model I use for information theory in general, including how entropy fits in:

Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine).

R = Hbefore - Hafter.

Where H = - sum (from i = 1 to number of symbols) Pi log2 Pi (bits per symbol), the Shannon uncertainty.

Imagine that we are in communication and that we have agreed on an alphabet. Before I send you a bunch of symbols, you are uncertain (Hbefore) as to what I'm about to send. After you receive a symbol, your uncertainty goes down (to Hafter). Hafter is never zero because of noise in the communication system. This is your entropy. Noise includes distractions, errors in decoding, errors in coding, typos, unclear points (lol), forgetting, etc. Many of the statements in the early literature assumed a noiseless channel, so the uncertainty after receipt is zero (Hafter = 0). This leads to the special case where R = Hbefore. But Hbefore is not "the uncertainty"; it is the uncertainty of the receiver before receiving the message.


Your decrease in uncertainty after receipt is the information (R) that you gain. Since Hbefore and Hafter are state functions, this makes R a function of state.


Let's say before you compressed the text file, your uncertainty of what it contained was 50% (lol..I know I don't remember near 50% of what we wrote in the last post). When you compressed it, your uncertainty was still 50%. When you uncompressed it, your uncertainty still remained 50% (it's in text file format). When you open the file (receive the message), let's say your uncertainty dropped to 5% (0 is impossible because of noise issues: distractions, typos, unclear points, forgetting, encoding and decoding errors, etc.). Information gained would equal: 50% - 5% = 45%. The zipping and unzipping aren't a factor unless the information is garbled somewhat in the process. Usually that's not the case, but if so, then the information gained decreases.

jwu wrote:

Entropy is a measure for the evenness of distribution of characters, and exactly that is very high in a compressed text.

I think you're confusing entropy with the uncertainty before receipt. Hbefore is not "the uncertainty"; it is the uncertainty of the receiver before receiving the message. It also is not entropy.
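For concreteness, the Shannon uncertainty H given by the formula above is easy to compute from symbol frequencies. A minimal sketch (the sample strings are made up):

```python
import math
from collections import Counter

def shannon_uncertainty(text):
    """H = -sum(p_i * log2(p_i)) in bits per symbol, estimated from the
    observed character frequencies of `text`."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Four equiprobable symbols: H = log2(4) = 2 bits per symbol.
print(shannon_uncertainty("abcdabcdabcdabcd"))  # 2.0

# A skewed distribution is less uncertain per symbol.
print(shannon_uncertainty("aaaaaaab"))  # about 0.54

# R = Hbefore - Hafter: with a noiseless channel Hafter = 0,
# so the information gained equals the uncertainty before receipt.
```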

jwu wrote:

An important property of entropy is that it is maximized when all the messages in the message space are equiprobable. In this case H(M) = log | M | .

Again, I think there's some confusion here. What you're describing here is uncertainty before receipt.


The entropy of this post's text is 4.16, with 4.7 being the highest possible value (for an even distribution of 26 different characters). After compressing it using winzip and repeating the analysis it had an entropy of 7.86 out of a maximum of 8. Yet the actual information content for me as a reader has not decreased.

Again, this is dealing with uncertainty before receipt.
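jwu's measurement above (4.16 bits rising to 7.86 after zipping) can be reproduced in kind, though the exact figures depend on the text; a sketch using Python's zlib in place of WinZip:

```python
import math
import zlib
from collections import Counter

def entropy_bits_per_byte(data):
    """Shannon entropy of the byte-value distribution, bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

text = b"to be or not to be, that is the question " * 50
packed = zlib.compress(text)

# Compression evens out the byte distribution, so per-byte entropy rises,
# yet decompression recovers every byte unchanged.
assert zlib.decompress(packed) == text
print(entropy_bits_per_byte(text))    # low: English text is redundant
print(entropy_bits_per_byte(packed))  # higher: distribution is more even
```

Both sides here agree the file's content survives the round trip; the disagreement is over whether this per-byte figure should be called entropy or the uncertainty before receipt.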



Peace
 
In the type of error you’ve described, correction is impossible.
Why? What mechanism prevents it? Please be specific.
Does the genome somehow remember where deletions happened? If so, how?

‘Deletion mutations cannot be restored by true reversion.’
What is the mechanism which prevents it?
Technically that statement implies that no insertions can ever happen, as any insertion is the reversal of a possible previous deletion.

These websites are making a generic statement because insertions and deletions are generally more rare than point mutations, hence reversals don't happen as "often" as in case of point mutations.

Either way, for the sake of this argument i will modify that argument, to a point mutation which removed the binding specificity of the protein, thus constituting a loss of information according to Spetner. Your own source says that these get reversed comparatively often, which would constitute exactly the increase of information that i was talking about before, with the deletion example.


The genetic code is translated three nucleotide bases (one codon) at a time, with no punctuation between the codons. Addition or deletion of a
single base pair in the middle of a coding sequence will result in
out-of-frame translation of all of the downstream codons, and thus result in
a completely different amino acid sequence, which is often prematurely
truncated by stop codons (UAG,UAA,UGA) generated by reading the coding
sequence out-of-frame. Such mutations, which are a special subclass of
point mutations, are referred to as frameshift mutations. Deletion of a
single base pair results in moving ahead one base in all of the downstream
codons, and is often referred to as a positive frameshift. Addition of one
base pair (or loss of two base pairs) shifts the reading frame behind by one
base, and is often referred to as a negative frameshift.
Yep - and it's still valid syntax.
It was a frameshift mutation which enabled some bacteria to digest nylon.

Actually my intended point was there are combinations of letters which do not make sense in human language and DNA language.
"No sense" in what way? As in "cannot be read" or as in "useless"?



Let me give you the model I use for information theory in general,
including how entropy fits in:
Information is always a measure of the decrease of uncertainty at a receiver
(or molecular machine).
R = Hbefore - Hafter.



Where H = - sum (from i = 1 to number of symbols) Pi log2 Pi (bits per
symbol)-Shannon uncertainty.
Imagine that we are in communication and that we have agreed on an
alphabet. Before I send you a bunch of symbols, you are uncertain
(Hbefore) as to what I'm about to send. After you receive a symbol, your
uncertainty goes down (to Hafter). Hafter is never zero because of noise in
the communication system. This is your entropy. Noise includes distractions,
errors in decoding, errors in coding, typos, unclear points ( lol ), forgetting,
etc...Many of the statements in the early literature assumed a noiseless
channel, so the uncertainty after receipt is zero (Hafter=0). This leads to
the special case where R = Hbefore. But
Hbefore is not "the uncertainty", it is the uncertainty of the receiver
before receiving the message
Your decrease in uncertainty after receipt is the information (R) that you
gain.
Since Hbefore and Hafter are state functions, this makes R a function of
state.
Let's say before you compressed the text file, your uncertainty of what it
contained was 50% (lol..I know I don't remember near 50 % of what we
wrote in the last post). When you compressed it, your uncertainty was still
50%. When you uncompressed it, your uncertainty still remained 50% (it's
in text file format). When you open the file (receive the message), lets say
your uncertainty dropped to 5% ( 0 is impossible because of noise issues
(distractions, typos, unclear points, forgetting, encoding and decoding
errors...etc...). Information gained would equal:50%-5%= 45%. The
zipping and unzipping aren't a factor unless the information is garbled
somewhat in the process. Usually that's not the case, but if so, then the
information gained decreases.
And how would you apply this to biology?

Where is the sender, where the receiver, and whose uncertainty is reduced?

Again, I think there's some confusion here. What you're describing here is uncertainty before receipt.
It was taken straight from wikipedia. Wiki and answers.com are not infallible of course, but i'd consider this one reliable. Such a gross mistake would have been corrected meanwhile, methinks.
http://en.wikipedia.org/wiki/Information_entropy
http://www.answers.com/topic/information-theory


Well...if not that, then what exactly is information entropy?
 
jwu wrote:


Why? What mechanism prevents it? Please be specific.

Does the genome somehow remember where deletions happened? If so, how?

Replication slippage occurs at repetitive sequences, when the new strand mispairs with the template strand. Microsatellite polymorphism is mainly caused by replication slippage. If the mutation occurs in a coding region, it could produce abnormal proteins, leading to diseases.

http://www.web-books.com/MoBio/Free/Ch7F3.htm





This type of mutation is caused by replication slippage (also called slipped-strand mispairing) which occurs in genes. Replication slippage can occur in two ways: forward replication slippage (causes a deletion mutation) and backward replication slippage (causes an insertion mutation). Codon expansions affect the production of a protein and thus the gene expression may be altered.

Researchers study the structure and nature of these mutations in animal models such as the Drosophila melanogaster (fruit-fly). Fruit flies exhibit codon repeats in the glass gene of the eye and may help explain why these diseases worsen with each generation. Intergenerational expansion, also called genetic anticipation, may increase the severity of the disease as the number of these codon repeats increases. With each consecutive generation, the strand continues to slip and the disease becomes more severe.

http://www.woodrow.org/teachers/esi/200 ... /intro.htm


Well...if not that, then what exactly is information entropy?

R = H(x) - Hy(x)

Or

Hy(x) = H(x) - R

Hy(x) measures the average ambiguity (entropy) of the received signal. H(x) = the uncertainty before receipt. R = information gain.
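The equivocation Hy(x) above has a standard closed form in the simplest textbook case, a binary symmetric channel; a sketch (the 1% error rate is an arbitrary illustrative choice):

```python
import math

def h2(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Binary symmetric channel: equiprobable 0/1 source, so H(x) = 1 bit.
# Noise flips each symbol with probability e; the leftover ambiguity
# (equivocation) is Hy(x) = h2(e), and R = H(x) - Hy(x).
e = 0.01                 # 1% of symbols flipped by noise
H_x = 1.0
Hy_x = h2(e)
R = H_x - Hy_x
print(round(R, 3))       # 0.919 bits per symbol survive the noise
```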


jwu wrote:

And how would you apply this to biology?

Where is the sender, where the receiver, and whose uncertainty is reduced?


One of a great many examples follows:

Sender = DNA (ultimately God or some other intelligent agent)

Receiver = molecular machine

R = H(x) - Hy(x) = Does the molecular machine do what it's supposed to do, the way it was intended to function? Does it stray at all from its original purpose?


jwu wrote:

Either way, for the sake of this argument i will modify that argument, to a point mutation which removed the binding specificity of the protein, thus constituting a loss of information according to Spetner. Your own source says that these get reversed comparatively often, which would constitute exactly the increase of information that i was talking about before, with the deletion example.

What occurs in this situation is synonymous substitution. There is no info gain or loss. Also, entropy does not increase.

Also, I think you might be confusing two different scenarios:

1. Synonymous substitution, which is described above, and

2.

a. When a mutation causing "loss of function" occurs, the number of different mutations yielding that phenotype is probably large and a variety of mutagens (or a variety of different spontaneous events) will be capable of producing such mutations.

I don't think we disagree that this constitutes a loss of info.

b. Gain of function is likely to require a much more specific change than loss of function. The mutational spectrum in gain-of-function conditions should be correspondingly more restricted, and the same condition should not be produced by deletion or disruption of the gene.

Interestingly, none of these mutations have demonstrated an increase in genetic info. The vast majority have proven to be a decrease in info, i.e. loss of function. The rest have shown to be neutral.

Here are a couple of examples:

Background Polycythemia vera, essential thrombocythemia, and idiopathic myelofibrosis are clonal myeloproliferative disorders arising from a multipotent progenitor. The loss of heterozygosity (LOH) on the short arm of chromosome 9 (9pLOH) in myeloproliferative disorders suggests that 9p harbors a mutation that contributes to the cause of clonal expansion of hematopoietic cells in these diseases.

Methods We performed microsatellite mapping of the 9pLOH region and DNA sequencing in 244 patients with myeloproliferative disorders (128 with polycythemia vera, 93 with essential thrombocythemia, and 23 with idiopathic myelofibrosis).

Results Microsatellite mapping identified a 9pLOH region that included the Janus kinase 2 (JAK2) gene. In patients with 9pLOH, JAK2 had a homozygous G->T transversion, causing phenylalanine to be substituted for valine at position 617 of JAK2 (V617F). All 51 patients with 9pLOH had the V617F mutation. Of 193 patients without 9pLOH, 66 were heterozygous for V617F and 127 did not have the mutation. The frequency of V617F was 65 percent among patients with polycythemia vera (83 of 128), 57 percent among patients with idiopathic myelofibrosis (13 of 23), and 23 percent among patients with essential thrombocythemia (21 of 93). V617F is a somatic mutation present in hematopoietic cells. Mitotic recombination probably causes both 9pLOH and the transition from heterozygosity to homozygosity for V617F. Genetic evidence and in vitro functional studies indicate that V617F gives hematopoietic precursors proliferative and survival advantages. Patients with the V617F mutation had a significantly longer duration of disease and a higher rate of complications (fibrosis, hemorrhage, and thrombosis) and treatment with cytoreductive therapy than patients with wild-type JAK2.

Conclusions A high proportion of patients with myeloproliferative disorders carry a dominant gain-of-function mutation of JAK2.

http://content.nejm.org/cgi/content/abs ... 52/17/1779

The New England Journal of Medicine


POLYCYTHEMIA VERA

...PV is a clonal disorder characterized by the overproduction of mature red blood cells in the bone marrow.1 Myeloid and megakaryocytic elements are also often increased....

...No obvious etiology exists.3 Genetic and environmental factors have been implicated in rare cases.3 Familial PV has been associated with mutation of the erythropoietin receptor.4 An increased number of cases have been reported in survivors of the atomic bomb explosion in Hiroshima during World War II.3

The primary defect involves a pluripotent stem cell capable of differentiating into red blood cells, granulocytes, and platelets.3 Clonality has been demonstrated through G6PD studies as well as restriction fragment length polymorphism of the active X chromosome.4 Erythroid precursors in PV are exquisitely sensitive to erythropoietin, which leads to increased red blood cell production.3 Precursors in PV are also more responsive to cytokines such as interleukin-3 (IL-3), granulocyte-macrophage colony-stimulating factor, and steel factor.4 Myeloid and megakaryocytic elements are often increased in the bone marrow.4 More than 60% of patients will have endogenous megakaryocyte colony unit formation.3

Increased red blood cell production in PV leads to an increased red cell mass and increased blood viscosity. This in turn can lead to arterial or venous thrombosis and/or bleeding.1 The hematocrit is directly proportional to the number of thrombotic events.3 Investigators have demonstrated a reduction in cerebral blood flow in patients with hematocrits between 53% and 62%.4 An increased platelet count may also contribute to bleeding and thrombosis.3 Although platelet aggregation abnormalities exist in most patients, these abnormalities do not appear to correlate with the risk of bleeding or thrombosis.3 Increased production and breakdown of blood cells can lead to hyperuricemia and hypermetabolism...


http://www.clevelandclinicmeded.com/dis ... orders.htm

Anjali S. Advani, MD, Department of Hematology/ Medical Oncology and Assistant Professor in the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University




What's happened here is there's been a loss of info: the message has lost part of its original info. Hence, things go a bit "haywire".

What we're seeing also, from observations to date, is that no mutation has been shown to increase genetic information. They're either neutral (synonymous substitution) or a loss in genetic info.





Peace
 
Replication slippage is a particular cause of insertion and deletion mutations, but that doesn't answer the question in any way.

One of a great many examples follows:

Sender=DNA (ultimately God or some other intelligent agent)

Receiver=molecular machine


R = H(x) - Hy(x) = Does the molecular machine do what it's supposed to do, the way it was intended to function? Does it stray at all from its original purpose?
And exactly that is why Shannon's information theory is absolutely inapplicable to biology. In it by definition any change of the original message is considered a loss of information, as it's a deviation from the "original intent". If by some freak accident the most primitive amoeba mutated to have the genome of a human, that would be considered a loss of information in Shannon's information theory, as the amoeba's genome was supposed to remain exactly as is.


What occurs in the situation is synonymous substitution. There is no info gain or loss. Also entropy does not increase.
Also, I think you might be confusing two different scenarios:

1. Synonymous Substitution- Which is described above and
No. Substituting base pairs is not synonymous substitution. Some cases of it are, but not all.

2.a When a mutation causing "loss of function" occurs, the number of different mutations yielding that phenotype is probably large and a variety of mutagens (or a variety of different spontaneous events) will be capable of producing such mutations.

I don't think we disagree that this constitutes a loss of info.
If information is defined as function, then yes, i would agree to that.
However, non-synonymous base substitutions can cause such a thing (resulting in a loss of information), and your own sources say that these can get reversed. That effectively ends the debate.

b. Gain of function is likely to require a much more specific change than loss of function. The mutational spectrum in gain-of-function conditions should be correspondingly more restricted, and the same condition should not be produced by deletion or disruption of the gene.
Not by deletion (unless that gene had an inhibiting function elsewhere), but a disruption as in "a change" can produce it.

Interestingly, none of these mutations have demonstrated an increase in genetic info. The vast majority have proven to be a decrease in info, i.e. loss of function. The rest have shown to be neutral.
How about immunity to HIV? Is that an increase of information? The ability to metabolize nylon?
 
Charlie wrote:

In the type of error you’ve described, correction is impossible.

jwu wrote:

Why? What mechanism prevents it? Please be specific.
Does the genome somehow remember where deletions happened? If so, how?


jwu wrote:

Replication slippage is a particular cause of insertion and deletion mutations, but that doesn't answer the question in any way.

Intuitively I agree with you...nothing can be proved impossible. I think the abstracts and texts talk about it in this way because effectively the astronomical odds deem it "impossible". One thing the statements do reveal is that correction of a deletion or insertion error has never been observed. I haven't found a peer reviewed paper that states otherwise.


jwu wrote:

If information is defined as function, then yes, i would agree to that.
However, non-synonymous base substitutions can cause such a thing (resulting in a loss of information), and your own sources say that these can get reversed. That effectively ends the debate

If the base pair substitution gets repaired by accident, you end up with a neutral change in the original info. Before the accidental correction, there was a loss of info. So, again, there is no gain in info...only back to where we started. Far from effectively ending the debate.

How about immunity to HIV? Is that an increase of information? The ability to metabolize nylon?

I know that studies are very limited concerning the particular gene "mutation" that may be responsible for immunity to HIV. But who knows if the "mutation" existed more prominently in the past or not. People who are not immune could simply have a loss-of-function mutation. I think it's strange how recently the HIV epidemic has developed. Again, further studies should shed more light on this topic. Interesting point.

As to the nylon digesting ability allegedly from a frame shift:

Evidence against the evolutionary explanation includes:
1.

There are five transposable elements on the pOAD2 plasmid. When activated, transposase enzymes coded therein cause genetic recombination. Externally imposed stress such as high temperature, exposure to a poison, or starvation can activate transposases. The presence of the transposases in such numbers on the plasmid suggests that the plasmid is designed to adapt when the bacterium is under stress.
2.

All five transposable elements are identical, with 764 base pairs (bp) each. This comprises over eight percent of the plasmid. How could random mutations produce three new catalytic/degradative genes (coding for EI, EII and EIII) without at least some changes being made to the transposable elements? Negoro speculated that the transposable elements must have been a ‘late addition’ to the plasmids to not have changed. But there is no evidence for this, other than the circular reasoning that supposedly random mutations generated the three enzymes and so they would have changed the transposase genes if they had been in the plasmid all along. Furthermore, the adaptation to nylon digestion does not take very long (see point 5 below), so the addition of the transposable elements afterwards cannot be seriously entertained.
3.

All three types of nylon degrading genes appear on plasmids and only on plasmids. None appear on the main bacterial chromosomes of either Flavobacterium or Pseudomonas. This does not look like some random origin of these genes; the chance of this happening is low. If the genome of Flavobacterium is about two million bp,7 and the pOAD2 plasmid comprises 45,519 bp, and if there were say 5 pOAD2 plasmids per cell (~10% of the total chromosomal DNA), then the chance of getting all three of the genes on the pOAD2 plasmid would be about 0.0015. If we add the probability of the nylon degrading genes of Pseudomonas also only being on plasmids, the probability falls to 2.3 x 10-6. If the enzymes developed in the independent laboratory-controlled adaptation experiments (see point 5, below) also resulted in enzyme activity on plasmids (almost certainly, but not yet determined), then attributing the development of the adaptive enzymes purely to chance mutations becomes even more implausible.
4.

The antisense DNA strand of the four nylon genes investigated in Flavobacterium and Pseudomonas lacks any stop codons.8 This is most remarkable in a total of 1,535 bases. The probability of this happening by chance in all four antisense sequences is about 1 in 1012. Furthermore, the EIII gene in Pseudomonas is clearly not phylogenetically related to the EII genes of Flavobacterium, so the lack of stop codons in the antisense strands of all genes cannot be due to any commonality in the genes themselves (or in their ancestry). Also, the wild-type pOAD2 plasmid is not necessary for the normal growth of Flavobacterium, so functionality in the wild-type parent DNA sequences would appear not to be a factor in keeping the reading frames open in the genes themselves, let alone the antisense strands.

Some statements by Yomo et al., express their consternation:

‘These results imply that there may be some unknown mechanism behind the evolution of these genes for nylon oligomer-degrading enzymes.

‘The presence of a long NSF (non-stop frame) in the antisense strand seems to be a rare case, but it may be due to the unusual characteristics of the genes or plasmids for nylon oligomer degradation.

‘Accordingly, the actual existence of these NSFs leads us to speculate that some special mechanism exists in the regions of these genes.’

It looks like recombination of codons (base pair triplets), not single base pairs, has occurred between the start and stop codons for each sequence. This would be about the simplest way that the antisense strand could be protected from stop codon generation. The mechanism for such a recombination is unknown, but it is highly likely that the transposase genes are involved.

Interestingly, Yomo et al. also show that it is highly unlikely that any of these genes arose through a frame shift mutation, because such mutations (forward or reverse) would have generated lots of stop codons. This nullifies the claim of Thwaites that a functional gene arose from a purely random process (an accident).
5.

The Japanese researchers demonstrated that nylon degrading ability can be obtained de novo in laboratory cultures of Pseudomonas aeruginosa [strain] POA, which initially had no enzymes capable of degrading nylon oligomers.9 This was achieved in a mere nine days! The rapidity of this adaptation suggests a special mechanism for such adaptation, not something as haphazard as random mutations and selection.
6.

The researchers have not been able to ascertain any putative ancestral gene to the nylon-degrading genes. They represent a new gene family. This seems to rule out gene duplications as a source of the raw material for the new genes.

http://www.answersingenesis.org/tj/v17/i3/bacteria.asp
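The 0.0015 figure in point 3 of the quote can be checked against the numbers the quote itself supplies (the five-copies-per-cell count is the quote's own assumption; the further 2.3 x 10-6 step isn't reproduced here because the quote doesn't state its Pseudomonas inputs):

```python
# Checking the plasmid-probability arithmetic from point 3 of the quote.
genome_bp = 2_000_000   # approximate Flavobacterium genome size (from the quote)
plasmid_bp = 45_519     # pOAD2 plasmid size (from the quote)
copies = 5              # assumed plasmid copies per cell (from the quote)

plasmid_fraction = copies * plasmid_bp / genome_bp  # fraction of total DNA
p_three_genes = plasmid_fraction ** 3               # all three genes land there by chance

print(round(plasmid_fraction, 3))  # 0.114
print(round(p_three_genes, 4))     # 0.0015, matching the quoted figure
```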

And exactly that is why Shannon's information theory is absolutely inapplicable to biology. In it by definition any change of the original message is considered a loss of information, as it's a deviation from the "original intent". If by some freak accident the most primitive amoeba mutated to have the genome of a human, that would be considered a loss of information in Shannon's information theory, as the amoeba's genome was supposed to remain exactly as is.

Exactly...though impossible in the case of an amoeba mutating to a human. The reason you think information theory is inapplicable to biology is that you assume evolution is a fact. It's circular reasoning.

So, let's see: to accommodate your view, we have to change Genesis 1:1-11 (because it doesn't agree with evolution), Modern Information Theory (Conservation of Information, Informational Entropy, etc...), violate the rule of Cause and Effect, twist the Second Law of Thermodynamics, and defy common sense.

Your views are beginning to appear religious versus scientific, jwu.
scientific, jwu.
 