armed2010 said:
Heidi, given the abysmal conclusions you come to on what the Theory of Evolution is, I highly doubt that you ever did believe it.
Given what her perception of it apparently is, if she did believe it, that doesn't speak well of her. Unless she was 5 at the time.
When I was in school, evolutionists claimed we came from apes. Now they've changed their story again. They claim we came from a fictitious beast which they still haven't identified.
Do you want me to give you a shovel? Because you're still digging a hole for yourself, Heidi. Everyone here wishes that you would actually learn what evolution is, rather than spouting the strawmen that you're known for.
Well, the television show "Ape to Man" also claimed we came from apes. So it appears that evolutionists themselves disagree with each other! Or are you going to claim that all other evolutionists are ignorant of the theory except you? :o
reznwerks said:
Heidi, would you tell us all where you "studied" about evolution?
Heidi said:
Sorry, but not only have I read about evolution, I studied it for a long time, and it contradicts what you're saying about it. The original theory stated that man came from the ape through a common ancestor, common to both humans and apes. The theory came before the evidence. It was conjured up in the mind of Charles Darwin, and after that, he went about looking for evidence.
It took scientists 50 years to even find a scrap that could possibly be a skull fragment, much less a skull fragment of this common ancestor. The skull fragment was first identified as "Piltdown" man, even though all they had was a scrap of material that they weren't even sure was a skull, and not the rest of the body itself. (And of course they call that scientific.) The first in a series of fictitious common ancestors. But later testing showed it to be a hoax. Then there were 2 more "common ancestors" found that were also later identified as being hoaxes. So this "missing link" is still missing and still only exists in the imaginations of men, as does the theory of evolution itself.
reznwerks said:
How long are you going to be dishonest with yourself and claim all those supposed falsities you noted? All those examples are REAL examples of what was found and documented. How long are you going to deny the new evidence, or rather the fact, that man is not descended from apes but is in fact an apelike creature who has evolved into what we are today? How long are you going to lean on Darwin when in fact this idea has existed for quite some time? The ancient Greek philosophers had inferred that similar species were descended from common ancestors. You keep proclaiming for everyone to use "common sense" but you conveniently choose to bow out when the evidence overwhelmingly suggests you are wrong in all your ideas. The evidence of evolution is not getting sparser but is growing by leaps and bounds every day; from genetics to biology, the case only gets stronger, not weaker.
jwu wrote:
Does the claim that information can only decrease then not imply that no mutation can happen which inserts that particular base pair again? After all, otherwise that would be an increase of information, as the previously extant information was lost and is being regained from nothing.
Charlie Hatchett wrote:
Loss of purpose-fulfilling work capability (useful work) has occurred. Now there might be another pair (or program, in the analogy) that can perform the function...but you've still lost the original programming of the deleted base pair (or program).
jwu wrote:
But isn't some specificity better/more information than no specificity at all?
Charlie Hatchett wrote:
Living organisms require enzymes to do a specific job (i.e. useful work), so their information content is very close to the maximum.
Ordinary acids or alkalis hydrolyse many compounds. These have wonderful extended-spectrum catalytic activity, but are not specific, and so would be useless for the precise control required for biological reactions. More is not better where exact control is required. Without this precise control, you get all kinds of unintended and counterproductive reactions.
Like the sun's energy (it's enormous)...ordinary, extended-spectrum acids and alkalis have much activity, but little or no specificity (i.e. useful work or useful information).
jwu wrote:
That's Shannon entropy. One can get from amoeba to man with nothing but an increase of Shannon entropy. It has no bearing on biology.
Charlie Hatchett wrote:
Think about it in terms of noise and entropy. An increase in noise in a communication system (i.e. unintended and counterproductive reactions) is an increase in entropy. Of course an increase in entropy is an increase in disorder.
jwu wrote:
...and in case of K/C, an increase of entropy resembles an increase of information. Order is analogous to redundancy there.
Charlie Hatchett wrote:
The concepts of information and entropy have deep links with one another, although it took many years for the development of the theories of statistical mechanics and information theory to make this apparent.
jwu wrote:
The texts which you quoted and highlighted afterwards do not talk about "information" or "order" at all. They just say that it is doubtful that gene doubling is responsible for large leaps in evolution. Gene doubling perhaps not being responsible for large leaps does not mean that gene doubling does not constitute an increase of information.
Charlie Hatchett wrote:
Interesting point. According to a fairly recent study, though, by two proponents of the ToE, there is no evidence to show that gene doubling results in any increase in order:
jwu wrote:
That's a straw man. First off, DNA isn't as "dualistic" as a computer program, which either works or does not work. Furthermore, no-one proposes that a large chunk of fully formed and functional DNA popped up at once.
Charlie wrote:
Back to the computer analogy I was using previously, let's assume the DNA package as a whole is a hard drive or memory stick. Let's say lightning struck and deleted a portion of the hard drive (the base pair). It's possible for lightning to strike again and jumble up some of the remaining information to create the exact program again (or a program with the exact same function)...but the odds against it are astronomical.
jwu wrote:
Only without gene doubling. With a gene doubling the original activity on ribitol can be retained while the larabitol and xylitol activity increases.
Charlie wrote:
The net result is a loss in specificity.
jwu wrote:
Then please define "organization".
Charlie wrote:
I disagree. Anytime there's an increase in entropy, there's a decrease in organization by definition.
jwu wrote:
Again, what is "organization"? You seem to be shifting the goalposts.
Charlie wrote:
Again, I disagree. Anytime there's an increase in entropy, there's a decrease in organization by definition.
jwu wrote:
No. Human language is not like DNA at all. There are combinations of letters which do not make sense in human language as they do not constitute a valid word, while there is no such thing in DNA. Furthermore, the syntax of DNA is much simpler, and it only has four letters after all.
Charlie wrote:
As the photo shows above, in gene doubling a transcribing error occurs when the gene replicates, and a segment of information is transcribed twice. This would be the equivalent of a human transcriber making an error while copying the sentence "The cat chased the mouse." / "The cat cat chased the mouse." The resultant error increases noise (disorder).
jwu wrote:
...and what does that have to do with the topic of this thread? It's completely off topic, and no-one claims that the big bang produced matter in the first instance. If you want to discuss this, then please make a thread for it; let's keep this one on topic.
Charlie wrote:
Big Bangs DESTROY everything - they CAN'T create matter
jwu wrote:
That's a straw man. First off, DNA isn't as "dualistic" as a computer program, which either works or does not work. Furthermore, no-one proposes that a large chunk of fully formed and functional DNA popped up at once.
Anyway...you seem to agree that a base pair deletion (or other type of mutation) which previously caused a loss of information can be reversed. That constitutes an increase of information then, no matter how you turn it.
jwu wrote:
As a separate line of reasoning...what happens to the information content of a string of DNA if a base pair is deleted and whatever was coded in it doesn't fulfil a purpose anymore? Does the information content decrease because of that, or can this not be determined based on the effect of the encoded enzyme or protein on the organism?
jwu wrote:
Then please define "organization".
In Shannon's information theory, an original message is defined to have the highest possible content of information, and any change is considered a loss of information. Hence if an amoeba mutates to human DNA, this would be considered a loss of information by definition again.
jwu wrote:
No. Human language is not like DNA at all. There are combinations of letters which do not make sense in human language as they do not constitute a valid word, while there is no such thing in DNA. Furthermore, the syntax of DNA is much simpler, and it only has four letters after all.
jwu wrote:
How about compressed files? They have a very high entropy, as in
"evenness of distribution of characters", yet they contain plenty of
information. As much as possible, in fact.
jwu wrote:
Ok, objection withdrawn. I guess the break was too long; I don't remember many of the things that were already discussed here.
Charlie wrote:
That was the hypothetical situation you presented in the quote below.
jwu wrote:
No-one on the evolution side, I meant.
Charlie wrote:
Actually, ID proponents believe that DNA was in its final form from the very beginning.
jwu wrote:
Why? There are only four possibilities. In case of a mutation there, the odds therefore are 1/4.
Charlie wrote:
Actually I said the odds of the base pair being replaced with a similarly functioning pair are astronomical...implying that I don't think it's possible.
jwu wrote:
Not at all!
Charlie wrote:
What this is telling you is that evolution doesn't happen. It doesn't make sense to most people, because they make the assumption that evolution is fact.
jwu wrote:
That is not a definition of "organization" at all. Furthermore, you are equivocating various types of entropy. They are not the same.
Charlie wrote:
The American Heritage Dictionary's definition of entropy: "A measure of disorder or randomness in a closed system."
jwu wrote:
DNA however is not subject to rigid syntax, nor is it as redundant as a human language. Please show me a string of DNA with start and end marks that violates some sort of syntax rules.
Charlie wrote:
DNA and human language are both codes. DNA is written in only four "letters", called A, C, T and G. The meaning of this code lies in the sequence of the letters A, T, C and G in the same way that the meaning of a word lies in the sequence of alphabet letters. Different languages use different alphabets to convey meaning. A computer uses only 2 "letters": 0 and 1.
jwu wrote:
It is not used to code proteins...so what? There are other uses for DNA as well. Furthermore, there is DNA that isn't even attempted to be transcribed, because some markings cause it to be skipped. It's not due to a read failure.
Charlie wrote:
There is a lot of DNA that is never used to make protein: we know what some of this DNA does, but not all. The bits of DNA we don't understand are often called "junk DNA".
jwu wrote:
Actually, creationist organizations are excited every time a piece of previously so-called junk DNA is discovered to have some sort of function.
Charlie wrote:
These "junk DNA" sequences are mutations. Sometimes, one of the DNA letters is accidentally swapped for another letter. Sometimes the sequences are duplicated or deleted.
jwu wrote:
How do you measure that entropy? A mathematical formula, please. Entropy is a measure for the evenness of distribution of characters, and exactly that is very high in a compressed text. A random text has a very even distribution and therefore high entropy, but so does a compressed or encrypted text.
Charlie wrote:
Actually compressed files have very low entropy. It's the compressed file coder and decoder that becomes the "language" in this situation. After the information is decompressed, other "languages" are used to interpret the information.
The entropy of a discrete message space M is a measure of the amount of uncertainty one has about which message will be chosen. It is defined as the average self-information of a message m from that message space:
H(M) = E{I(m)} = Σ_{m in M} p(m) I(m) = −Σ_{m in M} p(m) log p(m)
The logarithm in the formula is usually taken to base 2, and entropy is measured in bits. An important property of entropy is that it is maximized when all the messages in the message space are equiprobable. In this case H(M) = log |M|.
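This definition is easy to check with a short script (a sketch; the example distributions are made up for illustration):

```python
from math import log2

def entropy(probs):
    """Shannon entropy H(M) = -sum p(m) log2 p(m), in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Four equiprobable messages: H is maximized and equals log2|M| = 2 bits.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
# A skewed distribution carries less uncertainty per message.
print(entropy([0.7, 0.1, 0.1, 0.1]))
```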
jwu wrote:
Anyway...you seem to agree that a base pair deletion (or other type of mutation) which previously caused a loss of information can be reversed. That constitutes an increase of information then, no matter how you turn it.
Quote: Charlie
Actually I said the odds of the base pair being replaced with a similarly functioning pair are astronomical...implying that I don't think it's possible.
jwu wrote:
Why? There are only four possibilities. In case of a mutation there the odds therefore are 1/4.
jwu wrote:
As a separate line of reasoning...what happens to the information content of a string of DNA if a base pair is deleted and whatever was coded in it doesn't fulfil a purpose anymore?
Anyway...you seem to agree that a base pair deletion (or other type of mutation) which previously caused a loss of information can be reversed. That constitutes an increase of information then, no matter how you turn it.
DNA repair is a process constantly operating in cells; it is essential to survival because it protects the genome from damage and harmful mutations. In human cells, both normal metabolic activities and environmental factors (such as UV rays) can cause DNA damage, resulting in as many as 500,000 individual molecular lesions per cell per day. These lesions cause structural damage to the DNA molecule, and can dramatically alter the cell's way of reading the information encoded in its genes. Consequently, the DNA repair process must be constantly operating, to correct rapidly any damage in the DNA structure.
As cells age, however, the rate of DNA repair decreases until it can no longer keep up with ongoing DNA damage. The cell then suffers one of three possible fates:
1. an irreversible state of dormancy, known as senescence
2. cell suicide, also known as apoptosis or programmed cell death
3. carcinogenesis, or the formation of cancer.
Most cells in the body first become senescent. Then, after irreparable DNA damage, apoptosis occurs. In this case, apoptosis functions as a "last resort" mechanism to prevent a cell from becoming carcinogenic (able to form a tumor - see cancer) and endangering the organism.
When cells become senescent, alterations in biosynthesis and turnover cause them to function less efficiently, which inevitably causes disease. The DNA repair ability of a cell is vital to the integrity of its genome and thus to its normal functioning and that of the organism. Many genes that were initially shown to influence lifespan have turned out to be involved in DNA damage repair and protection.
http://en.wikipedia.org/wiki/DNA_repair
Quote: Charlie
DNA and human language are both codes. DNA is written in only four "letters", called A, C, T and G. The meaning of this code lies in the sequence of the letters A, T, C and G in the same way that the meaning of a word lies in the sequence of alphabet letters. Different languages use different alphabets to convey meaning. A computer uses only 2 "letters": 0 and 1.
jwu wrote:
DNA however is not subject to rigid syntax, nor is it as redundant as a human language. Please show me a string of DNA with start and end marks that violates some sort of syntax rules.
Charlie wrote:
There is a lot of DNA that is never used to make protein: we know what some of this DNA does, but not all. The bits of DNA we don't understand are often called "junk DNA".
If we were to try to build a protein using 'junk' DNA sequence, we would
find that the protein wouldn't fit together, or that it wasn't stable once we
had built it. Not all sequences of amino acids will make useful proteins.
These "junk DNA" sequences are mutations. Sometimes, one of the DNA
letters is accidentally swapped for another letter. Sometimes the sequences
are duplicated or deleted.
jwu wrote:
Do you seriously claim that all so called junk DNA is damaged DNA which was taken out of the "active" part of the genome? How about all the mutations which we observe in the active part which cause no such results?
jwu wrote:
No. Human language is not like DNA at all. There are combinations of letters which do not make sense in human language as they do not constitute a valid word, while there is no such thing in DNA. Furthermore, the syntax of DNA is much simpler, and it only has four letters after all.
jwu wrote:
How do you measure that entropy? A mathematical formula please. Entropy is a measure for the evenness of distribution of characters, and exactly that is very high in a compressed text. A random text has a very even distribution and therefore high entropy, but so does a compressed or encrypted text.
jwu wrote:
An important property of entropy is that it is maximized when all the messages in the message space are equiprobable. In this case H(M) = log | M | .
The entropy of this post's text is 4.16, with 4.7 being the highest possible value (for an even distribution of 26 different characters). After compressing it using winzip and repeating the analysis it had an entropy of 7.86 out of a maximum of 8. Yet the actual information content for me as a reader has not decreased.
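jwu's compression experiment can be reproduced with a short sketch (using Python's zlib rather than winzip, and an arbitrary sample text):

```python
import zlib
from collections import Counter
from math import log2

def byte_entropy(data: bytes) -> float:
    """Per-byte Shannon entropy, in bits (maximum 8 for byte data)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * log2(c / n) for c in counts.values())

text = ("the cat chased the mouse " * 200).encode()
packed = zlib.compress(text)

# Redundant English-like text has per-byte entropy well below 8;
# the compressed stream looks much more random, so its entropy is higher.
print(byte_entropy(text))
print(byte_entropy(packed))
```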
jwu wrote:
Why? What mechanism prevents it? Please be specific.
Charlie wrote:
In the type of error you've described, correction is impossible.
jwu wrote:
What is the mechanism which prevents it?
Charlie wrote:
'Deletion mutations cannot be restored by true reversion.'
jwu wrote:
Yep - and it's still valid syntax.
Charlie wrote:
The genetic code is translated three nucleotide bases (one codon) at a time, with no punctuation between the codons. Addition or deletion of a single base pair in the middle of a coding sequence will result in out-of-frame translation of all of the downstream codons, and thus result in a completely different amino acid sequence, which is often prematurely truncated by stop codons (UAG, UAA, UGA) generated by reading the coding sequence out-of-frame. Such mutations, which are a special subclass of point mutations, are referred to as frameshift mutations. Deletion of a single base pair results in moving ahead one base in all of the downstream codons, and is often referred to as a positive frameshift. Addition of one base pair (or loss of two base pairs) shifts the reading frame back by one base, and is often referred to as a negative frameshift.
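The frameshift effect described here can be illustrated with a toy script (the nucleotide sequence is invented; real translation would map codons to amino acids, which is omitted):

```python
def codons(seq: str):
    """Split a nucleotide string into codons (groups of 3 bases), dropping any partial codon."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

original = "ATGGCAGTTCAATGA"
# Delete the 4th base ("G"): every downstream codon shifts out of frame.
deleted = original[:3] + original[4:]

print(codons(original))  # ['ATG', 'GCA', 'GTT', 'CAA', 'TGA']
print(codons(deleted))   # ['ATG', 'CAG', 'TTC', 'AAT']
```

Only the codon upstream of the deletion survives; everything downstream reads differently, which is why a single-base deletion can change the whole tail of a protein.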
jwu wrote:
"No sense" in what way? As in "cannot be read" or as in "useless"?
Charlie wrote:
Actually my intended point was that there are combinations of letters which do not make sense in human language and in DNA language.
jwu wrote:
And how would you apply this to biology?
Charlie wrote:
Let me give you the model I use for information theory in general, including how entropy fits in:
Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine).
R = Hbefore - Hafter,
where H = −Σ (from i = 1 to the number of symbols) Pi log2 Pi (bits per symbol) is the Shannon uncertainty.
Imagine that we are in communication and that we have agreed on an alphabet. Before I send you a bunch of symbols, you are uncertain (Hbefore) as to what I'm about to send. After you receive a symbol, your uncertainty goes down (to Hafter). Hafter is never zero because of noise in the communication system. This is your entropy. Noise includes distractions, errors in decoding, errors in coding, typos, unclear points (lol), forgetting, etc... Many of the statements in the early literature assumed a noiseless channel, so the uncertainty after receipt is zero (Hafter = 0). This leads to the special case where R = Hbefore. But Hbefore is not "the uncertainty"; it is the uncertainty of the receiver before receiving the message.
Your decrease in uncertainty after receipt is the information (R) that you gain.
Since Hbefore and Hafter are state functions, this makes R a function of state.
Let's say that before you compressed the text file, your uncertainty of what it contained was 50% (lol...I know I don't remember near 50% of what we wrote in the last post). When you compressed it, your uncertainty was still 50%. When you uncompressed it, your uncertainty still remained 50% (it's in text file format). When you open the file (receive the message), let's say your uncertainty dropped to 5% (0 is impossible because of noise issues: distractions, typos, unclear points, forgetting, encoding and decoding errors, etc.). Information gained would equal: 50% - 5% = 45%. The zipping and unzipping aren't a factor unless the information is garbled somewhat in the process. Usually that's not the case, but if so, then the information gained decreases.
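Charlie's R = Hbefore - Hafter model can be put into a numeric sketch (the before/after distributions are invented, standing in for the 50%/5% figures above):

```python
from math import log2

def H(probs):
    """Shannon uncertainty: -sum p log2 p, in bits per symbol."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Before receipt: four equiprobable symbols, maximum uncertainty.
h_before = H([0.25, 0.25, 0.25, 0.25])  # 2.0 bits
# After receipt over a noisy channel: mostly certain, with residual doubt from noise.
h_after = H([0.9, 0.1 / 3, 0.1 / 3, 0.1 / 3])
# Information gained is the decrease in uncertainty.
R = h_before - h_after
print(R)
```

Because the channel is noisy, h_after stays above zero and R falls short of the full 2 bits.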
jwu wrote:
It was taken straight from wikipedia. Wiki and answers.com are not infallible of course, but I'd consider this one reliable. Such a gross mistake would have been corrected meanwhile, methinks.
Charlie wrote:
Again, I think there's some confusion here. What you're describing here is uncertainty before receipt.
jwu wrote:
Why? What mechanism prevents it? Please be specific.
Does the genome somehow remember where deletions happened? If so, how?
Replication slippage occurs at repetitive sequences, when the new strand mispairs with the template strand. Microsatellite polymorphism is mainly caused by replication slippage. If the mutation occurs in a coding region, it could produce abnormal proteins, leading to diseases.
http://www.web-books.com/MoBio/Free/Ch7F3.htm
This type of mutation is caused by replication slippage (also called slipped-strand mispairing) which occurs in genes. Replication slippage can occur in two ways: forward replication slippage (causes a deletion mutation) and backward replication slippage (causes an insertion mutation). Codon expansions affect the production of a protein, and thus the gene expression may be altered.
Researchers study the structure and nature of these mutations in animal models such as the Drosophila melanogaster (fruit-fly). Fruit flies exhibit codon repeats in the glass gene of the eye and may help explain why these diseases worsen with each generation. Intergenerational expansion, also called genetic anticipation, may increase the severity of the disease as the number of these codon repeats increases. With each consecutive generation, the strand continues to slip and the disease becomes more severe.
http://www.woodrow.org/teachers/esi/200 ... /intro.htm
Well...if not that, then what exactly is information entropy?
jwu wrote:
And how would you apply this to biology?
Where is the sender, where the receiver, and whose uncertainty is reduced?
jwu wrote:
Either way, for the sake of this argument I will modify that argument, to a point mutation which removed the binding specificity of the protein, thus constituting a loss of information according to Spetner. Your own source says that these get reversed comparatively often, which would constitute exactly the increase of information that I was talking about before, with the deletion example.
Background Polycythemia vera, essential thrombocythemia, and idiopathic myelofibrosis are clonal myeloproliferative disorders arising from a multipotent progenitor. The loss of heterozygosity (LOH) on the short arm of chromosome 9 (9pLOH) in myeloproliferative disorders suggests that 9p harbors a mutation that contributes to the cause of clonal expansion of hematopoietic cells in these diseases.
Methods We performed microsatellite mapping of the 9pLOH region and DNA sequencing in 244 patients with myeloproliferative disorders (128 with polycythemia vera, 93 with essential thrombocythemia, and 23 with idiopathic myelofibrosis).
Results Microsatellite mapping identified a 9pLOH region that included the Janus kinase 2 (JAK2) gene. In patients with 9pLOH, JAK2 had a homozygous G->T transversion, causing phenylalanine to be substituted for valine at position 617 of JAK2 (V617F). All 51 patients with 9pLOH had the V617F mutation. Of 193 patients without 9pLOH, 66 were heterozygous for V617F and 127 did not have the mutation. The frequency of V617F was 65 percent among patients with polycythemia vera (83 of 128), 57 percent among patients with idiopathic myelofibrosis (13 of 23), and 23 percent among patients with essential thrombocythemia (21 of 93). V617F is a somatic mutation present in hematopoietic cells. Mitotic recombination probably causes both 9pLOH and the transition from heterozygosity to homozygosity for V617F. Genetic evidence and in vitro functional studies indicate that V617F gives hematopoietic precursors proliferative and survival advantages. Patients with the V617F mutation had a significantly longer duration of disease and a higher rate of complications (fibrosis, hemorrhage, and thrombosis) and treatment with cytoreductive therapy than patients with wild-type JAK2.
Conclusions A high proportion of patients with myeloproliferative disorders carry a dominant gain-of-function mutation of JAK2.
http://content.nejm.org/cgi/content/abs ... 52/17/1779
The New England Journal of Medicine
POLYCYTHEMIA VERA
...PV is a clonal disorder characterized by the overproduction of mature red blood cells in the bone marrow.1 Myeloid and megakaryocytic elements are also often increased....
...No obvious etiology exists.3 Genetic and environmental factors have been implicated in rare cases.3 Familial PV has been associated with mutation of the erythropoietin receptor.4 An increased number of cases have been reported in survivors of the atomic bomb explosion in Hiroshima during World War II.3
The primary defect involves a pluripotent stem cell capable of differentiating into red blood cells, granulocytes, and platelets.3 Clonality has been demonstrated through G6PD studies as well as restriction fragment length polymorphism of the active X chromosome.4 Erythroid precursors in PV are exquisitely sensitive to erythropoietin, which leads to increased red blood cell production.3 Precursors in PV are also more responsive to cytokines such as interleukin-3 (IL-3), granulocyte-macrophage colony-stimulating factor, and steel factor.4 Myeloid and megakaryocytic elements are often increased in the bone marrow.4 More than 60% of patients will have endogenous megakaryocyte colony unit formation.3
Increased red blood cell production in PV leads to an increased red cell mass and increased blood viscosity. This in turn can lead to arterial or venous thrombosis and/or bleeding.1 The hematocrit is directly proportional to the number of thrombotic events.3 Investigators have demonstrated a reduction in cerebral blood flow in patients with hematocrits between 53% and 62%.4 An increased platelet count may also contribute to bleeding and thrombosis.3 Although platelet aggregation abnormalities exist in most patients, these abnormalities do not appear to correlate with the risk of bleeding or thrombosis.3 Increased production and breakdown of blood cells can lead to hyperuricemia and hypermetabolism...
http://www.clevelandclinicmeded.com/dis ... orders.htm
Anjali S. Advani, MD, Department of Hematology/ Medical Oncology and Assistant Professor in the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University
And exactly that is why Shannon's information theory is absolutely inapplicable to biology: in it, by definition, any change to the original message counts as a loss of information, because it is a deviation from the "original intent". If by some freak accident the most primitive amoeba mutated to have the genome of a human, that would still be considered a loss of information in Shannon's theory, since the amoeba's genome was supposed to remain exactly as it was.

One of a great many examples follows:
Sender = DNA (ultimately God or some other intelligent agent)
Receiver = molecular machine
R = H(x) - Hy(x) = Does the molecular machine do what it's supposed to do, the way it was intended to function? Does it stray at all from its original purpose?
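For context, the quantity R = H(x) - Hy(x) is Shannon's rate of transmission: the entropy of the source minus the equivocation Hy(x), the receiver's remaining uncertainty about what was sent. A minimal sketch of how noise drives R down, assuming a binary symmetric channel with a uniform source (my illustration, not anything claimed in the thread):

```python
import math

def entropy(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def transmission_rate(error_rate):
    """Shannon's R = H(x) - Hy(x) for a binary symmetric channel with a
    uniform source: H(x) = 1 bit per symbol, equivocation Hy(x) = H(error_rate)."""
    return 1.0 - entropy(error_rate)

print(transmission_rate(0.0))   # noiseless channel: the full 1 bit per symbol
print(transmission_rate(0.11))  # ~0.5 bits: noise destroys half the information
print(transmission_rate(0.5))   # 0.0: the output tells you nothing about the input
```

Note that R measures fidelity of transmission relative to whatever was sent; it says nothing about whether the received message is biologically better or worse, which is the point being argued above.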
No. Substituting base pairs is not synonymous substitution. Some cases of it are, but not all.

What occurs in the situation is synonymous substitution. There is no info gain or loss. Also, entropy does not increase.

Also, I think you might be confusing two different scenarios:
1. Synonymous substitution, which is described above, and
If information is defined as function, then yes, I would agree to that.

2. a. When a mutation causing "loss of function" occurs, the number of different mutations yielding that phenotype is probably large, and a variety of mutagens (or a variety of different spontaneous events) will be capable of producing such mutations.
I don't think we disagree that this constitutes a loss of info.
Not by deletion (unless that gene had an inhibiting function elsewhere), but a disruption, as in "a change", can produce it.

b. Gain of function is likely to require a much more specific change than loss of function. The mutational spectrum in gain-of-function conditions should be correspondingly more restricted, and the same condition should not be produced by deletion or disruption of the gene.
How about immunity to HIV? Is that an increase of information? The ability to metabolize nylon?

Interestingly, none of these mutations has demonstrated an increase in genetic info. The vast majority have proven to be a decrease in info, i.e. a loss of function. The rest have been shown to be neutral.
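The synonymous/non-synonymous distinction debated above is easy to make concrete with the standard genetic code. A minimal sketch (the codon table here is a hand-picked fragment of the standard RNA code, just enough for the illustration):

```python
# Fragment of the standard RNA codon table (assumption: only the
# handful of codons needed for this illustration).
CODON_TABLE = {
    "GAA": "Glu", "GAG": "Glu",   # both encode glutamate
    "GAC": "Asp", "GAU": "Asp",   # both encode aspartate
}

def substitution_type(codon, position, new_base):
    """Classify a single-base substitution as synonymous (same amino
    acid, so no change to the protein) or non-synonymous."""
    mutated = codon[:position] + new_base + codon[position + 1:]
    if CODON_TABLE[mutated] == CODON_TABLE[codon]:
        return "synonymous"
    return "non-synonymous"

print(substitution_type("GAA", 2, "G"))  # GAA -> GAG, still Glu: synonymous
print(substitution_type("GAA", 2, "C"))  # GAA -> GAC, Glu -> Asp: non-synonymous
```

This is why "substituting base pairs" and "synonymous substitution" are not the same thing: whether a substitution is synonymous depends on where in the codon it falls and what it changes to.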
Charlie wrote:
In the type of error you’ve described, correction is impossible.
jwu wrote:
Why? What mechanism prevents it? Please be specific.
Does the genome somehow remember where deletions happened? If so, how?
jwu wrote:
Replication slippage is a particular cause of insertion and deletion mutations, but that doesn't answer the question in any way.
jwu wrote:
If information is defined as function, then yes, I would agree to that.
However, non-synonymous base substitutions can cause such a thing (resulting in a loss of information), and your own sources say that these can get reversed. That effectively ends the debate.
How about immunity to HIV? Is that an increase of information? The ability to metabolize nylon?
1.
There are five transposable elements on the pOAD2 plasmid. When activated, transposase enzymes coded therein cause genetic recombination. Externally imposed stress such as high temperature, exposure to a poison, or starvation can activate transposases. The presence of the transposases in such numbers on the plasmid suggests that the plasmid is designed to adapt when the bacterium is under stress.
2.
All five transposable elements are identical, with 764 base pairs (bp) each. This comprises over eight percent of the plasmid. How could random mutations produce three new catalytic/degradative genes (coding for EI, EII and EIII) without at least some changes being made to the transposable elements? Negoro speculated that the transposable elements must have been a ‘late addition’ to the plasmids to not have changed. But there is no evidence for this, other than the circular reasoning that supposedly random mutations generated the three enzymes and so they would have changed the transposase genes if they had been in the plasmid all along. Furthermore, the adaptation to nylon digestion does not take very long (see point 5 below), so the addition of the transposable elements afterwards cannot be seriously entertained.
3.
All three types of nylon degrading genes appear on plasmids and only on plasmids. None appear on the main bacterial chromosomes of either Flavobacterium or Pseudomonas. This does not look like some random origin of these genes; the chance of this happening is low. If the genome of Flavobacterium is about two million bp,7 and the pOAD2 plasmid comprises 45,519 bp, and if there were say 5 pOAD2 plasmids per cell (~10% of the total chromosomal DNA), then the chance of getting all three of the genes on the pOAD2 plasmid would be about 0.0015. If we add the probability of the nylon degrading genes of Pseudomonas also only being on plasmids, the probability falls to 2.3 x 10^-6. If the enzymes developed in the independent laboratory-controlled adaptation experiments (see point 5, below) also resulted in enzyme activity on plasmids (almost certainly, but not yet determined), then attributing the development of the adaptive enzymes purely to chance mutations becomes even more implausible.
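The 0.0015 figure in point 3 is a back-of-the-envelope calculation that can be reproduced directly. The sketch below uses the article's own assumptions (~2 million bp of chromosomal DNA, 5 copies of the 45,519 bp plasmid) and treats the three gene placements as independent:

```python
genome_bp = 2_000_000   # article's figure for the Flavobacterium genome
plasmid_bp = 45_519     # size of the pOAD2 plasmid
copies = 5              # assumed pOAD2 copies per cell (per the article)

# Plasmid DNA as a fraction of chromosomal DNA (~10%, as the article says)
fraction = copies * plasmid_bp / genome_bp

# Chance that three independently placed genes all land on the plasmid
p_three_genes = fraction ** 3

print(round(fraction, 3))       # ~0.114
print(round(p_three_genes, 4))  # ~0.0015, matching the article's figure
```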
4.
The antisense DNA strand of the four nylon genes investigated in Flavobacterium and Pseudomonas lacks any stop codons.8 This is most remarkable in a total of 1,535 bases. The probability of this happening by chance in all four antisense sequences is about 1 in 10^12. Furthermore, the EIII gene in Pseudomonas is clearly not phylogenetically related to the EII genes of Flavobacterium, so the lack of stop codons in the antisense strands of all genes cannot be due to any commonality in the genes themselves (or in their ancestry). Also, the wild-type pOAD2 plasmid is not necessary for the normal growth of Flavobacterium, so functionality in the wild-type parent DNA sequences would appear not to be a factor in keeping the reading frames open in the genes themselves, let alone the antisense strands.
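As a rough sanity check on the "about 1 in 10^12" figure, one can compute the chance that a random reading frame of ~1,535 bases avoids all three stop codons. The deliberately naive model below assumes uniform, independent bases; it lands around 10^-11 rather than 10^-12 (the article's figure presumably reflects the genes' actual base composition), but either way the probability is vanishingly small:

```python
stop_codons = 3            # UAA, UAG, UGA in the standard genetic code
total_codons = 64
bases = 1_535              # total antisense bases across the four genes
codons = bases // 3        # ~511 codons

# Probability a random codon is not a stop codon, assuming uniform,
# independent bases (a deliberately naive model)
p_not_stop = (total_codons - stop_codons) / total_codons

# Probability that the entire random frame contains no stop codon
p_open_frame = p_not_stop ** codons
print(f"{p_open_frame:.1e}")   # ~2.2e-11 under this naive model
```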
Some statements by Yomo et al., express their consternation:
‘These results imply that there may be some unknown mechanism behind the evolution of these genes for nylon oligomer-degrading enzymes.
‘The presence of a long NSF (non-stop frame) in the antisense strand seems to be a rare case, but it may be due to the unusual characteristics of the genes or plasmids for nylon oligomer degradation.
‘Accordingly, the actual existence of these NSFs leads us to speculate that some special mechanism exists in the regions of these genes.’
It looks like recombination of codons (base pair triplets), not single base pairs, has occurred between the start and stop codons for each sequence. This would be about the simplest way that the antisense strand could be protected from stop codon generation. The mechanism for such a recombination is unknown, but it is highly likely that the transposase genes are involved.
Interestingly, Yomo et al. also show that it is highly unlikely that any of these genes arose through a frame shift mutation, because such mutations (forward or reverse) would have generated lots of stop codons. This nullifies the claim of Thwaites that a functional gene arose from a purely random process (an accident).
5.
The Japanese researchers demonstrated that nylon degrading ability can be obtained de novo in laboratory cultures of Pseudomonas aeruginosa [strain] POA, which initially had no enzymes capable of degrading nylon oligomers.9 This was achieved in a mere nine days! The rapidity of this adaptation suggests a special mechanism for such adaptation, not something as haphazard as random mutations and selection.
6.
The researchers have not been able to ascertain any putative ancestral gene to the nylon-degrading genes. They represent a new gene family. This seems to rule out gene duplications as a source of the raw material for the new genes.
http://www.answersingenesis.org/tj/v17/i3/bacteria.asp