Or as it's been said, "A lie, if believed by millions of people, is still a lie." Just because something is thought to be accurate by a large number of people doesn't mean it really is.
Radiometric dating is a guess at best.
There is no gold standard that can be used to verify it for anything that is supposed to be ancient.
Recorded history goes back less than 10,000 years. When we date something at 1 million years old (or more), there are no artifacts that we already know for a fact are 1 million years old to compare it to.
What the coelacanth shows us is that we were told something was over 65 million years old when it was actually still with us today.
BTW, the lens diagram shows what is called "flare." It is solved, not by optical design, but by placing a hood on the lens that extends just short of the field of view of the lens on the film.
The use of multiple thin-film coatings on the lens will help, but will not completely eliminate flare.
I have a Leica that was made the summer Hitler attacked Poland. (Being German, the Leitz people kept meticulous records of serial numbers and dates.)
The lens is not coated, but as long as I have the proper shade in place, it puts most modern lenses available to consumers to shame.
You've confused the fact that the retina is reversed with the fact that a convex lens forms an inverted real image if the object is beyond the focal point. They aren't related.
Camera and telescope designers realised this design was the best configuration for resolving images with good resolution a long time ago; it's a shame that, in general, biologists haven't caught on!
Well no. A designer would simply have used a better receptor cell, eliminating the need for a reversed retina, and thus removing the blind spot. There are other defects, all of them noticeable in terms of the way they evolved.
Therefore, in practice, the blind spot is hardly a disadvantage. At least I can think of no scenario where it would be in reality.
A competent designer would simply avoid the problem in the first place, as in cephalopods. Maybe there are competing designers?
In humans, the center of vision is so densely packed with color receptors that our night vision is thereby impaired.
The crystalline lens is flexible enough to focus over a useful range for about four decades. Then it starts to decline. As I entered my fifth decade, I found my arms too short to focus. A "design" that depends on distorting a flexible lens is not as useful as one that can adjust position.
For a living, growing creature it is necessary to have a lens that can grow as the eye grows. Furthermore, our lens is capable of focusing over a very wide range in a very limited space. A crystalline lens has no advantage, but it has certain disadvantages. The creature would need to grow with a mechanism that could constantly change the size and focal length of the lens, and there would also need to be a mechanism to finely polish the hard crystalline material throughout this process. This would take up room in the eye. Secondly, the focusing mechanism would require more room in order to focus, taking up even more room in the eye. Thirdly, I can't think of a muscle configuration for the focusing mechanism that could uniformly and repeatedly move the lens in the required manner along the normal of the lens. Fourthly, when focusing quickly, the control muscles would have to give the lens momentum in order to move it to focus the image, and would then have to stop it; due to the elastic nature of muscles, you'd get a slight oscillation for a brief time when the muscles try to stop the lens (note that no net momentum is required with the flexible lens design we have in our eyes, so this is no problem for us). You'd need an additional PID feedback mechanism to try to prevent this. Such quick, continual focusing, as is part of our everyday life, would also most likely generate muscle fatigue in such a system. There would need to be many additions for a crystalline lens, and so far we've found no advantage.
As I entered my fifth decade, I found my arms too short to focus.
A "design" that depends on distorting a flexible lens is not as useful as one that can adjust position.
You surely did. I have had creationists who have read such edited "quotes" assure me that even Darwin didn't think eyes could evolve. If I didn't know better, that is what I would conclude from reading half the truth.
OK, I'm going to ignore your other examples of eye progression,
simply because they do not help you: you have not understood the problem with gene mutations that Dr Gary Parker was trying to explain.
It could take millions of years. But the evidence is that it does not. Simulations done by Dan-Erik Nilsson show that it could happen very quickly:
In fact, as I just pointed out, the retinas of complex cephalopod eyes are far more like those of limpets than vertebrate eyes.
1. That the original kind (or ancestor) has a lot more genetic information than the descendants, carrying many dormant genes and chromosomes, allowing the descendants to come about without any additional genetic information.
He's wrong about that, too. Darwin himself recognized this. In fact, he spends a good deal of time discussing variation and the nature of variation. Darwin's idea was that variation plus natural selection explains evolution. The Modern synthesis says that mutation (variation) plus natural selection explains evolution.
Natural selection is not in contention here; he's not saying something has been added to "natural selection", but that something has been added in addition to natural selection, i.e. mutation.
Therefore he is not saying natural selection is by chance.
In order for a population to evolve a new trait, an individual must first gain the trait by mutation, adding it to the population. Where did he confuse individuals and populations?
Yep. We need to always remember that individuals don't evolve; populations evolve. Your guy doesn't seem to have that quite clear.
No. Evolutionary theory only makes claims about the way living things change. It makes no claims about the origin of living things.
You need to pay more attention to what is written.
You are not paying attention to the math! The odds of getting two heads are the product of the individual probabilities that all the genes required to give you two heads are changed!
The fallacy here is in supposing it all has to happen at once. We know by observation that it does not.
Well, if you get all your cards out in the same order after shuffling every time, you could be a good magician!
Let's see... take a deck of cards, shuffle it well, and deal out the cards one by one, noting the order. The likelihood of that order is one divided by about:
80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000
Yet it happens every time. I'm not impressed.
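That number is just 52 factorial; anyone can check the arithmetic directly. A minimal sketch (Python, purely for illustration):

```python
import math

# Number of distinct orderings of a shuffled 52-card deck: 52!
orderings = math.factorial(52)
print(orderings)  # a 68-digit number, roughly 8.07e67

# Any *particular* ordering has probability 1/52!, yet every shuffle
# produces some ordering. Improbability of a specific outcome is not
# improbability of there being an outcome at all.
print(1 / float(orderings))
```

This is the point of the card analogy: astronomically small probabilities for specific sequences are observed routinely.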
Read up on natural selection, and you'll understand how it is that "snake kind" can have this information, though a particular species within that kind might not.
In fact, even "design" enthusiasts admit that not all eyes have a common origin.
The infrared "eyes" of snakes are an example.
Actually, one can still walk and use the knee without a patella. The patella functions like one of those little extensions on construction cranes. It gives a mechanical advantage.
You can also lose a ligament or two and still use it, although with reduced function.
There are irreducibly complex biological systems, but they can easily evolve.
They are magnificent cameras! Keep hold of it for dear life!
I have a friend who is very much into photography, with his own darkroom. He had an old Leica that was quite rare, but he did something I wouldn't have done. He's a bit over 70 now (and very active), and decided that his eyes were no longer good enough to focus the camera properly, which at that age is fair enough. If it had been me, I'd have put it in a box, kept it safely, and maybe bought a modern camera with auto-focus. But, as I say, he did something I wouldn't have done: he sold it (and I think got a fair price for it), and used the money to buy a digital SLR with an automatic lens. On the positive side, at least he has a relatively nice camera that he can use; on the other hand, it's a shame he sold his Leica. It really was a work of art.
This I certainly don't doubt. They are very well machined lenses of excellent quality. Sadly, much of the consumer market nowadays compromises quality for a fast buck: they use poorer-quality machined lenses and then coat them. The end result is that they may reduce flare, but the picture you get from them can be noticeably poorer compared to what your Leica will provide. You could get your Leica lens coated, but I certainly wouldn't advise it, nor would I advise that you let anyone convince you otherwise. It would be the death of a masterpiece. Just keep using that hood.
Well, I've not confused them; they are directly related. Video camera designers do the same thing:
1. use a higher-powered lens to invert the image, removing some optical interference and thus increasing clarity.
2. wire the imaging device backwards to compensate for the inverted image.
1 + 2 = a high-clarity image that is the right way up. It is not a defect; it is the same good design that video camera manufacturers use!
Camera and telescope designers realised this design was the best configuration for resolving images with good resolution a long time ago; it's a shame that, in general, biologists haven't caught on!
Barbarian says:
Can you say "Leeuwenhoek"?
Barbarian observes:
Well, no. A designer would simply have used a better receptor cell, eliminating the need for a reversed retina, and thus removing the blind spot. There are other defects, all of them noticeable in terms of the way they evolved.
Have you found a better receptor cell configuration for viewing colour than that found in vertebrates?
Barbarian observes:
Cephalopods. And insects generally have much better color vision than vertebrates in general and mammals in particular.
This is certainly not true. All known cephalopods except one (the firefly squid) are colour blind; they see in monochrome. Therefore, if they can't see colour, it is impossible for them to have a better receptor cell configuration for viewing colour than vertebrates.
However, in the one species we know to have color vision, it works quite well. It happens to be bichromic, but obviously, the one set of color receptors it has show that one does not need to reverse the retina to have color vision.
So cephalopods cannot compete with vertebrates' colour depth or resolution.
True, but not relevant. If it happened to have two axes of color instead of one, it would be essentially as we are. And since we now have an example of an organism that has color receptors, but does not need the defect of a blind spot, we've got the point settled. BTW, insects have color vision, and do not need a reversed retina. It's not necessary for color vision.
Insects do not have the retinal surface area to even begin to compete with the colour resolution (actually, I don't know of any insect that has colour capability; if you can think of one, can you name it, together with the colour pigments it uses and the source of your information?)
Hmmm.... Let's see...
"Most vertebrates and insects, for example, have trichromatic color vision--that is, vision based on three colors. For vertebrates such as humans, those three colors are blue, green, and red, whereas for most insects the colors are ultraviolet, blue, and green. A flower that reflects only red wavelengths will thus appear red to people and to hummingbirds (important for the plant's chances of being pollinated) but black to most insects. (Some red flowers, such as poppies, also reflect ultraviolet light and thus attract insects.)"
http://www.findarticles.com/cf_dls/m113 ... html?term=
"We review the physiological, molecular, and neural mechanisms of insect color vision. Phylogenetic and molecular analyses reveal that the basic bauplan, UV-blue-green-trichromacy, appears to date back to the Devonian ancestor of all pterygote insects. There are variations on this theme, however. These concern the number of color receptor types, their differential expression across the retina, and their fine tuning along the wavelength scale."
http://visiongene.bio.uci.edu/ABARE01.html
of vertebrate eyes - it's just not physically possible.
Many insects cannot resolve images a couple of feet away, let alone a couple of hundred.
Most don't need to. However, there is one example that does:
I used to have a plant which was home to a small jumping spider. It definitely could see me from some distance away. I used to bring it live insects, to watch it stalk. I think it was eventually conditioned to associate my approach with lunch, as it would go into hunting mode when I came near.
Humans can resolve colour images at far greater distances than this, at high resolution, and the eagle even more so!
Resolution is not color vision.
So I'll ask the question again. To justify your claim, have you found a better receptor cell configuration for viewing colour than that found in vertebrates?
Yep. In cephalopods and arthropods, no blind spot is required. And they both have perfectly good color vision. In most arthropods, it's even trichromatic like ours.
Therefore, in practice, the blind spot is hardly a disadvantage. At least I can think of no scenario where it would be in reality.
In other words, a minor defect. Still defective, though.
No. Meaning that there is no disadvantage to having a blind spot; there is only the advantage of having a very clear, high-resolution colour image, and therefore there is no defect.
But since that is possible without the blind spot...
Barbarian observes:
A competent designer would simply avoid the problem in the first place, as in cephalopods. Maybe there are competing designers?
Well I'm sorry to say, with all due respect, this is arrogance in the highest degree.
I don't usually take it upon myself to make that kind of determination in others.
So far you have not demonstrated that there is any eye that can match the colour image resolution and quality of the vertebrate eye.
Read the cites. Arthropods do very well. Dragonflies have temporal image resolution many times our own. For us multiple images blur together, but for them, even cinema looks like a series of still photos.
In order to have such good colour resolution, there is going to be a high blood flow requirement making the blind spot necessary.
And yet insects have perfectly good color vision without such a requirement.
Barbarian cites another defect:
In humans, the center of vision is so densely packed with color receptors that our night vision is thereby impaired.
Yes, we have a wonderfully rich, high-resolution colour capability for viewing God's creation! Our night vision is quite good enough for our needs.
Unfortunately not. You have to be trained to compensate for the problem. When I was in the service, I had to learn the tricks to make the best of rather defective night-vision capabilities.
Besides, we have also been created with great intelligence and are thus able to make lights of our own too.
So it's not an overwhelming defect, but still defective.
The crystalline lens is flexible enough to focus over a useful range for about four decades. Then it starts to decline. As I entered my fifth decade, I found my arms too short to focus. A "design" that depends on distorting a flexible lens is not as useful as one that can adjust position.
For a living, growing creature it is necessary to have a lens that can grow as the eye grows. Furthermore, our lens is capable of focusing over a very wide range in a very limited space. A crystalline lens has no advantage, but it has certain disadvantages.
I've misled you again. "Crystalline lens" is the proper term for the lens in the eye, just behind the iris. Sorry.
Barbarian on selective quoting of Darwin:
You surely did. I have had creationists who have read such edited "quotes" assure me that even Darwin didn't think eyes could evolve. If I didn't know better, that is what I would conclude from reading half the truth.
It was different with me, though; I certainly didn't assure you that Darwin didn't think eyes could evolve.
You don't seem the type to be dishonest. But it happens often enough that you will generally be perceived so by a lot of people if you do it. Be careful, especially when talking to people in the sciences. Many of them are a little touchy about "quote mining", and may assume the wrong thing.
OK, I'm going to ignore your other examples of eye progression,
Barbarian observes:
Most creationists do. The fact that we see such evolution in a number of different phyla is inexplicable to creationists, but perfectly clear in terms of common descent.
No, there is nothing inexplicable here, phyla were created with the capability of producing great variety without adding any genes to their gene pools.
How does one explain that the superficially similar eyes of squid are, on close inspection, far more like the primitive cuplike eyes of the limpet than our own?
Not perfectly clear for evolution. Evolutionists are unable to explain the changes on a micro-biological level, such as, where did cephalopods get the genes from to form a lens in the eye?
Random mutations and natural selection. We've directly observed useful traits evolve that way.
I'm asking for an in depth answer, that shows how, new genes or more to the point, new pairs of genes can come into existence to give a cephalopod a lens in its eye.
Remember, evolution proceeds not by creating things de novo, but by recruiting old things to new uses. So how does that happen, without inactivating needed old genes?
Here's one way:
"We report the identification and characterization of the gene encoding the eighth and final human ribonuclease (RNase) of the highly diversified RNase A superfamily. The RNase 8 gene is linked to seven other RNase A superfamily genes on chromosome 14. It is expressed prominently in the placenta, but is not detected in any other tissues examined. Phylogenetic analysis suggests that RNase 7 is the closest relative of RNase 8 and that the pair likely resulted from a recent gene duplication event in primates. Further analysis reveals that the RNase 8 gene has incorporated non-silent mutations at an elevated rate (1.3 x 10^-9 substitutions/site/year) and that orthologous RNase 8 genes from 6 of 10 primate species examined have been deactivated by frameshifting deletions or point mutations at crucial structural or catalytic residues. The ribonucleolytic activity of recombinant human RNase 8 is among the lowest of members of this superfamily and it exhibits neither antiviral nor antibacterial activities characteristic of some other RNase A ribonucleases. The rapid evolution, species-limited deactivation and tissue-specific expression of RNase 8 suggest a unique physiological function and reiterates the evolutionary plasticity of the RNase A superfamily."
RNase 8, a novel RNase A superfamily ribonuclease expressed uniquely in placenta; Nucleic Acids Research, 2002, Vol. 30, No. 5, pp. 1169-1175.
Jianzhi Zhang, Kimberly D. Dyer and Helene F. Rosenberg
Gene duplication permits the evolution of new traits, while retaining old ones.
I am not asking you to provide the specific genes responsible, I'm just asking you to provide a general method on the micro-biological level. I ignored your other examples for the following reason:
simply because they do not help you: you have not understood the problem with gene mutations that Dr Gary Parker was trying to explain.
Barbarian observes:
We already know that mutations can produce new information and that this, with natural selection, can produce useful new traits. Even most creationists now admit that.
No, most creationists agree that mutations have the possibility of changing existing traits, into a variation on that trait (or in a mathematically highly improbable extreme case, change an existing trait into a completely different trait at the loss of the original). But at the end of the day, all this does, is produce variation within a kind, it does not cause one kind to eventually become another.
Microevolution (variation within a species) probably doesn't. But macroevolution (speciation) certainly does. The key is in fostering reproductive isolation. Then the two populations will gradually diverge. Eventually, higher taxa evolve. The Institute for Creation Research has endorsed John Woodmorappe's contention that new species, genera, and families evolve. That's a lot of variation. That would permit, for example, the evolution of humans and chimps from a common ancestor.
And there's no reason to doubt it. If you know of some sort of barrier beyond which any species cannot vary, I'd like to know what it is.
Yes, mutations can cause slight changes to existing genes, but in general, these are mostly harmful mutations. By harmful, I don't mean they'll kill you, or necessarily even be noticeable from one generation to the next, but nonetheless, usually the least they'll do is reduce performance. For example, as you'll know from your modern genetics textbooks, there are over 300 alleles of the hemoglobin gene. That's a lot of variation, but all those alleles produce hemoglobin, a protein for carrying oxygen in red blood cells (and none do it better than the normal allele).
Actually, as that large number of alleles suggests, most mutations don't do very much of anything with regard to fitness. A few are harmful. A very few are useful. That's all natural selection needs.
BTW, a useful new form of Apo-AIM was recently discovered in one clan of Italians. It was traced to a mutation in one individual, and it provides very good resistance to arteriosclerosis. Since the gene is duplicated, the original form still exists.
It's not that good mutations are theoretically impossible. Rather, the price is too high.
We used to think most mutations are bad, because the early studies were done with easily observable ones, with macro-effects, which are generally harmful. Most of us have a few mutations, and few of us ever know it.
I can name some very bad mutations; can you name some very good ones (if so, what are they)?
See above.
By concept and definition, alleles are just variants of a given gene, producing variation in a given trait. Mutations have been found to produce only alleles, which means they can produce only variation within kind (creation), not change from one kind to others over time (evolution).
No, gene duplication can, by introducing new loci, produce new genes.
Barbarian on the evolution of eyes:
It could take millions of years. But the evidence is that it does not. Simulations done by Dan-Erik Nilsson show that it could happen very quickly:
Firstly, they didn't show that it could happen quickly; this is a gross misunderstanding of what they have written, if what you have included is all there is to go on. They simply showed an eight-stage simulation (each stage with many steps) of how certain peripheral features of the eye could evolve. They showed that in a total of 1829 computational steps they could get the shape, lens and iris of an eye. This doesn't take into account any of the other differing features, such as the control muscles for the lens, the change in cells and complexity of the various retinas along the way (this is the big one), the vitreous fluid, and their various types.
None of that is necessary for an eye. Our eyes have a great deal of evolutionary history. But the cephalopod, with eyes as complex as ours, can be traced to an "eye" that is hardly more than a depression with a few sensitive cells. We can see that in annelids as well.
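For scale, the 1829 computational steps mentioned above can be turned into a generation count with a couple of lines. The figures below (1% change per step, a deliberately pessimistic 0.005% response per generation) are taken from Nilsson and Pelger's published model; this is only a back-of-envelope sketch of their calculation:

```python
import math

# Nilsson & Pelger's model: 1829 sequential steps, each a 1% change
# in some quantitative property of the evolving eye.
steps = 1829
total_change = 1.01 ** steps  # overall fold-change, roughly 8e7

# Their pessimistic assumption: selection shifts the trait by only
# 0.005% (a factor of 1.00005) per generation.
generations = math.log(total_change) / math.log(1.00005)

print(f"{total_change:.2e}-fold change in {generations:,.0f} generations")
# ~364,000 generations: well under a million years for small,
# fast-breeding aquatic animals.
```

The point is that even under these unfavourable assumptions, the model's eye evolves on a geologically short timescale.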
Barbarian observes:
There are perfectly good eyes without them.
And what are they then?
Less developed eyes.
Furthermore, I ask you again: where is the evidence that the evolution of the eye wouldn't require a lot of time?
http://taxonomy.zoology.gla.ac.uk/~rdmp ... sld015.htm
In fact, as I just pointed out, the retinas of complex cephalopod eyes are far more like those of limpets than vertebrate eyes.
Yes, but there are astonishing differences between cephalopods and limpets; the eyes are just an example.
Yep. A lot of evolution, but yet, the eyes are fundamentally on the same plan. This is what creationism cannot explain. Why, if there's "design", do the analogous structures have to be modified forms of simpler organisms in the same phylum? Science can explain this, but creationism cannot. Can you?
As you showed through your extract from Dan-Erik Nilsson, it would require a tremendous number of steps to explain even the differences in the eyes, and that's without going into any real depth! And to top it off, you even admit above that the differences between cephalopod and vertebrate retinas are far greater than the differences between cephalopods and limpets.
Right. That's only understandable in terms of common descent.
It amazes me that you believe that it is possible to account for such differences by mutation, without even incurring a great genetic load.
Natural selection does a very nice job of dealing with the incomplete dominant or dominant unfavorables. Hardy-Weinberg does a very good job of explaining why we can have a very large number of unfavorable recessives in a large population, with little "load." I do a simulation showing how it works with 8th grade students every year.
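The classroom simulation mentioned above can be approximated in a few lines under Hardy-Weinberg assumptions; the allele frequency used here is illustrative, not taken from any dataset:

```python
# Hardy-Weinberg: for a recessive allele 'a' at frequency q, genotypes
# occur at p^2 (AA), 2pq (Aa) and q^2 (aa) in a large random-mating
# population.
q = 0.01          # illustrative frequency of a harmful recessive allele
p = 1 - q

affected = q * q       # aa: the only genotype selection can act against
carriers = 2 * p * q   # Aa: phenotypically normal carriers

print(f"affected (aa): {affected:.4%}")   # 0.0100%
print(f"carriers (Aa): {carriers:.4%}")   # 1.9800%

# Fraction of all 'a' alleles hidden in unaffected carriers:
hidden = carriers / (carriers + 2 * affected)
print(f"hidden in carriers: {hidden:.1%}")  # 99.0%
```

Because almost all copies of a rare recessive sit in carriers that selection never sees, a large population can hold many such alleles with little "load".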
1. That the original kind (or ancestor) has a lot more genetic information than the descendants, carrying many dormant genes and chromosomes, allowing the descendants to come about without any additional genetic information.
Won't work. Let's take Adam and Eve. They could have had, at most, 4 alleles among them. Yet, you are talking about hundreds for many loci. The rest must have evolved. It's not a realistic idea to imagine some ur-cat packed to the gills with extra genes, just waiting to dump them as soon as it can speciate.
Barbarian asks:
Do you have any evidence supporting such an idea? Do you have any evidence that there are genes for infrared vision and poison fangs in garter snakes? Tell us about it.
You do know that natural selection can explain how it is that certain populations of snake have certain traits not found in others, don't you?
I gather that means you don't.
Further, you do also know that there is no reason to believe that skin colour in humans, infrared eyes in snakes, and suchlike evolved, don't you?
Since we can observe how they evolved by intermediates existing today, it's hard to see how it couldn't be.
I'll explain.
Take human skin colour, for example. First of all, it may surprise you to learn that all of us (except albinos) have exactly the same skin-colouring agent. It's a protein called melanin.
Actually, there are several. Eumelanin is black. Pheomelanin is yellow. The amount of blood flow to the skin provides a reddish tint, and I forget the minor ones.
We all have the same basic skin colour, just different amounts of it. (Not a very big difference, is it?) How long would it take to get all the variation in the amount of skin colour we see among people today? A million years? No. A thousand years? No. Answer: just one generation!
Let's see how that works.
(Punnett square assuming one set of alleles)
"When a dark skinned person has kids with a light skinned person, the kids do indeed tend to be intermediate in skin tone. Because skin color does not segregate as a single gene mendelian trait ( unlike green and yellow peas in Mendel's experiments) it is likely that the trait in humans involves the interaction of many variations in multiple enzymes from the melanin pathways."
http://www.madsci.org/posts/archives/oc ... .Gb.r.html
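The "one generation" point can be illustrated with a toy two-locus model (the standard textbook simplification; the loci and allele names here are hypothetical, not from the quoted page):

```python
from itertools import product
from collections import Counter

# Two unlinked loci; each uppercase allele adds one 'unit' of melanin.
# Both parents are middle-toned double heterozygotes (AaBb).
gametes = ["".join(g) for g in product("Aa", "Bb")]  # AB, Ab, aB, ab

def melanin_dose(genotype: str) -> int:
    """Number of melanin-adding (uppercase) alleles an offspring carries."""
    return sum(1 for allele in genotype if allele.isupper())

offspring = Counter(
    melanin_dose(g1 + g2) for g1, g2 in product(gametes, gametes)
)
for dose in sorted(offspring):
    print(dose, offspring[dose])  # 1:4:6:4:1, from lightest to darkest
```

Two middle-toned parents thus produce the full range from lightest to darkest in a single generation; with more loci involved, the distribution smooths toward the continuous variation actually seen in human skin tone.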
Evolutionists assume that all life started from one or a few chemically evolved life forms with an extremely small gene pool.
Some do. But it's not part of evolutionary theory. You're thinking about abiogenesis, not evolution. Evolutionary theory assumes living things exist, and describes how they vary.
For evolutionists, enlargement of the gene pool by selection of random mutations is a slow, tedious process that burdens each type with a "genetic load" of harmful mutations and evolutionary leftovers.
Why doesn't the large number of harmful alleles do us in? Here's why:
If the harmful one is dominant or mixed-dominant, natural selection takes it out. If it's recessive, it will fall to a level in the population where it is highly unlikely that it will be paired with another like it. Not surprisingly, the distribution of such traits tends to match the expected frequencies.
Creationists assume each created kind began with a large gene pool, designed to multiply and fill the earth with all its tremendous ecologic and geographic variety. (Ge 1:1-31.)
It's a religious notion, and without evidence, we aren't left with much to support it.
Neither creationist nor evolutionist was there at the beginning to see how it was done, but at least the creationist mechanism works, and it's consistent with what we observe.
No, as pointed out, it cannot explain things like analogous organs, intermediates, and observed instances of favorable mutations/speciations. It can't explain the fossil record, and many other things.
The evolutionist assumption doesn't work, and it's not consistent with what we presently know of genetics and reproduction.
But the vast majority of geneticists and biologists don't agree with you. That makes it difficult for your ideas. The evidence seems overwhelming.
According to the creation concept, each kind starts with a large gene pool present in created, probably "average-looking", parents. As descendants of these created kinds become isolated, each average-looking ("generalized") type would tend to break up into a variety of more "specialized" descendants adapted to different environments. Thus, the created ancestors of dogs, for example, have produced such varieties in nature as wolves, coyotes, and jackals. Human beings, of course, have great diversity, too. As the Bible says, God made of "one blood" (meaning also one gene pool) all the "tribes and tongues and nations" of the earth.
How, exactly, would Adam and Eve have 300 alleles for hemoglobin?
In the same way as genes allow such variations in the skin colour of humans, they can also explain why it is that some snakes have infrared eyes, some less so, and some not at all; why some have long fangs, some medium, and some short; why some are poisonous and others aren't. There is nothing to explain: all this variety can have come from the same common ancestor's gene pool.
It seems impossible to cram all those alleles into one organism at the same locus, unless you have polyploidy of incredible numbers. And animals rarely survive polyploidy. It's just not a credible argument.
Barbarian observes:
He's wrong about that, too. Darwin himself recognized this. In fact, he spends a good deal of time discussing variation and the nature of variation. Darwin's idea was that variation plus natural selection explains evolution. The Modern synthesis says that mutation (variation) plus natural selection explains evolution.
OK, this is getting very sad. Gary Parker says that "the modern evolutionist believes that new traits come about by chance, by random changes in genes called mutations, and not by use and disuse".
Traits are phenotypes. They are formed by genes or suites of genes, and they are the result of random mutations and natural selection. There are neutral mutations, and genetic drift, but they have little to do with common descent.
That is, the modern evolutionist says something (mutation) has been added, in addition to natural selection, to explain evolution. You say the Modern synthesis says that mutation plus natural selection explains evolution.
The confusion seems to be about organism/population and genotype/phenotype.
Yep. We need to always remember that individuals don't evolve; populations evolve. Your guy doesn't seem to have that quite clear.
In order for a population to evolve with a new trait, an individual must first gain the trait by mutation to add it to the population.
He could have gained some readability if he had you as a proofreader.
Let's just say that the modern synthesis says that mutation plus natural selection explains common descent. Would that be acceptable?
Barbarian on the notion that evolutionary theory is about the origin of life.
No. Evolutionary theory only makes claims about the way living things change. It makes no claims about the origin of living things.
If evolutionary theory "only makes claims about the way living things change" and "makes no claims about the origin of living things", then how do evolutionists explain the origin of living things?
They don't agree on that. Some think abiogenesis. Some think God magically created the first organisms. Some think they were seeded here from elsewhere, and God knows what else. It's just not part of evolutionary theory, so they don't necessarily all agree. Personally, since Genesis says that God created living things by natural means, I go with abiogenesis. But it's a religious belief; we don't have enough evidence to say for sure, yet.
Barbarian observes:
Somatic mutations (those happening to any cells other than germ cells) are meaningless to evolution.
Yes, such mutations are absolutely meaningless, and this was not in contention. The purpose of that section was to show the probability that we have a couple of cells with a mutated form of almost any gene, different from any other cell in our body. This is not a meaningless exercise, because it is the foundation for calculating the probability of a mutation (in addition to our inherited mutations) happening to one of the germ cells.
Well, since the usual number seems to be about two or three per person, that's pretty obvious by inspection.
You are not paying attention to the math! The odds of getting two heads are the product of the individual probabilities, just as the probability that all the genes required to give you two heads are changed is the product of the probabilities of each change!
Barbarian observes:
True, but if you already have tossed one head, the probability of the next coming up heads is 0.5. Try it. Toss two coins. Every time the first one comes up heads, toss the second one and see how often it comes up heads, too. If you do this many times, the result will approach 0.5.
OK, you should know better. I'll explain. You have a coin you haven't flipped yet; the odds of getting a head are 0.5. The odds of getting two heads in a row are 0.5 * 0.5 = 0.25.
But I was pointing out that once the first coin has been flipped and come up heads, the odds of ending up with two heads after the second flip are 0.5. It's true.
Now suppose you had already flipped it once and got a head. As you say, it is completely true that the probability of getting a head again is 0.5. But you have already flipped the coin once, where you had a probability of 0.5 of getting the first head.
Therefore the total probability of getting two heads in a row is still 0.5 (for the one you had already flipped) * 0.5 (for your second flip) = 0.25.
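The difference between the joint probability (two heads from scratch) and the conditional probability (a second head, given the first) can be checked with a quick simulation. A sketch, not from the thread; trial counts are arbitrary:

```python
import random

random.seed(0)
N = 100_000
both = first = 0
for _ in range(N):
    a = random.random() < 0.5   # first flip comes up heads?
    b = random.random() < 0.5   # second flip comes up heads?
    if a:
        first += 1
        if b:
            both += 1

print(both / N)      # joint P(two heads): close to 0.25
print(both / first)  # conditional P(second head | first head): close to 0.5
```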
Here's a way to consider that. In the US, there used to be a game show where one could pick one of three doors. One hid a great prize; the other two hid joke prizes. After the contestant picked, the emcee, who knew where the prize was, would open one of the remaining doors to reveal a joke prize, then ask the contestant whether to keep the original door or switch to the other unopened one.
Try it out. When you're done, you might think differently about the issue:
http://www.stat.sc.edu/~west/javahtml/L ... aDeal.html
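In case that applet link goes stale, here is a minimal sketch of the standard Monty Hall game it simulates (door layout and trial counts are my own choices, not from the thread):

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Fraction of wins in the standard Monty Hall game: one prize,
    three doors; the host always opens a non-prize, non-picked door."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the pick nor the prize.
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            # Switch to the remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

random.seed(0)
print(monty_hall(switch=True))   # close to 2/3
print(monty_hall(switch=False))  # close to 1/3
```

Switching wins about two times in three, which is the counter-intuitive result the applet demonstrates.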
Barbarian observes:
The fallacy here is in supposing it all has to happen at once. We know by observation that it does not.
When was that supposed? The paragraph was only explaining the first principles of probability; it was not saying that all the mutations had to happen in a single step.
If it doesn't, then conditional probabilities apply. And then it's (in the case above) 0.5, not 0.25.
Barbarian on Hoyle's Folly:
Let's see... take a deck of cards, shuffle it well, and deal out the cards one by one, noting the order. The likelihood of that order is one in 52!, which is about:
80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000
Yet it happens every time. I'm not impressed.
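The figure above is just 52 factorial, the count of possible orderings of a 52-card deck, and anyone can reproduce it:

```python
import math

# Number of possible orderings of a shuffled 52-card deck.
orderings = math.factorial(52)
print(orderings)
print(f"about {orderings:.3e}")  # roughly 8.066e+67
```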
Well if you get all your cards out in the same order after shuffling every time, you could be a good magician!
Barbarian observes:
True, but evolution doesn't. As I said, if you could look at a sponge, and infer a zebra, I'd be impressed. Otherwise, it's just finding an arrow in a tree, and drawing a bulls-eye around it.
OK, again, I'm going to have to explain. The likelihood of dealing any particular ordering of cards is roughly one in 8.07 × 10^67. But the likelihood of dealing one of those possible orderings is 1. That is why it is not impressive: since you have to deal some ordering, there is a probability of 1 that you will deal one of those 8.07 × 10^67 orderings. The impressive bit is: can you repeat that order after shuffling? That is the point Gary Parker was making.
Well, if an identical organism evolved a second time from a different phylum, I'd be floored, too. But that's not what evolution predicts.
You are coming from a different point. OK, I'm going to explain further, because I realise this may not be entirely clear. Let's say there is a 1 in 10^7 probability that a cell will mutate. Both you and Gary Parker are correct, but you are starting from different points. Gary Parker is saying that there is a 1 in 10^7 chance that any particular cell is going to mutate, so the probability of that cell mutating twice is the product of the probabilities, i.e. 1 in 10^7 times 1 in 10^7, which equals 1 in 10^14. You are starting at the point where a cell has already mutated and you know which one it is (the hand has already been dealt, and you know the combination). The probability that that cell will mutate again is 1 in 10^7. This is correct, but you have already had a mutation that itself had a 1 in 10^7 chance of occurring. So the total probability of that cell mutating twice is 1 in 10^7 (for the mutation you have already witnessed) times 1 in 10^7 (for the second mutation), which is still 1 in 10^14. Getting the first mutation is not impressive; there was a probability of 1 that one of the cells was going to mutate. But having that same cell mutate twice in a row, now that is impressive!
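The arithmetic in the paragraph above can be checked in a few lines (the 1-in-10^7 rate is the thread's illustrative figure, not a measured mutation rate):

```python
# Illustrative figure from the thread: a 1-in-10^7 chance that a given
# cell mutates (not a measured biological rate).
p = 1e-7

# Probability that one pre-specified cell mutates twice: product rule.
p_two_in_named_cell = p * p          # ~1e-14

# Probability of a second mutation given one has already been observed:
# the first mutation is no longer uncertain, so only one factor remains.
p_second_given_first = p             # 1e-7

print(p_two_in_named_cell)
print(p_second_given_first)
```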
True, but that's not what evolution requires. It merely requires that mutations occur in populations.
Now, this is not saying that all mutations have to happen at once; it is merely showing the principle of probability. One can apply this same principle of probability over generations with respect to genes. This is what the biologists and mathematicians did. (An engineer is also a mathematician: open an engineering book and you'll see it's applied maths; you need to be a mathematician to understand much of it. Furthermore, an engineer would be more than qualified to deal with probabilities.)
I've had to explain probability to more than one of them. In this case, as you suggested, the odds of any particular organism evolving are astonishingly small, but the probability of some organism evolving is 1.0. And it's not random, since natural selection weeds out a lot of variation that isn't fit enough to survive.
[quote:bcae3]Read up on natural selection, and you'll understand how it is that "snake kind" can have this information, though a particular species within that kind might not.[/quote:bcae3]
Barbarian observes:
There's no evidence for that assertion whatever.
The evidence is skin colour, for example. I also showed above how you can get certain populations limited to just dark, just light, or just medium-coloured skin, unless crossed.
It's a little more complicated than you've been led to believe. There isn't just one color, and there are a variety of ways genes can affect the way it's expressed.
Barbarian observes:
In fact, even "design" enthusiasts admit that not all eyes have a common origin.
True. For example, the eyes of cattle have their origin in cattle, the eyes in humans have their origins in humans. Cattle and humans do not have the same biological origin.
No, that's wrong. We can rather easily trace the way that primates and ungulates go back to a common ancestor. There's a huge amount of information pointing to that.
We see that anatomy, fossil evidence, genetics, and biochemistry all give us the same phylogenies. In science, confirmation by several independent lines of evidence is considered compelling.
You then directly said
Barbarian on ID people's opinion that not all eyes evolved from a common origin:
The infrared "eyes" of snakes are an example.
Who told you that?
Some ID types from ARN. They were very emphatic that most, but not all, eyes had a common origin. I wouldn't go that far; many of them are controlled by the same homeobox genes, but the genes didn't necessarily first express themselves as eyes.
Now for the mammalian knee...
Barbarian observes:
Actually, one can still walk and use the knee without a patella. The patella functions like one of those little extensions on construction cranes. It gives a mechanical advantage.
There are some mammalian knees that do not have patellas, but then the patella is not part of the irreducible four-bar mechanism.
Barbarian observes:
You can also lose a ligament or two, and still use it, although with reduced function
Actually, you cannot lose either of the cruciate ligaments (cruciate because they cross one another) without crippling the joint; they are essential to the working of the knee. Losing one of these ligaments causes the knee to deform. As soon as any pressure is applied, the knee buckles and gives way under the pressure, most likely causing even more damage. The knee cannot be used in this state; in terms of functionality it is useless. The four-bar mechanism in the mammalian knee is most certainly irreducibly complex. Furthermore, there has never been a working evolutionary model of how such a mechanism could evolve. Until you produce any evidence to support your claim that "The knee is not in any way irreducibly complex", we'll remain certain that it is irreducibly complex, just as those who have done in-depth studies on it know it to be.
Quote:
There are irreducibly complex biological systems, but they can easily evolve.
[quote:bcae3]No, they can't, that is the point: they are irreducibly complex, and there is no model to describe how they could evolve.[/quote:bcae3]
You've been misled. For example, Barry Hall once was working with E. coli to see how a new enzyme might evolve. That happened, but after it did, something remarkable happened: a regulator evolved. A regulator is a protein that does not allow expression of the enzyme unless the substrate is present. After the regulator evolved, the system was irreducibly complex, since the gene for the enzyme, the gene for the regulator, and the substrate were all required for the system to work. Removal of even one caused the system to cease working.
Take a look here for some other ways:
http://www.cs.colorado.edu/~lindsay/cre ... l#scaffold
[quote:bcae3]Take the example given, the human knee. The majority of biology textbooks call it a hinge joint, giving the impression that it is just a pivot between the upper and lower leg bones. However, this is a gross over-simplification, because the knee joint is actually a very sophisticated mechanism and a masterpiece of design.[/quote:bcae3]
Actually, it's a pretty shoddy bit of work, because it evolved as the hind leg of a quadruped. Hence, it's one of the parts that comes undone most often, because it's not quite adapted to our recent bipedality.
I used to be an ergonomist, and one of the more interesting jobs I had involved a rash of knee degenerations in workers at a conversion van company. Humans kneel, something you don't see much in any quadruped. The defect in this case was the way the articular cartilage responds to static loading, instead of the cyclical loading/release it evolved for.
I used to play soccer. I can tell you that knees are definitely not adapted to the stresses of running and sudden turns. My daughter plays for her university, and she's had some problems. Interestingly, she's doing an internship in sports orthopedics right now. I'll ask her about it and get back to you in more detail.
[quote:bcae3]More advanced books call the knee joint what it is, a condylar joint, so called because of the rolling and sliding action (articulation) between the upper leg bone (femur) and the main lower leg bone (tibia). This action is controlled by the cruciate ligaments. It is this mechanism that is irreducible, and as I say, there has been no model to explain how it could have evolved.[/quote:bcae3]
No, that's wrong. Birds, for example, have no median cruciate ligament. The particular arrangement in mammals is a specific adaptation from a more generalized arrangement in the diapsids.
The Barbarian said: Nope. In fact, we've tested it with knowns, and shown that it is very accurate. The Argon/Argon testing that got the date of the destruction of Pompeii was a good example. Likewise, C14 has been shown to be extremely accurate within the range it is used by scientists.
Physicists, however, can apply known laws to see how old much more ancient material is. Even more convincing, the data from numerous methods all give the same ages. That's compelling evidence.
Actually, the two species we have today are not quite like the ancient ones. But they are obviously evolved from them. Why this should be so is a mystery to creationism, but is understandable in terms of natural selection.
And #2 of those "unprovable assumptions" is the same one upon which nuclear science rests.
And unless you have some significant evidence proving all of nuclear science wrong, and can explain how life still exists despite such constants changing (the sun, for one, would not keep shining as it does), you can stick your argument someplace dark.
How many times have you opened the newspaper and read an article describing the discovery of a new fossil, archaeological find, or underground fault? After describing the nature of the discovery, the article explains how scientists are so thrilled with its confirmation of evolutionary theory. An age is reported, perhaps millions or hundreds of millions or even billions of years. No questions are raised concerning the accuracy of the date, and readers may feel they have no reason to question it either.
Did you ever wonder how they got that date? How do they know with certainty something that happened so long ago? It's almost as if rocks and fossils talk, or come with labels on them explaining how old they are and how they got that way.
As an earth scientist, one who studies rocks and fossils, I will let you in on a little secret. My geologic colleagues may not like me to admit this, but rocks do not talk! Nor do they come with explanatory labels.
I have lots of rocks in my own personal collection, and there are many more in the ICR Museum. These rocks are well cared for and much appreciated. I never did have a "pet" rock, but I do have some favorites. I've spent many hours collecting, cataloging, and cleaning them. Some I've even polished and displayed.
But what would happen if I asked my favorite rock, "Rock, how old are you?" "Fossil, how did you get that way?" You know what would happen--NOTHING! Rocks do not talk! They do not talk to me, and I strongly suspect they do not talk to my evolutionary colleagues either! So where then do the dates and histories come from?
The answer may surprise you with its simplicity, but the concept forms the key thrust of this book. I've designed this book to explain how rocks and fossils are studied, and how conclusions are drawn as to their histories. But more than that, I've tried to explain not only how this endeavor usually proceeds, but how it should proceed.
Before I continue, let me clearly state that evolutionists are, in most cases, good scientists, and men and women of integrity. Their theories are precise and elegant. It is not my intention to ridicule or confuse. It is my desire to show the mind trap they have built for themselves, and show a better way. Let me do this through a hypothetical dating effort, purely fiction, but fairly typical in concept.
How It's Usually Done
Suppose you find a limestone rock containing a beautifully preserved fossil. You want to know the age of the rock, so you take it to the geology department at the nearby university and ask the professor. Fortunately, the professor takes an interest in your specimen and promises to spare no effort in its dating.
Much to your surprise, the professor does not perform carbon-14 dating on the fossil. He explains that carbon dating can only be used on organic materials, not on rocks or even on the fossils, since they too are rock. Furthermore, in theory it's only good for the last few thousand years, and he suspects your fossil is millions of years old. Nor does this expert measure the concentrations of radioactive isotopes to calculate the age of the rock. Sedimentary rock, the kind which contains fossils, he explains, "cannot be accurately dated by radioisotope methods. Such methods are only applicable to igneous rocks, like lava rocks and granite". Instead, he studies only the fossil's shape and characteristics, not the rock. "By dating the fossil, the rock can be dated", he declares.
For purposes of this discussion, let's say your fossil is a clam. Many species of clams live today, of course, and this one looks little different from those you have seen. The professor informs you that many different clams have lived in the past, the ancestors of modern clams, but most have now become extinct.
Next, the professor removes a large book from his shelf entitled Invertebrate Paleontology, and opens to the chapter on clams. Sketches of many clams are shown. At first glance many seem similar, but when you look closely, they're all slightly different. Your clam is compared to each one, until finally a clam nearly identical to yours appears. The caption under the sketch identifies your clam as an index fossil, and explains that this type of clam evolved some 320 million years ago.
With a look of satisfaction and an air of certainty, the professor explains, "Your rock is approximately 320 million years old!"
Notice that the rock itself was not examined. It was dated by the fossils in it, and the fossil type was dated by the assumption of evolutionary progression over time. The limestone rock itself might be essentially identical to limestones of any age, so the rock cannot be used to date the rock. The fossils date the rock, and evolution dates the fossils.
You get to thinking. You know that limestones frequently contain fossils, but some seem to be a fine-grained matrix with no visible fossils. In many limestones the fossils seem to be ground to pieces, and other sedimentary rocks, like sandstone and shale, might contain no visible fossils at all. "What do you do then?" you ask.
The professor responds with a brief lecture on stratigraphy, information on how geologic layers are found, one on top of the other, with the "oldest" ones (i.e., containing the oldest fossils) beneath the "younger" ones. This makes sense, for obviously the bottom layer had to be deposited before the upper layers, "But how are the dates obtained?" "By the fossils they contain!" he says.
It turns out that many sedimentary rocks cannot be dated all by themselves. If they have no fossils which can be dated within the evolutionary framework, then "We must look for other fossil-bearing layers, above and below, which can help us sandwich in on a date", the prof says. Such layers may not even be in the same location, but by tracing the layer laterally, perhaps for great distances, some help can be found.
"Fortunately, your rock had a good fossil in it, an index fossil, defined as an organism which lived at only one time in evolutionary history. It's not that it looks substantially more or less advanced than other clams, but it has a distinctive feature somewhat different from other clams. When we see that kind of clam, we know that the rock in which it is found is about 320 million years old, since that kind of clam lived 320 million years ago", he says. "Most fossils are not index fossils. Many organisms, including many kinds of clams, snails, insects, even single-celled organisms, didn't change at all over hundreds of millions of years, and are found in many different layers. Since they didn't live at any one particular time, we cannot use them to date the rocks. Only index fossils are useful, since they are only found in one zone of rock, indicating they lived during a relatively brief period of geologic history. We know that because we only find them in one time period. Whenever we find them, we date the rock as of that age."
Let me pause in our story to identify this thinking process as circular reasoning. It obviously should have no place in science.
Instead of proceeding from observation to conclusion, the conclusion interprets the observation, which "proves" the conclusion. The fossils should contain the main evidence for evolution. But instead, we see that the ages of rocks are determined by the stage of evolution of the index fossils found therein, which are themselves dated and organized by the age of the rocks. Thus the rocks date the fossils, and the fossils date the rocks.
Back to our story. On another occasion, you find a piece of hardened lava, the kind extruded during a volcanic eruption as red hot, liquid lava. Obviously, it contains no fossils, since almost any living thing would have been incinerated or severely altered. You want to know the age of this rock too. But your professor friend in the geology department directs you to the geophysics department. "They can date this rock", you are told.
Your rock fascinates the geophysics professor. He explains that this is the kind of rock that can be dated by using radioisotope dating techniques, based on precise measurements of the ratios of radioactive isotopes in the rock. Once known, these ratios can be plugged into a set of mathematical equations which will give the absolute age of the rock.
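For what it's worth, the "set of mathematical equations" reduces, in the simplest case, to the standard decay-age formula t = (1/λ) ln(1 + D/P). A hedged sketch with invented numbers (and ignoring the K-Ar branching-ratio correction that a real analysis would include):

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Model age from the decay equation t = (1/lambda) * ln(1 + D/P),
    assuming a closed system and no initial daughter isotope."""
    decay_const = math.log(2) / half_life_years   # lambda, per year
    return math.log(1 + daughter_parent_ratio) / decay_const

# Illustrative numbers only: the K-40 half-life is about 1.25 billion
# years; the D/P ratio of 0.2 is invented for demonstration.
print(f"{radiometric_age(0.2, 1.25e9):.3e} years")  # roughly 3.3e8 years
```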
Unfortunately, the tests take time. The rock must be ground into powder, then certain minerals isolated. Then the rock powder must be sent to a laboratory where they determine the ratios and report back. A computer will then be asked to analyze the ratios, solve the equations, and give the age.
The geophysicist informs you that these tests are very expensive, but that since your rock is so interesting, and since he has a government grant to pay the bill, and a graduate student to do the work, it will cost you nothing. Furthermore, he will request that several different tests be performed on your rock. There's the uranium-lead method, the potassium-argon method, rubidium-strontium, and a few others. They can be done on the whole rock or individual minerals within the rock. They can be analyzed by the model or the isochron techniques. All on the same rock. "We're sure to get good results that way", you are told. The results will come back with the rock's absolute age, plus or minus a figure for experimental error.
After several weeks the professor calls you in and shows you the results. Finally you will know the true age of your rock. Unfortunately, the results of the different tests do not agree. Each method produced a different age! "How can that happen on a single rock?" you ask.
The uranium-lead method gave 500 plus or minus 20 million years for the rock's age.
The potassium-argon method gave 100 plus or minus 2 million years.
The rubidium-strontium model test gave 325 plus or minus 25 million years.
The rubidium-strontium isochron test gave 375 plus or minus 35 million years.
Then comes the all-important question. "Where did you find this rock? Were there any fossils nearby, above or below the outcrop containing this lava rock?" When you report that it was just below the limestone layer containing your 320 million year old fossil, it all becomes clear. "The rubidium-strontium dates are correct; they prove your rock is somewhere between 325 and 375 million years old. The other tests were inaccurate. There must have been some leaching or contamination." Once again, the fossils date the rocks, and the fossils are dated by evolution.
Our little story may be fictional, but it is not at all far-fetched. This is the way it's usually done. An interpretation scheme has already been accepted as truth. Each dating result must be evaluated--accepted or rejected--by the assumption of evolution. And the whole dating process proceeds within the backdrop of the old-earth scenario. No evidence contrary to the accepted framework is allowed to remain. Evolution stands, old-earth ideas stand, no matter what the true evidence reveals. An individual fact is accepted or rejected as valid evidence according to its fit with evolution.
Let me illustrate this dilemma with a few quotes from evolutionists. The first is by paleontologist Dr. David Kitts, a valued acquaintance of mine when we were both on the faculty at the University of Oklahoma. While a committed evolutionist, Dr. Kitts is an honest man, a good scientist, and an excellent thinker. He and many others express disapproval of the typical thinking of evolutionists.
"...the record of evolution, like any other historical record, must be construed within a complex of particular and general preconceptions, not the least of which is the hypothesis that evolution has occurred."
David B. Kitts, Paleobiology, 1979, pp. 353, 354.
"And this poses something of a problem: If we date the rocks by the fossils, how can we then turn around and talk about patterns of evolutionary change through time in the fossil record?"
Niles Eldredge, Time Frames, 1985, p. 52.
"A circular argument arises: Interpret the fossil record in the terms of a particular theory of evolution, inspect the interpretation, and note that it confirms the theory. Well, it would, wouldn't it?"
Tom Kemp, "A Fresh Look at the Fossil Record",
New Scientist, Vol. 108, Dec. 5, 1985, p. 67.
For more than three decades potassium-argon (K-Ar) and argon-argon (Ar-Ar) dating of rocks has been crucial in underpinning the billions of years for Earth history claimed by evolutionists. Critical to these dating methods is the assumption that there was no radiogenic argon (40Ar*) in the rocks (e.g., basalt) when they formed, which is usually stated as self-evident. Dalrymple argues strongly:
The K-Ar method is the only decay scheme that can be used with little or no concern for the initial presence of the daughter isotope. This is because 40Ar is an inert gas that does not combine chemically with any other element and so escapes easily from rocks when they are heated. Thus, while a rock is molten, the 40Ar formed by the decay of 40K escapes from the liquid.1
However, this dogmatic statement is inconsistent with even Dalrymple's own work 25 years earlier on 26 historic, subaerial lava flows, 20% of which he found had non-zero concentrations of 40Ar* (or excess argon) in violation of this key assumption of the K-Ar dating method.2 The historically dated flows and their "ages" were:
Hualalai basalt, Hawaii (AD 1800-1801) 1.6±0.16 Ma; 1.41±0.08 Ma
Mt. Etna basalt, Sicily (122 BC) 0.25±0.08 Ma
Mt. Etna basalt, Sicily (AD 1972) 0.35±0.14 Ma
Mt. Lassen plagioclase, California (AD 1915) 0.11±0.03 Ma
Sunset Crater basalt, Arizona (AD 1064-1065) 0.27±0.09 Ma; 0.25±0.15 Ma
Far from being rare, there are numerous reported examples of excess 40Ar* in recent or young volcanic rocks producing excessively old K-Ar "ages":3
Akka Water Fall flow, Hawaii (Pleistocene) 32.3±7.2 Ma
Kilauea Iki basalt, Hawaii (AD 1959) 8.5±6.8 Ma
Mt. Stromboli, Italy, volcanic bomb (September 23, 1963) 2.4±2 Ma
Mt. Etna basalt, Sicily (May 1964) 0.7±0.01 Ma
Medicine Lake Highlands obsidian,
Glass Mountains, California (<500 years old) 12.6±4.5 Ma
Hualalai basalt, Hawaii (AD 1800-1801) 22.8±16.5 Ma
Rangitoto basalt, Auckland, NZ (<800 years old) 0.15±0.47 Ma
Alkali basalt plug, Benue, Nigeria (<30 Ma) 95 Ma
Olivine basalt, Nathan Hills, Victoria Land,
Antarctica (<0.3 Ma) 18.0±0.7 Ma
Anorthoclase in volcanic bomb, Mt Erebus,
Antarctica (1984) 0.64±0.03 Ma
Kilauea basalt, Hawaii (<200 years old) 21±8 Ma
Kilauea basalt, Hawaii (<1,000 years old) 42.9±4.2 Ma; 30.3±3.3 Ma
East Pacific Rise basalt (<1 Ma) 690±7 Ma
Seamount basalt, near East Pacific Rise (<2.5 Ma) 580±10 Ma; 700±150 Ma
East Pacific Rise basalt (<0.6 Ma) 24.2±1.0 Ma
Other studies have also reported measurements of excess 40Ar* in lavas.4 The June 30, 1954 andesite flow from Mt. Ngauruhoe, New Zealand, has yielded "ages" up to 3.5±0.2 Ma due to excess 40Ar*.5 Austin investigated the 1986 dacite lava flow from the post-October 26, 1980, lava dome within the Mount St. Helens crater, which yielded a 0.35±0.05 Ma whole-rock K-Ar model "age" due to excess 40Ar*.6 Concentrates of constituent minerals yielded "ages" up to 2.8±0.6 Ma (pyroxene ultra-concentrate).
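The sensitivity being described can be illustrated with the same model-age formula: if all measured 40Ar is attributed to in-situ decay, even a tiny inherited 40Ar*/40K ratio in a zero-age rock yields a large apparent age. A sketch with illustrative ratios (not measurements from the cited studies, and omitting the branching-ratio correction for simplicity):

```python
import math

HALF_LIFE_K40 = 1.25e9                 # years, approximate
LAMBDA = math.log(2) / HALF_LIFE_K40   # decay constant, per year

def apparent_age(ar40_to_k40_ratio):
    """K-Ar model age when ALL measured 40Ar is attributed to decay."""
    return math.log(1 + ar40_to_k40_ratio) / LAMBDA

# Illustrative only: even small inherited (excess) 40Ar*/40K ratios in a
# freshly cooled, zero-age lava produce large non-zero model ages.
for ratio in (1e-4, 1e-3, 1e-2):
    print(f"ratio {ratio:g} -> apparent age {apparent_age(ratio):.2e} years")
```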
Investigators also have found that excess 40Ar* is trapped in the minerals within lava flows.7 Several instances have been reported of phenocrysts with K-Ar "ages" 1-7 millions years greater than that of the whole rock, and one K-Ar "date" on olivine phenocrysts in a recent (<13,000 year old) basalt was greater than 110 Ma.8 Laboratory experiments have tested the solubility of argon in synthetic basalt melts and their constituent minerals, with olivine retaining 0.34 ppm 40Ar*.9 It was concluded that the argon is held primarily in lattice vacancy defects within the minerals.
The obvious conclusion most investigators have reached is that the excess 40Ar* had to be present in the molten lavas when extruded, which then did not completely degas as they cooled, the excess 40Ar* becoming trapped in constituent minerals and the rock fabrics themselves. However, from whence comes the excess 40Ar*, that is, 40Ar which cannot be attributed to atmospheric argon or in situ radioactive decay of 40K? It is not simply "magmatic" argon. Funkhouser and Naughton found that the excess 40Ar* in the 1800-1801 Hualalai flow, Hawaii, resided in fluid and gaseous inclusions in olivine, plagioclase, and pyroxene in ultramafic xenoliths in the basalt, and was sufficient to yield "ages" of 2.6 Ma to 2960 Ma.10 Thus, since the ultramafic xenoliths and the basaltic magmas came from the mantle, the excess 40Ar* must initially reside there, to be transported to the earth's surface in the magmas.
Many recent studies confirm the mantle source of excess 40Ar*. Hawaiian volcanism is typically cited as resulting from a mantle plume, most investigators now conceding that excess 40Ar* in the lavas, including those from the active Loihi and Kilauea volcanoes, is indicative of the mantle source area from which the magmas came. Considerable excess 40Ar* measured in ultramafic mantle xenoliths from Kerguelen Archipelago in the southern Indian Ocean likewise is regarded as the mantle source signature of hotspot volcanism.11 Indeed, data from single vesicles in mid-ocean ridge basalt samples dredged from the North Atlantic suggest the excess 40Ar* in the upper mantle may be almost double previous estimates, that is, almost 150 times more than the atmospheric content (relative to 36Ar).12 Another study on the same samples indicates the upper mantle content of 40Ar* could be even ten times higher.13
Further confirmation comes from diamonds, which form in the mantle and are carried by explosive volcanism into the upper crust and to the surface. When Zashu et al. obtained a K-Ar isochron "age" of 6.0±0.3 Ga for 10 Zaire diamonds, it was obvious excess 40Ar* was responsible, because the diamonds could not be older than the earth itself.14 These same diamonds produced 40Ar/39Ar "age" spectra yielding a ~5.7 Ga isochron.15 It was concluded that the 40Ar is an excess component which has no age significance and is found in tiny inclusions of mantle-derived fluid.
All this evidence clearly shows that excess 40Ar* is ubiquitous in volcanic rocks, and that the excess 40Ar* was inherited from the mantle source areas of the magmas. This is not only true for recent and young volcanics, but for ancient volcanics such as the Middle Proterozoic Cardenas Basalt of eastern Grand Canyon.16 In the mantle, this 40Ar* predominantly represents primordial argon that is not derived from in situ radioactive decay of 40K and thus has no age significance.
In conclusion, the fact that all the primordial argon has not yet been released from the earth's deep interior is consistent with a young earth. Also, when samples of volcanic rocks are analyzed for K-Ar and Ar-Ar "dating," the investigators can never really be sure whether the 40Ar* in the rocks is from in situ radioactive decay of 40K since their formation, or whether some or all of it came from the mantle with the magmas. This could even be the case when the K-Ar and Ar-Ar analyses yield "dates" compatible with other radioisotopic "dating" systems and/or with fossil "dating" based on evolutionary assumptions. Furthermore, there would be no way of knowing, because the 40Ar* from radioactive decay of 40K cannot be distinguished analytically from primordial 40Ar not from radioactive decay, except of course by external assumptions about the ages of the rocks. Thus all K-Ar and Ar-Ar "dates" of volcanic rocks are questionable, as are fossil "dates" calibrated by them.
References
1 G.B. Dalrymple, The Age of the Earth (1991, Stanford, CA, Stanford University Press), p. 91.
2 G.B. Dalrymple, "40Ar/36Ar Analyses of Historic Lava Flows," Earth and Planetary Science Letters, 6 (1969): pp. 47-55.
3 For the original sources of these data, see the references in A.A. Snelling, "The Cause of Anomalous Potassium-Argon 'Ages' for Recent Andesite Flows at Mt. Ngauruhoe, New Zealand, and the Implications for Potassium-Argon 'Dating'," R.E. Walsh, ed., Proceedings of the Fourth International Conference on Creationism (1998, Pittsburgh, PA, Creation Science Fellowship), pp. 503-525.
4 Ibid.
5 Ibid.
6 S.A. Austin, "Excess Argon within Mineral Concentrates from the New Dacite Lava Dome at Mount St Helens Volcano," Creation Ex Nihilo Technical Journal, 10 (1996): pp. 335-343.
7 A.W. Laughlin, J. Poths, H.A. Healey, S. Reneau and G. WoldeGabriel, "Dating of Quaternary Basalts Using the Cosmogenic 3He and 14C Methods with Implications for Excess 40Ar," Geology, 22 (1994): pp. 135-138. D.B. Patterson, M. Honda and I. McDougall, "Noble Gases in Mafic Phenocrysts and Xenoliths from New Zealand," Geochimica et Cosmochimica Acta, 58 (1994): pp. 4411-4427. J. Poths, H. Healey and A.W. Laughlin, "Ubiquitous Excess Argon in Very Young Basalts," Geological Society of America Abstracts With Programs, 25 (1993): p. A-462.
8 P.E. Damon, A.W. Laughlin and J.K. Precious, "Problem of Excess Argon-40 in Volcanic Rocks," in Radioactive Dating Methods and Low-Level Counting (1967, Vienna, International Atomic Energy Agency), pp. 463-481.
9 C.L. Broadhurst, M.J. Drake, B.E. Hagee and T.J. Benatowicz, "Solubility and Partitioning of Ar in Anorthite, Diopside, Forsterite, Spinel, and Synthetic Basaltic Liquids," Geochimica et Cosmochimica Acta, 54 (1990): pp. 299-309. C.L. Broadhurst, M.J. Drake, B.E. Hagee and T.J. Benatowicz, "Solubility and Partitioning of Ne, Ar, Kr and Xe in Minerals and Synthetic Basaltic Melts," Geochimica et Cosmochimica Acta, 56 (1992): pp. 709-723.
10 J.G. Funkhouser and J.J. Naughton, "Radiogenic Helium and Argon in Ultramafic Inclusions from Hawaii," Journal of Geophysical Research, 73 (1968): pp. 4601-4607.
11 P.J. Valbracht, M. Honda, T. Matsumoto, N. Mattielli, I. McDougall, R. Ragettli and D. Weis, "Helium, Neon and Argon Isotope Systematics in Kerguelen Ultramafic Xenoliths: Implications for Mantle Source Signatures," Earth and Planetary Science Letters, 138 (1996): pp. 29-38.
12 M. Moreira, J. Kunz and C. Allègre, "Rare Gas Systematics in Popping Rock: Isotopic and Elemental Compositions in the Upper Mantle," Science, 279 (1998): pp. 1178-1181.
13 P. Burnard, D. Graham and G. Turner, "Vesicle-Specific Noble Gas Analyses of `Popping Rock': Implications for Primordial Noble Gases in the Earth," Science, 276 (1997): pp. 568-571.
14 S. Zashu, M. Ozima and O. Nitoh, "K-Ar Isochron Dating of Zaire Cubic Diamonds," Nature, 323 (1986): pp. 710-712.
15 M. Ozima, S. Zashu, Y. Takigami and G. Turner, "Origin of the Anomalous 40Ar-36Ar Age of Zaire Cubic Diamonds: Excess 40Ar in Pristine Mantle Fluids," Nature, 337 (1989): pp. 226-229.
16 S.A. Austin and A.A. Snelling, "Discordant Potassium-Argon Model and Isochron 'Ages' for Cardenas Basalt (Middle Proterozoic) and Associated Diabase of Eastern Grand Canyon, Arizona," in R.E. Walsh, ed., Proceedings of the Fourth International Conference on Creationism (1998, Pittsburgh, PA, Creation Science Fellowship), pp. 35-51.
Note: "Ma" represents a million years (Mega-annum); "Ga" represents a billion years (Giga-annum).
* Dr. Snelling is Associate Professor of Geology at ICR.
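For readers unfamiliar with the arithmetic the article is criticizing, the conventional K-Ar model-age equation can be sketched as follows. The decay constants are the standard published values; the function name is mine. The point at issue in the article is visible directly in the code: the method assumes all measured 40Ar* is radiogenic, so any inherited argon shows up as apparent age.

```python
import math

# Standard 40K decay constants (conventional values), per year:
LAMBDA_TOTAL = 5.543e-10   # total decay constant of 40K
LAMBDA_EC = 0.581e-10      # branch of 40K decay that produces 40Ar

def k_ar_model_age(ar40_star_per_k40: float) -> float:
    """Conventional K-Ar model age (in years) from the molar ratio 40Ar*/40K.

    Assumes ALL measured 40Ar* comes from in situ decay of 40K since the
    rock solidified; any excess (inherited) 40Ar* inflates the result.
    """
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_star_per_k40
    )

# A rock with zero radiogenic argon dates to zero years:
print(k_ar_model_age(0.0))
# But even a tiny 40Ar*/40K ratio yields a large apparent age:
print(k_ar_model_age(1.0e-4) / 1e6)  # on the order of a couple of Ma
```

This is why the whole dispute turns on where the measured 40Ar* came from: the equation itself cannot tell radiogenic argon from inherited argon.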
Actually he is right, you are wrong, and nothing has been said which questions the foundations of nuclear science. Furthermore, stars work on nuclear fusion reactions that have nothing to do with the half-lives of the atoms they contain. Stars can only fuse elements up the periodic table as far as iron, at which point fusion no longer releases energy, gravity overcomes the outward pressure, and the star collapses to a neutron star or, if it is of high enough mass, a black hole.
How exactly would Adam and Eve have 300 alleles for hemoglobin?
The Barbarian,
I will reply to your other post on Monday. It seems there are some issues where we have got our wires crossed. You seem to be under the impression that creationists believe that all living creatures today are the same as they were at creation (the alleged creationist "lawn" model as presented by Teaching about Evolution And The Nature Of Science), and that we do not believe that mutation and natural selection produce new species (I do not mean kinds); this is certainly not true. The kinds were created at the beginning, and within one generation there were many species of each kind, but over time mutation and natural selection have produced many more species from them. In such instances, though, while these are different species, they are not different kinds. It's not that we do not believe in change, but that change has limits.
In the classic textbook, Evolution, the late Theodosius Dobzhansky and three other famous evolutionists distinguish between SUBspeciation and TRANSspeciation. "Sub" is essentially variation within species, and "trans" is change from one species to another. The authors state their belief that one can "extrapolate" from variation within species to evolution between species. But they also admit that some of their fellow evolutionists believe that such extrapolation goes beyond all logical limits.
Creationists don't have a problem with subspeciation (even subspeciation due to mutation and natural selection), but they do with transspeciation.
It also seems that our definitions of species and kinds are different from yours (I think you mentioned that evolution does not have the idea of set kinds? Am I right in this?).
On Monday I'll explain the difference. And by the way in short...
He didn't. Creationists believe the other alleles that we find today are just variations on the original alleles that Adam and Eve had. These other alleles have come about by mutation. But the point is, it's still hemoglobin, and humans are still humans.
Maybe I need to learn a thing or two about verbally expressing myself in a way that can be understood without being misunderstood. I guess this takes experience.
And as a further note, although I certainly don't agree with you that the blind spot is a defect, I shouldn't have made the comment about your comment 'being the height of arrogance'. Whether or not I believe this to be true is irrelevant; such a comment should not have been written on a public message board. So I do apologise.
I was under the impression that main sequence stars normally burn elements up to carbon, with supergiants burning elements up to iron.
I was also under the impression that the forces during the collapse of a supergiant were sufficient to cook the elements heavier than iron.
Incidentally, your source (perhaps unknowingly) makes a major and very misleading omission regarding the dating of sedimentary rocks.
While index fossils are useful (and were used by the creationists who worked out the geological column) they do not "date" rocks. They merely indicate what period the rock is from.
Dating is normally done by indexing where the rock happens to be relative to igneous rock that can be dated by radioisotope analysis. Ideally, there are deposits above and below the strata to be dated.
As you probably know, there is very good agreement between the various methods used (assuming they don't violate good sampling procedure, as Austin did by including xenocrysts in his sample from Mt. St. Helens), and this sort of independent verification is compelling.
This is correct as far as I know. I was just trying to show that the mass limit for fusion is iron.
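The "iron limit" being discussed here can be illustrated with rounded textbook values for binding energy per nucleon (the figures below are approximate standard values; the dictionary is just my illustration). Fusion releases energy only while the product nucleus is more tightly bound than the fuel, so exothermic fusion stops near the peak of this curve, at iron:

```python
# Approximate binding energy per nucleon, in MeV (rounded textbook values).
BINDING_MEV_PER_NUCLEON = {
    "H-2":   1.11,
    "He-4":  7.07,
    "C-12":  7.68,
    "O-16":  7.98,
    "Si-28": 8.45,
    "Fe-56": 8.79,   # near the peak of the binding-energy curve
    "U-238": 7.57,   # very heavy nuclei are less tightly bound again
}

# The most tightly bound nuclide in this list sits at iron, which is
# why fusing past iron absorbs energy instead of releasing it.
most_bound = max(BINDING_MEV_PER_NUCLEON, key=BINDING_MEV_PER_NUCLEON.get)
print(most_bound)  # Fe-56
```

(The drop-off past iron is also why fission of very heavy nuclei, the other direction along the curve, releases energy.)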
Yes, this should be true for the atoms that are collapsing with the star. In the end, though, the pressure becomes so great that there are no longer protons, neutrons, and electrons; all the matter just collapses down to neutrons (a neutron star) or, if the star had a high enough mass to begin with, a black hole.
Now, as a supergiant collapses, the repulsive electrical forces between the particles of the star's outer layers may overcome gravity, and if this is correct, it should in theory cause a large, short-lived explosion (a supernova).
I guess in principle it could be true that atoms heavier than iron could form in this process. But whether or not they'd have an escape velocity high enough to eventually escape the collapsed star's gravitational well, I'm not entirely sure.
I think he does mention this, but I just didn't include it, because there is no radioisotope dating method that is reliable.
That is using the assumption that the evolutionary time scale is correct, in order to "help" date a rock. If the radioisotope methods were reliable, no such method of trying to locate the era of the rock would be required.
Thus someone has to add their bias (whether for creation or evolution) into the calculation when working out the age of the rock.
Yes. One can "date" a rock relative to neighbouring igneous rock that can be "dated" using radioisotope analysis. But since there is no reliable radioisotope method of dating igneous rock, no rock can be reliably dated.
So we come back to dating rocks relative to how old we believe the fossils contained in them to be.
Well no, generally there is quite a lot of disagreement between the various methods (even assuming they don't violate good sampling procedure, as Austin did by including xenocrysts in his sample from Mt. St. Helens).
Though they do seem to agree in general that long periods are involved. But even when they do agree on long ages, given the evidence we have from trying to date rocks of known age, it's not hard to figure out that these dating results are at best unreliable, and most likely completely wrong.
Radioisotope dating cannot reliably give us any information about the date of rocks, and thus the age of the earth.
Nice try. Latimeria chalumnae (the Comoran population) is identical to fossils that were supposedly dated back 200 million years,
while Latimeria menadoensis (the Sulawesi population) is simply a different geographical population of the same coelacanth species. Those who discovered the Sulawesi coelacanth did declare it to be a different species, but when it was actually compared to the Comoran population, they were not able to find differences in the morphological traits of the two populations.
http://www.ucmp.berkeley.edu/vertebrate ... anths.html
More importantly, evolutionists told us that the coelacanth was a missing link of sorts.
That is, it crawled with its fins rather than swam, which was supposed to be proof that fish crawled onto land.
We now know that the coelacanths swim, not crawl, and that the evolutionists were just making things up (as usual).