
[Old Earth] Evolution

The numerous dating methods aren't exact; you can't say "this mammoth is exactly 39,873 years old, 3 months, probably around tea time on Friday". It's always a few hundred years in either direction. It's one of the things that caused arguments about the Shroud of Turin: the dating gave a 400-500 year window (roughly 1100-1500), and that's been argued about ever since.
While it's not an exact science, dating does allow us to narrow things down. If it's done properly it will use several samples and even several techniques. Usually the data all give around the same age, and some idea of the age of the earth can be deduced.

There are two schools of Christian thought on the age of the earth: there are those, as Evanman says, who can see the planet as old, and those who refuse to believe it's more than 6,000 years old.
I'm not sure where the 6,000-year figure comes from; I think it was arrived at by adding up the ages given in the Bible and working backwards. Can anyone confirm that?
 
But, really, no one knows if they got it right or not. You can say it's accurate because of this, and we got that, and we use this advanced technique, but you just don't know.

But like I said, we also don't know how old the materials were during creation, or how long Adam and Eve were in Eden.

There are too many unanswerable questions to go into these things too accurately.
 
Actually, the two known species of modern coelacanths are unknown in the fossil record. They've evolved quite a bit, from freshwater fish to deep-water marine fish.

But scientists were surprised to learn that any representative of the group had survived. Like other "living fossils", coelacanths had found a niche that changed very little over the ages.

But, as noted before, evolutionary theory predicts that such species will change very little in such environments.

And as for the contention that radioisotope data isn't accurate, Argon/Argon dating was tested on rock from the eruption that buried Pompeii, and it got the date right.
 
Radiometric dating is a guess at best. There is no gold standard that can be used to verify it for anything that is supposed to be ancient. Recorded history only goes back less than 10,000 years. When we date something at 1 million years old (or more), there are no artifacts that we already know for a fact are 1 million years old to compare them to.

What the coelacanth shows us is that we were told something was over 65 million years old, when it was actually with us today.
 
Radiometric dating is a guess at best.

Nope. In fact, we've tested it with knowns, and shown that it is very accurate. The Argon/Argon testing that got the date of the destruction of Pompeii was a good example. Likewise, C14 has been shown to be extremely accurate within the range it is used by scientists.

There is no gold standard that can be used to verify it for anything that is supposed to be ancient.

Isochrons. We know they work, because we can show that they can accurately date known samples.

Recorded history only goes back less than 10,000 years. When we date something at 1 million years old (or more), there are no artifacts that we already know for a fact are 1 million years old to compare them to.

Physicists, however, can apply known physical laws to work out how old much more ancient material is. Even more convincing, the data from numerous methods all give the same ages. That's compelling evidence.
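To make the "known laws" point concrete, here is a minimal sketch of the decay arithmetic behind methods such as K-Ar dating; the half-life is the commonly quoted value for potassium-40, but the measured ratio is an invented illustrative number, not data from any real sample, and real K-Ar work also has to account for the fraction of decays that actually produce argon.

[code]
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    # Age from the decay law N(t) = N0 * exp(-lambda*t), assuming no
    # daughter isotope was present when the rock solidified.
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_parent_ratio) / decay_constant

# Illustrative only: K-40 (half-life ~1.25 billion years) with a measured
# daughter/parent ratio of 0.001 gives an age of roughly 1.8 million years.
print(radiometric_age(0.001, 1.25e9))
[/code]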

What the coelacanth shows us is that we were told something was over 65 million years old, when it was actually with us today.

Actually, the two species we have today are not quite like the ancient ones. But they are obviously evolved from them. Why this should be so is a mystery to creationism, but is understandable in terms of natural selection.
 
BTW, the lens diagram shows what is called "flare." It is solved, not by optical design, but by placing a hood on the lens that extends just short of the field of view of the lens on the film.

The use of multiple thin-layer metallic coatings on the lens will help, but will not completely reduce flare.

As a point of interest for you, flare isn't the same thing; nevertheless, it is still a problem for cameras, and the hood is the solution. I'll explain. Placing a hood on the lens stops intense light from the sun internally reflecting in the lens and interfering with the image.

If the hood were not there, then for a non-coated BK7 glass lens (the optical-quality glass most likely in your Leica), about 4% of the light hitting the air-glass interface will be reflected and 96% transmitted, and the same at the glass-air interface (in practice it's a bit different from this, because it also depends on the angle at which the photons strike the glass). So when light from the sun hits the glass, 96% will transmit into the lens and 4% will be reflected. As the light passes through the lens and hits the glass-air interface, 96% of the light transmitted at the first interface will carry on to the image plane, and 4% will be internally reflected back into the lens. When this internally reflected light hits the front of the lens, 96% of it will be transmitted out and 4% will be internally reflected back in towards the camera; of that, at the next interface, 4% will be reflected and 96% transmitted (due to the shape of the lens this will most likely now travel along the path of the light coming from the object you are trying to image), and you'll get the first silhouette of the sun on your photo. This process of internal reflection then continues, and you'll get a series of silhouettes of the sun on your photo, stepping down from the direction of the sun (the silhouettes vary in size because of the shape of the lens, and the spacing between them depends on the thickness of the lens and its power).

In practice there is an additional effect. What usually happens is that the first transmission of sunlight to get through the lens (also highlighting any artefacts on the lens surface) hits the optically uneven surface of the base of the lens tube; much of the light is absorbed, but the remainder scatters in the general direction of the image plane. This causes glare. As you'll know, putting the hood on the lens shields it from direct sunlight and prevents this problem.

The thin-layer metallic coatings you mentioned are anti-reflective coatings. As you are very correct in saying, they will help, but they will not completely get rid of flare. What these coatings do is use graded refractive indices to reduce reflection at the air-glass and glass-air interfaces from 4% down to maybe 1% (sometimes they only coat the latter interface, but this doesn't give such good results as coating both). In effect this reduces the amount of light internally reflected by the lens, and thus also cuts down on the effect of the silhouettes (though it doesn't help with glare). Incidentally, it doesn't have to be the sun that causes this; any bright light source can do it. I just used the sun as an example.
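To put rough numbers on the ghost-image argument, here is a minimal sketch of the bookkeeping for a single lens element, treating each successive ghost as picking up two extra surface reflections; the 4% and 1% figures are the uncoated and coated reflectances quoted above, and the model ignores angle of incidence and multi-element lenses.

[code]
def ghost_intensities(reflectance, count=3):
    # Relative brightness of successive ghost images ("silhouettes"):
    # each ghost makes two extra internal reflections before leaving.
    transmit = 1.0 - reflectance
    return [transmit**2 * reflectance**(2 * n) for n in range(1, count + 1)]

print(ghost_intensities(0.04))  # uncoated: ~1.5e-3, 2.4e-6, ...
print(ghost_intensities(0.01))  # coated:   ~9.8e-5, 9.8e-9, ...
[/code]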

The lens diagrams I drew for you give the solution to another problem. They show how to prevent photons from objects in the peripheral field of view from interfering with the object you are looking at. Basically, it increases optical clarity.


I have a Leica that was made the summer Hitler attacked Poland. (Being German, the Leitz people kept meticulous records of serial numbers and dates.)

They are magnificent cameras! Keep hold of it for dear life! I have a friend who is very much into photography, with his own darkroom. He had an old Leica that was quite rare, but he did something I wouldn't have done. He's a bit over 70 years old now (and very active), and decided that his eyes were not good enough any more to focus the camera properly, which at that age is fair enough. If it had been me, I'd have put it in a box and kept it safely, and maybe bought a modern camera with autofocus. But, as I say, he did something I wouldn't have done: he sold it (and I think got a fair price for it), and used the money to buy a digital SLR with an automatic lens. I guess on the positive side, at least he has a relatively nice camera that he can use; on the other hand, it's a shame he sold his Leica - it really was a work of art.

The lens is not coated, but as long as I have the proper shade in place, it puts most modern lenses available to consumers to shame.

This I certainly don't doubt. They are very well machined lenses of excellent quality. Sadly, much of the consumer market nowadays compromises quality for a fast buck: they use poorer-quality machined lenses and then coat them. The end result is that they may reduce flare, but the picture you get from them can be noticeably poorer by comparison with what your Leica will provide. You could get your Leica lens coated, but I certainly wouldn't advise it, nor would I advise that you allow anyone to convince you otherwise. It would be the death of a masterpiece. Just keep using that hood.

You've confused the fact that the retina is reversed with the fact that a convex lens forms an inverted real image if the object is beyond the focal point. They aren't related.

Well, I've not confused them; they are directly related. Video camera designers do the same thing:

1. use a higher-powered lens to invert the image, removing some optical interference and thus increasing clarity.

2. wire the imaging device backwards to compensate for the inverted image.

1+2 = a high-clarity image that is the right way up. It is not a defect; it is the same good design that video camera manufacturers use!
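As an illustration of point 2, here is a toy sketch of "wiring the sensor backwards" in software; the tiny array standing in for a sensor readout is purely hypothetical.

[code]
# A convex lens forms a real image rotated 180 degrees. Reading the sensor
# out in reverse row and column order compensates, which is the software
# equivalent of wiring the imaging device backwards.
image_on_sensor = [
    ["g", "f"],
    ["e", "d"],
]  # hypothetical 2x2 readout: upside down and mirrored

upright = [row[::-1] for row in image_on_sensor[::-1]]
print(upright)  # [['d', 'e'], ['f', 'g']]
[/code]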

Camera and telescope designers realised a long time ago that this design was the best configuration for resolving images with good resolution; it's a shame that, in general, biologists haven't caught on!

Can you say "Leeuwenhoek"? :D


Well, good point; he designed some good microscopes. At least there are some who have caught on, which leaves hope for the rest :D

Well no. A designer would simply have used a better receptor cell, eliminating the need for a reversed retina, and thus removing the blind spot. There are other defects, all of them noticeable in terms of the way they evolved.

Have you found a better receptor cell configuration for viewing colour than that found in vertebrates?

Cephalopods. And insects generally have much better color vision than vertebrates in general and mammals in particular.

This is certainly not true. All known cephalopods except one (the firefly squid) are colour blind; they see in monochrome. If they can't see colour, it is impossible for them to have a better receptor cell configuration for viewing colour than vertebrates. The only cephalopod that can see colour, the firefly squid, has limited colour discernment and resolution, so cephalopods cannot compete with vertebrate colour depth or resolution. Insects do not have the retinal surface area to even begin to compete with the colour resolution of vertebrate eyes (actually, I don't know of any insect that has colour capability; if you can think of one, can you name it, together with the colour pigments it uses and the source of your information?) - it's just not physically possible.
Many insects cannot resolve images a couple of feet away, let alone a couple of hundred. Humans can resolve colour images at far greater distances than this, at high resolution, and the eagle even more so! Thus, neither cephalopods nor insects can compete with the receptor cell configuration found in vertebrates. So I'll ask the question again: to justify your claim, have you found a better receptor cell configuration for viewing colour than that found in vertebrates?

Therefore, in practice, the blind spot is hardly a disadvantage. At least, I can think of no scenario where it would be one in reality.


In other words, a minor defect. Still defective, though.

No, I mean that there is no disadvantage to having a blind spot; there is only the advantage of having a very clear, high resolution colour image, and therefore there is no defect. Unless you can think of a practical example where the blind spot is a disadvantage (where the consequences are so great that it would be better to forgo the advantage it gives; I can't think of an example at all, let alone one of that magnitude), then to continue calling it a defect is more than a little naive, it's plain arrogant.

A competent designer would simply avoid the problem in the first place, as in cephalopods. Maybe there are competing designers?

Well, I'm sorry to say, with all due respect, this is arrogance of the highest degree. So far you have not demonstrated that there is any eye that can match the colour image resolution and quality of the vertebrate eye. In order to have such good colour resolution, there is going to be a high blood flow requirement, making the blind spot necessary. And since you are unable to give a practical example where the blind spot is a problem, let alone a big problem, you are less than qualified to make such a claim. There are no competing designers; each eye is designed for its environment.

In humans, the center of vision is so densely packed with color receptors that our night vision is thereby impaired.


Yes, we have a wonderfully rich, high resolution colour capability for viewing God's creation! Our night vision, for our needs, is quite good enough. Besides, we have also been created with great intelligence and are thus able to make lights of our own too.


The crystalline lens is flexible enough to focus over a useful range for about four decades. Then it starts to decline. As I entered my fifth decade, I found my arms too short to focus. A "design" that depends on distorting a flexible lens is not as useful as one that can adjust position.

For a living, growing creature it is necessary to have a lens that can grow as the eye grows. Furthermore, our lens is capable of focusing over a very wide range in a very limited space. A crystalline lens has no advantage, but it has certain disadvantages. Firstly, the creature would need to grow with a mechanism that could constantly change the size and focal length of the lens, and there would also need to be a mechanism to finely polish the hard crystalline material throughout this process; this would take up room in the eye. Secondly, the focusing mechanism would require more room in which to move the lens, taking up even more room in the eye. Thirdly, I can't think of a muscle configuration for the focusing mechanism that could uniformly and repeatedly move the lens in the required manner along the normal of the lens. Fourthly, when focusing quickly, the control muscles would have to give the lens momentum in order to move it into focus, and would then have to stop it; due to the elastic nature of muscles, you'd get a slight oscillation for a brief time as the muscles tried to stop the lens (note that no net momentum is required with the design we have in our eyes, so there is no such problem with our flexible lens). You'd need an additional PID feedback mechanism to try to prevent this. Such quick, continual focusing of the eyes, as is part of our everyday life, would also most likely cause rapid muscle fatigue in such a system. There would need to be many additions for a crystalline lens, and so far we've found no advantage.
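Since a PID feedback loop is mentioned above, here is a minimal sketch of what such a controller looks like; the toy plant model, gains, and target position are made-up numbers purely for illustration, not a model of any real focusing system.

[code]
def pid_step(error, prev_error, integral, dt, kp=2.0, ki=0.1, kd=0.05):
    # One step of a proportional-integral-derivative controller.
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, integral

position, integral, prev_error = 0.0, 0.0, 0.0
target, dt = 1.0, 0.01
for _ in range(2000):
    error = target - position
    output, integral = pid_step(error, prev_error, integral, dt)
    prev_error = error
    position += output * dt   # toy first-order "plant": output drives velocity
print(position)  # ends close to the target of 1.0
[/code]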

As I entered my fifth decade, I found my arms too short to focus.

A common problem: as one gets older, the ciliary muscles in the eyes become less able to accommodate the lens to short distances. But a crystalline lens wouldn't have helped you here, for the following reason. Let's say that even if there were the required mechanisms to naturally form a crystalline lens in your eyes, continually adjusting its shape as you grow; and even if there were a muscle configuration that could uniformly and repeatedly move the lens in the required manner; as one gets older, the control muscles would become limited in range in the same way that our ciliary muscles do. The end result would be a limited focal range, and you'd still need glasses to read a book at a comfortable distance from your eyes.

A "design" that depends on distorting a flexible lens is not as useful as one that can adjust position.

In what way is that? I can name a number of technical advantages to having a flexible lens. A design that depends on distorting a flexible lens is by far the more useful! Can you think of any advantage to having a crystalline lens?


You surely did. I have had creationists who have read such edited "quotes" assure me that even Darwin didn't think eyes could evolve. If I didn't know better, that is what I would conclude from reading half the truth.

I was different, though: I certainly didn't assure you that Darwin didn't think eyes could evolve. When challenged, I told you that he did give an explanation in terms of natural selection, but that he also realised it sounded absurd in the highest possible degree to suppose that the eye in all its complexity could have been formed by natural selection. However, I do agree that what I wrote could be misleading to those who don't know, so I'll bear this in mind in future conversations - I'll make sure that I let people know that he realised that to suppose the eye in all its complexity could be formed by natural selection seemed absurd in the highest possible degree, but that regardless of this, he continued to give an explanation anyway.

OK, I'm going to ignore your other examples of eye progression,

Most creationists do. The fact that we see such evolution in a number of different phyla is inexplicable to creationists, but perfectly clear in terms of common descent.

No, there is nothing inexplicable here: phyla were created with the capability of producing great variety without adding any genes to their gene pools. It is not perfectly clear for evolution. Evolutionists are unable to explain the changes at a micro-biological level, such as: where did cephalopods get the genes to form a lens in the eye? I'm asking for an in-depth answer that shows how new genes, or more to the point, new pairs of genes, can come into existence to give a cephalopod a lens in its eye. I am not asking you to provide the specific genes responsible; I'm just asking you to provide a general method at the micro-biological level. I ignored your other examples for the following reason:

simply because they do not help you, because you have not understood the problem with gene mutations that Dr Gary Parker was trying to explain.

We already know that mutations can produce new information and that this, with natural selection, can produce useful new traits. Even most creationists now admit that.

No, most creationists agree that mutations have the possibility of changing existing traits into a variation on that trait (or, in a mathematically highly improbable extreme case, changing an existing trait into a completely different trait at the loss of the original). But at the end of the day, all this does is produce variation within a kind; it does not cause one kind to eventually become another.
Yes, mutations can cause slight changes to existing genes, but in general these are mostly harmful mutations. By harmful, I don't mean they'll kill you, or necessarily even be noticeable from one generation to the next, but usually the least they'll do is reduce performance. For example, as you'll know from your modern genetics textbooks, there are over 300 alleles of the hemoglobin gene. That's a lot of variation, but all those alleles produce hemoglobin, a protein for carrying oxygen in red blood cells (and none do it better than the normal allele). It's not that good mutations are theoretically impossible; rather, the price is too high. I can name some very bad mutations; can you name some very good ones (and if so, what are they)? By concept and definition, alleles are just variants of a given gene, producing variation in a given trait. Mutations have been found to produce only alleles, which means they can produce only variation within a kind (creation), not change from one kind to another over time (evolution).

It could take millions of years. But the evidence is that it does not. Simulations done by Dan-Eric Nilsson show that it could happen very quickly:

Firstly, they didn't show that it could happen quickly; this is a gross misunderstanding of what they have written, if what you have included is all there is to go on. They simply presented an eight-stage simulation (each stage with many steps) showing how certain peripheral features of the eye could evolve. They showed that in a total of 1,829 computational steps they could get the shape, lens and iris of an eye. This doesn't take into account any of the other differing features, such as the control muscles for the lens, the change in cells and complexity of the various retinas along the way (this is the big one), the vitreous fluid, and their various types. Their research and simulation have not helped evolutionists at all; they have only highlighted how much difference there is even in basic features, and this is without adding in the additional detailed changes that would need to be applied to the neural pathways and optic lobes of the creature, the retinal structure, and so on. It is also without relating to how these changes could be applied in genes. It's merely a simulation that highlights just the tip of the iceberg when it comes to what would need to change to get from one stage to the next. So where is the evidence that it wouldn't require a lot of time?

There are perfectly good eyes without them.


And what are they, then? Furthermore, I ask you again: where is the evidence that the evolution of the eye wouldn't require a lot of time?

In fact, as I just pointed out, the retinas of complex cephalopod eyes are far more like those of limpets than vertebrate eyes.

Yes, but there are astonishing differences between cephalopods and limpets; their eyes are just one example. As you showed through your extract from Dan-Eric Nilsson, it would require a tremendous number of steps to explain even the differences in the eyes, and that's without going into any real depth! And to top it off, you even admit above that the differences between cephalopod and vertebrate retinas are far greater than the differences between cephalopod and limpet retinas. It amazes me that you believe it is possible to account for such differences by mutation, without even incurring a great genetic load.

1. That the original kind (or ancestor) had a lot more genetic information than the descendants, carrying many dormant genes and chromosomes, allowing the descendants to arise without any additional genetic information.

Do you have any evidence supporting such an idea? Do you have any evidence that there are genes for infrared vision and poison fangs in garter snakes? Tell us about it.

You do know that natural selection can explain how certain populations of snake have traits not found in others, don't you? Further, you do also know that there is no reason to believe that skin colour in humans, infrared eyes in snakes, or suchlike evolved, don't you? I'll explain.

Take human skin colour, for example. First of all, it may surprise you to learn that all of us (except albinos) have exactly the same skin-colouring agent. It's a pigment called melanin. We all have the same basic skin colour, just different amounts of it. (Not a very big difference, is it?) How long would it take to get all the variation in the amount of skin colour we see among people today? A million years? No. A thousand years? No. Answer: just one generation!

Let's see how that works. As you'll know from your genetics textbooks, the amount of skin colour we have depends on at least two pairs of genes. Let's call these genes A and B. People with the darkest skin colour have genes AABB as their genotype (set of genes for a trait); those with very light skins have aabb. People with two "capital-letter" genes would be "medium-skinned", and those with one or three such genes would be a shade lighter or darker.

[Image: variation.jpg - a genetic square showing the 16 possible skin-colour combinations from two AaBb parents]


Suppose we start with two medium-skinned parents, AaBb. The image I've included is a genetic square that shows the kind of children they could have. Less than half (only 6 of the 16 combinations) would be medium-skinned like their parents. Four each would be a shade darker or lighter. One in 16 of the children of medium-skinned parents (AaBb) would have the darkest possible skin colour (AABB), while the chances are also 1 in 16 that a brother or sister will have the very lightest skin colour (aabb). (For details, see Parker, Reynolds, and Reynolds.)
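For anyone who wants to check the arithmetic behind that genetic square, here is a minimal sketch that enumerates the 16 egg/sperm combinations from two AaBb parents; the simple "count the capital letters" model is the additive two-gene scheme described above, used purely for illustration.

[code]
from itertools import product
from collections import Counter

# Gametes an AaBb parent can produce (one allele from each gene pair).
gametes = [a + b for a, b in product("Aa", "Bb")]   # AB, Ab, aB, ab

# Count the "capital" alleles (0-4) in each of the 16 offspring combinations;
# more capitals means darker skin in the additive model above.
shades = Counter(sum(ch.isupper() for ch in egg + sperm)
                 for egg, sperm in product(gametes, gametes))

print(sorted(shades.items()))  # [(0, 1), (1, 4), (2, 6), (3, 4), (4, 1)]
[/code]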

All human beings have the same basic skin-colour agent (melanin), just different amounts of it. From parents created with medium skin colour, as diagrammed, all the variation we see today could be produced in just one generation. In the same way, plants and animals created with a mixture of genes could have filled all of the earth's ecologic and geographic variety. As people break up into groups, however, some groups would develop limited variability: only dark, only medium, or only light, as indicated.

The Bible doesn't tell us what skin colour our first parents had, but, from a design point of view, the "middle" makes a great beginning. Starting with medium-skinned parents (AaBb), it would take only one generation to produce all the variation we see in human skin colour today. In fact, this is the normal situation in India today. Some Indians are as dark as the darkest Africans, and some--perhaps a brother or sister in the same family--are as light as the lightest Europeans. It's not unknown to find Indian families that include members with every major skin colour you could see anywhere in the world.

But now notice what happens if human groups were isolated after creation. If those with very dark skins (AABB) migrate into the same areas and/or marry only those with very dark skins, then all their children will have very dark skins. (AABB is the only possible combination of AB egg and sperm cells, which are the only types that can be produced by AABB parents.) Similarly, parents with very light skins (aabb) can have only very light-skinned children, since they do not have any A or B genes to pass on. Even certain medium-skinned parents (AAbb or aaBB) can get "locked-in" to having only medium-skinned children, like the Orientals, Polynesians, and the Native Americans.

Where people with different skin colours get together again (as they do in the West Indies, for example), you find the full range of variation again--nothing less, but nothing more either, than what we started with. Clearly, all this is variation within kind. "Gene pool" refers to all the different genes that are present in a population. There are at least four skin-colour genes in the human gene pool: A, a, B, b. That total human gene pool for skin colour can be found in just one person with medium skin colour (AaBb), or it can be "spread around" among many people with visibly different skin colours. In fact, the gene frequencies (percentages of each gene) in one AaBb medium-skinned person are exactly the same as the gene frequencies in the 16 children showing five different skin colours. All that individual variation occurs within a gene pool that remains constant: variation within the created kind, consistent with creation!

What happened as the descendants of medium-skinned parents produced a variety of descendants? Evolution? Not at all. Except for albinism (the mutational loss of skin colour), the human gene pool is no bigger and no different now than the gene pool present at creation. As people multiplied, the genetic variability built right into the first created human beings came to visible expression. The darkest Nigerian and the lightest Norwegian, the tallest Watusi and the shortest Pygmy, the highest soprano and the lowest bass could have been present right from the beginning in two quite average-looking people. Great variation in size, colour, form, function, etc., would also be present in the two created ancestors of all the other kinds (plants and animals) as well.

Evolutionists assume that all life started from one or a few chemically evolved life forms with an extremely small gene pool. For evolutionists, enlargement of the gene pool by selection of random mutations is a slow, tedious process that burdens each type with a "genetic load" of harmful mutations and evolutionary leftovers. Creationists assume each created kind began with a large gene pool, designed to multiply and fill the earth with all its tremendous ecologic and geographic variety. (Ge 1:1-31.)

Neither creationist nor evolutionist was there at the beginning to see how it was done, but at least the creationist mechanism works, and it's consistent with what we observe. The evolutionist assumption doesn't work, and it's not consistent with what we presently know of genetics and reproduction. As a scientist, I prefer ideas that do work and do help to explain what we can observe, and that is creation!

According to the creation concept, each kind starts with a large gene pool present in created, probably "average-looking", parents. As descendants of these created kinds become isolated, each average-looking ("generalized") type would tend to break up into a variety of more "specialized" descendants adapted to different environments. Thus, the created ancestors of dogs, for example, have produced such varieties in nature as wolves, coyotes, and jackals. Human beings, of course, have great diversity, too. As the Bible says, God made of "one blood" (meaning also one gene pool) all the "tribes and tongues and nations" of the earth.

Varieties within a created kind have the same genes, but in different percentages. Take the Native Americans. Certain tribes have a high percentage of blood type A, but that type is quite rare among other tribes, including branches of the Cherokee Nation. The differences represent just differences in the genes carried by the founders of each tribe as people migrated across the North American continent.

Differences from average gene percentages can come to expression quickly in small populations (a process called "genetic drift"). Take the Pennsylvania Amish, for example. Because they are descendants of only about 200 settlers who tended to marry among themselves, they have a greater percentage than their ancestors of genes for short fingers, short stature, a sixth finger, and a certain blood disease. For similar reasons, plants and animals on opposite sides of mountains, rivers, or canyons often have variations in size, colour, ear-shape, or some such feature that makes them recognisable as variations of a given kind.
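A toy sketch of the genetic drift process described above; the population size, starting allele frequency, and number of generations are illustrative numbers only, not data on any real group.

[code]
import random

def drift(freq, pop_size, generations):
    # Wright-Fisher-style drift: each generation's gene pool is a random
    # sample of the previous one, so frequencies wander by chance alone.
    for _ in range(generations):
        carriers = sum(random.random() < freq for _ in range(2 * pop_size))
        freq = carriers / (2 * pop_size)
    return freq

# An allele at 5% among ~200 founders can end up noticeably more or less
# common some generations later, purely by chance.
print(drift(0.05, 200, 10))
[/code]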

All the different varieties of human beings can, of course, marry one another and have children. Many varieties of plants and animals also retain the ability to reproduce and trade genes, despite differences in appearance as great as those between St. Bernards and Chihuahuas. But varieties of one kind may also lose the ability to interbreed with others of their kind. For example, fruit flies multiplying through Central and South America have split up into many subgroups. And since these subgroups no longer interbreed, each can be called a separate species, though they are still of the same kind.

In the same way that genes allow such variation in the skin colour of humans, they can also explain why some snakes have infrared eyes, some less so, and some not at all; why some have long fangs, some medium, and some short; and why some are poisonous while others aren't. There is nothing left to explain: all this variety can have come from the gene pool of the same common ancestors.

Natural selection is not in contention here; he's not saying something has been added to "natural selection", but that something has been added in addition to natural selection, i.e. mutation.
He's wrong about that, too. Darwin himself recognized this. In fact, he spends a good deal of time discussing variation and the nature of variation. Darwin's idea was that variation plus natural selection explains evolution. The Modern synthesis says that mutation (variation) plus natural selection explains evolution.

OK, this is getting very sad. Gary Parker says that "the modern evolutionist believes that new traits come about by chance, by random changes in genes called mutations, and not by use and disuse." That is, the modern evolutionist says something (mutation) has been added in addition to natural selection to explain evolution. You say the modern synthesis says that mutation plus natural selection explains evolution.

Let's look at these two and see how they are both saying exactly the same thing. Gary Parker says that the modern evolutionist says:

Something (mutation) has been added in addition to natural selection to explain evolution.

i.e. mutation in addition to natural selection explains evolution.

i.e. mutation plus natural selection explains evolution.

You say:

mutation plus natural selection explains evolution.

Therefore, I don't know what your problem is, since he is saying exactly the same thing.

Therefore he is not saying natural selection is by chance.

It might seem like a small error, but confusing individuals and populations has some major problems as you get into the details.

Where did he confuse individuals and populations?

Yep. We need to always remember that individuals don't evolve; populations evolve. Your guy doesn't seem to have that quite clear.

In order for a population to evolve with a new trait, an individual must first gain the trait by mutation to add it to the population.

He could have gained some readability if he had you as a proofreader.

OK, again, this is getting very sad. I said (to which you seemed to agree)

"In order for a population to evolve with a new trait, an individual must first gain the trait by mutation to add it to the population."

i.e. the individual must first evolve the new trait, before it can then be passed on to the population.

i.e. the individual must evolve the trait first, before it can be passed onto the population.

i.e. the individual must first evolve (change) before the population can evolve (change).

If you do not agree with this, then would you like to explain how a population can evolve (change) without an individual first evolving that change? If you do agree with this, then you are in contradiction with your statement "We need to always remember that individuals don't evolve".

No. Evolutionary theory only makes claims about the way living things change. It makes no claims about the origin of living things.

If evolutionary theory "only makes claims about the way living things change." and "It makes no claims about the origin of living things.", then how do evolutionists explain the origin of living things?

You need to pay more attention to what is written.

I did. Somatic mutations (those happening to any cells other than germ cells) are meaningless to evolution.

Yes, such mutations are absolutely meaningless, and that was not in contention. The purpose of that section was to show the probability that we have a couple of cells with a mutated form of almost any gene, different from any other cell in our body. This is not a meaningless exercise, because it is the foundation for calculating the probability of a mutation (in addition to our inherited mutations) happening to a germ cell.

You are not paying attention to the math! The odds of getting two heads are the product of the probabilities that the genes required to enable you to have two heads are all changed!

True, but if you already have tossed one head, the probability of the next coming up heads is 0.5. Try it. Toss two coins. Every time the first one comes up heads, toss the second one and see how often it comes up heads, too. If you do this many times, the result will approach 0.5.

OK, you should know better. I'll explain. You have a coin you haven't flipped yet; the odds of getting a head are 0.5, and the odds of getting two heads in a row are 0.5*0.5 = 0.25.

Now suppose you had already flipped it once and got a head. As you say, it is completely true that the probability of getting a head again is 0.5. But you have already flipped the coin once, where you had a probability of 0.5 of getting the first head.
Therefore the total probability of getting two heads in a row is still 0.5 (for the flip you had already made) * 0.5 (for your second flip) = 0.25.
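A minimal sketch of the coin arithmetic both sides are describing; the 100,000 trials below are just an illustrative sample size.

[code]
import random

trials = 100_000
first_heads = 0          # flips where the first coin came up heads
both_heads = 0           # flips where both coins came up heads

for _ in range(trials):
    a, b = random.random() < 0.5, random.random() < 0.5
    if a:
        first_heads += 1
        if b:
            both_heads += 1

# Conditional probability: given the first head, the second is ~0.5 ...
print(both_heads / first_heads)   # ~0.5
# ... but the overall probability of two heads in a row is ~0.25.
print(both_heads / trials)        # ~0.25
[/code]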

The fallacy here is in supposing it all has to happen at once. We know by observation that it does not.

When was that supposed? The paragraph was only explaining the first principles of probability; it was not saying that all the mutations had to happen in a single step.


Let's see... take a deck of cards, and shuffle it well, and deal out the cards one by one, noting the order. The likelihood of that order is one, divided by about:
8.07 x 10^67 (that is, 52 factorial, the number of possible orderings of a 52-card deck)

Yet it happens every time. I'm not impressed.

Well, if you get all your cards out in the same order after shuffling every time, you could be a good magician!

True, but evolution doesn't. As I said, if you could look at a sponge and infer a zebra, I'd be impressed. Otherwise, it's just finding an arrow in a tree, and drawing a bulls-eye around it.

OK, again, I'm going to have to explain. The likelihood of you dealing any particular combination of cards is roughly one in 8.07 x 10^67. But the likelihood of dealing one of those possible combinations is 1. That is why it is not impressive: you have to deal some combination of cards, so there is a probability of 1 that you will deal one of those 8.07 x 10^67 possible combinations. The impressive bit is: can you repeat that order after shuffling? That is the point that Gary Parker was making; you are coming from a different point.

OK, I'm going to explain the point further, because I realise this may not be entirely clear. Let's say there is a 1 in 10^7 probability that a cell will mutate. Both you and Gary Parker are correct, but you are both coming from different starting points. Gary Parker is saying that there is a 1 in 10^7 chance that any particular cell is going to mutate; the probability of that cell mutating twice is the product of the probabilities, i.e. 1 in 10^7 times 1 in 10^7, which equals 1 in 10^14. You are starting from the point where a cell has already mutated and you know which one it is (you have already dealt the hand, and you know the combination). The probability that that cell is going to mutate again is 1 in 10^7. This is correct, but you have already had a mutation which had a 1 in 10^7 chance of happening. So the total probability of that cell mutating twice is 1 in 10^7 (for the mutation you have already witnessed) times 1 in 10^7 (for the probability of that cell having a second mutation), which is still 1 in 10^14. Getting the first mutation is not impressive (there was a probability of 1 that one of the cells was going to mutate), but having that cell mutate twice in a row, now that is impressive!

Now, this is not saying that all mutations have to happen at once; it is merely showing the principle of probability. One can apply this same principle of probability over generations with respect to genes. This is what the biologists and mathematicians did. (An engineer is also a mathematician - open an engineering book and you'll see it's applied math; you need to be a mathematician to understand much of it. Furthermore, an engineer would be more than qualified to deal with probabilities.)
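A minimal sketch of the two calculations referred to above, just to make the numbers concrete; the 1-in-10^7 mutation rate is the illustrative figure used in the text, not a measured value.

[code]
import math

# Number of possible orderings of a 52-card deck (52 factorial).
print(f"{math.factorial(52):.3e}")      # ~8.066e+67

# Probability of one particular cell mutating twice, assuming an
# illustrative 1-in-10^7 chance per mutation.
p = 1e-7
print(f"{p * p:.1e}")                   # 1.0e-14
[/code]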


Read up on natural selection, and you'll understand how it is that "snake kind" can have this information, though a particular species within that kind might not.


There's no evidence for that assertion whatever.

The evidence is skin colour, for example; I also showed above how you can get certain populations limited to just dark, just light, or just medium-coloured skin, unless crossed.

In fact, even "design" enthusiasts admit that not all eyes have a common origin.

True. For example, the eyes of cattle have their origin in cattle, and the eyes of humans have their origin in humans. Cattle and humans do not have the same biological origin.

You then directly said

The infrared "eyes" of snakes are an example.

Who told you that?

Now for the mammalian knee...

Actually, one can still walk and use the knee without a patella. The patella functions like one of those little extensions on construction cranes. It gives a mechanical advantage.

There are some mammalian knees that do not have patellas, but then the patella is not part of the irreducible four-bar mechanism.

You can also lose a ligament or two, and still use it, although with reduced function

Actually, you cannot lose any of the four cruciate ligaments (cruciate, because they are essential to the working of the knee). Losing one of these ligaments causes the knee to deform. As soon as any pressure is applied to the knee, it buckles and gives way under the pressure, most likely causing even more damage. The knee cannot be used in this state; in terms of functionality it is useless. The four-bar mechanism in the mammalian knee is most certainly irreducibly complex. Furthermore, there has never been a working evolutionary model of how such a mechanism could evolve. Until you produce any evidence to support your claim that "The knee is not in any way irreducibly complex", we'll remain certain that it is irreducibly complex, just as those who have done in-depth studies on it know it to be.

There are irreducibly complex biological systems, but they can easily evolve.

No, they can't; that is the point. They are irreducibly complex, and there is no model to describe how they could evolve. Take the example given, the human knee. The majority of biology textbooks call it a hinge joint, giving the impression that it is just a pivot between the upper and lower leg bones. However, this is a gross over-simplification, because the knee joint is actually a very sophisticated mechanism and a masterpiece of design. More advanced books call the knee joint what it is, a condylar joint, so called because of the rolling and sliding action (articulation) between the upper leg bone (femur) and the main lower leg bone (tibia). This action is controlled by the cruciate ligaments. It is this mechanism that is irreducible, and as I say, there has been no model to explain how it could have evolved.
 
BTW, the lens diagram shows what is called "flare." It is solved, not by optical design, but by placing a hood on the lens that extends just short of the field of view of the lens on the film.

The use of multiple thin-layer metallic coatings on the lens will help, but will not completely reduce flare.

(Discussion of reflection in internal elements)

Normally, yes. That's why they cement them together in groups when possible. Leitz used a balsam cement which, a half-century later, is prone to separation.

Barbarian observes:
I have a Leica that was made the summer Hitler attacked Poland. (Being German, the Leitz people kept meticulous records of serial numbers and dates.)

They are magnificent cameras! Keep hold of it for dear life!

Actually, mine is a relatively common model. A few years ago, you could have purchased it for less than $200. But they are amazing in their simplicity, reliability, and precision. Where we rely on electronics, everything in one of these is mechanical.

I have a friend who is very much into photography, with his own darkroom. He had an old Leica that was quite rare, but he did something I wouldn't have done. He's a bit over 70 years old now (and very active), and decided that his eyes were not good enough any more to focus the camera properly, which at that age is fair enough. If it had been me, I'd have put it in a box and kept it safely, and maybe bought a modern camera with autofocus. But, as I say, he did something I wouldn't have done: he sold it (and I think got a fair price for it), and used the money to buy a digital SLR with an automatic lens. I guess on the positive side, at least he has a relatively nice camera that he can use; on the other hand, it's a shame he sold his Leica - it really was a work of art.

Bad deal. His digital (much as I've learned to love them) will decline in value. Ironically, rangefinders are much easier to manual focus than any SLR. And more accurate. I've learned to live with autofocus, but I'll never love it.

Barbarian observes:
The lens is not coated, but as long as I have the proper shade in place, it puts most modern lenses available to consumers to shame.

This I certainly don't doubt. They are very well machined lenses of excellent quality. Sadly, much of the consumer market nowadays compromises quality for a fast buck: they use poorer-quality machined lenses and then coat them. The end result is that they may reduce flare, but the picture you get from them can be noticeably poorer by comparison with what your Leica will provide. You could get your Leica lens coated, but I certainly wouldn't advise it, nor would I advise that you allow anyone to convince you otherwise. It would be the death of a masterpiece. Just keep using that hood.

I have a Summitar from the early 40s that's coated. It doesn't seem to be as good, somehow. I believe my Zeiss lenses for my Contax II are even better. These date from the 40s to early 50s, I think. The Contax was the only real alternative to Leica for 35mm photography then, but Zeiss took an entirely different approach. Leica was simple, and assembled by craftsmen. Contax was mass produced, and engineered to be rugged but complex.

And now that we've bored everyone, I'll deal with the on-topic stuff in the next post.
 
Barbarian observes:
You've confused the fact that the retina is reversed with the fact that a convex lens forms an inverted real image if the object is beyond the focal point. They aren't related.

Well, I've not confused them; they are directly related. Video camera designers do the same thing:

1. use a higher-powered lens to invert the image, removing some optical interference and thus increasing clarity.

2. wire the imaging device backwards to compensate for the inverted image.

1+2 = a high-clarity image that is the right way up. It is not a defect; it is the same good design that video camera manufacturers use!

It's not about inversion. The brain doesn't really care which way the image is presented; even if prism goggles are worn to reverse the image to the eye, the brain will automatically compensate after a very short time.

Rather, the vertebrate retina is formed so that the nerves actually come into the retina from the front, instead of from the back. I'd be really surprised to learn that CCDs work that way.

Camera and telescope designers realised a long time ago that this design was the best configuration for resolving images with good resolution; it's a shame that, in general, biologists haven't caught on!

Barbarian says:
Can you say "Leeuwenhoek"?

Barbarian observes:
Well, no. A designer would simply have used a better receptor cell, eliminating the need for a reversed retina, and thus removing the blind spot. There are other defects, all of them noticeable in terms of the way they evolved.

Have you found a better receptor cell configuration for viewing colour than that found in vertebrates?

Barbarian observes:
Cephalopods. And insects generally have much better color vision than vertebrates in general and mammals in particular.

This is certainly not true. All known cephalopods except one (the firefly squid) are colour blind; they see in monochrome. If they can't see colour, it is impossible for them to have a better receptor cell configuration for viewing colour than vertebrates.

However, in the one species we know to have color vision, it works quite well. It happens to be bichromic, but obviously the one set of color receptors it has shows that one does not need to reverse the retina to have color vision.

So cephalopods cannot compete with the vertebrates colour depth or resolution.

True, but not relevant. If it happened to have two axes of color instead of one, it would be essentially as we are. And since we now have an example of an organism that has color receptors, but does not need the defect of a blind spot, we've got the point settled. BTW, insects have color vision, and do not need a reversed retina. It's not necessary for color vision.

Insects do not have the retinal surface area to even begin to compete with the colour resolution (actually I don't know of any insect that has colour capability, if you can think of one, then can you name it together with the colour pigments it uses, and the source of your information)

Hmmm.... Let's see...

"Most vertebrates and insects, for example, have trichromatic color vision--that is, vision based on three colors. For vertebrates such as humans, those three colors are blue, green, and red, whereas for most insects the colors are ultraviolet, blue, and green. A flower that reflects only red wavelengths will thus appear red to people and to hummingbirds (important for the plant's chances of being pollinated) but black to most insects. (Some red flowers, such as poppies, also reflect ultraviolet light and thus attract insects.)"
http://www.findarticles.com/cf_dls/m113 ... html?term=

"We review the physiological, molecular, and neural mechanisms of insect color vision. Phylogenetic and molecular analyses reveal that the basic bauplan, UV-blue-green-trichromacy, appears to date back to the Devonian ancestor of all pterygote insects. There are variations on this theme, however. These concern the number of color receptor types, their differential expression across the retina, and their fine tuning along the wavelength scale."
http://visiongene.bio.uci.edu/ABARE01.html

of vertebrate eyes - it's just not physically possible.
Many insects cannot resolve images a couple of feet away, let alone a couple of hundred.

Most don't need to. However, there is one example that does:

[Image: salteyes.gif - the eyes of a jumping spider (salticid)]


I used to have a plant which was home to a small jumping spider. It definitely could see me from some distance away. I used to bring it live insects, to watch it stalk. I think it was eventually conditioned to associate my approach with lunch, as it would go into hunting mode when I came near.

Humans can resolve colour images at far higher distance than this at high resolution, and the eagle even more so!

Resolution is not color vision.

So I'll ask the question again. To justify your claim, have you found a better receptor cell configuration for viewing colour than that found in vertebrates?

Yep. In cephalopods and arthropods, no blind spot is required. And they both have perfectly good color vision. In most arthropods, it's even trichromatic like ours.

Therefore, in practice, the blind spot is hardly a disadvantage. At least I can think of no such scenario for where it would be in reality.

In other words, a minor defect. Still defective, though.

No, meaning that there is no disadvantage to having a blind spot, there is only the advantage of having a very clear high resolution colour image and therefore there is no defect.

But since that is possible without the blind spot...

Barbarian observes:
A competent designer would simply avoid the problem in the first place, as in cephalopods. Maybe there are competing designers?

Well I'm sorry to say, with all due respect, this is arrogance in the highest degree.

I don't usually take it upon myself to make that kind of determination in others.

So far you have not demonstrated that there is any eye that can match the colour image resolution and quality of the vertebrate eye.

Read the cites. Arthropods do very well. Dragonflies have temporal image resolution many times our own. For us multiple images blur together, but for them, even cinema looks like a series of still photos.

In order to have such good colour resolution, there is going to be a high blood flow requirement making the blind spot necessary.

And yet insects have perfectly good color vision without such a requirement.

Barbarian cites another defect:
In humans, the center of vision is so densely packed with color receptors that our night vision is thereby impaired.

Yes, we have a wonderfully rich, high resolution colour capability for viewing God's creation! Our night vision, for our needs, is quite good enough.

Unfortunately not. You have to be trained to compensate for the problem. When I was in the service, I had to learn the tricks to make the best of rather defective night-vision capabilities.

Besides, we have also been created with great intelligence and are thus able to make lights of our own too.

So it's not an overwhelming defect, but still defective.

The crystalline lens is flexible enough to focus over a useful range for about four decades. Then it starts to decline. As I entered my fifth decade, I found my arms too short to focus. A "design" that depends on distorting a flexible lens is not as useful as one that can adjust position.

For a living, growing creature it is necessary to have a lens that can grow as the eye grows. Furthermore, our lens is capable of focusing over a very wide range in a very limited space. A crystalline lens has no advantage, but it has certain disadvantages.

I've misled you again. "Crystalline lens" is the proper term for the lens in the eye, just behind the iris. Sorry.

Barbarian on selective quoting of Darwin:
You surely did. I have had creationists who have read such edited "quotes" assure me that even Darwin didn't think eyes could evolve. If I didn't know better, that is what I would conclude from reading half the truth.

I was different though, I certainly didn't assure you that Darwin didn't think eyes could evolve.

You don't seem the type to be dishonest. But it happens often enough that you will generally be perceived so by a lot of people if you do it. Be careful, especially when talking to people in the sciences. Many of them are a little touchy about "quote mining", and may assume the wrong thing.

OK, I'm going to ignore your other examples of eye progression,

Barbarian observes:
Most creationists do. The fact that we see such evolution in a number of different phyla is inexplicable to creationists, but perfectly clear in terms of common descent.

No, there is nothing inexplicable here: phyla were created with the capability of producing great variety without adding any genes to their gene pools.

How does one explain that the superficially similar eyes of squid are, on close inspection, far more like the primitive cuplike eyes of the limpet than our own?

Not perfectly clear for evolution. Evolutionists are unable to explain the changes on a micro-biological level, such as, where did cephalopods get the genes from to form a lens in the eye?

Random mutations and natural selection. We've directly observed useful traits evolve that way.

I'm asking for an in-depth answer that shows how new genes, or more to the point, new pairs of genes, can come into existence to give a cephalopod a lens in its eye.

Remember, evolution proceeds, not by creating things de novo, but by recruiting old things to new uses. So how does that happen, without inactivating needed old genes?

Here's one way:

"We report the identification and characterization of the gene encoding the eighth and final human ribonuclease (RNase) of the highly diversified RNase A superfamily. The RNase 8 gene is linked to seven other RNase A superfamily genes on chromosome 14. It is expressed prominently in the placenta, but is not detected in any other tissues examined. Phylogenetic analysis suggests that RNase 7 is the closest relative of RNase 8 and that the pair likely resulted from a recent gene duplication event in primates. Further analysis reveals that the RNase 8 gene has incorporated non-silent mutations at an elevated rate (1.3 x 10–9 substitutions/site/year) and that orthologous RNase 8 genes from 6 of 10 primate species examined have been deactivated by frameshifting deletions or point mutations at crucial structural or catalytic residues. The ribonucleolytic activity of recombinant human RNase 8 is among the lowest of members of this superfamily and it exhibits neither antiviral nor antibacterial activities characteristic of some other RNase A ribonucleases. The rapid evolution, species-limited deactivation and tissue-specific expression of RNase 8 suggest a unique physiological function and reiterates the evolutionary plasticity of the RNase A superfamily. "

RNase 8, a novel RNase A superfamily ribonuclease expressed uniquely in placenta; Nucleic Acids Research, 2002, Vol. 30, No. 5 1169-1175

Jianzhi Zhang, Kimberly D. Dyer and Helene F. Rosenberg

Gene duplication permits the evolution of new traits, while retaining old ones.

I am not asking you to provide the specific genes responsible, I'm just asking you to provide a general method on the micro-biological level. I ignored your other examples for the following reason:

simply because they do not help you, because you have not understood the problem with gene mutations that Dr Gary Parker was trying to explain.

Barbarian observes:
We already know that mutations can produce new information and that this, with natural selection, can produce useful new traits. Even most creationists now admit that.

No, most creationists agree that mutations have the possibility of changing existing traits, into a variation on that trait (or in a mathematically highly improbable extreme case, change an existing trait into a completely different trait at the loss of the original). But at the end of the day, all this does, is produce variation within a kind, it does not cause one kind to eventually become another.

Microevolution (variation within a species) probably doesn't. But macroevolution (speciation) certainly does. The key is in fostering reproductive isolation. Then the two populations will gradually diverge. Eventually, higher taxa evolve. The Institute for Creation Research has endorsed John Woodmorappe's contention that new species, genera, and families evolve. That's a lot of variation. That would permit, for example, the evolution of humans and chimps from a common ancestor.

And there's no reason to doubt it. If you know of some sort of barrier beyond which any species cannot vary, I'd like to know what it is.

Yes, mutations can cause slight changes to existing genes, but in general, these are mostly harmful mutations. By harmful, I don't mean they'll kill you, or necessarily even be noticeable from one generation to the next, but nonetheless, usually the least they'll do is reduce performance. For example, as you'll know from your modern genetics text books, there are over 300 alleles of the hemoglobin gene. That's a lot of variation, but all those alleles produce hemoglobin, a protein for carrying oxygen in red blood cells (and none do it better than the normal allele).

Actually, as that large number of alleles suggests, most mutations don't do very much of anything with regard to fitness. A few are harmful. A very few are useful. That's all natural selection needs.

BTW, a useful new form of Apo-AIM was recently discovered in one clan of Italians. It was traced to a mutation in one individual, and it provides a very good resistance to arteriosclerosis. Since the gene is duplicated, the original form still exists.

It's not that good mutations are theoretically impossible. Rather, the price is too high.

We used to think most mutations are bad, because the early studies were done with easily observable ones, with macroeffects, which are generally harmful. Most of us have a few mutations, and few of us ever know it.

I can name some very bad mutations; can you name some very good ones (and if so, what are they)?

See above.

By concept and definition, alleles are just variants of a given gene, producing variation in a given trait. Mutations have been found to produce only alleles, which means they can produce only variation within kind (creation), not change from one kind to others over time (evolution).

No, gene duplication can, by introducing new loci, produce new genes.

Barbarian on the evolution of eyes:
It could take millions of years. But the evidence is that it does not. Simulations done by Dan-Erik Nilsson show that it could happen very quickly:

Firstly, they didn't show that it could happen quickly; this is a gross misunderstanding of what they have written, if what you have included is all there is to go on. They simply showed an eight-stage simulation (each stage with many steps) of how certain peripheral features of the eye could evolve. They showed that in a total of 1,829 computational steps they could get the shape, lens and iris of an eye. This doesn't take into account any of the other differing features, such as the control muscles for the lens, the change in cells and complexity of the various retinas along the way (this is the big one), the vitreous fluid, and their various types.

None of that is necessary for an eye. Our eyes have a great deal of evolutionary history. But the cephalopod, with eyes as complex as ours, can be traced to an "eye" that is hardly more than a depression with a few sensitive cells. We can see that in annelids as well.

Barbarian observes:
There are perfectly good eyes without them.

And what are they then?

Less developed eyes.

Furthermore, I ask you again, where is the evidence that the evolution of the eye wouldn't require a lot of time?

http://taxonomy.zoology.gla.ac.uk/~rdmp ... sld015.htm

In fact, as I just pointed out, the retinas of complex cephalopod eyes are far more like those of limpets than vertebrate eyes.

Yes, but there are astonishing differences between cephalopods and limpets; the eyes are just one example.

Yep. A lot of evolution, but yet, the eyes are fundamentally on the same plan. This is what creationism cannot explain. Why, if there's "design", do the analogous structures have to be modified forms of simpler organisms in the same phylum? Science can explain this, but creationism cannot. Can you?

As you showed through your extract from Dan-Erik Nilsson, it would require a tremendous number of steps to explain even the differences in the eyes, and that's without going into any real depth! And to top it off, you even admit above that the differences between cephalopod and vertebrate retinas are far greater than the differences between cephalopods and limpets.

Right. That's only understandable in terms of common descent.

It amazes me that you believe that it is possible to account for such differences by mutation, without even incurring a great genetic load.

Natural selection does a very nice job of dealing with the incomplete dominant or dominant unfavorables. Hardy-Weinberg does a very good job of explaining why we can have a very large number of unfavorable recessives in a large population, with little "load." I do a simulation showing how it works with 8th grade students every year.
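
For what it's worth, here is a minimal sketch of the kind of result such a simulation gives (the single locus, the starting allele frequency, and the lethal selection are my own illustrative assumptions, not the classroom exercise itself):

[code]
# Minimal sketch: one locus, two alleles, random mating, selection removes
# affected individuals before they reproduce (illustrative assumptions only).
def next_gen_freq(q, recessive=True, s=1.0):
    """Harmful-allele frequency after one generation of selection."""
    p = 1.0 - q
    AA, Aa, aa = p * p, 2 * p * q, q * q          # Hardy-Weinberg genotype frequencies
    w_aa = 1.0 - s                                 # affected homozygotes
    w_Aa = 1.0 if recessive else 1.0 - s           # heterozygotes affected only if dominant
    mean_w = AA + Aa * w_Aa + aa * w_aa
    return (aa * w_aa + 0.5 * Aa * w_Aa) / mean_w  # allele frequency among survivors

q_recessive = q_dominant = 0.10
for _ in range(20):
    q_recessive = next_gen_freq(q_recessive, recessive=True)
    q_dominant = next_gen_freq(q_dominant, recessive=False)

print(f"after 20 generations: recessive q = {q_recessive:.4f}, dominant q = {q_dominant:.6f}")
[/code]

The output shows the pattern described: a harmful dominant is stripped out almost immediately, while a harmful recessive merely drifts down to a low frequency at which it is rarely expressed.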

1. That the original kind (or ancestor) has a lot more genetic information than the descendants, carrying many dormant genes and chromosomes, allowing for the descendants to come without any additional genetic information.

Won't work. Let's take Adam and Eve. They could have had, at most, 4 alleles among them. Yet, you are talking about hundreds for many loci. The rest must have evolved. It's not a realistic idea to imagine some ur-cat packed to the gills with extra genes, just waiting to dump them as soon as it can speciate.

Barbarian asks:
Do you have any evidence supporting such an idea? Do you have any evidence that there are genes for infrared vision and poison fangs in garter snakes? Tell us about it.

You do know that natural selection can explain how it is that certain populations of snake have certain traits not found in others don't you.

I gather that means you don't.

Further you do also know that there is no reason to believe that skin colour in humans, infrared eyes in snakes, or such like evolved don't you?

Since we can observe how they evolved by intermediates existing today, it's hard to see how it couldn't be.

I'll explain.
Take human skin colour, for example. First of all, it may surprise you to learn that all of us (except albinos) have exactly the same skin-colouring agent. It's a protein called melanin.

Actually, there are several. Eumelanin is black. Pheomelanin is yellow. The amount of blood flow to the skin provides a reddish tint, and I forget the minor ones.

We all have the same basic skin colour, just different amounts of it. (Not a very big difference, is it?) How long would it take to get all the variation in the amount of skin colour we see among people today? A million years? No. A thousand years? No. Answer: just one generation!

Let's see how that works.

(Punnett square assuming one set of alleles)

"When a dark skinned person has kids with a light skinned person, the kids do indeed tend to be intermediate in skin tone. Because skin color does not segregate as a single gene mendelian trait ( unlike green and yellow peas in Mendel's experiments) it is likely that the trait in humans involves the interaction of many variations in multiple enzymes from the melanin pathways."
http://www.madsci.org/posts/archives/oc ... .Gb.r.html
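
As a rough illustration of that point, here is a sketch assuming just two unlinked, additive loci for melanin (the real trait involves more genes, as the quote says; the two-locus model and allele names are purely illustrative):

[code]
# Skin tone treated as an additive trait at two unlinked loci (A and B); each
# capital allele adds one "unit" of melanin. Two AaBb (medium) parents can throw
# everything from aabb (lightest) to AABB (darkest) in a single generation.
from itertools import product
from collections import Counter

def gametes(genotype):
    """All equally likely gametes from a two-locus genotype such as 'AaBb'."""
    return [a + b for a, b in product(genotype[0:2], genotype[2:4])]

offspring = Counter()
for g1 in gametes("AaBb"):
    for g2 in gametes("AaBb"):
        dose = sum(1 for allele in (g1 + g2) if allele.isupper())  # 0..4 melanin units
        offspring[dose] += 1

for dose in sorted(offspring):
    print(f"{dose} melanin units: {offspring[dose]}/16 of offspring")   # 1:4:6:4:1
[/code]

One medium-by-medium cross already spreads the offspring across the full range, which is the simplified version of the "one generation" point being argued over above.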

Evolutionists assume that all life started from one or a few chemically evolved life forms with an extremely small gene pool.

Some do. But it's not part of evolutionary theory. You're thinking about abiogenesis, not evolution. Evolutionary theory assumes living things exist, and describes how they vary.

For evolutionists, enlargement of the gene pool by selection of random mutations is a slow, tedious process that burdens each type with a "genetic load" of harmful mutations and evolutionary leftovers.

Why doesn't the large number of harmful alleles do us in? Here's why:



If the harmful one is dominant or mixed-dominant, natural selection takes it out. If it's recessive, it will fall to a level in the population where it is highly unlikely that it will be paired with another like it. Not surprisingly, the distribution of such traits tends to match the expected frequencies.
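
To put a rough number on "highly unlikely that it will be paired with another like it" (the allele frequency here is purely illustrative):

[code]
# Under Hardy-Weinberg proportions, a recessive allele at frequency q is only
# expressed in the q*q fraction of the population that is homozygous for it.
q = 0.01                      # illustrative frequency of a harmful recessive allele
carriers = 2 * (1 - q) * q    # unaffected heterozygous carriers
affected = q * q              # homozygotes who actually express the trait
print(f"carriers: {carriers:.4f} (about 1 in {1 / carriers:.0f})")
print(f"affected: {affected:.6f} (about 1 in {1 / affected:.0f})")
[/code]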

Creationists assume each created kind began with a large gene pool, designed to multiply and fill the earth with all its tremendous ecologic and geographic variety. (Ge 1:1-31.)

It's a religious notion, and without evidence, we aren't left with much to support it.

Neither creationist nor evolutionist was there at the beginning to see how it was done, but at least the creationist mechanism works, and it's consistent with what we observe.

No, as pointed out, it cannot explain things like analogous organs, intermediates, and observed instances of favorable mutations/speciations. It can't explain the fossil record, and many other things.

The evolutionist assumption doesn't work, and it's not consistent with what we presently know of genetics and reproduction.

But the vast majority of geneticists and biologists don't agree with you. That makes it difficult for your ideas. The evidence seems overwhelming.

According to the creation concept, each kind starts with a large gene pool present in created, probably "average-looking", parents. As descendants of these created kinds become isolated, each average-looking ("generalized") type would tend to break up into a variety of more "specialized" descendants adapted to different environments. Thus, the created ancestors of dogs, for example, have produced such varieties in nature as wolves, coyotes, and jackals. Human beings, of course, have great diversity, too. As the Bible says, God made of "one blood" (meaning also one gene pool) all the "tribes and tongues and nations" of the earth.

How exactly would Adam and Eve have 300 alleles for hemoglobin?

In the same way as genes allow such variations in the skin colour of humans, they can also explain why it is that some snakes have infrared eyes, some less so, and some not at all; why some have long fangs, some medium and some short; why some are poisonous and why others aren't. There is nothing to explain; all this variety can have come from the same common ancestor's gene pool.

It seems impossible to cram all those alleles into one organism, at the same locus, unless you have polyploidy of incredible numbers. And animals rarely survive polyploidy. It's just not a credible argument.

Barbarian observes:
He's wrong about that, too. Darwin himself recognized this. In fact, he spends a good deal of time discussing variation and the nature of variation. Darwin's idea was that variation plus natural selection explains evolution. The Modern synthesis says that mutation (variation) plus natural selection explains evolution.

OK, this is getting very sad. Gary Parker says that "the modern evolutionist believes that new traits come about by chance, by random changes in genes called mutations, and not by use and disuse.".

Traits are phenotypes. They are formed by genes or suites of genes, and they are the result of random mutations and natural selection. There are neutral mutations, and genetic drift, but they have little to do with common descent.

That is, the modern evolutionist says something (mutation) has been added, in addition to natural selection, to explain evolution. You say the Modern synthesis says that mutation plus natural selection explains evolution.

The confusion seems to be about organism/population and genotype/phenotype.

Yep. We need to always remember that individuals don't evolve; populations evolve. Your guy doesn't seem to have that quite clear.

In order for a population to evolve with a new trait, an individual must first gain the trait by mutation to add it to the population.

He could have gained some readability if he had you as a proofreader.

Let's just say that the modern synthesis says that mutation plus natural selection explains common descent. Would that be acceptable?

Barbarian on the notion that evolutionary theory is about the origin of life.
No. Evolutionary theory only makes claims about the way living things change. It makes no claims about the origin of living things.

If evolutionary theory "only makes claims about the way living things change." and "It makes no claims about the origin of living things.", then how do evolutionists explain the origin of living things?

They don't agree on that. Some think abiogenesis. Some think God magically created the first organisms. Some think they were seeded here from elsewhere, and God knows what else. It's just not part of evolutionary theory, so they don't necessarily all agree. Personally, since Genesis says that God created living things by natural means, I go with abiogenesis. But it's a religious belief; we don't have enough evidence to say for sure, yet.

Barbarian observes:
Somatic mutations (those happening to any cells other than germ cells) are meaningless to evolution.

Yes, such mutations are absolutely meaningless, and this was not in contention. The purpose of that section was to show the probability that we have a couple of cells with a mutated form of almost any gene that is different from any other cell in our body. This is not a meaningless exercise, because it is the foundation for calculating the probability of a mutation (in addition to our inherited mutations) happening to one of the germ cells.

Well, since the usual number seems to be about two or three per person, that's pretty obvious by inspection.

You are not paying attention with the math! The odds of getting two heads are the product of the powers of probability that the genes required to enable you to have two heads are all changed!

Barbarian observes:
True, but if you already have tossed one head, the probability of the next coming up heads is 0.5. Try it. Toss two coins. Every time the first one comes up heads, toss the second one and see how often it comes up heads, too. If you do this many times, the result will approach 0.5.

OK, you should know better. I'll explain. You have a coin; you haven't flipped it yet, and the odds of you getting a head are 0.5. The odds of you getting two heads in a row are 0.5 × 0.5 = 0.25.

But I was pointing out that once the first head is flipped and comes up heads, the odds of you getting two heads after the second flip is 0.5. It's true.

Now, supposing you had already flipped it once and got a head: as you say, it is completely true that the probability of getting a head again is now 0.5. But you have already flipped the coin once, where you had a probability of 0.5 of getting the first head.
Therefore the total probability of getting two heads in a row is still 0.5 (for the one that you had already flipped) × 0.5 (for your second flip) = 0.25.
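
For what it's worth, both numbers are easy to check by brute force; a quick sketch (the trial count is arbitrary):

[code]
# P(two heads in a row, before any flips)            -> about 0.25
# P(second flip is heads, GIVEN the first was heads) -> about 0.5
import random

trials = 100_000
first_heads = both_heads = 0
for _ in range(trials):
    first = random.random() < 0.5
    second = random.random() < 0.5
    first_heads += first
    both_heads += first and second

print("P(two heads)             ~", both_heads / trials)         # joint probability
print("P(2nd heads | 1st heads) ~", both_heads / first_heads)    # conditional probability
[/code]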

Here's a way to consider that. In the US, there used to be a game show where one could pick one of three doors, one of which hid the great prize. After the contestant chose, the emcee would open one of the other doors to show a lesser prize, and then ask the person to decide whether to keep the door already chosen or switch to the remaining one.

Try it out. When you're done, you might think differently about the issue:
http://www.stat.sc.edu/~west/javahtml/L ... aDeal.html
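
If that link ever goes stale, the same experiment is easy to run yourself. A minimal sketch of the three-door game the applet simulates (one winning door; the host always opens a losing door you didn't pick):

[code]
# Monty Hall sketch: pick a door, the host opens a losing door you didn't pick,
# then you either stay with your first choice or switch to the remaining door.
import random

def play(switch, rounds=100_000):
    wins = 0
    for _ in range(rounds):
        winning = random.randrange(3)
        choice = random.randrange(3)
        opened = next(d for d in range(3) if d != choice and d != winning)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == winning)
    return wins / rounds

print("stay  :", play(switch=False))   # about 1/3
print("switch:", play(switch=True))    # about 2/3
[/code]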

Barbarian observes:
The fallacy here is in supposing it all has to happen at once. We know by observation that it does not.

When was that supposed? The paragraph was only explaining the first principles of probability, it was not saying that all the mutation had to happen in a single step.

If it doesn't, then conditional probabilities apply. And then it's (in the case above) 0.5, not 0.25.

Barbarian on Hoyle's Folly:
Let's see.. take a deck of cards, and shuffle it well, and deal out the cards one by one, noting the order. The likelihood of that order is one, divided by about:
80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000 (that is 52!, roughly 8.07 × 10^67)

Yet it happens every time. I'm not impressed.
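
For the record, that denominator is just 52 factorial, which anyone can check:

[code]
# Number of possible orderings of a 52-card deck.
import math
n = math.factorial(52)
print(n)                   # the exact 68-digit integer
print(f"{float(n):.3e}")   # roughly 8.066e+67
[/code]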

Well if you get all your cards out in the same order after shuffling every time, you could be a good magician!

Barbarian observes:
True, but evolution doesn't. As I said, if you could look at a sponge, and infer a zebra, I'd be impressed. Otherwise, it's just finding an arrow in a tree, and drawing a bulls-eye around it.


OK, again, I'm going to have to explain. The likelihood of you dealing any particular combination of cards is roughly one in 8.07 × 10^67. But the likelihood of dealing one of those possible combinations is 1. That is why it is not impressive. It's not impressive because you have to deal some combination of cards; there is a probability of 1 that you will deal one of those one-in-8.07 × 10^67 combinations. The impressive bit is: can you repeat this order after shuffling? That is the point that Gary Parker was making.

Well, if an identical organism evolved a second time from a different phylum, I'd be floored, too. But that's not what evolution predicts.

You are coming from a different point. OK, I'm going to explain the point further, because I realise this may not be entirely clear. Let's say that there is a 1 in 10^7 probability that a cell will mutate. Both you and Gary Parker are correct, but you are coming from different starting points.

Gary Parker is saying that there is a 1 in 10^7 chance that any particular cell is going to mutate, so the probability of that cell mutating twice is the product of the individual probabilities, i.e. 1 in 10^7 times 1 in 10^7, which equals 1 in 10^14.

You are starting at the point where you are saying: a cell has already mutated, and I know which one it is (I have already dealt the hand, and I know the combination). Therefore the probability that that cell is going to mutate again is 1 in 10^7. This is correct, but you have already had a mutation that had a 1 in 10^7 chance of occurring. So the total probability of that cell mutating twice is 1 in 10^7 (for the mutation that you have already witnessed) times 1 in 10^7 (for the probability of that cell having a second mutation), which is still 1 in 10^14. Getting the first mutation is not impressive; there was a probability of 1 that one of the cells was going to mutate. But having that cell mutate twice in a row, now that is impressive!

True, but that's not what evolution requires. It merely requires that mutations occur in populations.

Now, this is not saying that all mutations have to happen at once; it is merely showing the principle of probability. One can apply this same principle of probability over generations with respect to genes. This is what the biologists and mathematicians did. (An engineer is also a mathematician - open an engineering book and you'll see it's applied math; you need to be a mathematician to understand much of it. Furthermore, an engineer would be more than qualified to deal with probabilities.)

I've had to explain probability to more than one of them. In this case, as you suggested, the odds of any particular organism evolving are astonishingly small, but the probability of some organism evolving is 1.0. And it's not random, since natural selection weeds out a lot of variation that isn't fit enough to survive.
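
To make the "probability of some organism is 1.0" point concrete, here is a back-of-envelope sketch (the per-individual rate and the population sizes are round illustrative numbers, not measurements):

[code]
# If one specific mutation occurs with probability p per individual per generation,
# the chance that it appears SOMEWHERE in a population of N individuals is
# 1 - (1 - p)**N, which climbs toward 1 as N grows.
p = 1e-7
for N in (1_000, 1_000_000, 1_000_000_000):
    print(f"N = {N:>13,}: P(at least one carrier) = {1 - (1 - p) ** N:.4f}")
[/code]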

Read up on natural selection, and you'll understand how it is that "snake kind" can have this information, though a particular species within that kind might not.

Barbarian observes:
There's no evidence for that assertion whatever.

The evidence is skin colour, for example. I also showed above how you can get certain populations limited to just dark, just light, or just medium coloured skin, unless crossed.

It's a little more complicated than you've been led to believe. There isn't just one color, and there are a variety of ways genes can affect the way it's expressed.

Barbarian observes:
In fact, even "design" enthusiasts admit that not all eyes have a common origin.

True. For example, the eyes of cattle have their origin in cattle, the eyes in humans have their origins in humans. Cattle and humans do not have the same biological origin.

No, that's wrong. We can rather easily trace the way that primates and ungulates go back to a common ancestor. There's a huge amount of information pointing to that.

We see that anatomy, fossil evidence, genetics, and biochemistry all give us the same phylogenies. In science, confirmation by several independent lines of evidence is considered compelling.

You then directly said

Barbarian on ID people's opinion that not all eyes evolved from a common origin:
The infrared "eyes" of snakes are an example.

who told you that?

Some ID types from ARN. They were very emphatic that most, but not all, eyes had a common origin. I wouldn't go that far; many of them are controlled by the same homeobox genes, but the genes didn't necessarily first express themselves as eyes.

Now for the mammalian knee...

Barbarian observes:
Actually, one can still walk and use the knee without a patella. The patella functions like one of those little extensions on construction cranes. It gives a mechanical advantage.

There are some mammalian knees that do not have patellas, but then the patella is not part of the irreducible four-bar mechanism.

Barbarian observes:
You can also lose a ligament or two, and still use it, although with reduced function

Actually, you cannot lose any of the four cruciate ligaments (cruciate, because they are essential to the working of the knee). Losing one of these ligaments causes the knee to deform. As soon as any pressure is applied to the knee, the knee buckles and gives way under the pressure, most likely causing even more damage. The knee cannot be used in this state; in terms of functionality it is useless. The four-bar mechanism in the mammalian knee is most certainly irreducibly complex. Furthermore, there has never been a working evolutionary model of how such a mechanism could evolve. Until you produce any evidence to support your claim that "The knee is not in any way irreducibly complex", we'll remain certain that it is irreducibly complex, just as those who have done in-depth studies on it know it to be.

Quote:

There are irreducibly complex biological systems, but they can easily evolve.

No, they can't; that is the point. They are irreducibly complex, and there is no model to describe how they could evolve.

You've been misled. For example, Barry Hall once was working with E. coli to see how a new enzyme might evolve. That happened, but after it did, something remarkable happened. A regulator evolved. A regulator is a protein that does not allow expression of the enzyme unless the substrate is present. After the regulator evolved, the system was irreducibly complex, since the gene for the enzyme, the gene for the regulator, and the substrate were all required for the system to work. Removal of even one caused the system to cease working.

Take a look here for some other ways:
http://www.cs.colorado.edu/~lindsay/cre ... l#scaffold

Take the example given, the human knee. The majority of biology textbooks call it a hinge joint, giving the impression that it is just a pivot between the upper and lower leg bones. However, this is a gross over-simplification, because the knee joint is actually a very sophisticated mechanism and a masterpiece of design.

Actually, it's a pretty shoddy bit of work, because it evolved as the hind leg of a quadruped. Hence, it's one of the parts that comes undone most often, because it's not quite adapted to our recent bipedality.

I used to be an ergonomist, and one of the more interesting jobs I had involved a rash of knee degenerations in workers at a conversion van company. Humans kneel, although you don't see that much in any quadruped. The defect in this case was the way the articular cartilage responds to static loading, instead of the cyclical loading/release it evolved for.

I used to play soccer. I can tell you that knees are definitely not adapted to the stresses of running and sudden turns. My daughter plays for her university, and she's had some problems. Interestingly, she's doing an internship in sports orthopedics right now. I'll ask her about it and get back to you in more detail.

More advanced books call the knee joint what it is, a condylar joint, called so because of the rolling and sliding action (articulation) between the upper leg bone (femur) and the main lower leg bone (tibia). This action is controlled by the cruciate ligaments. It is this mechanism that is irreducible, and as I say, there has been no model to explain how it could have evolved.

No, that's wrong. Birds, for example, have no median cruciate ligament. The particular arrangement in mammals is a specific adaptation from a more generalized arrangement in the diapsids.
 
The Barbarian said:
Nope. In fact, we've tested it with knowns, and shown that it is very accurate. The Argon/Argon testing that got the date of the destruction of Pompeii was a good example. Likewise, C14 has been shown to be extremely accurate within the range it is used by scientists.

Carbon dating is notoriously unreliable. Lava from the 1980 Mt. St. Helens eruption was dated as being millions of years old. Dinosaur fossils known to be millions of years old have been dated as being only thousands of years old. Living crustaceans have been dated as being thousands of years old.

With many radiometric dating methods, the isotope concentrations CAN be measured very accurately, but isotope concentrations are not dates. To derive ages from isotope measurements, you have to make unprovable assumptions such as:

1) The starting conditions (for example, that there was no daughter isotope present at the start, or that we know how much was there).

2) Decay rates have always been constant.

3) Systems were closed or isolated so that no parent or daughter isotopes were lost or added.
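
For what it's worth, the arithmetic behind assumption 1 is easy to lay out. A rough sketch using the standard single parent/daughter age equation t = ln(1 + D/P) / λ, with made-up isotope amounts (only the half-life is a real figure, roughly that of K-40):

[code]
# Simple parent/daughter age equation, valid ONLY if the three assumptions above
# hold (no initial daughter, constant decay rate, closed system). The amounts
# below are made up purely to show how unrecognised initial daughter shifts the age.
import math

half_life = 1.25e9                    # years, roughly the K-40 half-life
lam = math.log(2) / half_life         # decay constant

def age(daughter, parent):
    return math.log(1 + daughter / parent) / lam

P = 1000.0            # measured parent atoms (arbitrary units)
D_decay = 50.0        # daughter actually produced by in-situ decay
D_inherited = 10.0    # daughter already present at the start (assumption 1 violated)

print(f"true age                         : {age(D_decay, P) / 1e6:.1f} Myr")
print(f"computed age, inherited daughter : {age(D_decay + D_inherited, P) / 1e6:.1f} Myr")
[/code]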


Physicists, however, can apply known laws to see how old much more ancient material is. Even more convincing, the data from numerous methods all give the same ages. That's compelling evidence.

If this were true, it would be compelling. The truth is, different dating methods give widely different results for the same objects. The results are completely unreliable. The scientists just keep using different methods, and testing different samples until they get the number that they have already decided is the right one.


Actually, the two species we have today are not quite like the ancient ones. But they are obviously evolved from them. Why this should be so is a mystery to creationism, but is understandable in terms of natural selection.

Nice try. Latimeria chalumnae (the Comoran population) is identical to fossils that were supposedly dated back 200 million years, while Latimeria menadoensis (the Sulawesi population) is simply a different geographical population of the same coelacanth species. Those who discovered the Sulawesi coelacanth did declare it to be a different species, but when it was actually compared to the Comoran population, they were not able to find differences in the morphological traits of the two populations.

http://www.ucmp.berkeley.edu/vertebrate ... anths.html

More importantly, evolutionists told us that the coelacanth was a missing link of sorts. That is, it crawled with its fins rather than swam, which was supposed to be proof that fish crawled onto land. We now know that coelacanths swim, not crawl, and that the evolutionists were just making things up (as usual).
 
The imprecision of lava and shell fish dating is explainable.
And #2 of those "unprovable assumptions" is the same ones upon which nuclear science rests. And unless you have some significant evidence proving all of nuclear science wrong and somehow life still exists despite the changing of such constants like the sun not going out or going dark when such constants change, you can stick your argument someplace dark.
 
OK, as a bit of background information, I'll let you know that I am a research scientist (to be precise, a physicist) presently undertaking research in experimental high-energy particle physics. Therefore my understanding of the broad base and theories within physics is going to be particularly high, and especially so within my own field. So when I read statements like:

And #2 of those "unprovable assumptions" is the same ones upon which nuclear science rests.

I cringe. You see, you are actually very, very wrong. Nuclear science, for example, takes a material whose atoms have a large nucleus (a lot of protons and neutrons), such as uranium-235 (in the case of nuclear reactors - the most famous outcome of nuclear science so far). When bombarding uranium-235 with slow neutrons, these neutrons can be captured by a uranium-235 nucleus, rendering it unstable toward nuclear fission (splitting of the atom). A fast neutron will not be captured, so neutrons must be slowed down by moderation to increase their capture probability in fission reactors. A single fission event can yield over 200 million times the energy of the neutron which triggered it!

To explain this process further: when a neutron of appropriate velocity hits a uranium-235 atom, the uranium accepts it and becomes uranium-236. Uranium-236 is extremely unstable and immediately splits (fission). Each fission reaction yields fragments of intermediate mass, an average of 2.4 neutrons, and an average energy of about 215 MeV. The energy is gained from the binding energy released as the atom splits. Now, iron is the most stable element in the periodic table; in order to get a useful net gain in energy from a nuclear reaction, one has to fuse (fusion) atoms that have a nucleus of mass below that of an iron nucleus, and split (fission) atoms that have a nucleus of mass above that of an iron atom. The further the mass of the nuclei of the starting atoms is from that of iron, the more energy can be yielded.
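
As a quick sanity check on the "over 200 million times" figure (assuming, as a round illustrative value, a thermal neutron energy of about 0.025 eV):

[code]
# Rough energy ratio for one U-235 fission triggered by a slow (thermal) neutron.
fission_energy_eV = 215e6       # ~215 MeV released per fission, the figure quoted above
thermal_neutron_eV = 0.025      # typical thermal neutron energy (illustrative)
print(f"energy out / energy in ~ {fission_energy_eV / thermal_neutron_eV:.1e}")
# roughly 8.6e9 -- comfortably "over 200 million times"
[/code]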

This is nuclear science research, as are nuclear bombs, work on fusion reactors (which have thus far been unsuccessful), and such like. Nuclear science is not concerned with fluctuations in the natural decay rate of atoms; it is concerned with yielding nuclear energy, and with the control of this process. Generally, the only time a nuclear scientist is worried about the natural decay rate of a material is when he has created an isotope of particularly long half-life and is trying to figure out what to do with it - i.e. how to safely dispose of it! Such isotopes are usually of little use to nuclear science.

Nuclear science doesn't care whether there is an inconsistency in the decay rates of materials with long half-lives!

And unless you have some significant evidence proving all of nuclear science wrong and somehow life still exists despite the changing of such constants like the sun not going out or going dark when such constants change, you can stick your argument someplace dark.

Actually he is right, you are wrong, and nothing has been said which questions the foundations of nuclear science. Furthermore, stars work on nuclear fusion reactions that have nothing to do with the half-lives of the atoms they contain. Stars can only burn their way up the periodic table to a maximum of iron, at which point the core is too cool, gravity overcomes, and the star collapses to a neutron star or, if it is of high enough mass, a black hole.

What do you think that has to do with the atoms that have long half-lives in the star? Yes, nothing. The thing that does concern the activity of a star is which atoms it is in the process of fusing (this relates to the output of the star).

If you want to know something really interesting: the amount of carbon-14 that we have in our atmosphere is directly related to the solar radiation emitted by the sun's surface (which is also directly related to the number of sunspots). Carbon-14 is created from nitrogen-14 in the upper atmosphere of the earth. Radiation from the sun collides with atoms in the atmosphere. These collisions generate energetic neutrons, which themselves have collisions of their own with nitrogen-14 atoms. When these neutrons collide with nitrogen-14 in the atmosphere, carbon-14 can be created. How? Well, nitrogen normally occurs in a seven-proton, seven-neutron state (nitrogen-14). When it collides with an energetic neutron it for an instant becomes nitrogen-15, with seven protons and eight neutrons, but it then immediately ejects a proton to become carbon-14, with six protons and eight neutrons.

Well, so what? The sun gives off solar radiation at an even rate, doesn't it? Well, no, it doesn't. The sun has roughly an 11-year cycle, and within this cycle there can be relatively large fluctuations in the amount of solar radiation it emits (due to the change in frequency of solar flares). This in turn affects the amount of C14 there is in the atmosphere, and thus the ratio of C14:C12 in the atmosphere (the ratio assumed in carbon dating to be roughly constant). This in turn upsets the carbon dating method: if an animal died in the middle of a period where C14:C12 was high (representing high solar radiation), it would give a different carbon dating result to one that died when the C14:C12 ratio was low (representing low solar radiation). This is because of the ratio of C14:C12 contained in the plants it was eating and the air it was breathing when it died.

Furthermore, on top of the 11-year cycle, the sun itself has been measured to fluctuate in activity even within this period. We are in one of these periods at the moment. The activity of the sun's surface is particularly high, when it should be in a dormant period of its 11-year cycle. This means that the C14:C12 ratio is going to be higher than usual for a decade (the effects of the recent activity will be seen in a few years, when the solar radiation from these recent flares reaches the earth, reacts with the atmosphere (forming C14), which in turn reacts with oxygen to become carbon(14) dioxide; this will then make its way down through the atmosphere to ground level, where it will be taken up by plants, which will then be eaten by animals), making the dating of animals that died a long time ago seem even older.

But it gets even more interesting than this, since the C-14:C-12 ratio we have now is far different to that which would have been here even a thousand years ago. This is because the sun's surface activity has been on the increase over the last century in particular. How do we know this? We have used Be-10 concentrations in polar ice to reconstruct the average sunspot activity level for the period between the year 850 and the present. The method uses physical models for the processes connecting the Be-10 concentration with the sunspot number.

The reconstruction shows reliably that the period of high solar activity during the last 60 years especially is unique throughout the last 1,150 years (C14 (per mille) rising from +10 to +30), in that sunspot activity (and thus solar radiation) is far higher; and that the activity over the past 250 years is an average high (+10, from a previous average of +0, with troughs going as low as -20 and peaks as high as +15, with much fluctuation). If you'd like more information on this, then see Phys. Rev. Lett. 91 (21.11.2003). What this means is that the only C14 references we have had since we started carbon dating have been very high; this in turn means that carbon dating, with this problem alone, is to all intents and purposes useless.
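
To see how sensitive a radiocarbon age is to the assumed starting ratio, here is a small sketch (the 5,730-year half-life is the standard figure; the measured fraction and the 5% offset are illustrative only):

[code]
# Radiocarbon age from a measured C14:C12 ratio:
#     t = (half_life / ln 2) * ln(R_initial / R_measured)
# If the assumed initial ratio is higher than the ratio the organism actually
# started with, every computed age comes out too old by a fixed offset.
import math

tau = 5730.0 / math.log(2)     # "mean life" of C-14, about 8,267 years

def age(r_initial, r_measured):
    return tau * math.log(r_initial / r_measured)

r_measured = 0.30              # sample retains 30% of the assumed modern ratio
print(f"age assuming the standard initial ratio    : {age(1.00, r_measured):,.0f} yr")
print(f"age if the organism really started 5% lower: {age(0.95, r_measured):,.0f} yr")
print(f"built-in error from that 5% mis-assumption : {tau * math.log(1 / 0.95):,.0f} yr")
[/code]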

Even before this research was undertaken, carbon dating was at best a questionable method of dating, given that various plants and animals have been noted to take up C14 at differing rates, along with major discrepancies. In 1981, Dr. Robert Lee wrote an article for the Anthropological Journal of Canada, in which he stated:

"The troubles of the radiocarbon dating method are undeniably deep and serious. Despite 35 years of technological refinement and better understanding, the underlying assumptions have been strongly challenged, and warnings are out that radiocarbon may soon find itself in a crisis situation. Continuing use of the method depends on a fix-it-as-we-go approach, allowing for contamination here, fractionation there, and calibration whenever possible. It should be no surprise then, that fully half of the dates are rejected. The wonder is, surely, that the remaining half has come to be accepted…. No matter how useful it is, though, the radiocarbon method is still not capable of yielding accurate and reliable results. There are gross discrepancies, the chronology is uneven and relative, and the accepted dates are actually the selected dates.â€Â
I suggest that with recent discoveries about the suns activity even over the past 1150 years, that carbon dating as a method of measuring age, is beyond redemption.

Notice what he says at the end: "the accepted dates are actually selected". Dr. Morris, who has a PhD in geology, wrote the following:

How many times have you opened the newspaper and read an article describing the discovery of a new fossil, archaeological find, or underground fault? After describing the nature of the discovery, the article explains how scientists are so thrilled with its confirmation of evolutionary theory. An age is reported, perhaps millions or hundreds of millions or even billions of years. No questions are raised concerning the accuracy of the date, and readers may feel they have no reason to question it either.

Did you ever wonder how they got that date? How do they know with certainty something that happened so long ago? It's almost as if rocks and fossils talk, or come with labels on them explaining how old they are and how they got that way.

As an earth scientist, one who studies rocks and fossils, I will let you in on a little secret. My geologic colleagues may not like me to admit this, but rocks do not talk! Nor do they come with explanatory labels.

I have lots of rocks in my own personal collection, and there are many more in the ICR Museum. These rocks are well cared for and much appreciated. I never did have a "pet" rock, but I do have some favorites. I've spent many hours collecting, cataloging, and cleaning them. Some I've even polished and displayed.

But what would happen if I asked my favorite rock, "Rock, how old are you?" "Fossil, how did you get that way?" You know what would happen--NOTHING! Rocks do not talk! They do not talk to me, and I strongly suspect they do not talk to my evolutionary colleagues either! So where then do the dates and histories come from?

The answer may surprise you with its simplicity, but the concept forms the key thrust of this book. I've designed this book to explain how rocks and fossils are studied, and how conclusions are drawn as to their histories. But more than that, I've tried to explain not only how this endeavor usually proceeds, but how it should proceed.

Before I continue, let me clearly state that evolutionists are, in most cases, good scientists, and men and women of integrity. Their theories are precise and elegant. It is not my intention to ridicule or confuse. It is my desire to show the mind trap they have built for themselves, and show a better way. Let me do this through a hypothetical dating effort, purely fiction, but fairly typical in concept.

How It's Usually Done

Suppose you find a limestone rock containing a beautifully preserved fossil. You want to know the age of the rock, so you take it to the geology department at the nearby university and ask the professor. Fortunately, the professor takes an interest in your specimen and promises to spare no effort in its dating.

Much to your surprise, the professor does not perform carbon-14 dating on the fossil. He explains that carbon dating can only be used on organic materials, not on rocks or even on the fossils, since they too are rock. Furthermore, in theory it's only good for the last few thousand years, and he suspects your fossil is millions of years old. Nor does this expert measure the concentrations of radioactive isotopes to calculate the age of the rock. Sedimentary rock, the kind which contains fossils, he explains, "can not be accurately dated by radioisotope methods. Such methods are only applicable to igneous rocks, like lava rocks and granite". Instead, he studies only the fossil's shape and characteristics, not the rock. "By dating the fossil, the rock can be dated", he declares.

For purposes of this discussion, let's say your fossil is a clam. Many species of clams live today, of course, and this one looks little different from those you have seen. The professor informs you that many different clams have lived in the past, the ancestors of modern clams, but most have now become extinct.

Next, the professor removes a large book from his shelf entitled Invertebrate Paleontology, and opens to the chapter on clams. Sketches of many clams are shown. At first glance many seem similar, but when you look closely, they're all slightly different. Your clam is compared to each one, until finally a clam nearly identical to yours appears. The caption under the sketch identifies your clam as an index fossil, and explains that this type of clam evolved some 320 million years ago.

With a look of satisfaction and an air of certainty, the professor explains, "Your rock is approximately 320 million years old!"

Notice that the rock itself was not examined. It was dated by the fossils in it, and the fossil type was dated by the assumption of evolutionary progression over time. The limestone rock itself might be essentially identical to limestones of any age, so the rock cannot be used to date the rock. The fossils date the rock, and evolution dates the fossils.

You get to thinking. You know that limestones frequently contain fossils, but some seem to be a fine-grained matrix with no visible fossils. In many limestones the fossils seem to be ground to pieces, and other sedimentary rocks, like sandstone and shale, might contain no visible fossils at all. "What do you do then?" you ask.

The professor responds with a brief lecture on stratigraphy, information on how geologic layers are found, one on top of the other, with the "oldest" ones (i.e., containing the oldest fossils) beneath the "younger" ones. This makes sense, for obviously the bottom layer had to be deposited before the upper layers, "But how are the dates obtained?" "By the fossils they contain!" he says.

It turns out that many sedimentary rocks cannot be dated all by themselves. If they have no fossils which can be dated within the evolutionary framework, then "We must look for other fossil-bearing layers, above and below, which can help us sandwich in on a date", the prof says. Such layers may not even be in the same location, but by tracing the layer laterally, perhaps for great distances, some help can be found.

"Fortunately, your rock had a good fossil in it, an index fossil, defined as an organism which lived at only one time in evolutionary history. It's not that it looks substantially more or less advanced than other clams, but it has a distinctive feature somewhat different from other clams. When we see that kind of clam, we know that the rock in which it is found is about 320 million years old, since that kind of clam lived 320 million years ago", he says. "Most fossils are not index fossils. Many organisms, including many kinds of clams, snails, insects, even single-celled organisms, didn't change at all over hundreds of millions of years, and are found in many different layers. Since they didn't live at any one particular time, we cannot use them to date the rocks. Only index fossils are useful, since they are only found in one zone of rock, indicating they lived during a relatively brief period of geologic history. We know that because we only find them in one time period. Whenever we find them, we date the rock as of that age."

Let me pause in our story to identify this thinking process as circular reasoning. It obviously should have no place in science.

Instead of proceeding from observation to conclusion, the conclusion interprets the observation, which "proves" the conclusion. The fossils should contain the main evidence for evolution. But instead, we see that the ages of rocks are determined by the stage of evolution of the index fossils found therein, which are themselves dated and organized by the age of the rocks. Thus the rocks date the fossils, and the fossils date the rocks.

Back to our story. On another occasion, you find a piece of hardened lava, the kind extruded during a volcanic eruption as red hot, liquid lava. Obviously, it contains no fossils, since almost any living thing would have been incinerated or severely altered. You want to know the age of this rock too. But your professor friend in the geology department directs you to the geophysics department. "They can date this rock", you are told.

Your rock fascinates the geophysics professor. He explains that this is the kind of rock that can be dated by using radioisotope dating techniques, based on precise measurements of the ratios of radioactive isotopes in the rock. Once known, these ratios can be plugged into a set of mathematical equations which will give the absolute age of the rock.

Unfortunately, the tests take time. The rock must be ground into powder, then certain minerals isolated. Then the rock powder must be sent to a laboratory where they determine the ratios and report back. A computer will then be asked to analyze the ratios, solve the equations, and give the age.

The geophysicist informs you that these tests are very expensive, but that since your rock is so interesting, and since he has a government grant to pay the bill, and a graduate student to do the work, it will cost you nothing. Furthermore, he will request that several different tests be performed on your rock. There's the uranium-lead method, the potassium-argon method, rubidium-strontium, and a few others. They can be done on the whole rock or individual minerals within the rock. They can be analyzed by the model or the isochron techniques. All on the same rock. "We're sure to get good results that way", you are told. The results will come back with the rock's absolute age, plus or minus a figure for experimental error.

After several weeks the professor calls you in and shows you the results. Finally you will know the true age of your rock. Unfortunately, the results of the different tests do not agree. Each method produced a different age! "How can that happen on a single rock?" you ask.

The uranium-lead method gave 500 plus or minus 20 million years for the rock's age.

The potassium-argon method gave 100 plus or minus 2 million years

The rubidium-strontium model test gave 325 plus or minus 25 million years

The rubidium-strontium isochron test gave 375 plus or minus 35 million years.

Then comes the all-important question. "Where did you find this rock? Were there any fossils nearby, above or below the outcrop containing this lava rock?" When you report that it was just below the limestone layer containing your 320 million year old fossil, it all becomes clear. "The rubidium-strontium dates are correct; they prove your rock is somewhere between 325 and 375 million years old. The other tests were inaccurate. There must have been some leaching or contamination." Once again, the fossils date the rocks, and the fossils are dated by evolution.

Our little story may be fictional, but it is not at all far-fetched. This is the way it's usually done. An interpretation scheme has already been accepted as truth. Each dating result must be evaluated--accepted or rejected--by the assumption of evolution. And the whole dating process proceeds within the backdrop of the old-earth scenario. No evidence contrary to the accepted framework is allowed to remain. Evolution stands, old-earth ideas stand, no matter what the true evidence reveals. An individual fact is accepted or rejected as valid evidence according to its fit with evolution.


Let me illustrate this dilemma with a few quotes from evolutionists. The first is by paleontologist Dr. David Kitts, a valued acquaintance of mine when we were both on the faculty at the University of Oklahoma. While a committed evolutionist, Dr. Kitts is an honest man, a good scientist, and an excellent thinker. He and many others express disapproval with the typical thinking of evolutionists.

"...the record of evolution, like any other historical record, must be construed within a complex of particular and general preconceptions, not the least of which is the hypothesis that evolution has occurred."

David B. Kitts, Paleobiology, 1979, pp. 353, 354.

"And this poses something of a problem: If we date the rocks by the fossils, how can we then turn around and talk about patterns of evolutionary change through time in the fossil record?"

Niles Eldredge, Time Frames, 1985, p. 52.

"A circular argument arises: Interpret the fossil record in the terms of a particular theory of evolution, inspect the interpretation, and note that it confirms the theory. Well, it would, wouldn't it?"
Tom Kemp, "A Fresh Look at the Fossil Record",
New Scientist, Vol. 108, Dec. 5, 1985, p. 67.

Why can't we trust K-Ar and Ar-Ar dating methods? Well, here is an article written by the Institute for Creation Research, together with references to validate what they are saying:
http://www.icr.org/pubs/imp/imp-307.htm
For more than three decades potassium-argon (K-Ar) and argon-argon (Ar-Ar) dating of rocks has been crucial in underpinning the billions of years for Earth history claimed by evolutionists. Critical to these dating methods is the assumption that there was no radiogenic argon (40Ar*) in the rocks (e.g., basalt) when they formed, which is usually stated as self-evident. Dalrymple argues strongly:
The K-Ar method is the only decay scheme that can be used with little or no concern for the initial presence of the daughter isotope. This is because 40Ar is an inert gas that does not combine chemically with any other element and so escapes easily from rocks when they are heated. Thus, while a rock is molten, the 40Ar formed by the decay of 40K escapes from the liquid.1
However, this dogmatic statement is inconsistent with even Dalrymple's own work 25 years earlier on 26 historic, subaerial lava flows, 20% of which he found had non-zero concentrations of 40Ar* (or excess argon) in violation of this key assumption of the K-Ar dating method.2 The historically dated flows and their "ages" were:
Hualalai basalt, Hawaii (AD 1800-1801) 1.6±0.16 Ma; 1.41±0.08 Ma
Mt. Etna basalt, Sicily (122 BC) 0.25±0.08 Ma
Mt. Etna basalt, Sicily (AD 1972) 0.35±0.14 Ma
Mt. Lassen plagioclase, California (AD 1915) 0.11±0.03 Ma
Sunset Crater basalt, Arizona (AD 1064-1065) 0.27±0.09 Ma; 0.25±0.15 Ma
Far from being rare, there are numerous reported examples of excess 40Ar* in recent or young volcanic rocks producing excessively old K-Ar "ages":3
Akka Water Fall flow, Hawaii (Pleistocene) 32.3±7.2 Ma
Kilauea Iki basalt, Hawaii (AD 1959) 8.5±6.8 Ma
Mt. Stromboli, Italy, volcanic bomb (September 23, 1963) 2.4±2 Ma
Mt. Etna basalt, Sicily (May 1964) 0.7±0.01 Ma
Medicine Lake Highlands obsidian,
Glass Mountains, California (<500 years old) 12.6±4.5 Ma
Hualalai basalt, Hawaii (AD 1800-1801) 22.8±16.5 Ma
Rangitoto basalt, Auckland, NZ (<800 years old) 0.15±0.47 Ma
Alkali basalt plug, Benue, Nigeria (<30 Ma) 95 Ma
Olivine basalt, Nathan Hills, Victoria Land,
Antarctica (<0.3 Ma) 18.0±0.7 Ma
Anorthoclase in volcanic bomb, Mt Erebus,
Antarctica (1984) 0.64±0.03 Ma
Kilauea basalt, Hawaii (<200 years old) 21±8 Ma
Kilauea basalt, Hawaii (<1,000 years old) 42.9±4.2 Ma; 30.3±3.3 Ma
East Pacific Rise basalt (<1 Ma) 690±7 Ma
Seamount basalt, near East Pacific Rise (<2.5 Ma) 580±10 Ma; 700±150 Ma
East Pacific Rise basalt (<0.6 Ma) 24.2±1.0 Ma
Other studies have also reported measurements of excess 40Ar* in lavas.4 The June 30, 1954 andesite flow from Mt. Ngauruhoe, New Zealand, has yielded "ages" up to 3.5±0.2 Ma due to excess 40Ar*.5 Austin investigated the 1986 dacite lava flow from the post-October 26, 1980, lava dome within the Mount St. Helens crater, which yielded a 0.35±0.05 Ma whole-rock K-Ar model "age" due to excess 40Ar*.6 Concentrates of constituent minerals yielded "ages" up to 2.8±0.6 Ma (pyroxene ultra-concentrate).
Investigators also have found that excess 40Ar* is trapped in the minerals within lava flows.7 Several instances have been reported of phenocrysts with K-Ar "ages" 1-7 millions years greater than that of the whole rock, and one K-Ar "date" on olivine phenocrysts in a recent (<13,000 year old) basalt was greater than 110 Ma.8 Laboratory experiments have tested the solubility of argon in synthetic basalt melts and their constituent minerals, with olivine retaining 0.34 ppm 40Ar*.9 It was concluded that the argon is held primarily in lattice vacancy defects within the minerals.
The obvious conclusion most investigators have reached is that the excess 40Ar* had to be present in the molten lavas when extruded, which then did not completely degas as they cooled, the excess 40Ar* becoming trapped in constituent minerals and the rock fabrics themselves. However, from whence comes the excess 40Ar*, that is, 40Ar which cannot be attributed to atmospheric argon or in situ radioactive decay of 40K? It is not simply "magmatic" argon. Funkhouser and Naughton found that the excess 40Ar* in the 1800-1801 Hualalai flow, Hawaii, resided in fluid and gaseous inclusions in olivine, plagioclase, and pyroxene in ultramafic xenoliths in the basalt, and was sufficient to yield "ages" of 2.6 Ma to 2960 Ma.10 Thus, since the ultramafic xenoliths and the basaltic magmas came from the mantle, the excess 40Ar* must initially reside there, to be transported to the earth's surface in the magmas.
Many recent studies confirm the mantle source of excess 40Ar*. Hawaiian volcanism is typically cited as resulting from a mantle plume, most investigators now conceding that excess 40Ar* in the lavas, including those from the active Loihi and Kilauea volcanoes, is indicative of the mantle source area from which the magmas came. Considerable excess 40Ar* measured in ultramafic mantle xenoliths from Kerguelen Archipelago in the southern Indian Ocean likewise is regarded as the mantle source signature of hotspot volcanism.11 Indeed, data from single vesicles in mid-ocean ridge basalt samples dredged from the North Atlantic suggest the excess 40Ar* in the upper mantle may be almost double previous estimates, that is, almost 150 times more than the atmospheric content (relative to 36Ar).12 Another study on the same samples indicates the upper mantle content of 40Ar* could be even ten times higher.13
Further confirmation comes from diamonds, which form in the mantle and are carried by explosive volcanism into the upper crust and to the surface. When Zashu et al. obtained a K-Ar isochron "age" of 6.0±0.3 Ga for 10 Zaire diamonds, it was obvious excess 40Ar* was responsible, because the diamonds could not be older than the earth itself.14 These same diamonds produced 40Ar/39Ar "age" spectra yielding a ~5.7 Ga isochron.15 It was concluded that the 40Ar is an excess component which has no age significance and is found in tiny inclusions of mantle-derived fluid.
All this evidence clearly shows that excess 40Ar* is ubiquitous in volcanic rocks, and that the excess 40Ar* was inherited from the mantle source areas of the magmas. This is not only true for recent and young volcanics, but for ancient volcanics such as the Middle Proterozoic Cardenas Basalt of eastern Grand Canyon.16 In the mantle, this 40Ar* predominantly represents primordial argon that is not derived from in situ radioactive decay of 40K and thus has no age significance.
In conclusion, the fact that all the primordial argon has not yet been released from the earth's deep interior is consistent with a young Earth. Also, when samples of volcanic rocks are analyzed for K-Ar and Ar-Ar "dating," the investigators can therefore never really be sure whether the 40Ar* in the rocks comes from in situ radioactive decay of 40K since their formation, or whether some or all of it came from the mantle with the magmas. This could even be the case when the K-Ar and Ar-Ar analyses yield "dates" compatible with other radioisotopic "dating" systems and/or with fossil "dating" based on evolutionary assumptions. Furthermore, there would be no way of knowing, because the 40Ar* from radioactive decay of 40K cannot be distinguished analytically from primordial 40Ar that is not from radioactive decay, except of course by external assumptions about the ages of the rocks. Thus all K-Ar and Ar-Ar "dates" of volcanic rocks are questionable, as are fossil "dates" calibrated by them.

References
1 G.B. Dalrymple, The Age of the Earth (1991, Stanford, CA, Stanford University Press), p. 91.
2 G.B. Dalrymple, "40Ar/36Ar Analyses of Historic Lava Flows," Earth and Planetary Science Letters, 6 (1969): pp. 47-55.
3 For the original sources of these data, see the references in A.A. Snelling, "The Cause of Anomalous Potassium-Argon 'Ages' for Recent Andesite Flows at Mt. Ngauruhoe, New Zealand, and the Implications for Potassium-Argon 'Dating'," in R.E. Walsh, ed., Proceedings of the Fourth International Conference on Creationism (1998, Pittsburgh, PA, Creation Science Fellowship), pp. 503-525.
4 Ibid.
5 Ibid.
6 S.A. Austin, "Excess Argon within Mineral Concentrates from the New Dacite Lava Dome at Mount St Helens Volcano," Creation Ex Nihilo Technical Journal, 10 (1996): pp. 335-343.
7 A.W. Laughlin, J. Poths, H.A. Healey, S. Reneau and G. WoldeGabriel, "Dating of Quaternary Basalts Using the Cosmogenic 3He and 14C Methods with Implications for Excess 40Ar," Geology, 22 (1994): pp. 135-138. D.B. Patterson, M. Honda and I. McDougall, "Noble Gases in Mafic Phenocrysts and Xenoliths from New Zealand," Geochimica et Cosmochimica Acta, 58 (1994): pp. 4411-4427. J. Poths, H. Healey and A.W. Laughlin, "Ubiquitous Excess Argon in Very Young Basalts," Geological Society of America Abstracts With Programs, 25 (1993): p. A-462.
8 P.E. Damon, A.W. Laughlin and J.K. Precious, "Problem of Excess Argon-40 in Volcanic Rocks," in Radioactive Dating Methods and Low-Level Counting (1967, Vienna, International Atomic Energy Agency), pp. 463-481.
9 C.L. Broadhurst, M.J. Drake, B.E. Hagee and T.J. Benatowicz, "Solubility and Partitioning of Ar in Anorthite, Diopside, Forsterite, Spinel, and Synthetic Basaltic Liquids," Geochimica et Cosmochimica Acta, 54 (1990): pp. 299-309. C.L. Broadhurst, M.J. Drake, B.E. Hagee and T.J. Benatowicz, "Solubility and Partitioning of Ne, Ar, Kr and Xe in Minerals and Synthetic Basaltic Melts," Geochimica et Cosmochimica Acta, 56 (1992): pp. 709-723.
10 J.G. Funkhouser and J.J. Naughton, "Radiogenic Helium and Argon in Ultramafic Inclusions from Hawaii," Journal of Geophysical Research, 73 (1968): pp. 4601-4607.
11 P.J. Valbracht, M. Honda, T. Matsumoto, N. Mattielli, I. McDougall, R. Ragettli and D. Weis, "Helium, Neon and Argon Isotope Systematics in Kerguelen Ultramafic Xenoliths: Implications for Mantle Source Signatures," Earth and Planetary Science Letters, 138 (1996): pp. 29-38.
12 M. Moreira, J. Kunz and C. Allègre, "Rare Gas Systematics in Popping Rock: Isotopic and Elemental Compositions in the Upper Mantle," Science, 279 (1998): pp. 1178-1181.
13 P. Burnard, D. Graham and G. Turner, "Vesicle-Specific Noble Gas Analyses of 'Popping Rock': Implications for Primordial Noble Gases in the Earth," Science, 276 (1997): pp. 568-571.
14 S. Zashu, M. Ozima and O. Nitoh, "K-Ar Isochron Dating of Zaire Cubic Diamonds," Nature, 323 (1986): pp. 710-712.
15 M. Ozima, S. Zashu, Y. Takigami and G. Turner, "Origin of the Anomalous 40Ar-36Ar Age of Zaire Cubic Diamonds: Excess 40Ar in Pristine Mantle Fluids," Nature, 337 (1989): pp. 226-229.
16 S.A. Austin and A.A. Snelling, "Discordant Potassium-Argon Model and Isochron 'Ages' for Cardenas Basalt (Middle Proterozoic) and Associated Diabase of Eastern Grand Canyon, Arizona," in R.E. Walsh, ed., Proceedings of the Fourth International Conference on Creationism (1998, Pittsburgh, PA, Creation Science Fellowship), pp. 35-51.
Note: "Ma" represents a million years (Mega-annum); "Ga" represents a billion years (Giga-annum).
* Dr. Snelling is Associate Professor of Geology at ICR.

Basically, there is no way for us to know how old the earth is. Why? Because radiometric dating has no valid reference ratios, so our dating methods end up being calibrated by the evolutionary time scale, which is obviously going to produce wrong results!
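To put some rough numbers behind that, here is a little back-of-the-envelope sketch (my own illustration with a made-up sample value, not something taken from the article above) of the standard K-Ar model-age formula. Whatever 40Ar* gets measured, the formula simply converts it into an apparent age; it has no way of telling radiogenic argon from inherited argon.

Code:
import math

# Decay constants for 40K (per year), the values commonly used in K-Ar work
LAMBDA_TOTAL = 5.543e-10   # total decay constant of 40K
LAMBDA_EC    = 0.581e-10   # branch that decays to 40Ar

def k_ar_model_age(ar40_star_per_k40):
    """Apparent K-Ar age (in years) from the molar ratio 40Ar*/40K.

    Standard model-age equation:
        t = (1/lambda_total) * ln(1 + (lambda_total/lambda_EC) * 40Ar*/40K)
    """
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_star_per_k40)

# Hypothetical example: a flow erupted yesterday that trapped inherited argon
# amounting to just one part in ten thousand of its 40K
print(k_ar_model_age(1e-4) / 1e6, "Ma")   # about 1.7 Ma, for a rock that is brand new

So even a trace of inherited argon in a brand-new lava comes out as an "age" of over a million years, which is the same effect the historic flows listed above show.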
 
Actually he is right, you are wrong, and nothing has been said which questions the foundations of nuclear science. And furthermore, stars run on nuclear fusion reactions that have nothing to do with the half-lives of the atoms they contain. Stars can only fuse elements up the periodic table as far as iron, at which point the core is too cool, gravity wins, and the star collapses to a neutron star or, if it's of high enough mass, a black hole.

I haven't a lot of time, but I have a few comments:

Being a biologist, not a physicist...
I was under the impression that main sequence stars normally burn elements up to carbon, with supergiants burning elements up to iron.

I was also under the impression that the forces during the collapse of a supergiant were sufficient to cook the elements heavier than iron.

Is that wrong?

Incidentally, your source (perhaps unknowingly) makes a major and very misleading omission regarding the dating of sedimentary rocks. While index fossils are useful (and were used by the creationists who worked out the geological column), they do not "date" rocks. They merely indicate what period the rock is from. Dating is normally done by indexing where the rock happens to be relative to igneous rock that can be dated by radioisotope analysis. Ideally, there are deposits above and below the strata to be dated.
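Here is a toy sketch of how that bracketing works. The numbers are invented for illustration, not a real data set:

Code:
# Toy illustration of bracketing a sedimentary layer between two dated igneous layers.

def bracket(below_ma, below_err, above_ma, above_err):
    """Allowed age range (in Ma) for sediment lying on top of one dated igneous
    unit ('below', older) and underneath another dated unit ('above', younger)."""
    oldest = below_ma + below_err     # no older than the layer it sits on (within error)
    youngest = above_ma - above_err   # no younger than the layer that covers it (within error)
    return youngest, oldest

# e.g. an ash bed below dated 50.2 +/- 0.5 Ma, a lava flow above dated 48.9 +/- 0.4 Ma
youngest, oldest = bracket(50.2, 0.5, 48.9, 0.4)
print(f"sediment deposited between roughly {youngest} and {oldest} Ma")

The sediment's age has to fall between the dates of the igneous layers that sandwich it, so no fossil "assumptions" enter the calculation.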

As you probably know, there is very good agreement between the various methods used, (assuming that they don't violate good sampling procedure as Austin did by including xenocrysts in his sample from Mt. St. Helens) and this sort of independent verification is compelling.
 
The Barbarian, I will reply to your other post on Monday. It seems there are some issues where we have got our wires crossed. It seems that you have the impression that creationists believe that all living creatures today are the same as they were at creation (the alleged creationist "lawn" model as presented by Teaching about Evolution And The Nature Of Science), and that we do not believe that mutation and natural selection produce new species (I do not mean kinds); this is certainly not true. The kinds were created at the beginning, and within one generation there were a lot of species of that kind, but over time mutation and natural selection have produced many more species from them. But in such instances, though these are different species, they are not different kinds. It's not that we do not believe in change, but that that change has limits.

In the classic textbook, Evolution, the late Theodosius Dobzhansky and three other famous evolutionists distinguish between SUBspeciation and TRANSspeciation. "Sub" is essentially variation within species, and "trans" is change from one species to another. The authors state their belief that one can "extrapolate" from variation within species to evolution between species. But they also admit that some of their fellow evolutionists believe that such extrapolation goes beyond all logical limits.

Creationists don't have a problem with subspeciation (even subspeciation due to mutation and natural selection), but they do with transspeciation.

It also seems that our definitions of species and kinds are different from yours (I think you mentioned that evolution does not have the idea of set kinds? Am I right in this?).

On Monday I'll explain the difference. And by the way in short...

How exactly would Adam and Eve have 300 alleles for hemoglobin?

He didn't. Creationists believe the other alleles that we find today are just variations on the original ones that Adam and Eve had. These other alleles have come about by mutation. But the point is, it's still hemoglobin, and humans are still humans.

Maybe I need to learn a thing or two about verbally expressing myself in a way that can be understood without being misunderstood. I guess this takes experience.

And as a further note, although I certainly don't agree with you that the blind spot is a defect, I shouldn't have made the comment about your comment 'being the height of arrogance'. Whether or not I believe this to be true is irrelevant; such a comment should not have been written on a public message board. So I do apologise.
 
The Barbarian,

Call me Barb.

I will reply to your other post on Monday. It seems there are some issues where we have got our wires crossed. It seems that you have the impression that creationists believe that all living creatures today are the same as they were at creation (the alleged creationist "lawn" model as presented by Teaching about Evolution And The Nature Of Science), and that we do not believe that mutation and natural selection produce new species (I do not mean kinds); this is certainly not true. The kinds were created at the beginning, and within one generation there were a lot of species of that kind, but over time mutation and natural selection have produced many more species from them. But in such instances, though these are different species, they are not different kinds. It's not that we do not believe in change, but that that change has limits.

There are probably as many kinds of creationism as there are creationists. I think that your view, toward a limited evolution of higher taxa from "kinds", is becoming more common among creationists. When I was younger, few of them would even admit new species evolved.

In the classic textbook, Evolution, the late Theodosius Dobzhansky and three other famous evolutionists distinguish between SUBspeciation and TRANSspeciation. "Sub" is essentially variation within species, and "trans" is change from one species to another. The authors state their belief that one can "extrapolate" from variation within species to evolution between species. But they also admit that some of their fellow evolutionists believe that such extrapolation goes beyond all logical limits.

More precisely, a few supposed that speciation was something essentially different than variation within a species. Perhaps the wildest deviation was Goldschmidt's "Hopeful Monster" concept. While a mutation in genes controlling development might do that, observed instances of speciation seem to refute that as a major mode of macroevolution.

Creationists don't have a problem with subspeciation (even subspeciation due to mutation and natural selection), but they do with transspeciation.

It also seems that our definitions of species and kinds are different from yours (I think you mentioned that evolution does not have the idea of set kinds? Am I right in this?).

"Kinds" are really a religious doctrine taken from the use of "bara" in Genesis. "Bara" does not, to this Christian's reading, mean anything technical at all. In truth, all living things are a kind. You and I have more in common with bacteria genetically and biochemically than things by which we differ.

It's just not a useful category in science. The only prediction made about "baramin" is that they must have a "wall" of some kind, beyond which further variation is somehow prevented. But no one can show any evidence for a wall, or even how it might exist.

On Monday I'll explain the difference. And by the way in short...

Barbarian asks:
How exactly would Adam and Eve have 300 alleles for hemoglobin?

He didn't. Creationists believe the other alleles that we find today are just variations on the original ones that Adam and Eve had. These other alleles have come about by mutation. But the point is, it's still hemoglobin, and humans are still humans.

That's like saying chimps and humans are still primates. It's true, but it ignores a lot of variation. Further, when we look at the evolution of globin genes, we find that the differences sort out to give us the same phylogenies we get from genetics, fossil record, anatomy, etc. How does creationism explain that?

Maybe I need to learn a thing or two about verbally expressing myself in a way that can be understood without being misunderstood. I guess this takes experience.

Message boards require a different style than other writing, I think.

And as a further note, although I certainly don't agree with you that the blind spot is a defect, I shouldn't have made the comment about your comment 'being the height of arrogance'. Whether or not I believe this to be true is irrelevant; such a comment should not have been written on a public message board. So I do apologise.

No need. You've been courteous and decent in your posts here. I was not offended. While I don't mind a good round of Irish tag, I will not do it here.
 
Yay! The boards in this topic 'regressed' back to their original width.

Hey, does this constitute a case for 'devolution'? :lol:
 
I was under the impression that main sequence stars normally burn elements up to carbon, with supergiants burning elements up to iron.

This is correct as far as I know. I was just trying to show that iron is as far up the periodic table as stellar fusion goes.

I was also under the impression that the forces during the collapse of a supergiant were sufficient to cook the elements heavier than iron.

Yes, this should be true for the atoms that are collapsing with the star. In the end, though, the pressure becomes so great that there are no longer separate protons, neutrons, and electrons; all the matter collapses down to neutrons (a neutron star) or, if the star had a high enough mass to begin with, a black hole.

Now, as a supergiant collapses, the repulsive electrical forces between the particles of the outer layers of the star may overcome gravity, and if this is correct, it should in theory cause a large, short-lived explosion (a supernova). I guess in principle atoms heavier than iron could form in this process. But whether or not they'd be moving fast enough to eventually escape the collapsed star's gravitational well, I'm not entirely sure.
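The way I understand the iron limit (my own rough figures from memory, so treat them as approximate) is in terms of binding energy per nucleon: fusion releases energy only while that number keeps climbing, and it peaks around the iron group, so building anything heavier by fusion costs energy instead of releasing it. For example:

Code:
# Approximate binding energies per nucleon (MeV); rounded textbook-style values.
B_PER_A = {"He-4": 7.07, "C-12": 7.68, "O-16": 7.98, "Si-28": 8.45,
           "Ni-56": 8.64, "Fe-56": 8.79, "Cd-112": 8.54}
A = {"He-4": 4, "C-12": 12, "O-16": 16, "Si-28": 28,
     "Ni-56": 56, "Fe-56": 56, "Cd-112": 112}

def q_value(products, reactants):
    """Energy released (MeV): total binding energy of products minus reactants."""
    be = lambda names: sum(A[n] * B_PER_A[n] for n in names)
    return be(products) - be(reactants)

# Silicon burning (roughly): 2 x Si-28 -> Ni-56 still releases energy
print(q_value(["Ni-56"], ["Si-28", "Si-28"]))   # about +11 MeV

# Fusing two Fe-56 into an A=112 nucleus would absorb energy instead
# (product chosen only to show the sign; charge bookkeeping ignored)
print(q_value(["Cd-112"], ["Fe-56", "Fe-56"]))  # about -28 MeV

The heavier elements like gold and uranium are thought to form instead by neutron capture in the supernova itself and in other high-neutron-flux environments, not by ordinary fusion.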

Incidentally, your source (perhaps unknowingly) makes a major and very misleading omission regarding the dating of sedimentary rocks.

I think he does mention this, but I just didn't include it, because there is no radioisotope dating method that is reliable.

While index fossils are useful (and were used by the creationists who worked out the geological column) they do not "date" rocks. They merely indicate what period the rock is from.

That is using the assumption that the evolutionary time scale is correct, in order to "help" date a rock. If the radioisotope methods were reliable, no such method of trying to locate the era of the rock would be required. Thus someone then has to add their bias (whether for creation or evolution) into the sum when calculating the age of the rock.

Dating is normally done by indexing where the rock happens to be relative to igneous rock that can be dated by radioisotope analysis. Ideally, there are deposits above and below the strata to be dated.

Yes. One can "date" a rock relative to neighbouring igneous rock that can be "dated" using radioisotope analysis. But since there is no reliable radioisotope method of dating igneous rock, no rock can be reliably dated. So we come back to dating rocks relative to how old we believe the fossils contained in them to be. Because of this, a creationist and an evolutionist would come up with very different values, neither of which can be substantially verified.

As you probably know, there is very good agreement between the various methods used, (assuming that they don't violate good sampling procedure as Austin did by including xenocrysts in his sample from Mt. St. Helens) and this sort of independent verification is compelling.

Well no, generally there is quite a lot of disagreement between the various methods (even when assuming that they don't violate good sampling procedure as Austin did by including xenocrysts in his sample from Mt. St. Helens). Though they do seem to agree in general that long periods are involved. Even when they do agree on long ages, given the evidence we have from trying to date rocks of known age, it's not hard to figure out that these dates are at best unreliable, and most likely completely wrong.

Radioisotope dating cannot reliably give us any information about the date of rocks, and thus the age of the earth.
 
Barbarian observes:
I was under the impression that main sequence stars normally burn elements up to carbon, with supergiants burning elements up to iron.

This is correct as far as I know. I was just trying to show that iron is as far up the periodic table as stellar fusion goes.

Barbarian observes:
I was also under the impression that the forces during the collapse of a supergiant were sufficient to cook the elements heavier than iron.

Yes, this should be true for the atoms that are collapsing with the star. In the end, though, the pressure becomes so great that there are no longer separate protons, neutrons, and electrons; all the matter collapses down to neutrons (a neutron star) or, if the star had a high enough mass to begin with, a black hole.

Now, as a supergiant collapses, the repulsive electrical forces between the particles of the outer layers of the star may overcome gravity, and if this is correct, it should in theory cause a large, short-lived explosion (a supernova).

Apparently, this is the consensus among astrophysicists.

I guess in principle atoms heavier than iron could form in this process. But whether or not they'd be moving fast enough to eventually escape the collapsed star's gravitational well, I'm not entirely sure.

I'm told we have observed this occurring in the form of material blowing off of supernova remnants.

http://www-new.gsi.de/zukunftsprojekt/k ... sik_e.html
http://www.union.edu/PUBLIC/GEODEPT/COU ... /stars.htm

Through spectroscopy, we can analyze what elements exist in these masses being blown off of exploding stars.

Barbarian observes:
Incidentally, your source (perhaps unknowingly) makes a major and very misleading omission regarding the dating of sedimentary rocks.

I think he does mention this, but I just didn't include it, because there is no radioisotope dating method that is reliable.

Among others, Argon/Argon dating recovered the date of the destruction of Pompeii with very high accuracy. Carbon-14 has been tightly correlated with the known ages of varves in a number of lakes.
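If you want to see the arithmetic, here's a minimal sketch (my own illustration, using the conventional Libby half-life) of how a measured carbon-14 fraction becomes a radiocarbon age; it is that calculated age which gets checked against the varve and tree-ring counts:

Code:
import math

LIBBY_HALF_LIFE = 5568.0  # years; the conventional value used for "radiocarbon ages"

def radiocarbon_age(fraction_modern):
    """Conventional radiocarbon age (years BP) from the measured 14C/12C ratio,
    expressed as a fraction of the modern atmospheric ratio."""
    return -(LIBBY_HALF_LIFE / math.log(2)) * math.log(fraction_modern)

print(radiocarbon_age(0.5))    # about 5568 years: one half-life of 14C remaining
print(radiocarbon_age(0.25))   # about 11136 years: two half-lives remaining

Calibration against tree rings and varves then corrects for the fact that the atmospheric 14C ratio hasn't been perfectly constant over time.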

Barbarian observes:
While index fossils are useful (and were used by the creationists who worked out the geological column) they do not "date" rocks. They merely indicate what period the rock is from.

That is using the assumption that the evolutionary time scale is correct, in order to "help" date a rock. If the radioisotope methods were reliable, no such method of trying to locate the era of the rock would be required.

First, this was done by creationists before Darwin, so there could be no expectations of an "evolutionary time scale". Second, the index fossils are merely a useful way of determining the era, without saying what the actual age would be. That, as I pointed out, could not have been accurately determined before radioisotope dating.

Thus someone then has to add their bias (whether for creation or evolution) into the sum when calculating the age of the rock.

I don't see how.

Barbarian observes:
Dating is normally done by indexing where the rock happens to be relative to igneous rock that can be dated by radioisotope analysis. Ideally, there are deposits above and below the strata to be dated.

Yes. One can "date" a rock relative to neighbouring igneous rock that can be "dated" using radioisotope analysis. But since there is no reliable radioisotope method of dating igneous rock, no rock can be reliably dated.

The consensus of physicists is that it can. I'm very impressed by the fact that so many diverse methods give us the same answers. Creationism can't explain this, but it seems obvious in terms of mainstream science.

So we come back to dating rocks relative to how old we believe the fossils contained in them to be.

Some might. But scientists don't. They rely on radioisotope dating, which has had some spectacular successes.

Barbarian observes:
As you probably know, there is very good agreement between the various methods used, (assuming that they don't violate good sampling procedure as Austin did by including xenocrysts in his sample from Mt. St. Helens) and this sort of independent verification is compelling.


Well no, generally there is quite a lot of disagreement between the various methods (even when assuming that they don't violate good sampling procedure as Austin did by including xenocrysts in his sample from Mt. St. Helens).

Here's some information from a gentleman I know who specializes in that area. Dr. Meert has done extensive work on that question, and his results are very good:
http://gondwanaresearch.com/radiomet.htm

This, from his site, seems particularly of interest regarding the claims of circularity in dating rocks:
http://gondwanaresearch.com/hp/crefaqs.htm#where

Though they do seem to agree in general that long periods are involved. Even when they do agree on long ages, given the evidence we have from trying to date rocks of known age, it's not hard to figure out that these dates are at best unreliable, and most likely completely wrong.

Perhaps you are thinking of Austin and his xenocrysts. He took material from a fresh eruption, but with ancient and unmelted material included in it. It's not surprising that he got weird ages. That's why the lab warned him about using such samples.

Radioisotope dating cannot reliably give us any information about the date of rocks, and thus the age of the earth.

No, that's been refuted by the Argon/Argon analysis of the rocks at Pompeii.


 
Barbarian observes, re: Coelacanths:
Actually, the two species we have today are not quite like the ancient ones. But they are obviously evolved from them. Why this should be so is a mystery to creationism, but is understandable in terms of natural selection.

Nice try. The Latimeria chalumnae (Comoran population) is identical to fossils that were supposedly dated at 200 million years,

I don't see Latimeria chalumnae in any fossil collection, but it is very similar in skeleton to Macropoma mantelli, a Cretaceous coelacanth. Close, but no cigar.

while the Latimeria menadoensis (Sulawesi population) is simply a different geographical population of the same coelacanth species. Those who discovered the Sulawesi coelacanth did declare it to be a different species, but when it was actually compared to the Comoran population, they were not able to find differences in the morphological traits of the two populations.

http://www.ucmp.berkeley.edu/vertebrate ... anths.html

Right off the top, one is steel blue and the other is brownish. They are currently considered to be two separate species.

More importantly, evolutionists told us that the coelacanth was a missing link of sorts.

No. For two reasons. First, "missing link" is not a scientific term, and scientists don't use it, except to talk about misconceptions people have. Second, coelacanths are not considered to be transitional to tetrapods.

[image: img017.jpg]


Here are two fins from more advanced sarcopterygians. One is a lungfish, and the bottom one is from Eusthenopteron, a sarcopterygian very close to the form that gave rise to tetrapods. Note that the latter has an identifiable humerus, ulna, radius, and carpal bones. You've been misled about the "missing link" stuff.

That is, it crawled with its fins rather than swam, which was supposed to be proof that fish crawled onto land.

Nope. The first one in that lineage known to have crawled was Acanthostega:

[image: fig41.gif]


The first four are from relatively advanced crossopterygians, and E is from Acanthostega. It is the only one that actually walked on fins. Since the joints are relatively flimsy and the spine wasn't sturdy enough to hold the animal up out of water, it's clear that it used its legs to crawl about on the bottom of ponds and streams.

There is a slightly sturdier species that seems to have been able to move about on land a bit. It's difficult at this point to sort out exactly where the fish leave off and amphibians begin.

We now know that the coelacanths swim, not crawl, and that the evolutionists were just making things up (as usual).

Someone had a little fun with your trust. I've never seen anything in the professional literature that says anything about coelacanths walking. They really had very little but bone-supported fins. Later sarcopterygians were better fitted for that, and clearly did walk.
 