Wednesday, 30 January 2008

The Ages of Man


The familiar terms “Stone Age”, “Bronze Age” and “Iron Age” are part of the so-called Three Age system, introduced by the Danish archaeologist Christian Jürgensen Thomsen in 1819 when he was curator of the collection of antiquities that subsequently became the National Museum of Denmark in Copenhagen. Thomsen was looking for a simple and logical system by which to arrange the collection, which, in common with those of other museums, was in a chaotic state, overrun with prehistoric artefacts from all over the world. Thomsen was not the first to think of using tool-making materials as a basis for classifying prehistoric cultures, although he was the first to actually do so. He lacked any means of dating his artefacts, but correctly guessed that stone had preceded bronze, which in turn had preceded iron. At the time – forty years before Darwin’s On the Origin of Species – few suspected the true antiquity of mankind, with many still believing that the Earth was just 6,000 years old. Although the Scottish geologist James Hutton and others had begun to call this figure into question, in the early 19th Century it was still widely accepted.

As far back as the 1860s, Thomsen’s original scheme was beginning to look lopsided, and in 1865 the archaeologist Sir John Lubbock, a friend of Charles Darwin, published Pre-historic Times, probably the most influential archaeological textbook of the 19th Century. In it he introduced the terms “Palaeolithic” (Old Stone Age) and “Neolithic” (New Stone Age). We now know that the Palaeolithic encompasses all but a tiny fraction of human prehistory, beginning approximately 2.5 million years ago with the emergence of the first members of the genus Homo – i.e. the first human beings. Accordingly the Palaeolithic is in turn divided into Lower, Middle and Upper. The Lower/Middle transition is taken to be the point at which Mode 3 industries, such as the predominantly Neanderthal Mousterian culture, enter the archaeological record, at very roughly 300,000 years ago. The Middle/Upper transition, approximately 40,000 years ago, is the point at which unequivocal evidence for modern human behaviour is found.

In Africa the terms Early, Middle and Late Stone Age, or ESA, MSA and LSA respectively, are preferred, but the LSA also encompasses the Neolithic and Bronze Age as neither metallurgy nor agriculture reached sub-Saharan Africa until Iron Age times. To avoid confusion, I shall use only the term “Palaeolithic”, with its sub-divisions occurring at different times in different parts of the world. Such a scheme is generally used for later prehistory and I see no reason not to use it here also.

The division between the Palaeolithic and the Neolithic is now taken to be the Pleistocene/Holocene boundary, that is to say the end of the last ice age, at around 11,550 years ago. This is a somewhat illogical division, equating a purely geological change with a system based on technology. Agriculture was independently adopted in several parts of the world and spread outwards from these nuclear zones, taking many millennia to reach some places, and necessitating the introduction of another division, the Mesolithic (Middle Stone Age), for regions where hunter-gathering persisted. Conversely, in parts of the world where proto-agriculture was practised in late Pleistocene times, such as the Levant, the term Epipalaeolithic is used.

The transition from Neolithic to Bronze Age is equally ill-defined: there is generally a transitional period during which stone and native copper tools were in mixed use; this period is referred to variously as the Chalcolithic, Eneolithic or simply Copper Age. The transition began at different times in different parts of the world, and was of different duration – the Copper Age began earlier in the Middle East, but in Europe the transition to the fully-fledged Bronze Age was more rapid.

The working of iron began around 1200 BC in India, the Middle East and Greece, but again took time to spread to other parts of the world. The Iron Age continues into historical times, not ending in Northern Europe until the Middle Ages.

To all intents and purposes, this gives us a nine-age system:

Table 1.0: The career of Mankind (YA = Years Ago)

Archaeological/geological time period | Events

Miocene (26m - 5m YA)

Proconsul (27m-17m YA)

Pliocene (5.0m – 1.64m YA)

Ardipithecus ramidus (5m – 4.2m YA)

Australopithecus anamensis (4.2m – 3.9m YA)

A. afarensis (4.0m – 3.0m YA)

A. africanus (3.3m – 2.5m YA)

A. garhi (3.0m – 2.0m YA)

Paranthropus aethiopicus (2.5m – 2.4m YA)

P. robustus (2.4m – 1.2m YA)

P. boisei (2.3m – 1.2m YA)

Lower Palaeolithic

(2.4m – 200,000 YA)

2.4m YA. Earliest true humans appear in Africa, though apparently sympatric with the later “robust” australopithecines (Paranthropus). It is now believed that the early fossil hominids represent at least two synchronous (though not sympatric) human species, Homo habilis (brain size 590-690 cc) and Homo rudolfensis (750 cc). It is not known which, if either, was ancestral to later types.

Tools: Mode 1 Oldowan (2.4m – 1.5 m YA) flakes and choppers.

Mode 2 Acheulian (1.4m – 100,000 YA) handaxes and cleavers.

Lower Pleistocene (1.64m – 900,000 YA)

Middle Pleistocene (900,000 – 127,000 YA)

1.9m YA. Homo ergaster (brain size 700-850 cc) appears in Africa; migrates to the Far East; the migrants are now widely regarded as having become a separate species, Homo erectus (orig. both were classed as erectus).

800,000 YA. Homo antecessor. Controversial taxon known only from Atapuerca in northern Spain, believed by some to be the common ancestor of both modern man and the Neanderthals.

500,000 YA (poss. as early as 1.0m YA). Use of fire.

500,000 YA. Larger-brained (1,200 cc) and bigger-boned hominids are found in the fossil record in Africa, Asia and Europe. Traditionally referred to as “archaic Homo sapiens” but Homo heidelbergensis now favoured. Other types have been proposed such as Homo rhodesiensis and H. helmei. It’s all very confusing!

250,000 YA. Homo neanderthalensis (the Neanderthals) appear in Europe, possibly descended from Homo heidelbergensis. They later spread to the Middle East.

250,000 – 35,000 YA. Mousterian culture in Europe.

Middle Palaeolithic (200,000 – 45,000 YA)

Late Pleistocene (127,000 – 11,600 YA)

Tools: Mode 3. (from 200,000 YA) flaking of prepared cores. Increasing use of the Levallois method to prepare cores, though this method was also used in late Acheulian times.

160,000 YA. Earliest near-anatomically modern humans, Homo sapiens idaltu, Herto, Ethiopia.

150,000 YA. Birth of putative “mitochondrial Eve” in East Africa.

100,000 YA. Homo sapiens in Israel (Skhul and Qafzeh).

50-60,000 YA. H. sapiens in Australia (Lake Mungo).

Upper Palaeolithic (45,000 – 11,600 YA)

43,000 YA. H. sapiens reach Europe.

Tools: Mode 4 (narrow blades struck from prepared cores).

35-29,000 YA. Châtelperronian culture, central and south-western France, final phase of Neanderthal industry.

34-23,000 YA. Aurignacian culture in Europe and south-west Asia.

32,000 YA. Chauvet-Pont-d'Arc cave paintings, southern France.

28,000 YA. Last Neanderthals die out.

28-22,000 YA. Gravettian culture, Dordogne, France. “Venus” figurines.

21-17,000 YA. Solutrean culture, France and Spain.

20-18,000 YA. Last Glacial Maximum (LGM), maximum glacier extent of last Ice Age.

16,500 YA. Lascaux cave paintings, Dordogne, France.

15-11,600 YA. Magdalenian culture in western Europe, final European Palaeolithic culture.

15,000-12,900 YA. Bølling-Allerød interstadial.

12,900 YA. Beginning of the Younger Dryas stadial.

12,000 YA. Jōmon culture in Japan, first use of pottery.

Epipalaeolithic (20,000 – 11,600 YA)

Ohalo II (20-19,000 YA)

Natufian culture (14,000-11,600 YA) in the Levant.

Holocene

Mesolithic (11,600 YA until adoption of agriculture)

11,600 YA. Last Ice Age ends.

11.6-6,000 YA. Hunter-gathering persists in many parts of the world.

Neolithic (11,600 – 6,500 YA and later in various parts of the world)

11,600 YA. Rapid transition to agriculture in Middle East and Anatolia.

Tools: Mode 5 (microliths).

9,500 YA. Çatalhöyük – very large Neolithic settlement in Anatolia, though apparently no more than a very large village.

9,000 YA. Beginning of the “Wave of Advance” – expansion of proto-Indo-European farmers from Anatolia.

8,500 YA. As sea levels rise, Britain becomes an island.

Chalcolithic (6,500 – 4,000 YA in various parts of the world)

Copper and stone tools in mixed use.

6,500 – 3,500 YA. The age of the great megaliths in Europe.

5,100 – 4,000 YA. Construction of Stonehenge.

Bronze Age (5,300 – 2,700 YA in various parts of the world)

5,300-2,700 YA. Indus Valley civilization, India.

4,500 YA. Construction of the pyramids in Egypt.

4,700-3,450 YA. Minoan civilization, Crete.

3,600-3,100 YA. Mycenaean civilization, Greece.

3,200 YA. Mediterranean Bronze Age collapse.

Iron Age (1200 BC into historical times)

1200 BC. First working of iron, in India, the Middle East and Greece.

800-450 BC. Hallstatt culture, central Europe.

450 BC. La Tène culture.

AD 43. Romans invade and conquer Britain.

Taxonomy

Within Class Mammalia (the mammals) humans are grouped with apes, monkeys and prosimians (lemurs, lorises, etc) within the order Primates. The term is due to Linnaeus, representing his view that humanity sat firmly at the top of creation’s tree (the self-styled Prince of Botany was also responsible for the term “mammal”, reflecting his now quite fashionable views about breast-feeding).

The majority of the 200 or so living species of primate are tropical or subtropical, living in rainforests. Most are arboreal (tree-dwelling), or at least spend much of their time in the trees. Even those that have forsaken this habit show arboreal adaptations in their ancestry. These include manipulative hands and often feet, with opposable thumbs and big toes; replacement of claws with nails; a reduced sense of smell and enhanced sight, including colour and stereoscopic vision; locomotion based heavily on the hind limbs, with a common adoption of an upright posture; and finally a tendency towards larger brains than comparably-sized mammals of other orders.

The anthropoids or simians (Suborder Anthropoidea) basically comprise the more human-like primates and include Old World monkeys, New World monkeys (including marmosets and tamarins), apes and finally humans. Other primates are traditionally lumped together as prosimians.

Historically, membership of Family Hominidae was restricted to humans and australopithecines, with the Great Apes being banished to a separate family, Pongidae. Both families were grouped with the gibbons, etc. in Superfamily Hominoidea (the Hominoids).

However this scheme is now known to be incorrect as chimps and gorillas are more closely related to humans than they are to orang-utans. Accordingly Pongidae is now “sunk” into Hominidae (it would also be incorrect to give the orang-utans their own family). The term “hominin” (from Tribe Hominini) is now gaining popularity, because it comprises humans and australopithecines, i.e. the “traditional” hominids. The term “hominine” (from Subfamily Homininae) is also sometimes encountered; this grouping adds gorillas and chimps, but not orang-utans. To get back to the original meaning of “hominid” and subtract the chimps we have to go down to the level of Subtribe Hominina. To my mind this is very confusing and pushing the envelope of what we can reasonably ask from Linnaean taxonomy, which is after all firmly rooted in Platonic Realism (Linnaeus was a creationist), rather than Darwinian principles. I see nothing wrong with the use of the term “hominid” so long as we are aware that it includes our cousins, the Great Apes.

Table 2.0: Family Hominidae (The Hominids)

Species | Av. brain size/cc | Dates known/years ago | Distribution

Pongo pygmaeus (Orang-utan) | 400 | Present day | Sumatra, Borneo
Gorilla gorilla (Gorilla) | 500 | Present day | central and west Africa
Pan troglodytes (Chimpanzee) | 400 | Present day | central and west Africa
Pan paniscus (Bonobo) | 400 | Present day | DR Congo
Ardipithecus ramidus | 400 – 500 | 5.8m – 4.4m | –
Australopithecus anamensis | 400 – 500 | 5.0m – 4.2m | –
A. afarensis | 400 – 500 | 4.0m – 3.0m | –
A. africanus | 400 – 500 | 3.3m – 2.5m | –
A. garhi | 400 – 500 | 3.0m – 2.0m | –
Paranthropus aethiopicus | 400 – 500 | 2.5m – 2.4m | –
P. robustus | 410 – 530 | 2.4m – 1.2m | –
P. boisei | 410 – 530 | 2.3m – 1.2m | –
Homo habilis | 500 – 650 | 2.4m – 1.6m | –
H. rudolfensis | 600 – 800 | 2.0m – 1.6m | –
H. ergaster | 750 – 1,250 | 1.9m – 1.5m | –
H. erectus | 750 – 1,250 | 1.8m – 400,000 (poss. later) | –
H. antecessor | >1,000? | 800,000 | Atapuerca, Spain
H. heidelbergensis | 1,100 – 1,400 | 500,000 – 250,000 | –
H. neanderthalensis | 1,200 – 1,750 | 250,000 – 30,000 | Europe, Middle East
H. sapiens idaltu | 1,200 – 1,700 | 160,000 | Herto, Ethiopia
H. sapiens sapiens | 1,200 – 1,700 | From 115,000 | Worldwide


© Christopher Seddon 2008

Monday, 28 January 2008

Plato's Theory of Forms

Plato (circa 427-347 BC) made contributions to practically every field of human interest and is undoubtedly one of the greatest thinkers of all time. However it is just as well that his political ideas didn’t catch on (except possibly in North Korea); additionally, Platonic Realism bogged down biological science until Darwin and Wallace’s time.

Plato was influenced by Pythagoras, Parmenides, Heraclitus and Socrates (Russell (1946)). From Pythagoras he derived the Orphic elements in his philosophy: religion, belief in immortality, other-worldliness, the priestly tone, and all that is involved in the allegory of the cave; mathematics and his intermingling of intellect and mysticism. From Parmenides he derived the view that reality is eternal and timeless and that on logical grounds, all change must be an illusion. From Heraclitus he derived the view that there is nothing permanent in the world of our senses. Combining this with the doctrine of Parmenides led to the conclusion that knowledge is not to be derived from the senses but achieved by intellect – which ties in with Pythagoras. Finally from Socrates came his preoccupation with ethics and his tendency to seek teleological rather than mechanical explanations.

Realism, as opposed to nominalism, refers to the idea that general properties or universals have a mode of existence or form of reality that is independent of the objects that possess them. A universal can be a type, a property or a relation. Types are categories of being, or types of things – e.g. a dog is a type of thing. A specific instance of a type is known as a token, e.g. Rover is a token of a dog. Properties are qualities that describe an object – size, colour, weight, etc, e.g. Rover is a black Labrador. Relations exist between pairs of objects, e.g. if Rover is larger than Gus then there is a relation of is-larger-than between the two dogs. In Platonic Realism universals exist, but only in a broad abstract sense that we cannot come into contact with. The Form is one type of universal.

The Theory of Forms (or Ideas) is referred to in Plato’s Republic and other Socratic Dialogues and follows on from the work of Parmenides and his arguments about the distinction between reality and appearance. The theory states that everything existing in our world is an imperfect copy of a Form (or Idea), which is a perfect object, timeless and unchanging, existing in a higher state of reality; for example there are many types of beds – double, single, four-poster, etc. – but they are only imperfect copies of the Form of the bed, which is the only real bed. Plato frowned upon the idea of painting a bed because the painting would merely be a copy of a copy, and hence even more flawed. The world of Forms contains not only the bed Form but a Form for everything else – tables, wristwatches, dogs, horses, etc. Forms are related to particulars (instances of objects and properties) in that a particular is regarded as a copy of its Form. For example, a particular apple is said to be a copy of the Form of Applehood, and the apple's redness is a copy of the Form of Redness. Participation is another relationship between Forms and particulars. Particulars are said to participate in the Forms, and the Forms are said to inhere in the particulars, e.g. redness inheres in an apple. Not all Forms are instantiated, but all could be. Forms are capable of being instantiated by many different particulars, which would result in the Form having many copies, or inhering in many particulars.

Needless to say, the world of the Forms was only accessible to philosophers, a view which justified the Philosopher Kings of the Republic, and casts philosophers in the same role as shamans and priests as people with exclusive access to worlds better than our own, and hence the basis of a ruling elite. That animals have ideal Forms is a view that bogged down biological science for centuries, as it rules out any notion of evolution. (The Republic also advocated such unsavoury practices as eugenics (dressed up as a rigged mating lottery); abolition of the family; censorship of art; and a caste-system based on a “noble lie” of the “myth of metals” (which I suppose is better than a war based on the ignoble lie of the myth of weapons of mass destruction). The Republic seems to have influenced Huxley’s Brave New World, Orwell’s 1984 and the Federation of Heinlein’s Starship Troopers).

The inherence criticism questions what it means to say that the form of something inheres in a particular or that the particular is a copy of the form. If the form is not spatial, it cannot have a shape, so the particular cannot be the same shape as the form.

Arguments against the inherence criticism claim that a form of something spatial can lack a concrete location and yet have abstract spatial qualities. An apple, for example, can have the same shape as its form. Such arguments typically claim that the relationship between a particular and its form is very intelligible and people apply Platonic theory in everyday life, for example “car”, “aeroplane”, “cat” etc don’t have to refer to specific vehicles, aircraft or cats.

Another criticism of forms relates to the origin of concepts without the benefit of sense-perception. For example, to think of redness-in-general is to think of the form of redness. But how can one have the concept of a form existing in a special realm of the universe, separate from space and time, since such a concept cannot come from sense-perception? Although one can see an apple and its redness, those things merely participate in, or are copies of, the forms. Thus to conceive of a particular apple and its redness is not to conceive of applehood or redness-in-general.

Platonic epistemology, however, addresses such criticism by saying that knowledge is innate and that souls are born with the concepts of the forms. They just have to be reminded of those concepts from back before birth, when they were in close contact with the forms in the Platonic heaven. Plato believed that each soul existed before birth with "The Form of the Good" and a perfect knowledge of everything. Thus, when something is "learned" it is actually just "recalled."

Plato stated that knowledge is justified true belief, i.e. if we believe something, have a good reason for doing so, and it is in fact true, then the belief is knowledge. For example, if I believe that the King’s Head sells London Pride (because I looked it up in the Good Beer Guide), I get a bus to the pub and see a Fullers sign outside, then I have knowledge that it sells London Pride. This view has been central to epistemological debate ever since Plato’s time.

Plato drew a sharp distinction between knowledge which is certain, and mere opinion which is not certain. Opinions derive from the shifting world of sensation; knowledge derives from the world of timeless forms, or essences. In the Republic, these concepts were illustrated using the metaphor of the sun, the divided line and the allegory of the cave.

Firstly, the metaphor of the sun is used for the source of "intellectual illumination", which Plato held to be The Form of the Good. The metaphor is about the nature of ultimate reality and how we come to know it. It starts with the eye, which is unusual among the sense organs in that it needs a medium, namely light, in order to operate. The strongest source of light is the sun; with it, we can discern objects clearly. By analogy, we cannot attempt to understand why intelligible objects are as they are and what general categories can be used to understand various particulars around us without reference to forms. "The domain where truth and reality shine resplendent" is Plato's world of forms, illuminated by the highest of all the forms – the Form of the Good. Since true being resides in the world of the forms, we must direct our intellects there to have knowledge. Otherwise we have mere opinion, i.e. that which is not certain.

Secondly, the divided line has two parts that represent the intelligible world and the smaller visible world. Each of those two parts is divided, the segments within the intelligible world represent higher and lower forms and the segments within the visible world represent ordinary visible objects and their shadows, reflections, and other representations. The line segments are unequal and their lengths represent "their comparative clearness and obscurity" and their comparative "reality and truth," as well as whether we have knowledge or instead mere opinion of the objects. Hence, we are said to have relatively clear knowledge of something that is more real and "true" when we attend to ordinary perceptual objects like rocks and trees; by comparison, if we merely attend to their shadows and reflections, we have relatively obscure opinion of something not quite real.

Finally Plato drew an analogy between human sensation and the shadows that pass along the wall of a cave - the allegory of the cave. Prisoners inside a cave see only the shadows of puppets in front of a fire behind them. If a prisoner is freed, he learns that his previous perception of reality was merely a shadow and that the puppets are more real. If the learner moves outside of the cave, they learn that there are real things of which the puppets are themselves mere imitations, again achieving a greater perception of reality. Thus the mere opinion of viewing only shadows is steadily replaced with knowledge by escape from the cave, into the world of the sun and real objects. Eventually, through intellectualisation, the learner reaches the forms of the objects – i.e. their true reality.

© Christopher Seddon 2008

Sunday, 27 January 2008

Radiometric dating techniques

A major problem for archaeologists and palaeontologists is the reliable determination of the ages of artefacts and fossils.

As far back as the 17th Century, the Danish geologist Nicolas Steno proposed the Law of Superposition for sedimentary rocks, noting that sedimentary layers are deposited in a time sequence, with the oldest at the bottom. Over a hundred years later, the British geologist William Smith noticed that sedimentary rock strata contain fossilised flora and fauna, and that these fossils succeed each other from top to bottom in a consistent order that can be identified over long distances. Thus strata can be identified and dated by their fossil content. This is known as the principle of faunal succession. Archaeologists apply a similar principle: artefacts and remains that are buried deeper are usually older.

Such techniques can provide reliable relative dating along the lines of “x is older than y”, but providing reliable absolute values for the ages of x and y is harder. Before the introduction of radiometric dating in the 1950s, dating was a rather haphazard affair involving assumptions about the diffusion of ideas and artefacts from centres of civilization where written records were kept and reasonably accurate dates were known. For example, it was assumed – quite incorrectly, as it later turned out – that Stonehenge was more recent than the great civilization of Mycenaean Greece.

The idea behind radiometric dating is fairly straightforward. The atoms of which ordinary matter is composed each comprise a positively charged nucleus surrounded by a cloud of negatively charged electrons. The nucleus itself is made up of a mixture of positively charged protons and neutral neutrons. The atomic weight is the total number of protons plus neutrons in the nucleus, and the atomic number is the number of protons only. The atom as a whole has the same number of electrons as it does protons, and is thus electrically neutral. It is the number of electrons (and hence the atomic number) that dictates the chemical properties of an atom, and all atoms of a particular chemical element have the same atomic number; thus, for example, all carbon atoms have an atomic number of six. However the atomic weight is not fixed for atoms of a particular element, i.e. the number of neutrons they have can vary. For example carbon can have 6, 7 or 8 neutrons, so carbon atoms with atomic weights of 12, 13 and 14 can exist. Such “varieties” are known as isotopes.

The physical and chemical properties of the various isotopes of a given element vary only very slightly, but the nuclear properties can vary dramatically. For example, naturally-occurring uranium consists largely of U-238 with only a very small proportion of U-235. It is only the latter that can be used as a nuclear fuel – or to make bombs. Many elements have unstable, or radioactive, isotopes. Atoms of an unstable isotope will over time decay into “daughter products” by internal nuclear change, usually involving the emission of charged particles. For a given radioisotope this decay takes place at a consistent rate, which means that the time taken for half the atoms in a sample to decay – the so-called half-life – is fixed for that radioisotope. If an initial sample is 100 grams, then after one half-life there will only be 50 grams left, after two half-lives have elapsed only 25 grams will remain, and so on.
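The 100-gram example above follows directly from the decay law N(t) = N₀ × 0.5^(t/half-life). A minimal sketch of the arithmetic (the sample sizes are the illustrative figures from the text, not real measurements):

```python
# Decay law: N(t) = N0 * 0.5 ** (t / half_life)

def remaining(initial_grams, half_life_years, elapsed_years):
    """Mass of the parent radioisotope left after elapsed_years."""
    return initial_grams * 0.5 ** (elapsed_years / half_life_years)

# The 100 g example from the text, using C-14's 5,730-year half-life:
print(remaining(100, 5730, 5730))      # one half-life  -> 50.0
print(remaining(100, 5730, 2 * 5730))  # two half-lives -> 25.0
```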

It is upon this principle that radiometric dating is based. Suppose a particular mineral contains an element x which has a number of isotopes, one of which is radioactive and decays to element y with a half-life of t. The mineral when formed does not contain any element y, but as time goes by more and more y will be formed by decay of the radioisotope of x. Analysis of a sample of the mineral for the amount of y it contains will enable its age to be determined, provided the half-life t and the isotopic abundance of the radioisotope are known.
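Turning that principle around gives the age: assuming the mineral contained none of element y when it formed, the measured ratio of daughter y to remaining parent x fixes the number of elapsed half-lives, since t = half-life × log₂(1 + D/P). A hedged sketch (the 1.25-billion-year half-life is roughly that of K-40, the isotope discussed later; the ratio is illustrative):

```python
import math

def age_from_ratio(daughter_to_parent, half_life_years):
    """Age of a mineral from the measured daughter/parent ratio,
    assuming no daughter element was present when it formed:
    t = half_life * log2(1 + D/P)."""
    return half_life_years * math.log2(1 + daughter_to_parent)

# A ratio of 1.0 means half the parent has decayed: exactly one half-life.
print(age_from_ratio(1.0, 1.25e9))  # -> 1250000000.0 years
```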

The best-known form of radiometric dating is that involving radiocarbon, or C-14. Carbon – as noted above – has three isotopes. C-12 (the most common form) and C-13 are stable, but C-14 is radioactive, with a half-life of 5730 years, decaying to N-14 (an isotope of nitrogen) and releasing an electron in the process (a process known as beta decay). This is an infinitesimal length of time in comparison to the age of the Earth and one might have expected all the C-14 to have long since decayed. In fact the terrestrial supply is constantly being replenished from the action of interstellar cosmic rays upon the upper atmosphere where moderately energetic neutrons interact with atmospheric nitrogen to produce C-14 and hydrogen. Consequently all atmospheric carbon dioxide (CO2) contains a very small but measurable percentage of C-14 atoms.

The significance of this is that all living organisms absorb this carbon either directly (as plants photosynthesising) or indirectly (as animals feeding on the plants). The percentage of C-14 out of all the carbon atoms in a living organism will be the same as that in the Earth’s atmosphere. The C-14 atoms it contains are decaying all the time, but these are replenished for as long as the organism lives and continues to absorb carbon. But when it dies it stops absorbing carbon, the replenishment ceases and the percentage of C-14 it contains begins to fall. By determining the percentage of C-14 in human or animal remains or indeed anything containing once-living material, such as wood, and comparing this to the atmospheric percentage, the time since death occurred can be established.
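The comparison of a sample's C-14 level with the atmospheric level amounts to solving the decay law for elapsed time. A sketch using the modern (Cambridge) half-life of 5,730 years; the fractions are illustrative:

```python
import math

CAMBRIDGE_HALF_LIFE = 5730  # years

def radiocarbon_age(fraction_remaining, half_life=CAMBRIDGE_HALF_LIFE):
    """Years since death, given the sample's C-14 level as a fraction
    of the atmospheric (living) level."""
    return half_life * math.log2(1 / fraction_remaining)

print(round(radiocarbon_age(0.5)))   # half the atmospheric level -> 5730
print(round(radiocarbon_age(0.25)))  # a quarter of it            -> 11460
```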

This technique was developed by Willard Libby in 1949 and revolutionised archaeology, earning Libby the Nobel Prize for Chemistry in 1960. The technique does however have its limitations. Firstly it can only be used for human, animal or plant remains – the ages of tools and other artefacts can only be inferred from datable remains, if any, in the same context. The second is that it has only a limited “range”: beyond 60,000 years (10 half-lives) the percentage of C-14 remaining is too small to be measured, so the technique cannot be used much further back than the late Middle Palaeolithic. Another problem is that the cosmic ray flux that produces C-14 in the upper atmosphere is not constant, as was once believed. Variations have to be compensated for by calibration curves, based on samples whose age can be attested by independent means such as dendrochronology (counting tree-rings). Finally, great care must be taken to avoid any contamination of the sample in question with later material, as this will introduce errors.

The conventions for quoting dates obtained by radiocarbon dating are a source of considerable confusion. They are generally quoted as Before Present (BP), but “present” in this case is taken to be 1950. Calibrated dates can be quoted, but quite often a quoted date will be left uncalibrated. Uncalibrated dates are given in “radiocarbon years” BP. Calibrated dates are usually suffixed (cal), but “present” is still taken to be 1950. To add to the confusion, Libby’s original value for the half-life of C-14 was later found to be out by 162 years. Libby’s value of 5568 years, now known as the “Libby half-life”, is rather lower than the currently-accepted value of 5730 years, which is known as the Cambridge half-life. Laboratories, however, continue to use the Libby half-life! In fact this does make sense, because quoting all raw uncalibrated data to a consistent standard means that any uncalibrated radiocarbon date in the literature can be converted to a calibrated date by applying the same set of calculations. Furthermore, the quoted dates are “future-proofed” against any further revision of the C-14 half-life or refinement of the calibration curves.
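Since the two half-lives differ by a fixed factor, rescaling a quoted uncalibrated age from the Libby to the Cambridge half-life is a single multiplication by 5730/5568 ≈ 1.029. A sketch of just that rescaling step (it does not perform calibration-curve correction):

```python
LIBBY_HALF_LIFE = 5568      # years; used for quoted uncalibrated dates
CAMBRIDGE_HALF_LIFE = 5730  # years; the currently accepted value

def libby_to_cambridge(age_bp):
    """Rescale an uncalibrated radiocarbon age from the Libby half-life
    to the Cambridge half-life (a multiplication by about 1.029)."""
    return age_bp * CAMBRIDGE_HALF_LIFE / LIBBY_HALF_LIFE

print(round(libby_to_cambridge(10000)))  # -> 10291
```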

If one needs to go back further than 60,000 years other techniques must be used. One is potassium-argon dating, which relies on the decay of radioactive potassium (K-40) to Ar-40. Due to the long half-life of K-40, the technique is only useful for dating minerals and rocks that are more than 100,000 years old. It has been used to bracket the age of archaeological deposits at Olduvai Gorge and other east African sites with a history of volcanic activity by dating lava flows above and below the deposits.

© Christopher Seddon 2008

Wednesday, 23 January 2008

The Day of the Triffids, by John Wyndham (1951)

Science Fiction does not often make an appearance on the school curriculum, but The Day of the Triffids is one work that has been required reading for generations of pupils. I first encountered the book nearly forty years ago, in fact just months after the death of its author at the comparatively early age of 66. At school, I must confess, my enthusiasm for Wuthering Heights, Return of the Native and I Claudius was (shamefully!) less than these great works warranted. But The Day of the Triffids was unputdownable. Instead of reading the two chapters set for homework that evening, I read the entire book!

It is reasonable to say that I could have been presented with many other works of science fiction and devoured them with equal gusto. Few of these would be regarded as great works of SF, let alone English Literature. But no other book has ever appealed to two more differing arbiters of what constitutes a good read, myself at the age of fourteen and those seemingly determined to stuff down pupils' throats the dullest books imaginable.

So why is a somewhat dated science fiction novel, written from a seemingly rather prim post-war middle class perspective, still popular now - more than half a century after it was written?

Read the first few pages and you will see why. There is something for everybody, from the most inattentive schoolboy to the stodgiest academic. The first line is one of the finest opening sentences to any book ever written, SF or otherwise....

When a day you happen to know is Wednesday starts off by sounding like Sunday, there is something seriously wrong somewhere.

Tension mounts immediately as we sense that the hospitalised narrator, not named until the tenth page as Bill Masen, is helpless. Realisation is slow to come that he is blind - at least temporarily so. His eyes are bandaged following emergency treatment to save his sight. And his plight is nightmarish. Not just the hospital, but the world outside, has apparently ceased to function. Nothing can be heard - not a car, not even a distant tugboat. Nothing but church clocks, with varying degrees of accuracy, announcing first eight o'clock, then quarter-past, then nine...
We learn that the previous night, the whole Earth had been treated to a magnificent display of green meteors, believed at first to be comet debris. Masen is bitterly disappointed at being one of the few people to miss the display. He wonders whether the whole hospital, indeed the whole of London, has made such a night of it that nobody has yet pulled round. Eventually, he takes off the bandages, which were in any case due to come off, by himself. He is greatly relieved to find that he can see - but he soon finds out that he is one of the few people left who can.

The hospital has been transformed into a Doréan nightmare of blinded patients milling helplessly around. The only doctor Masen encounters hurls himself from a fifth floor window after finding his telephone is dead. After giving only cursory consideration to trying to help the blinded, he flees the hospital. What, he rationalises, would he do if he did succeed in leading them outside? It is already becoming apparent that the scale of the disaster extends way beyond the hospital. He makes for the nearest pub, desperately in need of a drink. But this is a nightmare from which there is no escape. The pub landlord is also blind, to say nothing of blind drunk. He blames the meteor shower for his condition. He says that having discovered their children were also blinded, his wife gassed them and herself, and he intends to join them once he is drunk enough.
Anybody who describes this as "cosy catastrophism" really needs to re-read just this first chapter to be firmly disabused of the notion.

At a single stroke, mankind's complex civilisation has been brought down, all but a tiny handful of the world's population blinded. Nor is this the extent of humanity's troubles. Within hours, triffids have broken out of captivity and are running amok, and within a week London is smitten by plague. Only near the end of the book do we learn that mankind, in all probability, brought this triple-whammy down upon himself.

The Day of the Triffids is set in the near future, although no date is given. Masen, who is apparently an only child, is in his late twenties when the story begins, and his father had reached adulthood before the war. The catastrophe, which turns out to have been caused by a satellite weapon accidentally set off in space rather than close to the ground, probably occurs around 1980.

Masen lives in a world in which food shortages are the biggest challenge to mankind. The triffid, a mobile carnivorous plant equipped with a lethal sting, is being farmed world-wide as a source of vegetable oil and cattle-food. Originally bred in secret in the Soviet Union, they are distributed world-wide when an attempt to steal a case of fertile triffid seeds backfires. Masen himself is making a successful career in the triffid business and is hospitalised when one stings him in the eyes - thus it is the triffids who are responsible for his escaping the almost universal blindness.

The story follows the adventures of Masen and fellow-survivor Josella Playton and explores the differing attempts of various groups to deal with the catastrophe. Some want to somehow cling on to a vestige of the social and moral status quo, others see the situation as an opportunity for personal advancement. The well-meaning but ultimately hopeless attempts of Wilfred Coker to keep as many blind people alive for as long as possible end in failure within a week when the plague strikes. Miss Durrant's attempt to build a Christian community fares little better, and it too succumbs to the plague. The dictator Torrence tries to set up a feudal state, using the blind as slave labour, fed upon mashed triffid.

From the start, though, Masen and Ms. Playton take the same view as Michael Beadley, the avuncular leader of a group of survivors holed up in Senate House. Nothing can be done for the vast majority of the blind - mankind's best hope for the future is to set up a community of largely sighted survivors, in a place of comparative safety.

Thus Wyndham explores from different angles the question of how ordinary people face up to the task of trying to run a small community, something that is quite challenging under even normal circumstances, with everybody seemingly having different views on how things should be done.

Coker's shenanigans see to it that many adventures must pass before Masen and Ms. Playton eventually link up with Beadley's group, by now ensconced on a triffid-free Isle of Wight.

The Day of the Triffids has been likened to Orwell's Nineteen Eighty-four for both its cold-war extrapolations and its gloomy perspective of misery for evermore. But this view is wrong on both counts. Wyndham's remarks about the Soviet Union could have been written by almost any author between the end of the war and the rise of Mikhail Gorbachev. And despite the magnitude of the disaster to have overtaken mankind, the tone of The Day of the Triffids is an optimistic one. Its recurring message is that a portion of mankind has been spared to begin again, and the human race has in fact escaped the even worse fate that was becoming increasingly inevitable in a world threatened by both global nuclear war and mass starvation. The triffids' possession of the world will be a temporary thing, and in the last paragraph of the book, Wyndham suggests that research into ways to destroy them is well underway. Within two or three generations at most, mankind will be in a position to strike back and reclaim all he has lost.

It is perhaps the upbeat endings and veneer of British middle-class values, a constant feature of Wyndham's work, which fools people into labelling him with the "cosy catastrophe" tag. In fact, there is much more to his work than met even my enthusiastic eye when, in the Autumn of 1969, I first encountered an author I still count as one of my great favourites.

The Day of the Triffids was made into a truly appalling Hollywood movie (1963), starring country and western singer Howard Keel, and a superior BBC television series (1981). Simon Clark wrote a sequel, The Night of the Triffids, in 2001. My personal feeling is that another movie version is long overdue.

© Christopher Seddon 2008

Saturday, 19 January 2008

Nightfall, by Isaac Asimov

If the stars should appear one night in a thousand years, how would men believe and adore, and preserve for many generations the remembrance of the city of God?

In 1941, this quote by American poet Ralph Waldo Emerson inspired a young and then little-known science fiction writer to produce what is arguably the greatest science fiction story of all time.

On the planet Lagash, a group of astronomers try to warn a disbelieving public that a doomsday cult is correct and the end of the world is indeed nigh. Lagash is one of the most remarkable planets in the galaxy - it is part of a system comprising six suns, of which at least one is always in the sky. Night is unknown - or almost unknown.

The astronomers, investigating anomalies in Lagash's orbit, which threaten to overturn the recently established Law of Universal Gravitation, have made an alarming discovery. The problem with the orbit can be resolved by postulating that Lagash has a hitherto undiscovered moon, invisible in the glare of the eternal day. When the moon's orbit is calculated, the astronomers learn that it can cause an eclipse of one of the suns, the red dwarf Beta. The phenomenon can only occur with Beta alone in its hemisphere, at maximum distance from Lagash, with the moon at minimum distance - a configuration that only occurs every 2049 years. The eclipse covers the entire planet and lasts well over half a day, so that no spot on Lagash escapes being plunged into darkness.

The psychological effects on a population unused to darkness will be catastrophic - and an eclipse is imminent....

WARNING: SPOILER ALERT!

One of the reasons Nightfall is such a powerful tale is the mounting sense of terror Asimov manages to convey to his readers in his description of what is after all an everyday occurrence here on earth - the fall of dusk. He does this by the clever choice of a red dwarf as the sun that is eclipsed. He describes Beta as "glowering redly at zenith, dwarfed and evil" and makes frequent comparisons between its red light and blood. As the eclipse proceeds, the sky is described as turning "a horrible deep purple-red". It is powerful, almost apocalyptic stuff.

No less intense is the description of the claustrophobia experienced by the group of astronomers as the gloom deepens. Outside, even the insects are frightened into silence.

Few short stories manage to draw together as many diverse, thought-provoking ideas as Nightfall. Archaeological records that tell of a series of earlier civilisations, all destroyed by fire at the height of their culture; a doomsday cult that claims Lagash enters a cave every 2050 years, plunging it into darkness; and a fairground ride that has caused people to go mad and even die of fright - all this inexorably heightens the sense of impending doom.

In 1990, almost half a century after Nightfall first appeared, Asimov collaborated with Robert Silverberg to produce a novel based on the original short story.

When two of the world's greatest SF writers team up on such a project, expectations are bound to be very high and this was possibly why Nightfall the novel met with a mixed reception. Some loved it, but many hated it, going as far as to describe it as the weakest offering from either author in a decade. IMHO, the truth lies somewhere between the two extremes.

The first two-thirds of the novel expands on the events and ideas described in the short story. The two versions are very consistent, even featuring the same characters, though with the addition of an archaeologist, who makes the crucial discoveries about the planet's past history. There are some trifling name changes - the six suns are given proper names rather than Greek letters, and for some reason the planet itself is renamed Kalgash. (We will conjecture that Kalgash is a more accurate English rendering of the planet's name, just as Peking is now usually referred to as Beijing. For simplicity, though, I will continue to use the original names.)

The last third of the novel follows events after the eclipse, as survivors who have retained their faculties try to regroup in a world rapidly reverting to feudalism. I have to agree with those who say that the ending is weak. It is true that the idea of using religious superstition to hold together a disintegrating society also appears in Asimov's Foundation Trilogy, but an open ending with the feudal leaders, cultists and scientists battling for control of Lagash would have been better.

The novel's strong point is that it paints a picture of day-to-day life on a world very different to Earth in some ways, yet very similar in others. It develops and draws together the same diverse ideas as the original, with a scientific community and general public reacting to events in a manner that is completely believable.

We learn that Lagash is centuries behind Earth in the sciences of astronomy, cosmology and physics, but at a similar level in terms of engineering and technology. Presumably, though, the Lagashans have not yet managed to send even an unmanned vehicle beyond their atmosphere, or they would have learned of the existence of the Stars. With gravitation such a recent discovery, though, this is hardly surprising.

We also learn something about the system to which Lagash belongs. The planet orbits a yellow sun at a distance of ten light minutes (slightly further than Earth is from the Sun), there is a binary pair of blue suns at one hundred and ten light minutes away (somewhat closer than Uranus is from Earth) and the system also comprises a red dwarf and a binary pair of white suns.
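
These figures are easy to check, assuming one astronomical unit is about 8.32 light minutes (sunlight takes roughly 499 seconds to reach Earth); the constant below is mine, not the novel's.

```python
LIGHT_MINUTES_PER_AU = 499.0 / 60.0  # ~8.32 light minutes per astronomical unit

def light_minutes_to_au(light_minutes):
    """Convert a distance in light minutes to astronomical units."""
    return light_minutes / LIGHT_MINUTES_PER_AU
```

Ten light minutes works out at about 1.2 AU, indeed slightly further than Earth is from the Sun, and one hundred and ten light minutes at about 13 AU, comfortably inside the orbit of Uranus (roughly 19 AU from the Sun).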

The problem with the novel is that it exposes the intriguing and unusual elements that make up the story to a scrutiny under which they cannot entirely hold up.

Just how valid is the story's central premise, that Darkness combined with the Stars will cause universal madness among a people utterly unused to such things? Is something going to cause madness simply because it has not been previously experienced and is unnatural? For example, for 99.9 percent of his history, mankind was utterly unused to flying. To man, a primate, flying is completely unnatural. Yet millions now do so every year without going mad. Even those with a fear of flying can generally tolerate it (exceptions include the former Arsenal and Netherlands footballer Dennis Bergkamp, and (allegedly) The Good Doctor himself).

We must also question whether an advanced technological society could evolve given the handicap of a pathological fear of darkness. On Earth, after all, dependency on artificial lighting, even during daytime, has always been perfectly normal. Underground mines have existed since prehistoric times. But would Neolithic and Bronze Age man have constructed them faced with a deep-rooted phobia of entering such places and knowing that they risked instant madness were their crude illumination to fail? Without the Bronze Age, the science of metallurgy and all subsequent human advances would never have happened.

Crucial to the plot is the fact that Lagash's moon cannot be seen in the eternal daylight due to its being composed of bluish rock. Would this be the case? Earth's moon, composed of greyish rock (which will have a lower albedo), is easy to see by day. Possibly the Lagashan eye is less sensitive to relatively faint objects than the human eye (but it is curious that their eyes can dark-adapt like ours. How did this ability evolve on Lagash?).

Even if the moon cannot normally be seen, what happens during the total eclipse of Beta? Surely the moon, illuminated by the light of the other suns, would become visible. With these suns shining on it from various angles it would appear full - and at minimum distance, seven times the apparent diameter of Beta, almost certainly bright enough to drown out all but the brightest Stars. (We can rule out the possibility of Lagash itself eclipsing its moon, since one of the other suns set only four hours prior to totality.)

If only in comparison to the stunning original version, Nightfall doesn't entirely succeed as a novel and for this reason, the short story remains the definitive version.

© Christopher Seddon 2008

Saturday, 12 January 2008

The rise and fall of the quartz watch

To those of us old enough to remember it, the autumn of 1973 was not perhaps what Charles Dickens would have classified as “the best of times”. War had broken out in the Middle East, the Watergate scandal was making life difficult for the newly-re-elected Richard Nixon and the late and thoroughly unlamented General Pinochet had just seized power in Chile. Britain had begun the year joining the EEC (the forerunner of the EU) but was now in the grip of the Three Day Week as the confrontation between the Tory Prime Minister Edward Heath and the miners showed no sign of abating. Inflation was spiralling out of control and recession seemed inevitable.

It would have been about that time that I saw in the window of a jeweller’s shop in Wendover in Buckinghamshire something that caught my imagination – a Seiko quartz watch. I knew from the encyclopaedia that we had had at home since my early childhood that a quartz clock was an extremely accurate timepiece, but it was complete news to me that somebody had managed to shrink the complex electronics to the size of something that could be fitted into a wristwatch. In fact the first quartz watches appeared in Japan in 1969, but it obviously took time for them to make their way to the Home Counties (it must also be remembered that widespread access to the internet was still a quarter of a century off).

The watch had a claimed accuracy of 1 minute per year, which was quite sensational because even a well-regulated mechanical watch could – and still can – be off by that amount in a few days. It cost £100 – a considerable sum of money at the time. Soon after, Seiko began marketing their watches very actively in the UK with the advertising tag “Some day all watches will be made this way”.
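
To put the two figures side by side, an annual rating converts readily to daily drift and to a rate error in parts per million (a rough illustration; real specifications are usually quoted as a plus-or-minus range).

```python
SECONDS_PER_YEAR = 365.25 * 86_400

def drift_per_day(seconds_per_year):
    """Daily drift implied by an annual accuracy rating."""
    return seconds_per_year / 365.25

def rate_error_ppm(seconds_per_year):
    """The same rating expressed as a rate error in parts per million."""
    return seconds_per_year / SECONDS_PER_YEAR * 1e6
```

One minute per year is under 0.2 seconds per day, a rate error of about 2 parts per million; a mechanical watch drifting 10 seconds a day would be a minute out in under a week.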

Rarely if ever has an advertising slogan proved more accurate; within a decade the mechanical wristwatch had all but disappeared from the windows of high street retailers. The first cheap quartz watches appeared around the second half of 1975. Unlike the analogue Seiko, these watches featured digital displays. The first models used light emitting diode (LED) displays of the type used by the electronic calculators of that time (calculators were also considered cool cutting-edge gadgets in the mid ‘70s) but had the major disadvantage that it was necessary to press a button in order to read off the time (I possessed one made by Samsung – a company virtually unknown in the West at the time). This type of display soon gave way to the now-familiar liquid crystal display (LCD) still found in brands like the ever-popular Casio G-Shock. A watch where one can read off the time as – say – 1:52 PM rather than “just after ten to two” might seem to be at a major advantage, but here the quartz revolution stuttered slightly. Most people actually preferred the older analogue displays, and these days the majority of wristwatches have this type of display.

For the Swiss watch industry, quartz represented a major challenge. What happened next is best considered through the very different directions taken by two of Switzerland’s most prominent watchmakers – Rolex and Omega. Omega embraced the new technology full on. In 1974 they launched the Megaquartz Marine Chronometer, which remains to this day the most accurate wristwatch ever made. But – not helped by the adverse economic conditions of the time – Omega struggled and only within the last decade has the brand begun to regain its former strength. Rolex for their part did absolutely nothing. They carried on making exactly the same models – and they kept on selling! This policy was successful - today Rolex is by far the world's largest producer of luxury wristwatches. It was many years before they even bothered to produce a quartz watch – the Oysterquartz. But despite an accuracy of 5 seconds per year – not far off the Omega Megaquartz – it was not a success and was eventually discontinued.

Round about the end of the 1980s the tide turned, as more and more purchasers of high-end watches began to reject quartz in favour of traditional mechanicals. Why, one might ask, when a quartz watch is so much more accurate? There are a number of possible reasons – one obvious advantage a mechanical watch has over its quartz counterpart is that it never needs a battery. But battery-less technologies such as eco-drive (solar) and kinetic (rotor-driven dynamo) have largely failed to penetrate the high-end market. And in any case changing the battery every few years is far cheaper and less time-consuming than the regular servicing mechanical watches require to keep them in working order.

The answer is to some extent to be found with the so-called “display back”. Many mechanical watches now have a transparent back, so the movement can be viewed. Look at the intricate and exquisitely-finished movement in a Patek Philippe or a Lange and compare it with an electronic chip. No contest! Even the nicely-decorated UNITAS hand-wound movements found in many mid-range watches such as the Stowa Marine Original beat a quartz movement hands down in the beauty stakes. To be blunt, one is a micro-machine, a marvel of precision engineering; the other is nothing more than an electrical appliance.

Today the vast majority of luxury watches are mechanical. Most of the high-end quartz watches, such as the Omega Megaquartz, the Rolex Oysterquartz and the Longines Conquest VHP, have long since ceased production. The Citizen Chronomaster, rated to within 5 seconds a year, remains a current model but it is not widely available outside of Japan. The advent of radio control, whereby a watch can synchronize itself to the time signals from Rugby, Frankfurt, Colorado etc has meant that super-accurate quartz movements are now largely redundant, virtually killing off innovation in the field. Most modern quartz watches, when not synchronized to a time signal, are actually far less accurate than the Seiko I saw in that jeweller’s shop window almost three and a half decades ago.

© Christopher Seddon 2008

Monday, 7 January 2008

Biological Classification and Systematics

The Linnaean classification

Scientific classification or biological classification is how species both living and extinct are grouped and categorized. Man’s desire to classify the natural world seems to be very deep rooted and the fact that many traditional societies have highly sophisticated taxonomies suggests the practice goes back to prehistoric times. However the earliest system of which we have knowledge was that of Aristotle, who divided living organisms into two groups – animals and plants. Animals were further divided into three categories - those living on land, those living in the water and those living in the air, and were in addition categorised by whether or not they had blood (those “without blood” would now be classed as invertebrates). Plants were categorised by differences in their stems.

Aristotle’s system remained in use for hundreds of years but by the 16th Century, man’s knowledge of the natural world had reached a point where it was becoming inadequate. Many attempts were made to devise a better system, but the science of biological classification remained in a confused state until the time of Linnaeus, who published the first edition of his Systema Naturae in 1735. In this work, he re-introduced Gaspard Bauhin’s binomial nomenclature and grouped species according to shared physical characteristics for ease of identification. The scheme of ranks, as used today, differs very little from that originally proposed by Linnaeus. A taxon (plural taxa), or taxonomic unit, is a grouping of organisms. A taxon will usually have a rank and can be placed at a particular level in the hierarchy.

The ranks in general use, in hierarchical order, are as follows:

Domain
Kingdom
Phylum (animals or plants) or Division (plants only)
Class
Order
Cohort
Family
Tribe
Genus
Species

The prefix super- indicates a rank above; the prefix sub- indicates a rank below. The prefix infra- indicates a rank below sub-. For instance:

Superclass
Class
Subclass
Infraclass
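
The prefix system is regular enough to be generated mechanically. A minimal sketch (the function name is mine; rank names follow the lists above):

```python
PREFIXES = ["Super", "", "Sub", "Infra"]  # ordered from highest rank to lowest

def expand(base_rank):
    """All levels around a base rank, highest first, e.g. expand("Class")."""
    return [prefix + base_rank.lower() if prefix else base_rank
            for prefix in PREFIXES]
```

So expand("Class") yields Superclass, Class, Subclass and Infraclass, and the same pattern applies to Order, Family and the rest.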

Even higher resolution is sometimes required, and divisions below infra- (e.g. parvorder) are occasionally encountered. Domains are a relatively new grouping. The three-domain system (Archaea, Bacteria and Eukaryota) was first proposed in 1990 (Woese), but not generally accepted until later. Many biologists still use the older five-kingdom system (Whittaker). The main characteristic of the three-domain system is the separation of Archaea and Bacteria, previously grouped into the single prokaryote kingdom Bacteria (sometimes Monera). As a compromise, some authorities add Archaea as a sixth kingdom.

It should be noted that taxonomic rank is relative, and restricted to the particular scheme used. The idea is to group living organisms by degrees of relatedness, but it should be borne in mind that rankings above species level are a bookkeeping device and not a fundamental truth. Groupings such as Reptilia are a convenience but are not proper taxonomic terms. One can become too obsessed with whether a thing belongs in one artificial category or another – e.g. is Pluto a planet, or (closer to home) does habilis belong in Homo or Australopithecus; does it really matter if we lump the robust australopithecines into Australopithecus or split them out into Paranthropus?

Systematics

Systematics is the study of the evolutionary relationships between organisms and of the grouping of organisms. There are three principal schools of systematics – evolutionary taxonomy (Linnaean or “traditional” taxonomy), phenetics and cladistics. Although there are considerable differences between the three in terms of the methodologies used, all seek to determine taxonomic relationships or phylogenies between different species or between different higher-order groupings, and all should, in principle, come to the same conclusions for the species or groups under consideration.

Some Terminology and concepts

One of the most important concepts in systematics is that of monophyly. A monophyletic group is a group of species comprising an ancestral species and all of its descendants, and so forming one (and only one) evolutionary group. Such a group is said to be a natural group. A paraphyletic group also contains a common ancestor, but excludes some of the descendants that have undergone significant changes. For instance, the traditional class Reptilia excludes birds even though they evolved from an ancestral reptile. A polyphyletic group is one in which the defining trait evolved separately in different places on the phylogenetic tree and hence does not contain all the common ancestors, e.g. warm-blooded vertebrates (birds and mammals, whose common ancestor was cold-blooded). Such groups are usually defined as a result of incomplete knowledge. Organisms forming a natural group are said to form a clade, e.g. the amniotes. If however the defining feature has not arisen within a natural group, it is said to be a grade, e.g. flightless birds (flight has been given up by many unrelated groups of birds).
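
These definitions translate directly into a test on a phylogenetic tree: a group is monophyletic precisely when it contains every living descendant of its members’ most recent common ancestor. A minimal sketch, using a toy amniote tree with made-up internal node names:

```python
# Toy tree as child -> parent links; names of ancestral nodes are invented.
PARENT = {
    "lizards": "reptile_ancestor",
    "crocodiles": "archosaur_ancestor",
    "birds": "archosaur_ancestor",
    "archosaur_ancestor": "reptile_ancestor",
    "reptile_ancestor": "amniote_ancestor",
    "mammals": "amniote_ancestor",
}

def ancestors(node):
    """The node itself plus every ancestor, walking up to the root."""
    out = [node]
    while node in PARENT:
        node = PARENT[node]
        out.append(node)
    return out

def mrca(taxa):
    """Most recent common ancestor of a set of taxa."""
    paths = [ancestors(t) for t in taxa]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    return next(n for n in paths[0] if n in common)

def leaf_descendants(node):
    """All tip taxa descended from (or equal to) the given node."""
    leaves = {t for t in PARENT if t not in PARENT.values()}
    return {t for t in leaves if node in ancestors(t)}

def is_monophyletic(taxa):
    """True if taxa comprise ALL living descendants of their common ancestor."""
    return leaf_descendants(mrca(taxa)) == set(taxa)
```

On this tree {birds, crocodiles} passes the test, while {lizards, crocodiles} fails: the group omits birds, a descendant of the same ancestor, and so is paraphyletic, just like the traditional Reptilia.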

Characters are attributes or features of organisms or groups of organisms (taxa) that biologists use to indicate relatedness or lack of relatedness to other organisms or groups of organisms. A character can be just about anything that can be measured, from a morphological feature to a part of its genetic makeup. Characters in organisms that are similar due to descent from a common ancestor are known as homologues, and it is crucial to systematics to determine whether the characters under consideration are indeed homologous. Wings are homologous if we are comparing two birds; but if a bird is compared with, say, a bat, they are not, having arisen through convergent evolution, a process whereby structures similar in appearance and function appear in unrelated groups of organisms. Such characters are known as homoplasies. Convergences are not the same as parallelisms, which are similar structures that have arisen more than once in species or groups within a single extended lineage, and have followed a similar evolutionary trajectory over time.

Character states can be either primitive or derived. A primitive character state is one that has been retained from a remote ancestor; derived character states are those that originated more recently. For example, the backbone is a defining feature of the vertebrates and is a primitive state when considering mammals; but the mammalian ear is a derived state, not shared with other vertebrates. However, these things are relative. If one considers Phylum Chordata as a whole, the backbone is a derived state of the vertebrates, not shared with the acrania or the tunicates. If a character state is primitive at the point of reference, it is known as a plesiomorphy; if it is derived, it is known as an apomorphy (note that a “primitive” trait in this context does not mean it is less well adapted than one that is not).

Current schools of thought in classification methodology

Biologists devote much effort to identifying and unambiguously defining monophyletic taxa. Relationships are generally presented in tree-diagrams or dendrograms known as phenograms, cladograms or evolutionary trees depending on the methodology used. In all cases they represent evolutionary hypotheses i.e. hypotheses of ancestor-descendant relationships.

Phenetics, also known as numerical taxonomy, was developed in the late 1950s. Pheneticists avoid all considerations of the evolution of taxa and seek instead to construct relationships based on overall phenetic similarity (which can be based on morphological features, or protein chemistry, or indeed anything that can be measured), which they take to be a reflection of genetic similarity. By considering a large number of randomly-chosen phenotypic characters and giving each equal weight, the sums of differences and similarities between taxa should serve as the best possible measure of genetic distance and hence degree of relatedness. The main problem with the approach is that it tends to group taxa by degrees of difference rather than by shared similarities. Phenetics won many converts in the 1960s and 1970s, as more and more “number crunching” computer techniques became available. Though it has since declined in popularity, some believe it may make a comeback (Dawkins, 1986).
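
At its simplest, the phenetic approach reduces to a distance calculation over a character matrix. A sketch with an invented presence/absence matrix (real studies use far more characters, and then apply a clustering method such as UPGMA to the resulting distance matrix):

```python
# Invented presence/absence matrix: four taxa scored for five characters
# (say: jaws, bony skeleton, four limbs, amniote egg, warm-bloodedness).
CHARACTERS = {
    "shark":  [1, 0, 0, 0, 0],
    "trout":  [1, 1, 0, 0, 0],
    "lizard": [1, 1, 1, 1, 0],
    "pigeon": [1, 1, 1, 1, 1],
}

def phenetic_distance(a, b):
    """Proportion of equally-weighted characters in which two taxa differ."""
    xs, ys = CHARACTERS[a], CHARACTERS[b]
    return sum(x != y for x, y in zip(xs, ys)) / len(xs)
```

Here sharks and trout differ in one character out of five (distance 0.2), while sharks and pigeons differ in four out of five (0.8); grouping then proceeds purely on these distances, with no reference to ancestry.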

By contrast, cladistics is based on the goal of producing testable hypotheses of genealogical relationships among monophyletic groups of organisms. Cladistics originated with Willi Hennig in 1950 and has grown in popularity since the mid-1960s. Cladists rely heavily on the concept of primitive versus derived character states, identifying homologies as plesiomorphies and apomorphies. Apomorphies restricted to a single species are referred to as autapomorphies, whereas those shared between two or more species or groups are known as synapomorphies.

A major task for cladists is identifying which is the plesiomorphic and which is the apomorphic form of two character states. A number of techniques are used; a common approach is outgroup analysis, in which clues to ancestral character states are sought in groups known to be more primitive than the group under consideration.

In constructing a cladogram, only genealogical (ancestor-descendant) relationships are considered; thus cladograms may be thought of as depicting synapomorphy patterns – the patterns of shared similarities hypothesised to be evolutionary novelties among taxa. In drawing up a cladogram based on significant numbers of traits and significant numbers of taxa, the consideration of every possible tree is beyond even a computer; computer programs are therefore designed to reject unnecessarily complex hypotheses using the method of maximum parsimony, which is really an application of Occam’s Razor.
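
A standard way of scoring one candidate tree is Fitch’s small-parsimony algorithm, which counts the minimum number of character-state changes the tree requires. The sketch below uses abstract taxa (A–D) and a single made-up binary character; real programs apply this scoring across many characters and many candidate trees.

```python
def fitch(tree, states):
    """Minimum number of state changes for one character on a rooted binary tree.

    tree: nested 2-tuples with taxon names at the tips;
    states: mapping from taxon name to its observed character state.
    """
    def walk(node):
        if isinstance(node, str):              # a tip: its state set, no changes
            return {states[node]}, 0
        (left, lc), (right, rc) = walk(node[0]), walk(node[1])
        common = left & right
        if common:                             # children agree: no change needed
            return common, lc + rc
        return left | right, lc + rc + 1       # disagreement costs one change
    return walk(tree)[1]
```

With states A=0, B=0, C=1, D=1, the tree ((A, B), (C, D)) needs only one change while ((A, C), (B, D)) needs two, so maximum parsimony prefers the first; summed over all characters, the lowest-scoring tree is the most parsimonious hypothesis.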

The result will be a family tree – an evolutionary pattern of monophyletic lineages; one that can be tested and revised as necessary when new homologues and species are identified. Trees that consistently resist refutation in the face of such testing are said to be highly corroborated.

A cladogram will often be used to construct a classification scheme. Here cladistics differs from traditional Linnaean systematics. Phylogeny is treated as a genealogical branching pattern, with each split producing a pair of newly-derived taxa known as sister groups (or sister species). The classification is based solely on the cladogram, with no consideration to the degree of difference between taxa, or to rates of evolutionary change.

For example, consider these two classification schemes of the Phylum Chordata.

Classification Scheme A (Linnaean):

Phylum Chordata
    Subphylum Vertebrata (vertebrates)
        Superclass Pisces (fish)
        Class Amphibia (amphibians)
        Class Reptilia (turtles, crocodiles, snakes and lizards)
        Class Mammalia (mammals)
        Class Aves (birds)

Classification Scheme B (Cladistic):

Phylum Chordata
    Subphylum Vertebrata
        Superclass Tetrapoda
            Subclass Lissamphibia (recent amphibians)
            Superclass Amniota
                Class Mammalia (mammals)
                Class Reptilomorpha
                    Subclass Anapsida (turtles)
                    Subclass Diapsida
                        Infraclass Lepidosauria (snakes, lizards, etc)
                        Infraclass Archosauria
                            Order Crocodilia (crocodiles, etc)
                            Class Aves (birds)

In Scheme A, crocodiles are grouped with turtles, snakes and lizards as “reptiles” (Class Reptilia) and birds get their own separate grouping (Class Aves). This scheme considers physical similarities as well as genealogy; but the result is that the scheme contains paraphyletic taxa. Scheme B strictly reflects cladistic branching patterns; the reptiles are broken up, with birds and crocodiles grouped together in the Archosauria (which also included the dinosaurs). All the groupings in this scheme are monophyletic. It will be noted that attempts to append traditional Linnaean rankings to each group run into difficulties – birds should have equal ranking with the Crocodilia and should therefore also be categorised as an order within the Archosauria; not their own class, as is traditional.

Traditional Linnaean systematics, now referred to as evolutionary taxonomy, seeks to construct relationships on the basis of both genealogy and overall similarity/dissimilarity; rates of evolution are an important consideration (in the above example, birds have clearly evolved faster than crocodiles); classification reflects both the branching pattern and the degree of difference between taxa. The approach lacks a clearly-defined methodology, tends to be based on intuition, and for this reason does not produce results amenable to testing and falsification.


© Christopher Seddon 2008

Sunday, 6 January 2008

Linnaeus - Princeps Botanicorum

There are very few examples of scientific terminology that have become sufficiently well-known to have become a part of popular culture. The chemical formula for water – H2O – is certainly one; it is so familiar it has even featured in advertisements. Another is the equation E = mc² – while not everybody knows that it defines a relationship between mass and energy, most will have heard of it and will be aware it was formulated by Albert Einstein.

But the most familiar scientific term of all has to be Homo sapiens – Mankind’s scientific name for himself.

The term originated with the 18th Century Swedish scientist Carl von Linné (1707-78), better known as Linnaeus, who first formally “described” the human species in 1758. It means (some would say ironically!) “wise man” or “man the thinker”. It is an example of what biologists call binomial nomenclature, a system whereby all living things are assigned a double-barrelled name based on their genus and species. These terms are in turn part of a bigger scheme of classification known as the Linnaean taxonomy, which – as the name implies – was introduced by Linnaeus himself.

Man has been studying and classifying the natural world throughout recorded history and probably much longer. A key concept in the classification of living organisms is that they all belong to various species, and this is a very old idea indeed, almost certainly prehistoric in origin. For example, it would have been obvious that sheep all look very much alike, but that they don’t look in the least bit like pigs, and that therefore all sheep belong to one species and all pigs belong to another. Today we refer to organisms so grouped as morphological species.

In addition, the early Neolithic farmers must soon have realised that while a ewe and a ram can reproduce, and likewise a sow and a boar; a ewe and a boar, or a sow and a ram cannot. Sheep and pigs are different biological species, though this definition of a species was not formalised until much later, by John Ray (1628-1705), an English naturalist who proclaimed that “one species could never spring from the seed of another”.

The first attempt at arranging the various species of living organisms into a systematic classification was made by the Greek philosopher Aristotle (384-322 BC), who divided them into two groups – animals and plants. Animals were further divided into three categories - those living on land, those living in the water and those living in the air, and were in addition categorised by whether or not they had blood (broadly speaking, those “without blood” would now be classed as invertebrates, or animals without a backbone). Plants were categorised by differences in their stems.

Aristotle’s system remained in use for hundreds of years but by the 16th Century, Man’s knowledge of the natural world had reached a point where it was becoming inadequate. Many attempts were made to devise a better system, with some notable works being published by Conrad Gessner (1516-65), Andrea Cesalpino (1524-1603) and John Ray (1628-1705).

In addition Gaspard Bauhin (1560-1624) introduced the binomial nomenclature that Linnaeus would later adopt. Under this system, a species is assigned a generic name and a specific name. The generic name refers to the genus, a group of species more closely related to one another than any other group of species. The specific name represents the species itself. For example lions and tigers are different species, but they are similar enough to both be assigned to the genus Panthera. The lion is Panthera leo and the tiger Panthera tigris.

Despite these advances, the science of biological classification at the beginning of the 18th Century remained in a confused state. There was little or no consensus in the scientific community on how things should be done and with new species being discovered all the time, the problem was getting steadily worse.

Step forward Carl Linné, who was born at Rashult, Sweden, in 1707, the son of a Lutheran curate. He is usually known by the Latinised version of his name, Carolus Linnaeus. It was expected that young Carl would follow his father into the Church, but he showed little enthusiasm for this proposed choice of career and it is said his despairing father apprenticed him to a local shoemaker before he was eventually sent to study medicine at the University of Lund in 1727. A year later, he transferred to Uppsala. However his real interest lay in botany (the study of plants) and during the course of his studies he became convinced that flowering plants could be classified on the basis of their sexual organs – the male stamens (pollen-producing) and female pistils (pollen-receiving).

In 1732 he led an expedition to Lapland, where he discovered around a hundred new plant species, before completing his medical studies in the Netherlands and Belgium. It was during this time that he published the first edition of Systema Naturae, the work for which he is largely remembered, in which he adopted Gaspard Bauhin’s binomial nomenclature, which to date had not gained popularity. Unwieldy names such as physalis amno ramosissime ramis angulosis glabris foliis dentoserratis were still the norm, but under Bauhin’s system this became the rather less wordy Physalis angulata.

This work also put forward Linnaeus’ taxonomic scheme for the natural world. The word taxonomy means “hierarchical classification” and it can be used as either a noun or an adjective. A taxonomy (noun) is a tree structure of classifications for any given set of objects with a single classification at the top, known as the root node, which applies to all objects. A taxon (plural taxa) is any item within such a scheme and all objects within a particular taxon will be united by one or more defining features.

For example, a taxonomic scheme for cars has “car” as the root node (all objects in the scheme are cars), followed by manufacturer, model, type, engine size and colour. Each of these sub-categories is known as a division. An example of a car classified in the scheme is Car>Ford>Mondeo>Estate>2.3 Litre>Metallic silver. An example of a taxon is “Ford”; all cars within it sharing the defining feature of having been manufactured by the Ford Motor Company.
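A taxonomy of this kind maps naturally onto a tree data structure. The following sketch (using the car example above; the divisions and values are the hypothetical ones from the text) models the scheme as nested dictionaries, with the root node at the top, and checks that a classification path descends division by division:

```python
# A toy taxonomy as nested dictionaries: each key is a taxon, each value
# holds its sub-taxa. The root node "Car" applies to every object in the scheme.
taxonomy = {
    "Car": {
        "Ford": {
            "Mondeo": {
                "Estate": {"2.3 Litre": {"Metallic silver": {}}},
            },
        },
    },
}

def classify(path, tree):
    """Check that a classification path exists in the taxonomy tree."""
    for taxon in path:
        if taxon not in tree:
            return False
        tree = tree[taxon]
    return True

path = ["Car", "Ford", "Mondeo", "Estate", "2.3 Litre", "Metallic silver"]
print(classify(path, taxonomy))  # True: each division nests inside the one above
```

Every object in the taxon "Ford" sits somewhere below that key, which captures the defining-feature idea: membership of a taxon is membership of its subtree.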

The taxonomy devised by Linnaeus, which he refined and expanded over ten editions of Systema Naturae, had six divisions. At the top, as in the car example, is the root node, which Linnaeus designated Imperium (Empire), of which all the natural world is a part. The divisions below this were Regnum (Kingdom), Classis (Class), Ordo (Order), Genus and Species.

The use of Latin in this and other learned texts is worth a brief digression. At the time few scientists spoke any contemporary language beyond their own native tongue, but most had studied the classics and so nearly all scientific works were published in Latin, including Sir Isaac Newton’s landmark Philosophiae Naturalis Principia Mathematica (The Mathematical Principles of Natural Philosophy) and Linnaeus’ own Systema Naturae. One notable exception was Galileo’s Dialogue Concerning the Two Chief World Systems, aimed at a wider audience and thus an early example of “popular science” (though it certainly wasn’t very popular with the Inquisition!).

Linnaeus recognised three kingdoms in his system: the Animal Kingdom, the Plant Kingdom and the Mineral Kingdom. Each kingdom was subdivided by Class, of which the animal kingdom had six: Mammalia (mammals), Aves (birds), Amphibia (amphibians), Pisces (fish), Insecta (insects) and Vermes (worms). The Mammalia are those animals that suckle their young. It is said that Linnaeus adopted this aspect as the defining feature of the group because of his strongly-held view that all mothers should breast-feed their own babies. He was strongly opposed to the then-common practice of “wet nursing”, and in this respect he was very much in tune with current thinking.

Each class was further subdivided by Order, with the mammals comprising eight such orders, including the Primates. Orders were subdivided into Genera, with each Genus containing one or more Species. The Primates comprised the Simia (monkeys, apes, etc) and Homo (man), the latter containing a single species, sapiens (though Linnaeus initially also included chimpanzees and gibbons).

The Linnaean system did not accord equal status to apparently equal divisions; thus the Mineral Kingdom was ranked below the Plant Kingdom; which in turn sat below the Animal Kingdom. Similarly the classes were assigned ranks with the mammals ranking the highest and the worms the lowest. Within the mammals the Primates received top billing, with Homo sapiens assigned to pole position therein.

This hierarchy within a hierarchy reflected Linnaeus’ belief that the system reflected a Divine Order of Creation, with Mankind standing at the top of the pile and indeed the term “primate” survives to this day as a legacy of that view. It should be remembered that the prevalent belief at the time of Linnaeus was that the Earth and all living things had been produced by God in their present forms in a single act. This view, now known as Creationism, wasn’t seriously challenged until the 19th Century.

Linnaeus’ system was an example of natural theology, which is the study of nature with a view to achieving a better understanding of the works of God. It was heavily relied on by the deists of that time. Deists believe that knowledge of God can be deduced from nature rather than having to be revealed directly by supernatural means. Deism was very popular in the 18th Century and its adherents included Voltaire, Thomas Jefferson and Benjamin Franklin.

Though some were already beginning to question Creationism, Linnaeus was not among them and he proclaimed that “God creates, Linnaeus arranges”. It has to be said that modesty wasn’t Linnaeus’ strongest point and he proposed that Princeps Botanicorum (Prince of Botany) be engraved on his tombstone. He was no doubt delighted with his elevation to the nobility in 1761, when he took the name Carl von Linné.

Linnaeus did have his critics and some objected to the bizarre sexual imagery he used when categorising plants. For example, “The flowers' leaves...serve as bridal beds which the Creator has so gloriously arranged, adorned with such noble bed curtains, and perfumed with so many soft scents that the bridegroom with his bride might there celebrate their nuptials with so much the greater solemnity...”. The botanist Johann Siegesbeck denounced this "loathsome harlotry" but Linnaeus had his revenge and named a small and completely useless weed Siegesbeckia! In the event Linnaeus’ preoccupation with the sexual characteristics of plants gave poor results and was soon abandoned.

Nevertheless, Linnaeus’ classification system, as set out in the 10th edition of Systema Naturae, published in 1758, is still considered the foundation of modern taxonomy, and it has been modified only slightly.

Linnaeus continued his work until the early 1770s, when his health began to decline. He was afflicted by strokes, memory loss and general ill-health until his death in 1778. In his publications, Linnaeus provided a concise, usable survey of all the world's then-known plants and animals, comprising about 7,700 species of plants and 4,400 species of animals. These works helped to establish and standardize the consistent binomial nomenclature for species, including our own.

We have long ago discarded the “loathsome harlotry” and the rank of Empire. Two new ranks have been added; Phylum lies between Kingdom and Class; and Family lies between Order and Genus, giving seven hierarchical ranks in all. In addition, prefixes such as sub-, super-, etc. are sometimes used to expand the system. (The optional divisions of Cohort (between Order and Class) and Tribe (between Genus and Family) are also sometimes encountered, but will not be used here). The Mineral Kingdom was soon abandoned but other kingdoms were added later, such as Fungi, Monera (bacteria) and Protista (single-celled organisms including the well-known (but actually quite rare) Amoeba) and most systems today employ at least six kingdoms.

On this revised picture, Mankind is classified as follows:

Kingdom: Animalia (animals)
Phylum: Chordata (possessing a stiffening rod or notochord)
Sub-phylum: Vertebrata (more specifically possessing a backbone)
Class: Mammalia (suckling their young)
Order: Primates (tarsiers, lemurs, monkeys, apes and humans)
Family: Hominidae (the Hominids, i.e. modern and extinct humans, the extinct australopithecines and, in some recent schemes, the great apes)
Genus: Homo
Species: sapiens

It should be noted that while we now regard all equivalent-level taxa as being equal, the updated scheme would work perfectly well if we had continued with Linnaeus’ view that some taxa were rather more equal in the eyes of God than others, and it is in no way at odds with the tenets of Creationism. The Linnaean taxonomy shows us where Man fits into the grand scheme of things, but it has nothing to tell us about how we got there. It was left to Charles Darwin to point the way.

© Christopher Seddon 2008

Saturday, 5 January 2008

A Brief Guide to Evolution

Before Darwin

We have seen how Linnaeus laid the foundations of modern taxonomy, but he did not himself believe that species changed and was an adherent of the then-prevalent view of creationism, claiming that “God creates and Linnaeus arranges” (it has to be said that the self-proclaimed “Prince of Botany” was not the most modest of men!). Linnaeus died in 1778. At that time it was widely believed that Earth was less than 6,000 years old, having been created in 4004 BC according to Archbishop Ussher, who put forward this date in 1650.

But the existence of extinct organisms in the fossil record represented a serious problem for creationism (about which creationists are still in denial – get over it!). Fossils had been known for centuries and it was becoming clear that in many cases they represented life forms that no longer existed. William Smith (1769-1839), a canal engineer, observed that rocks of different ages preserved different assemblages of fossils, and that these succeeded each other in a regular and determinable order. Rocks from different locations could be correlated on the basis of fossil content; a principle now known as the law of faunal succession. Unfortunately Smith was plagued by financial worries, even spending time in a debtor’s prison. Only towards the end of his life were his achievements recognised.

Georges Cuvier (1769-1832) studied extinct animals and proposed catastrophism, a modified form of creationism. Extinctions were caused by periodic catastrophes, after which new species, created ex nihilo by God, took their place – though his view that more than one catastrophe might have occurred was contrary to Christian doctrine. But all species, past and present, remained immutable and created by God. Cuvier rejected evolution because the idea of one highly complex form transitioning into another struck him as unlikely. The main problem for evolution was that if the Earth was only 6,000 years old, there would not have been enough time for evolutionary changes to occur.

The French nobleman the Comte de Buffon (1707-88) suggested that planets were formed by comets colliding with the Sun and that the Earth was much older than 6,000 years. He calculated a value of 75,000 years from the cooling rate of iron – much to the annoyance of the Catholic Church. Fortunately the days of the Inquisition had passed; only Buffon’s books were burned! Buffon rejected Noah’s Flood; noted that animals retain non-functional vestigial parts (suggesting that they evolved rather than were created); and, most significantly, noted the similarities between humans and apes and speculated on a common origin for the two. Although his views were decidedly at odds with the religious orthodoxy of the time, Buffon maintained that he did believe in God. In this respect, he was no different to Galileo, who remained a faithful Catholic.

Catastrophism was first challenged by James Hutton (1726-97), a Scottish geologist who first formulated the principles of uniformitarianism. He argued that geological processes do not change with time and have remained the same throughout Earth’s history. Changes in Earth’s geology have occurred gradually, driven by the planet’s hot interior, creating new rock. The changes are plutonic (driven by subterranean heat) in nature rather than diluvian (caused by floods). It was clear that the Earth must be much older than 6,000 years for these changes to have occurred.

Hutton’s Investigation of the Principles of Knowledge was published in 1794 and The Theory of the Earth the following year. In the latter work he advocated evolution and natural selection. "...if an organised body is not in the situation and circumstances best adapted to its sustenance and propagation, then, in conceiving an indefinite variety among the individuals of that species, we must be assured, that, on the one hand, those which depart most from the best adapted constitution, will be the most liable to perish, while, on the other hand, those organised bodies, which most approach to the best constitution for the present circumstances, will be best adapted to continue, in preserving themselves and multiplying the individuals of their race." Unfortunately this work was so poorly written that not only was it largely ignored; it even hindered acceptance of Hutton’s geological theories, which did not gain general acceptance until the 1830s, when they were popularised by fellow Scot Sir Charles Lyell (1797-1875). However, it is now accepted that the catastrophists were not entirely wrong, and events such as meteorite impacts and plate tectonics have also shaped Earth’s history.

The best-known pre-Darwinian theory of evolution is that of Jean-Baptiste de Lamarck (1744-1829). Lamarck proposed that individuals adapt during their lifetime and transmit acquired traits to their offspring. Offspring carry on where their parents left off, enabling evolution to advance. The classic example of this is the giraffe stretching its neck to reach leaves on high branches, and passing on a longer neck to its offspring. Some characteristics are advanced by use; others fall into disuse and are discarded. Lamarck’s two laws were:

1. In every animal which has not passed the limit of its development, a more frequent and continuous use of any organ gradually strengthens, develops and enlarges that organ, and gives it a power proportional to the length of time it has been so used; while the permanent disuse of any organ imperceptibly weakens and deteriorates it, and progressively diminishes its functional capacity, until it finally disappears.

2. All the acquisitions or losses wrought by nature on individuals, through the influence of the environment in which their race has long been placed, and hence through the influence of the predominant use or permanent disuse of any organ; all these are preserved by reproduction to the new individuals which arise, provided that the acquired modifications are common to both sexes, or at least to the individuals which produce the young.

Lamarck was not the only proponent of this point of view, but it is now known as Lamarckism. There is little doubt that, confronted with the huge body of evidence assembled by Darwin, Lamarck would have abandoned his theory. However the theory remained popular with Marxists and its advocates continued to seek proof until well into the 20th Century. Most notable among these were Paul Kammerer (1880-1926), who committed suicide in the wake of the notorious “Midwife Toad” scandal; and Trofim Lysenko (1898-1976). With Stalin’s backing, Lysenko spearheaded an evil campaign against geneticists, sending many to their deaths in the gulags for pursuing “bourgeois pseudoscience”. Lamarckism continued to enjoy official backing in the USSR until after the fall of Khrushchev in 1964, when Lysenko was finally exposed as a charlatan.

Natural selection

Not until the middle of the 19th Century did Charles Darwin (1809-1882) and Alfred Russel Wallace (1823-1913) put forward a coherent theory of how evolution could work.

Darwin was appointed naturalist and gentleman companion to Captain Robert Fitzroy of the barque HMS Beagle, joining the ship on her second voyage, initially against his father’s wishes. Fitzroy, serving as a lieutenant in Beagle, had succeeded to the captaincy when her original skipper, Captain Pringle Stokes, committed suicide on the first voyage. Fitzroy was a Creationist and objected to Darwin’s theories. Darwin sailed round the world in Beagle between 1831 and 1836. He studied finches and giant tortoises on the Galapagos Islands – the different tortoises had originated from one type, but had adapted to life on different islands in different ways. These changes and developments in species were in accord with Lyell’s Principles of Geology. Darwin was also influenced by the work of the economist Thomas Malthus (1766-1834), author of a 1798 essay stating that populations are limited by the availability of food resources.

Darwin developed the theory of natural selection between 1844 and 1858. The theory was at the same time being independently developed by Alfred Russel Wallace, and in 1858 Darwin presented The Origin of Species by means of Natural Selection to the Linnean Society of London, jointly with Wallace’s paper. Wallace’s independent endorsement of Darwin’s work lent much weight to it. Happily there were none of the unseemly squabbles over priority that have bedevilled so many joint discoveries down the centuries, of which Newton and Leibniz’s spat over differential calculus, and the strain placed on Anglo-French relations in the 1840s by John Couch Adams and Urbain Le Verrier’s independent discovery of the planet Neptune, are but two examples.

Darwin’s pivotal On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life (usually referred to simply as The Origin of Species) was published in 1859 and promptly sold out. The book caused uproar, and a debate was held at Oxford at which Thomas Huxley (grandfather of Aldous Huxley, author of Brave New World) faced Bishop Samuel Wilberforce (son of the anti-slavery campaigner William Wilberforce), the clergy and Darwin’s erstwhile captain, Fitzroy of the Beagle. Darwin was by now in poor health, possibly due to an infection contracted during the Beagle voyage, and Huxley defended his theories vigorously.

The theory of natural selection states that evolutionary mechanisms are based on four conditions – 1) organisms reproduce; 2) there has to be a mode of inheritance whereby parents transmit characteristics to offspring; 3) there must be variation in the population and finally 4) there must be competition for limited resources. With some organisms in a population able to compete more effectively than others, these are the ones more likely to go on to reproduce and transmit their advantageous traits to their offspring, which in turn are more likely to reproduce themselves.
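As a rough illustration of how these four conditions produce change, the following sketch simulates a population of reproducing organisms with heritable variation competing for limited slots in the next generation. The allele names and the 10% fitness advantage are invented for the example:

```python
import random

# Minimal sketch of natural selection on a single two-variant trait.
# Assumed (hypothetical) fitnesses: carriers of variant "A" compete slightly
# better for the limited resources than carriers of "a".
random.seed(1)
fitness = {"A": 1.1, "a": 1.0}
population = ["A"] * 50 + ["a"] * 50   # condition 3: variation in the population

for generation in range(100):
    # Conditions 1, 2 and 4: offspring inherit a parent's variant, and parents
    # are chosen in proportion to fitness (competition for limited slots).
    weights = [fitness[variant] for variant in population]
    population = random.choices(population, weights=weights, k=100)

print(population.count("A"))  # "A" typically spreads through most of the population
```

Even a small, consistent advantage compounds over generations; removing the fitness difference leaves only random drift, which is the sense in which selection, not reproduction alone, drives the change.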

Evolution is the consequence of natural selection – the two are not the same thing, as evolution could in principle proceed by other means. Natural selection is a mechanism of change in species and takes various forms depending on conditions.

If for example existing forms are favoured then stabilising selection will maintain the status quo; conversely if a new form is favoured then directional selection will lead to evolutionary change. Divergent selection occurs when two extremes are favoured in a population.

Adaptation is a key concept in evolutionary theory. This is the “goodness of fit between an organism and its environment”. An adaptive trait is one that helps an individual survive, e.g. an elephant’s trunk, which enables it to forage in trees, eat grass, etc; colour vision helps animals to identify particular fruits, etc (and bright distinctive colour schemes are plants’ adaptations to help them to be located).

Sexual selection, proposed by Darwin in his second work The Descent of Man and Selection in Relation to Sex (1871), refers to adaptations in an organism specifically for the needs of obtaining a mate. In birds, this often leads to males having brightly coloured plumage, which they show off to prospective mates in spectacular displays. In many mammal species, males fight for access to females, leading to larger size (sexual dimorphism) and enhanced fighting equipment, e.g. large antlers.

The Descent of Man also put forward the theory that Man was descended from apes. Darwin was characterised as “the monkey man” and caricatured as having a monkey’s body. But after his death in 1882, he was given a state funeral and is buried in Westminster Abbey near Sir Isaac Newton. A dubious BBC poll ranked Charles Darwin as the 4th greatest Brit of all time, behind Churchill, I.K. Brunel and (inevitably) Princess Diana, but ahead of Shakespeare, Newton and (thankfully) David Beckham!

The main problem with Darwin’s theory was that, by itself, it failed to provide a mechanism by which changes were transmitted from one generation to the next. Most believed that traits were “blended” in offspring rather than being particulate – the latter view now known to be correct.

Mendelian inheritance

Ironically at the very time Darwin was achieving world fame, the missing link in his theory was being discovered by an Augustinian abbot named Gregor Mendel (1822-1884), whose work was practically ignored in his own lifetime. Between 1856 and 1863, Mendel studied the inheritance of traits in pea plants and showed that these followed particular laws and in 1865 he published the paper "Experiments in Plant Hybridization", which showed that organisms have physical traits that correspond to invisible elements within the cell. These invisible elements, now called genes, exist in pairs. Mendel showed that only one member of this genetic pair is passed on to each progeny via gametes (sperm, ova, etc).

The set of genes possessed by an organism is known as its genotype. By contrast, a phenotype is a measurable characteristic of an organism, such as eye or hair colour, blood group, etc (it is sometimes used as a synonym for “trait” but phenotype is the value of the trait, e.g. if the trait is “eye colour” then possible phenotypes are “grey”, “blue”, etc.). Mendel investigated how various phenotypes of peas were transmitted from generation to generation, and whether these transmissions were unchanged or altered when passed on. His studies were based on traits such as shape of the seed, colour of the pea, etc, beginning with a set of pure-breeding pea plants, i.e. the second generation of plants had consistent traits with those of the first. He performed monohybrid crosses, i.e. between two strains of plants that differed only in one characteristic. The parents were denoted by a P, while the offspring - the filial generation - was denoted by F1, the next generation F2, etc. He found that in the first generation of these crosses, all of the F1s were identical to one of the parents. The trait expressed in the offspring he called a dominant trait; the unexpressed trait he called recessive (the Law of Dominance). He also observed that the sex of the parent was irrelevant for the dominant or recessive trait exhibited in the offspring (the Law of Parental Equivalence).

Mendel found that the phenotypes absent in the F1 generation reappeared in approximately a quarter of the F2 offspring. Mendel could not predict what traits would be present in any one individual, but he did deduce that there was a 3:1 ratio in the F2 generation for dominant/recessive phenotypes. In describing his results, Mendel used the term elementen, which he postulated to be hereditary “particles” transmitted unchanged between generations. Even if the traits are not expressed, he surmised that they are still held intact and the elementen passed on. These “particles” are now known as alleles. An allele that can be suppressed during a generation is called a recessive allele, while one that is consistently expressed is a dominant allele. An organism where both alleles for a particular trait are the same is said to be homozygous; where they differ, it is heterozygous.

For example, consider alleles X and y, where X is dominant. A homozygous organism will be of phenotype X and genotype XX, and a heterozygous organism will still have phenotype X, but the genotype will be Xy. (Note that the recessive allele is written in lower case.) The recessive trait will only be expressed when the genotype is yy, i.e. it receives the y-allele from both parents. There is a 50% chance of receiving the y-allele from either heterozygous parent, hence only a 25% chance of receiving it from both, explaining the 3:1 ratio observed. The Law of Segregation states that each member of a pair of alleles maintains its own integrity, regardless of which is dominant. At reproduction, only one allele of a pair is transmitted, entirely at random.
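The 3:1 ratio can be checked by simply enumerating the four equally likely allele combinations of a cross between two heterozygous Xy parents:

```python
from itertools import product

# Monohybrid cross of two heterozygous (Xy) parents: each parent transmits
# one allele at random, and "X" is dominant over "y".
def phenotype(genotype):
    return "X" if "X" in genotype else "y"

alleles = ("X", "y")  # the alleles each heterozygous parent can transmit
offspring = [a + b for a, b in product(alleles, alleles)]  # XX, Xy, yX, yy

counts = {"X": 0, "y": 0}
for genotype in offspring:
    counts[phenotype(genotype)] += 1
print(counts)  # {'X': 3, 'y': 1} -- the 3:1 ratio seen in the F2 generation
```

Only the yy combination, one of the four, expresses the recessive phenotype, which is exactly the 25% figure derived above.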

Mendel next performed a series of dihybrid crosses, i.e. crosses between strains identical except for two characteristics. He observed that each of the traits he was following sorted itself independently. Mendel's Law of Independent Assortment states that characteristics which are controlled by different genes will assort independently of all others. Whether an organism is Aa or AA has nothing to do with whether it is Xy or yy.
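Independent assortment can be demonstrated the same way, by enumerating the sixteen equally likely gamete pairings of a dihybrid AaBb × AaBb cross (the allele names are invented for the example); the phenotypes fall out in the classic 9:3:3:1 ratio:

```python
from itertools import product
from collections import Counter

# Dihybrid cross AaBb x AaBb under independent assortment: each gamete carries
# one allele of each gene, and the two choices are made independently.
def phenotype(gene1, gene2):
    # The dominant allele is the upper-case one in each pair.
    return ("A" if "A" in gene1 else "a") + ("B" if "B" in gene2 else "b")

gametes = list(product("Aa", "Bb"))   # AB, Ab, aB, ab -- all equally likely
offspring = Counter(
    phenotype(m[0] + f[0], m[1] + f[1])
    for m, f in product(gametes, gametes)
)
print(sorted(offspring.items()))  # [('AB', 9), ('Ab', 3), ('aB', 3), ('ab', 1)]
```

The 9:3:3:1 pattern is just the 3:1 monohybrid ratio applied to each gene separately, which is what "assorting independently" means.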

Mendel’s experimental results have been criticised for being “suspiciously good”, and he seems to have been fortunate in that he selected traits that were affected by just one gene. Otherwise the outcome of his crossings would have been too complex to have been understood at the time.

Population genetics

Mendel’s work remained virtually unknown until 1900, when it was independently rediscovered by Hugo de Vries, Carl Correns, and Erich von Tschermak and vigorously promoted in Europe by William Bateson, who coined the terms “genetics”, “gene” and “allele”. The theory was doubted by many because it suggested heredity was discontinuous in contrast to the continuous variety actually observed. R.A. Fisher and others used statistical methods to show that if multiple genes were involved with individual traits, they could account for the variety observed in nature. This work forms the basis of modern population genetics.

The discovery of DNA

By the 1930s, it was recognised that genetic variation in populations arises by chance through mutation, leading to species change. Chromosomes had been known since 1842, but their role in biological inheritance was not discovered until 1902, when Theodor Boveri (1862-1915) and Walter Sutton (1877-1916) independently showed a connection. The Boveri-Sutton Chromosome Theory, as it became known, remained controversial until 1915 when the initially sceptical Thomas Hunt Morgan (1866-1945) carried out studies on the eye colours of Drosophila melanogaster (the fruit fly) which confirmed the theory (and has since made these insects virtually synonymous with genetic studies).

The role of DNA as the agent of variation and heredity was not discovered until 1944, by Oswald Theodore Avery (1877-1955), Colin MacLeod (1909-1972) and Maclyn McCarty (1911-2005). The double-helix structure of DNA was elucidated in 1953 by Francis Crick (1916-2004) and James Watson (b. 1928) at Cambridge and Maurice Wilkins (1916-2004) and Rosalind Franklin (1920-1958) at King’s College London. The DNA/RNA replication mechanism was confirmed in 1958. Crick, Watson and Wilkins received the Nobel Prize for Medicine in 1962. Franklin, who died in 1958, missed out (Nobel Prizes are not normally awarded posthumously), but her substantial contribution to the discovery is commemorated by the Royal Society’s Rosalind Franklin Award, established in 2003.

How DNA works

With these discoveries, the picture was now complete, and it could now be seen how the genome is built up at a molecular level and how it is responsible for both variation and inheritance which are – as we have seen – fundamental to natural selection.

The genome of an organism contains the whole hereditary information of that organism and comprises the complete DNA sequence of one set of chromosomes. It is often thought of as a blueprint for the organism, but it is better thought of as a set of digital instructions that completely specify the organism.

The fundamental building blocks of life are a group of molecules known as the amino acids. An amino acid is any molecule containing both amino (-NH2) and carboxylic acid (-COOH) functional groups. In an alpha-amino acid, both groups are attached to the same carbon. Amino acids are the basic structural building blocks of proteins, complex organic materials that are essential to the structure of all living organisms. Amino acids form small polymer chains called peptides or larger ones called polypeptides, from which proteins are formed. The distinction between peptides and proteins is that the former are short and the latter are long. Some twenty amino acids are proteinogenic, i.e. they occur in proteins and are coded for in the genetic code. They are given one- and three-letter abbreviations, e.g. A/Ala for Alanine. Amino acids that cannot be synthesised by a particular organism must be obtained from its diet; these are known as essential amino acids. An amino acid residue is what is left of an amino acid once a molecule of water has been lost (an H+ from the nitrogenous side and an OH- from the carboxylic side) in the formation of a peptide bond.

Proteins are created by polymerization of amino acids by peptide bonds in a complex process, occurring in living cells, called translation. The blueprint – or, to take a better analogy, the recipe or computer program – for each protein used by an organism is held in its genome. The genome is composed of nucleic acid, a complex macromolecule composed of nucleotide chains that convey genetic information. The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). For nearly all organisms, the genome is composed of DNA, which usually occurs as a double helix.

Nucleotides comprise a heterocyclic base (an aromatic ring containing at least one non-carbon atom, such as sulphur or nitrogen, in which the nitrogen atom’s lone pair is not part of the aromatic system); a sugar; and one or more phosphate (-PO3) groups. In the most common nucleotides the sugar is a pentose – either deoxyribose (in DNA) or ribose (in RNA) – and the base is a derivative of purine or pyrimidine. In nucleic acids the five most important bases are Adenine (A), Guanine (G), Thymine (T), Cytosine (C) and Uracil (U). A and G are purine derivatives and are large double-ringed molecules. T, C and U are pyrimidine derivatives and are smaller single-ringed molecules. T occurs only in DNA; U replaces T in RNA. These five bases are known as nucleobases.

In nucleic acids, nucleotides pair up by hydrogen bonding in various combinations known as base pairs. Purines only pair with pyrimidines. Purine-purine pairing does not occur because the large molecules are too far apart for hydrogen bonding to occur; conversely, pyrimidine-pyrimidine pairing does not occur because the smaller molecules are too close, and electrostatic repulsion overwhelms hydrogen bonding. G pairs only with C, and A pairs only with T (in DNA) or U (in RNA). One might also expect GT and AC pairings, but these do not occur because the hydrogen donor and acceptor patterns do not match. Thus one can always construct a complementary strand for any strand of nucleotides.

e.g. ATCGAT
TAGCTA.

Such a nucleotide sequence would normally be written as ATCGAT. Any succession of more than four nucleotides is liable to be called a sequence.
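Constructing the complementary strand is mechanical enough to express in a few lines of illustrative Python:

```python
# Building the complementary strand for a DNA sequence,
# as in the ATCGAT / TAGCTA example above.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq):
    return "".join(PAIRS[base] for base in seq)

print(complement("ATCGAT"))  # TAGCTA
```

(In practice biologists usually quote the reverse complement, since the two strands of the double helix run in opposite directions, but simple base pairing is all that matters here.)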

DNA encodes the sequence of amino acid residues in proteins using the genetic code, a set of rules that maps DNA sequences to proteins. The genome is inscribed in one or more DNA molecules. Each functional portion is known as a gene, though there are a number of definitions of what constitutes a functional portion, of which the cistron is one of the most common. The gene sequence is composed of tri-nucleotide units called codons, each coding for a single amino acid. There are 4 x 4 x 4 (= 64) codons, but only 20 amino acids, so most amino acids are coded for by more than one codon. There are also “start” and “stop” codons to define the beginning and end points for translation of a protein sequence.

In the first phase of protein synthesis, a gene is transcribed by an enzyme known as RNA polymerase into a complementary molecule of messenger RNA (mRNA). (Enzymes are proteins that catalyze chemical reactions.) In eukaryotic cells (nucleated cells – i.e. animals, plants, fungi and protists) the initially-transcribed mRNA is only a precursor, often referred to as pre-mRNA. The pre-mRNA is composed of coding sequences known as exons separated by non-coding sequences known as introns. These latter sequences must be removed and the exons joined to produce mature mRNA (often simply referred to as mRNA), in a process known as splicing. Introns sometimes contain “old code”: sections of a gene that were probably once translated into protein but are now discarded. Not all intron sequences are junk DNA; some sequences assist the splicing process. In prokaryotes (non-nucleated organisms – i.e. bacteria and archaea), this initial processing of the mRNA is not required.

The second phase of protein synthesis is known as translation. In eukaryotes the mature mRNA is “read” by ribosomes. Ribosomes are organelles containing ribosomal RNA (rRNA) and proteins. They are the “factories” where amino acids are assembled into proteins. Transfer RNAs (tRNAs) are small non-coding RNA chains that transport amino acids to the ribosome. tRNAs have a site for amino acid attachment, and a site called an anticodon. The anticodon is an RNA triplet complementary to the mRNA triplet that codes for the amino acid they carry. Aminoacyl-tRNA synthetase (an enzyme) catalyzes the bonding between specific tRNAs and the amino acids that their anticodon sequences call for. The product of this reaction is an aminoacyl-tRNA molecule. This aminoacyl-tRNA travels into the ribosome, where mRNA codons are matched through complementary base pairing to specific tRNA anticodons. The amino acids that the tRNAs carry are then used to assemble a protein. Its task completed, the mRNA is broken down into its component nucleotides.
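The two phases can be caricatured in a few lines of code. This is an illustrative Python sketch only: transcription is modelled by the common coding-strand shortcut (replacing T with U rather than reading off the template strand), and the codon table contains just a handful of the 64 real assignments:

```python
# A tiny subset of the standard genetic code (three-letter residue names).
CODON_TABLE = {
    "AUG": "Met",  # also serves as the "start" codon
    "UUU": "Phe", "AAA": "Lys", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna):
    # Coding-strand shortcut: DNA -> mRNA by replacing T with U
    return dna.replace("T", "U")

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):   # read codon by codon
        residue = CODON_TABLE[mrna[i:i+3]]
        if residue == "STOP":              # stop codon ends translation
            break
        protein.append(residue)
    return "-".join(protein)

mrna = transcribe("ATGTTTAAAGGCTAA")  # AUG UUU AAA GGC UAA
print(translate(mrna))  # Met-Phe-Lys-Gly
```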

Prokaryotes have no nucleus, so mRNA can be translated while it is still being transcribed. The translation is said to be polyribosomal when there is more than one active ribosome. In this case, the collection of ribosomes working at the same time is referred to as a polysome.

In many species, only a small fraction of the total sequence of the genome appears to encode protein. For example, only about 1.5% of the human genome consists of protein-coding exons. Some DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few (if any) protein-coding genes, but are important for the function and stability of chromosomes. Some genes are RNA genes, coding for rRNA and tRNA, etc. Junk DNA represents sequences that do not yet appear to contain genes or to have a function.

The DNA which carries genetic information in cells (as opposed to mitochondrial DNA, etc) is normally packaged in the form of one or more large macromolecules called chromosomes. A chromosome is a very long, continuous piece of DNA (a single DNA molecule), which contains many genes, regulatory elements and other intervening nucleotide sequences. In the chromosomes of eukaryotes, the uncondensed DNA exists in a quasi-ordered structure inside the nucleus, where it wraps around structural proteins called histones. This composite material is called chromatin.

Histones are the major constituent proteins of chromatin. They act as spools around which DNA winds and they play a role in gene regulation, which is the cellular control of the amount and timing of appearance of the functional product of a gene. Although a functional gene product may be an RNA or a protein, the majority of the known mechanisms regulate the expression of protein coding genes. Any step of gene expression may be modulated, from the DNA-RNA transcription step to post-translational modification of a protein. Gene regulation gives the cell control over structure and function, and is the basis for cellular differentiation – i.e. the large range of cell types found in complex organisms.

Ploidy indicates the number of copies of the basic number of chromosomes in a cell. The number of basic sets of chromosomes in an organism is called the monoploid number (x). The ploidy of cells can vary within an organism. In humans, most cells are diploid (containing one set of chromosomes from each parent), but sex cells (sperm and ova) are haploid. Some plant species are tetraploid (four sets of chromosomes). Any organism with more than two sets of chromosomes is said to be polyploid. A species’ normal number of chromosomes per cell is known as the euploid number, e.g. 46 for humans (2x23).

Haploid cells bear one copy of each chromosome. Most fungi and a few algae are haploid organisms. Male bees, wasps and ants are also haploid. For organisms that only ever have one set of chromosomes, the term monoploid can be used interchangeably with haploid.

Plants and many algae switch between a haploid and a diploid or polyploid state, with one of the stages emphasized over the other, a phenomenon called alternation of generations. Most diploid organisms produce haploid sex cells that can combine to form a diploid zygote; animals, for example, are primarily diploid but produce haploid gametes. During meiosis, germ cell precursors have their number of chromosomes halved by randomly “choosing” one homologue (copy), resulting in haploid germ cells (sperm and ova).

Diploid cells have two homologues of each chromosome (both sex- and non-sex determining chromosomes), usually one from the mother and one from the father. Most somatic cells (body cells) of complex organisms are diploid.

A haplodiploid species is one in which one of the sexes has haploid cells and the other has diploid cells. Most commonly, the male is haploid and the female is diploid. In such species, the male develops from unfertilized eggs, while the female develops from fertilized eggs: the sperm provides a second set of chromosomes when it fertilizes the egg. Thus males have no father. Haplodiploidy is found in many species of insects from the order Hymenoptera, particularly ants, bees, and wasps.

Cell division is the process by which a cell divides into two daughter cells. Cell division allows an organism to grow, renew and repair itself. Cell division is of course also vital for reproduction. For simple unicellular organisms such as the Amoeba, one cell division reproduces an entire organism. Cell division can also create progeny from multicellular organisms, such as plants that grow from cuttings. Finally, cell division enables sexually reproducing organisms to develop from the one-celled zygote, which itself was produced by cell division from gametes.

Before division can occur, the genomic information which is stored in a cell’s chromosomes must be replicated, and the duplicated genome separated cleanly between cells. Division in prokaryotic cells involves cytokinesis only. As previously explained, prokaryotic cells are simple in structure. They contain non-membranous organelles, lack a cell nucleus, and have a simple genome: only one circular chromosome of limited size. Therefore, prokaryotic cell division, a process known as binary fission, is straightforward. The chromosome is duplicated prior to division. The two copies of the chromosome attach to opposing sides of the cellular membrane. Cytokinesis, the physical separation of the cell, occurs immediately.

Division in somatic eukaryotic cells involves mitosis followed by cytokinesis. Eukaryotic cells are complex. They have many membrane-bound organelles devoted to specialized tasks, a well-defined nucleus with a selectively permeable membrane, and a large number of chromosomes. Therefore, cell division in somatic (i.e. non-germ) eukaryotic cells is more complex than cell division in prokaryotic cells. It is accomplished by a multi-step process: mitosis, the division of the nucleus, separating the duplicated genome into two sets identical to the parent’s; followed by cytokinesis, the division of the cytoplasm, separating the organelles and other cellular components.

Division in eukaryotic germ cells involves meiosis, the process that transforms one diploid cell into four haploid cells, redistributing the diploid cell’s genome. Meiosis forms the basis of sexual reproduction and can only occur in eukaryotes. In meiosis, the diploid cell’s genome is replicated once and separated twice, producing four haploid cells each containing half of the original cell’s chromosomes. These resultant haploid cells fuse with haploid cells of the opposite sex to form a diploid cell again. The cyclical process of separation by meiosis and genetic recombination through fertilization is called the life cycle. The result is that offspring produced after meiosis and fertilization will have a slightly different genome from either parent. Meiosis uses many biochemical processes similar to those used in mitosis in order to distribute chromosomes among the resulting cells.

Genetic recombination is the process by which the combinations of alleles observed at different loci in two parental individuals become shuffled in offspring individuals. Such shuffling can be the result of inter-chromosomal recombination (independent assortment) and intra-chromosomal recombination (crossing over). Recombination only shuffles already existing genetic variation and does not create new variation at the involved loci. Since the chromosomes separate independently of each other, the gametes can end up with any combination of paternal or maternal chromosomes. In fact, any of the possible combinations of gametes formed from maternal and paternal chromosomes will occur with equal frequency. The number of possible combinations for human cells, with 23 chromosomes, is 2 to the power of 23, or approximately 8.4 million. The gametes will always end up with the standard 23 chromosomes (barring errors), but the origin of any particular one will be randomly selected from paternal or maternal chromosomes.

The other mechanism for genetic recombination is crossover. This occurs when two chromosomes, normally two homologous instances of the same chromosome, break and then reconnect, each to the other’s end piece. If they break at the same place or locus in the sequence of base pairs – which is the normal outcome – the result is an exchange of genes.

An allele is any one of a number of viable DNA codings of the same gene (sometimes the term refers to a non-gene sequence) occupying a given locus (position) on a chromosome. An individual's genotype for that gene will be the set of alleles it happens to possess. For example, in a diploid organism, two alleles make up the individual's genotype.

Organisms that are diploid such as humans have paired homologous chromosomes in their somatic cells, and these contain two copies of each gene. An organism in which the two copies of the gene are identical — that is, have the same allele — is said to be homozygous for that gene. An organism which has two different alleles of the gene is said to be heterozygous. Phenotypes associated with a certain allele can sometimes be dominant or recessive, but often they are neither. A dominant phenotype will be expressed when only one allele of its associated type is present, whereas a recessive phenotype will only be expressed when both alleles are of its associated type. This is Mendelian inheritance at a molecular level.

However, there are exceptions to the way heterozygotes express themselves in the phenotype. One exception is incomplete dominance (sometimes called blending inheritance) when alleles blend their traits in the phenotype. An example of this would be seen if, when crossing Antirrhinums — flowers with incompletely dominant "red" and "white" alleles for petal colour — the resulting offspring had pink petals. Another exception is co-dominance, where both alleles are active and both traits are expressed at the same time; for example, both red and white petals in the same bloom or red and white flowers on the same plant. Co-dominance is also apparent in human blood types. A person with one "A" blood type allele and one "B" blood type allele would have a blood type of "AB".
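The ABO example can be written out as a small genotype-to-phenotype rule. This illustrative Python sketch assumes the three common alleles of the real ABO locus – A and B, which are co-dominant, and O, which is recessive:

```python
# Illustrative genotype -> blood-type mapping for the ABO locus.
def blood_type(genotype):
    alleles = set(genotype)
    if alleles == {"A", "B"}:
        return "AB"   # co-dominance: both alleles expressed
    if "A" in alleles:
        return "A"    # AA or AO
    if "B" in alleles:
        return "B"    # BB or BO
    return "O"        # OO: recessive phenotype, needs two O alleles

for g in ("AA", "AO", "BO", "AB", "OO"):
    print(g, "->", blood_type(g))
```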

Recombination shuffles existing variety, but does not add to it. Variety comes from genetic mutation. Mutations are changes to the genetic material of an organism. Mutations can be caused by copying errors in the genetic material during cell division and by exposure to radiation, chemicals, or viruses. In multicellular organisms, mutations can be subdivided into germline mutations, which can be passed on to descendants, and somatic mutations. The latter cannot be transmitted to descendants in animals, though plants can sometimes transmit somatic mutations to their descendants. Mutations are considered the driving force of evolution, where less favourable or deleterious mutations are removed from the gene pool by natural selection, while more favourable ones tend to accumulate. Neutral mutations are defined as those that are neither favourable nor unfavourable.

It will be apparent from the above how both variety and inheritance of variety arise at the molecular level.

The so-called central dogma of molecular biology arises from Francis Crick’s statement in 1958 that “Genetic information flows in one direction only from DNA to RNA to protein, and never in reverse.” It follows from this that:

1. Genes determine characters in a straightforward, additive way: one gene-one protein, and by implication, one character. Environmental influence, if any, can be neatly separated from the genetic.

2. Genes and genomes are stable, and except for rare, random mutations, are passed on unchanged to the next generation.

3. Genes and genomes cannot be changed directly in response to the environment.

4. Acquired characters are not inherited.

These assumptions have been challenged and they do not hold under all conditions, e.g. horizontal gene transfer (for example, haemoglobins in leguminous plants).

Modes of evolutionary change

Put together, natural selection, population genetics and molecular biology form the basis of neo-Darwinism, or the Modern Evolutionary Synthesis. The theory encompasses three main tenets:

1. Evolution proceeds in a gradual manner, with the accumulation of small changes in a population over long periods of time, due to changes in frequencies of particular alleles between one generation and another (microevolution).

2. These changes result from natural selection, with differential reproductive success founded on favourable traits.

3. These processes explain not only small-scale changes within species, but also larger-scale processes leading to new species (macroevolution).

On the neo-Darwinian picture, macroevolution is seen simply as the cumulative effects of microevolution.

However the extent and source of variation at the genetic level remained a bone of contention for evolutionary theorists until the mid-1960s. One school of thought favoured little variation, with most mutations being deleterious and selected against; the other school favoured extensive variation, with many mutations offering advantages for survival in different environmental circumstances. Techniques such as gel electrophoresis settled the argument in favour of the second school: genetic variation turned out to be extensive. By the 1970s the debate had shifted to selectionism versus neutralism. The selectionists view genetic variation as the product of natural selection, which selects favourable new variants. The neutralists, on the other hand, contend that the great majority of variants are selectively neutral and thus invisible to the forces of natural selection. It is now generally accepted that a significant proportion of variation at the genetic level is neutral.

Consequently certain traits may become common, or may even come to predominate, in a population by a process known as genetic drift: random changes in the frequencies of neutral alleles over many generations, which may lead to some becoming common and some dying out. Genetic drift therefore tends to reduce genetic diversity over time, though for the effect to be significant a population must be small (to explain by analogy: ten people could each throw a die and all fail to get a six with reasonable probability [the tenth power of 5/6 is about 0.16], but the probability of one hundred people all failing to get a six is far smaller [the hundredth power of 5/6 is about 1.2x10^-8]). There are two ways in which small isolated populations may arise. One is the population bottleneck, in which the bulk of a population is killed off; the other is the founder effect, which occurs when a small number of individuals carrying a subset of the original population’s genetic diversity moves into a new habitat and establishes a new population there. Both these scenarios could lead to a trait that confers no selective advantage coming to predominate in a population. More controversially, they could lead to genetic drift outweighing natural selection as the engine for evolutionary change.
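Genetic drift, and its sensitivity to population size, can be illustrated with a minimal Wright-Fisher-style simulation (an illustrative Python sketch; the population sizes and generation counts are arbitrary):

```python
import random

def drift(pop_size, generations, rng):
    # A neutral allele starts at frequency 0.5; each generation the
    # next population is drawn at random from the current frequencies.
    freq = 0.5
    for _ in range(generations):
        carriers = sum(rng.random() < freq for _ in range(pop_size))
        freq = carriers / pop_size
        if freq in (0.0, 1.0):   # allele lost or fixed - drift is over
            break
    return freq

rng = random.Random(42)
small = [drift(10, 200, rng) for _ in range(100)]    # small populations
large = [drift(500, 200, rng) for _ in range(100)]   # larger populations

def fixed_or_lost(runs):
    return sum(f in (0.0, 1.0) for f in runs)

print(fixed_or_lost(small), fixed_or_lost(large))
```

In a typical run, nearly every small population loses or fixes the neutral allele within 200 generations, while most of the larger populations still retain both alleles – drift erodes diversity far faster when populations are small.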

With the foregoing in mind, how do new species arise? There are two ways. Firstly a species changes with time until a point is reached where its members are sufficiently different from the ancestral population to be considered a new species. This form of speciation is known as anagenesis and the species within the lineage are known as chronospecies. Secondly a lineage can split into two different species. This is known as cladogenesis, and usually happens when a population of members of the species becomes isolated.

There are several such modes of speciation, mostly based on the degree of geographical isolation of the populations involved.

1. Allopatric speciation occurs when populations physically isolated by a barrier such as a mountain or river diverge to an extent such that if the barrier between the populations breaks down, individuals of the two populations can no longer interbreed.

2. Peripatric speciation occurs when a small population is isolated at the periphery of a species range. The difference between this and allopatric speciation is that the isolated population is small. Genetic drift comes into play, possibly outweighing natural selection (the founder effect).

3. Parapatric speciation occurs when a population expands its range into a new habitat where the environment favours a different form. The populations diverge as the descendants of those entering the new habitat adapt to the conditions there.

4. Sympatric speciation is where the speciating populations share the same territory. Sympatric speciation is controversial and has been widely rejected, but a number of models have been proposed to account for this mode of speciation. The most popular is disruptive speciation (Maynard Smith), which proposes that homozygous individuals may under certain conditions have a greater fitness than those with alleles heterozygous for a certain trait. Under the mechanism of natural selection, therefore, homozygosity would be favoured over heterozygosity, eventually leading to speciation. Rhagoletis pomonella (the apple maggot fly) may currently be undergoing sympatric speciation. The apple feeders seem to have emerged from hawthorn feeders after apples were first introduced into North America. The apple feeders do not now normally feed on hawthorns, and the hawthorn feeders do not now normally feed on apples. This may be an early step towards the emergence of a new species.

5. Stasipatric speciation occurs in plants when they double or triple the number of chromosomes, resulting in polyploidy.

Rates of evolution

There are two opposing points of view regarding the rate at which evolutionary change proceeds. The traditional view, known as phyletic gradualism, holds that it occurs gradually, and that speciation is anagenetic. Niles Eldredge and Stephen Jay Gould (1972) criticized this viewpoint, arguing instead for stasis over long stretches of time, with speciation occurring only over relatively brief intervals, a model they called punctuated equilibrium. They pointed out that species arise by cladogenesis rather than by anagenesis. They also highlighted the absence of transitional forms in the fossil record (an old chestnut, often favoured by creationists).

Richard Dawkins has pointed out that no “gradualist” has ever argued for complete uniformity of rate of evolutionary change; conversely even if the “punctuation” events of Eldredge and Gould actually took 100,000 years, they would still show as discontinuities in the fossil record, even though on the scale of the lifetime of an organism, change would be immeasurably small, and invisible at any given time due to variation between individuals; if for example average height increased by 10 cm in 100,000 years, that would be 1/500th of a cm per generation – completely masked by the variation in height of individuals at any one time. It follows that the speciation event – be it anagenetic or cladogenetic – would be very slow in relation to the lifetime of individuals. Reproductive isolation would occur only over hundreds of generations.

On the Dawkins view, then, there is no conflict between gradualism and punctuationism; the latter is no more than the former proceeding at varying tempo.

The Physical Context

Three factors are recognized as influencing the evolution of new species and the extinction of existing ones. The first is the existing properties of a lineage, which place constraints on how it can evolve. The second is the biotic context: how members of a particular species compete, both inter-specifically and intra-specifically, for food, space and other resources, and how they interact with other species in respect of predation, mutualist behaviours, etc. The third is the physical context, such as geography and climate, which determine the types of species that can thrive and the adaptations that are likely to be favoured.

The relative importance of the last two is a matter of ongoing debate. Darwin held that biotic factors predominate. He did not ignore environmental considerations, but he saw them as merely increasing competition. This view is central to the modern synthesis, which holds that natural selection is necessary and sufficient to drive evolutionary change. For example, adaptations by predators to better their chances of catching prey are the driving force for evolutionary change in the prey, where adaptations to avoid capture are selected for, thus maintaining the status quo in a kind of evolutionary “arms race”. This is sometimes referred to as the Red Queen effect (van Valen, 1973), after the Red Queen in Through the Looking-Glass.

However in recent years, it has become clear that the history of life on Earth has been profoundly affected by geological change. The discovery of plate tectonics in the 1960s confirmed that continental landmasses are in a state of constant albeit very slow motion across the Earth’s surface, and when continents meet, previously-isolated biota are brought together. Conversely, as continents drift apart, previously united communities are separated. The first introduces new elements of inter-specific competition; the second leads to the possible isolation of small groups. Both scenarios are likely to lead to evolutionary change.

There is also a school of thought that downplays natural selection and emphasises climate change as the primary cause of evolutionary change. Two ideas are associated with this view: firstly, the habitat theory, which states that species’ responses to climate change represent the principal engine of evolutionary change, and that speciation and extinction events will be concentrated in times of climate change, as habitats change and fragment; and secondly, the turnover-pulse hypothesis, which holds that this pattern of evolutionary change should be synchronous across all taxa, including hominins (Vrba, 1996).

Units of selection

In the original theory of Charles Darwin, the unit of selection – i.e. the biological entity upon which the forces of natural selection act – was the individual; for example, an animal that can run faster than others of its kind, and so avoid predators, will live longer and have more offspring. This simple picture does, however, fail to explain altruism, in which an individual acts in a manner that benefits others at its own expense.

One answer is that selection may operate at social group level (group selection) as proposed by V.C. Wynne-Edwards (1962). On this picture, a group in which members behave altruistically towards one another might be more successful than one in which they do not. Kin selection, proposed by W.D. Hamilton (1964), posits reproductive success in terms of passing on one’s genes, and that by helping siblings and other relatives, one is doing this by proxy. This view largely superseded the group selection view. Robert Trivers (1971) extended the theory to non-kin in terms of doing a favour in the expectation of it being returned (“reciprocal altruism”). This behaviour is common in species of large primates, including humans. Kin-selection and reciprocal altruism act at individual and not group level, so the latter fell out of favour; though it has been recently revived.

By contrast, the gene-centric or “selfish gene” view popularised by Richard Dawkins states that selection acts at gene level, with genes that best promote the interests of their host organisms being selected for. On this view, adaptations are phenotypic effects that enable genes to be propagated. A “selfish” gene can be favoured for selection by favouring altruism among organisms containing it, even if individuals performing the altruism do so at the cost of their own chances of reproducing. To be successful, however, a gene must “recognise” kin and degrees of relatedness and favour greater altruism towards closer relatives (by, for example, favouring a sibling [where the chance of the same gene being present is 1/2] over a cousin [where it is only 1/8]). The green beard effect refers to such forms of genetic self-recognition, after Dawkins (1976) considered the possibility of a gene that promoted green beards and altruism to others possessing them.

© Christopher Seddon 2008