All in our genes?

“You got your interest in sport from your father.” “Your musical ability comes from your grandmother.” So people tend to believe.

The hours and years of study, training and practice have little to do with it! Is it really ‘all in our genes’?


Mother, son and daughter. Roman, approx. 4th century AD.

Robert Plomin is a Professor of Behavioural Genetics at the University of London. He has recently published an online talk about the effect of genes on our abilities and behaviour. (Plomin 2016) He is a big fan of ‘genetic influence’. This blog is a counterpoint to that essay.

It is interesting to find out how much our genetic inheritance determines our interests and abilities. Psychologists have studied identical and non-identical twins reared together or reared apart for decades. Could this settle the question of whether it is genes or environment that makes us who we are? Note that our own choices in life do not feature in these two alternatives.

Twin Studies

Identical twins have the same set of genes. They develop in the same womb and may be brought up in the same family. Nevertheless, they undergo different experiences in life just by being in a different location. The word ‘environment’ includes anything other than the particular set of genes that came together at the moment of conception. ‘Environment’ is usually split into ‘shared environment’ (family upbringing) and ‘non-shared environment’ (individual experience).

The results of these twin studies are presented as a number called the ‘heritability’ of some trait such as IQ. (IQ is easily measured and the most studied, but the comments here apply to most other behavioural traits.)


‘Heritability’ has a value between 0 (no effect of genes) and 1 (entirely genetically determined). The heritability of most measurable traits is between 0.25 and 0.75. (Pinker 2002) But we have to be careful about what this means. The slide from ‘heritability’ to ‘inherited’ is easily made and propagated, but is quite misleading.
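
To see where such heritability numbers come from, here is a minimal sketch of Falconer’s formula, a classic (and much simplified) way of partitioning trait variance from twin correlations. The correlation values below are hypothetical, chosen only for illustration; the studies cited here use more sophisticated models.

```python
# Illustrative sketch of Falconer's formula, a classic (simplified) way
# twin studies partition trait variance from MZ (identical) and DZ
# (fraternal) twin correlations. The inputs below are hypothetical.

def falconer(r_mz: float, r_dz: float) -> dict:
    """Partition trait variance from MZ and DZ twin correlations.

    h2 = 2 * (r_mz - r_dz)   broad heritability
    c2 = r_mz - h2           shared environment
    e2 = 1 - r_mz            non-shared environment (incl. measurement error)
    """
    h2 = 2 * (r_mz - r_dz)
    c2 = r_mz - h2
    e2 = 1 - r_mz
    return {"heritability": h2, "shared_env": c2, "non_shared_env": e2}

# Hypothetical correlations: identical twins 0.85, fraternal twins 0.60
print(falconer(0.85, 0.60))
# heritability 0.50, shared_env 0.35, non_shared_env 0.15 (approx)
```

Note that the three numbers sum to 1 by construction: the formula partitions the population’s variance, which is exactly why, as argued below, heritability says nothing about how any one individual’s trait came about.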

The number of legs that people have is almost uniformly two, the world over. It is somehow in our genes that we have two legs. But if we want to find out the heritability of legs we examine the variation in the number of legs per person in the population. Some individuals will have lost one or more legs due to accidents. The variation is thus entirely due to the environment. The heritability of the number of legs per person is thus zero!

The prevalent temperature during the incubation of a crocodile’s egg determines the sex of the hatchling. (Jones 2000) But you could not deny that there is a genetic element too. This shows that genes and environment interact – so-called ‘covariance’.

The ‘heritability’ of IQ apparently increases with the age of the subjects tested:

age   heritability   shared environment   non-shared environment
 5        0.22              0.54                  0.24
10        0.54              0.26                  0.20
16        0.62              0.00                  0.38
26        0.88              0.00                  0.12
50        0.85              0.00                  0.15

The implication is that the environment has an effect in early life, but by the time one reaches maturity one’s IQ has converged on its ‘genetic’ value. (Bouchard 2004)

Not All in the Genes

To confuse the issue further, a 2003 study of 59,000 infants found that in impoverished US families “60% of the variance in IQ is accounted for by the shared environment, and the contribution of genes is close to zero.” But “in affluent families, the result is almost exactly the reverse.” (Turkheimer et al 2003) This shows that sampling can have a big effect on measures of ‘heritability’ and can give a completely false impression.

A study of 17,000 British children, all born in one week in 1970, divided them into four groups: children with low or high IQ, crossed with families of high socio-economic status (richer and better educated) or low socio-economic status. It showed that children from low-IQ, high-socio-economic-status families catch up by age 6 with children from high-IQ, low-socio-economic-status families, and by age 10 are ahead of them, while the originally high-IQ, low-socio-economic-status children fall further and further behind. The social is evidently trumping the biological. These findings are just the reverse of what the table above seems to suggest. (Coghlan 2010) (More on these paradoxes in Chapter 5 of ‘Rethinking the Mind’ vol 2.)

One reason for the confusion of heritability with genetic determinism is the fact that heritability is deduced from populations, whereas genetic determinism refers to an individual. Height is partially genetically determined. As you would expect height is heritable. But, “[It is absurd] to ascribe so many inches of a man’s height to his genes and so many to his environment.” (Lewontin 1974)


One solution to the shortcomings of twin studies is to sequence the DNA of individuals and look at candidate genes which might determine IQ. Plomin says “DNA is a game changer; it’s a lot harder to argue with DNA than it is with a twin study or an adoption study.”

A common way to isolate particular genes is to compare the genetic sequences of individuals with well-defined diseases with those of individuals without the disease. This isolates gene differences that might cause the disease. The same approach can be used with IQ.

Plomin continues: “Dopamine genes and serotonin genes are so important as neurotransmitter systems that you’d think DNA variation in those genes might make a difference [to IQ etc]. Well, that didn’t pan out.”

A recent study by the Exome Aggregation Consortium (ExAC) compiled the genetic sequences of over 60,000 people. (Lek et al 2016) This revealed that each person has an average of 54 genetic mutations that were previously considered pathogenic. About 41 of these occur so often in the population that they aren’t likely to cause disease. The authors reviewed evidence for 192 genetic variants thought to cause rare single-gene disorders but only found support for 9.


According to Plomin, “Years of education is around 50 percent heritable. This means that, of the differences among people in their years of schooling, about half of those differences can be explained with genetics. In [a] genome-wide association study, with 120,000 people, we were detecting effects [3 genetic variants] that accounted for 2 percent of the variance in independent samples. …We have a paper coming out [which] … explains almost 10 percent of the variance in tests of school performance [74 genome-wide significant hits]. These are called GCSE scores in the UK, which …are administered at the end of compulsory schooling at age sixteen.”

This result is, of course, derived for a population that has more or less uniform compulsory education from age five. If children in Central Africa were included in the sample, the result would be quite different.
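
As a rough illustration of what ‘percent of variance explained’ means in such claims, the sketch below computes a squared correlation (R²) between a hypothetical predictor score and a hypothetical outcome. All the numbers are invented for illustration; they stand in for no actual study data.

```python
# Illustrative sketch of "variance explained" (R^2) -- the statistic behind
# claims such as "74 genetic variants explain 10% of the variance in scores".
# All numbers below are invented for illustration; they are not study data.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def r_squared(xs, ys):
    """Squared Pearson correlation: the proportion of variance in ys
    that a linear predictor based on xs could account for."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return (cov * cov) / (variance(xs) * variance(ys))

# Hypothetical 'polygenic scores' and exam scores for five people:
poly_score = [0.0, 1.0, 2.0, 3.0, 4.0]
exam_score = [50, 58, 49, 61, 60]
print(r_squared(poly_score, exam_score))  # ~0.41, i.e. about 41% of variance
```

Even a high R² is a statement about a population, not about any individual; and, as the sampling examples above show, it depends heavily on who is in the sample.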


Plomin explains that we all share about 99% of our DNA, which consists of about 3 billion base pairs, or about 20,000 genes. This is what makes us human, as opposed to chimpanzees, with whom we share about 95% of our genes. “We’re looking at these differences [between people] and asking to what extent they cause the differences that we observe. I hesitated over the word “cause” because we don’t often use that word with correlations. When DNA correlates with something, like reading ability, it’s the only correlation that you can unambiguously interpret causally, because nothing changes your DNA sequence.”

This misleads. People are apt to misread ‘cause’ as ‘sole cause’ when in fact there are several ‘causes’ that come together to create an effect.

Indeed, Plomin says he has difficulty explaining to people that a child’s ability to learn can be genetically influenced. “They’d say, ‘Well, that’s the end of clinical psychology. If it’s genetic, we can’t do anything about it.’ You’d say, ‘No, no, no, that’s wrong.’ In fact, by identifying genetic differences, you might be able to create therapies that work especially well for certain people.”

The role of DNA in making us who we are is widely misunderstood in this way.


Every cell in the body (apart from red blood cells) contains exactly the same DNA. Yet each cell is engineered for its particular role in whatever organ it appears in. The cell regulates which genes are activated. This is done by various processes including so-called ‘methylation’ in which genes are turned off by the substitution of a methyl group (-CH3) for a hydrogen atom at a strategic place in the DNA. The expression of the genes in identical twins may thus be different. (Kaminski 2009) There is also evidence that methylation can be passed on from parent to offspring. (Chandler 2007)

Experiencing stress early in life can have long term behavioural and physiological effects. There is evidence that this stress causes the methylation of a gene in the hippocampus (part of the brain involved in memory). (Welberg 2009) Although one’s genetic makeup may predispose one to such physiological consequences, the stressful events must still occur. If events in one’s life can alter the way one’s genes are expressed for the worse, surely other events can alter them for the better. (Just what events is the question).

I do not condemn genetic research as a waste of time, far from it. But geneticists must take care not to give a false impression and inadvertently propagate the idea of genetic determinism in society: “It’s not me, it’s my genes.”

(Bouchard 2004) Bouchard TJ. Genetic Influence on Human Psychological Traits Current Directions in Psychological Science vol 13 (4) p148-151
(Chandler 2007) Chandler V. Paramutation: From Maize to Mice Cell 128 p641-645
(Coghlan 2010) Coghlan A. Act early in life to close health gaps across society New Scientist (11 Feb 2010)
(Jones 2000) Jones S. Almost like a Whale Anchor 2000 p375
(Kaminski 2009) Kaminski ZA et al. DNA Methylation profiles in monozygotic and dizygotic twins Nature Genetics vol 41 (2) p240-245
(Lek et al 2016) Lek M et al. Analysis of protein-coding genetic variation in 60,706 humans Nature 536 p285-291
(Lewontin 1974) Lewontin RC. The analysis of variance and the analysis of causes American Journal of Human Genetics vol 26 p400-411 available at
(Pinker 2002) Pinker S. The Blank Slate Penguin 2002 p374
(Plomin 2016) Plomin R. Why We’re Different available at
(Turkheimer et al 2003) Turkheimer E et al. Socioeconomic Status Modifies Heritability of IQ in Young Children Psychological Science vol 14 (6) p623-628, available at
(Welberg 2009) Welberg L. The epigenetics of child abuse Nature Reviews Neuroscience 10, 246 (April 2009) | doi:10.1038/nrn2629

Milgram Revisited: “Only obeying orders”

Adolf Eichmann (below) was tried in Israel in 1961 for crimes against humanity. Eichmann’s crimes were in his handling of the logistics of transporting millions of Jews to concentration camps built for the purpose of their extermination during WW2.    His defence was ‘only obeying orders’.

only obeying orders

Milgram’s Experiments

Eichmann’s defence inspired Stanley Milgram (1933-1984), a psychologist at Yale University, to perform one of the most infamous experiments in social psychology. He wanted to find out how far a person would go in inflicting pain in obedience to the authority figure of the experimenter.

He chose people varying widely in age, occupation and education as subjects. From the subject’s point of view he and another person came to the laboratory to take part in a study of memory and learning.   They were given a scientific sounding rationale for the study.   One of them became a ‘teacher’, the other a ‘learner’.

The ‘teacher’ was shown an electrified chair and given a sample 45-volt shock. The ‘learner’ was then placed in the electrified chair, wired up with electrodes and told that he would be read lists of word pairs. When he heard the first word of a pair again he was supposed to say the second; if he made a mistake he would be given an electric shock.

The ‘teacher’ was then taken to a different room (linked by intercom) where he was placed in front of a control panel with thirty switches labelled from 15 to 450 volts, with descriptive designations from ‘slight shock’ to ‘danger: severe shock’ and finally ‘XXX’.

The experimenter, in a grey lab coat, starts the ‘teacher’ off with the word pairs. He tells the ‘teacher’ to administer the next level of electric shock whenever the ‘learner’ gets a word pairing wrong.

In fact, the ‘learner’ is an actor who receives no shocks but acts as though he does. In the face of objections from the ‘teacher’, the experimenter unemotionally encourages him to continue the experiment. When the learner starts to make mistakes, the level of electric shock is stepped up. “At 75 volts, he grunts; at 120 volts he complains loudly; at 150 he demands to be released from the experiment… At 285 volts his response can be described only as an agonised scream. Soon thereafter he makes no sound at all.” (Milgram 1973)

Milgram solicited predictions of the result of his experiment from 14 colleagues.   They almost uniformly predicted that the ‘teacher’ would refuse to obey the experimenter at 150 volts where the learner asks to be released from the experiment.   In fact about 60% of the ‘teachers’ went to the end of the experiment administering the full 450 volts.

The subjects (‘teachers’) were usually agitated during the experiment – sweating, trembling, stuttering or laughter fits.   They were much relieved at the end of the experiment to find they had not hurt anyone – though some showed no emotion throughout. Variations of the experiment were tried to find what parameters influenced the result. When the ‘teacher’ was allowed to choose the shock level rather than being told to raise it to the next level, the average shock chosen was less than 60 volts – lower than the point at which the victim showed the first signs of discomfort.   Only 2 out of 40 subjects went as high as 320 volts.

When the experiment was altered so that the experimenter gave his instructions by telephone rather than being in the room with the ‘teacher’, the percentage of ‘teachers’ obedient to the 450-volt level fell to 20%. When the ‘teacher’ was relieved of the responsibility of pulling the lever that administered the shocks, and merely specified the level at which the shock should occur, the percentage going all the way to 450 volts rose to 92%. In that case the subjects claimed that the responsibility rested with the person who actually pulled the lever.

Milgram concluded, “The essence of obedience is that a person comes to view himself as the instrument for carrying out another person’s wishes, and he therefore no longer regards himself as responsible for his actions… The most far-reaching consequence is that the person feels responsible to the authority directing him but feels no responsibility for the actions that the authority prescribes.  Morality does not disappear – it acquires a radically different focus: the subordinate person feels shame or pride depending on how adequately he has performed the actions called for by authority … the most fundamental lesson of our study [is that] ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process. Moreover, even when the destructive effects of their work become patently clear and they are asked to carry out actions incompatible with fundamental standards of morality, relatively few people have the resources needed to resist authority.” (Milgram 1973)

The experiment has been repeated in various parts of the world with even higher percentages of obedience in some cases. Milgram gave the subjects  personality tests in an attempt to find those aspects of personality or character that would predict how far the subjects would go, but he found no correlation with any of the test results.

New Experiment

Now a slightly different version of Milgram’s experiment has been performed by a group of ‘cognitive neuroscientists’ from University College London and the Free University of Brussels led by Patrick Haggard (Caspar 2016). They wanted to find out to what degree the participants felt ‘in charge’ when they knowingly inflicted pain on each other and when they knew the aim of the experiment.

In the new experiments the participants (all female) were tested in pairs.  They took turns being ‘agent’ and ‘victim’ thus ensuring reciprocity.    Each was initially given £20.  The agent sat facing the ‘victim’ and so could monitor directly the effect of her actions.    In a first group of participants, the agent could freely choose on each trial to increase her own remuneration by taking money (£0.05) from the ‘victim’ (financial harm) or not.   Money transfer occurred in 57% of trials.   In a variation of the experiment the financial harm was accompanied by an electric shock to the ‘victim’ at a level that was tolerable but not pleasant (the electric shock was administered in 52% of trials).

In both of these groups the experimenter stood by and in some cases told the agent to take the money (group 1) or shock the victim (group 2).   In the other cases the experimenter told the agent to exercise her free choice.   There were also a number of trials as controls where the experimenter asked the agent to press the space bar whenever she wanted (‘active’) and where the experimenter pressed the agent’s finger on the space bar (‘passive’).

In order to investigate the agent’s ‘sense of agency’ (“the subjective experience of controlling one’s actions, and, through them, external events”), each key press caused a tone to sound after a short interval (variously 200, 500 or 800 ms) and the participants were asked to judge the length of the interval. The rationale is that action-result intervals are perceived as shorter when the person carries out the action voluntarily (such as raising one’s arm) than when the action is done passively (someone else raises the arm). So if coercion reduces the sense of agency, interval estimates should be longer in the coercive than in the free-choice condition.

Thus there were several comparison sets of data: free choice versus coercion, financial harm versus physical harm, and harm versus no harm, as well as the control conditions (active versus passive). When the participants were ordered to press a particular key (producing either harm or no harm), they judged their action as more passive than when they had free choice, and they perceived the interval between key press and tone as longer (p = 0.001). This did not change depending on whether there was a harmful outcome, though it did when the potential harm was greater (i.e. physical rather than financial).

So the conclusion was that the coercion rather than the severity of the actual outcome was the determining factor in the sense of agency.   The agent experienced less sense of agency when she was coerced than when she freely chose between the same options – regardless of whether harm was actually inflicted.   So the plea “Only obeying orders” might not be just an attempt to avoid blame “but may rather reflect a genuine difference in subjective experience of agency.”

The participants were also given personality tests prior to the experiments to see if there were any predisposing factors. It was found that those scoring higher on empathy showed a greater reduction in the sense of agency when their actions had harmful outcomes.

In a second experiment, the same procedures were used but the agents were also hooked up to an electroencephalogram (EEG) to investigate changes in brain activity associated with the free choice / coercive conditions.   When an unpredictable stimulus such as a tone occurs it is followed by a ‘negative response potential’ approximately 0.1 seconds later in the frontal part of the scalp (usually referred to as the N100).  The expectation was that the N100 would be larger in amplitude when the agent freely chose her action compared with that when she felt coerced. This was indeed the case (amplitude ratio approx 1.3).   So not only the subjective ‘sense of agency’ but also neurophysiological activity is reduced under coercion.

Haggard says people genuinely feel less responsibility for their actions when following commands regardless of whether they are told to do something evil or benign. So the ‘only obeying orders’ excuse shows how a person feels when acting under command.

Before Haggard did these experiments he had (along with the majority of neuroscientists and many modern philosophers) already espoused the philosophical viewpoints of physicalism¹, epiphenomenalism² and reductionism³. He claims that mind-body causation is dualist and “incompatible with modern neuroscience” since most neuroscientists believe that “conscious experiences are consequences of brain activity rather than causes.” “Philosophers studying ‘conscious free will’ have discussed whether conscious intentions could cause actions, but modern neuroscience rejects this idea of mind–body causation. Instead, recent findings suggest that the conscious experience of intending to act arises from preparation for action in frontal and parietal brain areas. Intentional actions also involve a strong sense of agency, a sense of controlling events in the external world. Both intention and agency result from the brain processes for predictive motor control….” (Haggard 2005)

And again: “…the cause of our ‘free decisions’ may at least in part, be simply the background stochastic fluctuations of cortical excitability.” (Filevich 2013)


These experiments are interesting but care must be taken in their interpretation and in the consequences that may be claimed for jurisprudence. It is not clear whether the neurophysiological activity causes the subjective sense of agency or vice versa. What the experiments do reveal is that coercion causes both reduced sense of agency and reduced neurophysiological activity.

The experiments only concern what Elizabeth Pacherie terms ‘present-directed intentions’, i.e. those intentions which “trigger the intended action, …sustain it until completion, …guide its unfolding and monitor its effects”. They do not touch upon ‘future-directed intentions’, which are “terminators of practical reasoning about ends, prompters of practical reasoning about means and plans, and intra- and interpersonal coordinators” (Pacherie 2006).

One presumes that Haggard and his colleagues were motivated by future-directed intentions when they decided to do the experiments and write their paper, not simply acting as the result of ‘stochastic fluctuations of cortical excitability’. If so, the sweeping general conclusion loses its force.

The 18th century philosopher David Hume (1711-1776) thought that every object of the mind must be either an immediate perception or an ‘idea’ – a faint copy of some earlier perception. (Hume 1748) This was criticised by his contemporary Thomas Reid (1710-1796): “It seemed very natural to think that [Hume’s book] required an author, and a very ingenious one at that; but now we learn that it is only a set of ideas that came together and arranged themselves by certain associations and attractions.” (Reid 1764)

According to Haggard and his colleagues not even ideas are now involved – only ‘stochastic fluctuations of cortical excitability’.

The question of who bears personal responsibility is important to the rule of law. Certainly the person who gives the order to harm is culpable for the consequences.   But this does not absolve the person who actually carries out the order. The degree to which people feel responsible on average does not change the moral responsibility of any individual act.   Nor does it justify the inclusion of such ‘mitigating’ circumstances into criminal law.

Hannah Arendt (1906-1975) wrote a book on Eichmann’s trial (Arendt 1963), in which she coined the phrase “the banality of evil”.   It is not clear exactly what she meant by the phrase.   Milgram thought that she meant that Eichmann was not a “sadistic monster” but “an uninspired bureaucrat who simply sat at his desk and did his job“, and that she “came closer to the truth than one dare imagine.” (Milgram 1973)   It may well be true that in some situations evil is not perpetrated by fanatics and psychopaths but by ordinary people who see their actions as normal (banal = commonplace) within the prevailing conditions.   If so all of us are capable of committing horrendous crimes when the circumstances are right.

It is easy to see how ‘situationism’ (the philosophical belief that people act according to the situation in which they find themselves rather than by virtue of any moral or philosophical outlook they might have) is a credible paradigm. But it predicts the actions of only two-thirds of the subjects in the Milgram study. The new study suggests that there are character traits (e.g. ‘empathy’) that predict some aspects of the results (i.e. reduced sense of agency where there was a harmful outcome) more accurately. But we do not excuse criminality on the grounds of character traits.

Evil was commonplace in Nazi Europe, but for Arendt that did not render it excusable. Whilst Arendt saw Eichmann as a cog in the machinery of the Final Solution, she did not excuse his crimes nor fail to hold him morally responsible for his actions. “If the defendant excuses himself on the ground that he acted not as a man but as a mere functionary whose functions could just as easily have been carried out by anyone else, it is as if a criminal pointed to the statistics on crime – which set forth that so-and-so-many crimes per day are committed in such-and-such a place – and declared that he only did what was statistically expected, that it was a mere accident that he did it and not somebody else, since after all somebody had to do it.” (Arendt 1963)

Despite the pressures, some people do have the resources to buck authority even when the authority has far more clout than the man (or woman) in the grey lab coat. For example, the US GI Ronald Ridenhour forced the US Congress to investigate the My Lai massacre in Vietnam, where US servicemen massacred an entire village of 300 or more civilians in 1968. (Ridenhour 1969) There were many people, such as Raoul Wallenberg (1912-1947) and Oskar Schindler (1908-1974), who protected Jews from the Holocaust despite great personal risk.

If there are attempts to influence the law on the basis that these experiments prove diminished responsibility they should be dismissed.

The above contains passages extracted from the book Rethinking the Mind. Get the first volume here:


  1. Physicalism: the doctrine that everything is physical, ie all is matter and energy in its many forms and hence subject to the laws of physics.
  2. Epiphenomenalism: the doctrine that mental events are mere by-products of physical events and that mental events in themselves do not cause anything. In the classic description due to Thomas Huxley (1825-1895) consciousness is simply a collateral product of the working of the body in the same way that a steam whistle accompanies the work of a locomotive engine.
  3. Reductionism: the doctrine that explanations of phenomena are to be found in the smaller entities that comprise them, e.g. heredity in terms of DNA or, in this case, human activities in terms of neural firings.


Arendt H (1963) Eichmann in Jerusalem: A Report on the Banality of Evil Penguin

Caspar EA, Christensen JF, Cleeremans A & Haggard P (2016) Coercion changes the Sense of Agency in the Human Brain Current Biology available at

Filevich E, Kühn S, Haggard P (2013) Antecedent Brain Activity Predicts Decisions to Inhibit PLoS ONE (February 13, 2013) available at

Haggard P (2005) Conscious Intention and Motor cognition Trends in Cognitive sciences vol 9(6) p 290-295 available at

Hume D (1748) An Enquiry Concerning Human Understanding Section II ‘Of the Origin of Ideas’ para 12.

Milgram S (1973) The perils of Obedience Harpers Magazine p62-77 available at

Pacherie E (2006) Towards a dynamic theory of Intentions in Pockett S, Banks WP & Gallagher S (eds) Does Consciousness cause behavior MIT Press p 145-167 available at

Reid T (1764) An Enquiry into the Human Mind chapter 2.6 (ed J Bennett) available at

Ridenhour R  (1969) Letter to US Congress available at

Why Nations Fail review

Book Review

Acemoglu D & Robinson JA (2013) Why Nations Fail. The Origins of Power, Prosperity and Poverty Profile Books

A satellite photograph of Korea at night shows North Korea as dark as – well – night, whilst South Korea blazes forth with light pollution. The South is the 29th richest country in the world with a GDP of $37,000 per head. The North is one of the poorest ($1,800 GDP per head) suffering from periodic famine and desperate poverty. Why is this?

One easy answer is that the North is a dictatorship whereas the South is a democracy. Democracies are good; dictatorships are bad.

It is not so simple.

At the end of WWII Korea was divided between North and South at the 38th parallel. In 1950 the North invaded the South and almost succeeded in overrunning it. At the end of the Korean War (1953) the states were again divided, but both were dictatorships. The South’s GDP increased at 10% per year between 1962 and 1979. It only became a democracy in 1987 with separate executive, legislative and judicial bodies after a succession of 3 dictators (two by coups d’état, one of whom was assassinated).

Figuring out what are the vital factors and what drives the changes occurring in a society is difficult. There are no two identical societies in which one isolated factor can be changed to see what happens. Any theory is liable to have elements of pre-supposing the answer (for example: democracy versus dictatorship). So any theory about how and why some nations become tolerant and prosperous but others become intolerant and poverty stricken is likely to be controversial. Similar problems arise in trying to account for why formerly tolerant and prosperous nations reverse and become repressive and poor.

It is this problem of why some nations succeed and some fail that Daron Acemoglu, a Turkish-American professor of economics at M.I.T., and James Robinson, a British professor of public policy studies at the University of Chicago (hereafter A&R), tackle in their book Why Nations Fail: The Origins of Power, Prosperity and Poverty. The book has been generally well received.

I will outline some of the earlier theories and the criticisms by A&R and then their theory and some of the criticisms that have been levelled against it.


The Geography Hypothesis

The map of the world shows affluent societies in the temperate areas and poor societies in the tropical areas within 30° of the equator. This is particularly marked in Africa. The idea then is that the great division between rich and poor countries is caused by geography. The reasons for this are the pervasiveness of tropical diseases such as malaria, the scarcity of animals that could be used as cheap labour, and the poverty of the soil. There are exceptions, for example, the rich countries of Singapore and Malaysia, but both of these have access to the sea. This allows trade because it is much cheaper to transport cargo by sea than by land.

A&R criticise this theory on several grounds despite its initial appeal. The Indus Valley civilisation is the first recorded great civilisation and it is situated in what is modern Pakistan, well within the tropics. Central America before the Spanish invasions was richer than the temperate zones. One of the world’s currently poorest nations, Mali (GDP $18 billion), where half of the population of 14.5 million live on less than $1.25 per day, was once ruled by the richest person who has ever lived. Mansa Musa Keita I (c. 1280 – c. 1337) had a fortune of $400 billion in today’s money. His wealth included vast quantities of gold, slaves, salt and a large navy(!).

“History … leaves little doubt that there is no simple connection between tropical location and economic success.” (A&R p51) There are also vast differences in wealth within the tropics and the temperate regions at the present time. There is a sharp divide between poverty and prosperity at the borders between North and South Korea, between Mexico and the United States, and between East and West Germany before reunification.

The Ignorance Hypothesis

The reason poor nations are poor, according to this hypothesis, is that their governments are not educated in how a modern economy should be run: their leaders have mistaken ideas about running their countries. Certainly, leaders of central African countries since independence have made decisions that look bad when viewed from outside. The IMF recommend a list of economic reforms that poor states should undertake, including:

  • reduction of the public sector,
  • flexible exchange rates,
  • privatization of state-run enterprises,
  • anticorruption measures and
  • central bank independence.

The central bank of Zimbabwe became ‘independent’ in 1995. It was not long before inflation took off, reaching 11 million per cent a year (officially) by 2008, with unemployment around 80%.

But according to A&R it is not ignorance that is the source of bad decisions: “Poor countries are poor because those who have power make choices that create poverty. They get it wrong not by mistake or ignorance but on purpose. To understand this, you have to go beyond economics and expert advice on the best thing to do and, instead, study how decisions actually get made, who gets to make them, and why those people decide to do what they do.” (p68)

The Culture Hypothesis

One idea about the rise of Europe from the 17th century was that it was caused by the ‘protestant work ethic’. Alternatively, the relative prosperity of former British colonies like Australia, and the U.S.A. was caused by the superior British culture. Or perhaps it is just European culture that is better than the others. These smug ideas don’t hold much water when you look at China and Japan, or when you look at the conduct of the European powers in their colonies. Some of those colonies are now prosperous and some are not.

At the start of the Industrial Revolution in the 18th and 19th centuries Britain had relative stability. It had a tolerant, clubby society that encouraged individualism. It protected invention through patents. It had a market for mass-produced goods. According to A&R this was not culturally caused. Rather it was the result of definite structures in society and political arrangements. (p56)

Modernisation Theory

According to this theory (also known as the Lipset or Aristotle theory) when countries become more economically developed they head towards pluralism, civil liberties and democracy. There is some evidence that this holds in Africa since 1950. (Anyanwu & Erhijakpor 2013)

But A&R object that US trade with China has not (yet) brought democracy there. The population of Iraq was reasonably well educated before the US-led invasion, and it was believed to be ripe ground for the development of democracy, but those hopes were dashed. The richness of Japan and Germany did not prevent the rise of militaristic regimes in the 1930s. (p443)

Other Theories

There are other theories about where and when prosperity will arise or disappear.
Several experts discuss the cause of the Industrial Revolution in The Day the World Took Off (Dugan 2000). They surmise:

  • historical accident (p182);
  • capitalism (p135);
  • the availability of raw materials (p66);
  • consumerism (p64);
  • the habit of drinking tea or beer rather than contaminated water (p18);
  • the need to measure time (p100);
  • the rise of the merchant classes (bourgeoisie) (p141).

Also discussed in The Day the World Took Off is the settlement of the Glorious Revolution of 1688, which brought political stability (p82). Finance became available through the establishment of the Bank of England. Incentives for investment, trade and innovation appeared through the enforcement of property rights and patents for intellectual property.

Acemoglu & Robinson

The political and economic factors exemplified by the Glorious Revolution are what A&R develop in their 500+ page book on why nations fail. A&R make several major points in their analysis.

1. Centralization

The first requirement for economic growth is a centralized political set-up. Where a nation is split into factions, as is the case today in Somalia and Afghanistan, it is difficult to centralize power. This is because “any clan, group or politician attempting to centralize power in the state will also be centralizing power in their own hands, and this is likely to meet the ire of other clans, groups and individuals who would be the political losers of this process.” (p87) Only when one group of people is more powerful than the rest can centralization occur.

2. Extractive Economic Institutions

Economic institutions are critical for determining whether a country is poor or prosperous. A&R define extractive economic institutions as those “designed to extract incomes and wealth from one subset of society to benefit a different subset.” (p76) The feudal system that existed in Europe around 1400, and persisted in places into the 20th century, was extractive: wealth flowed upwards from the many serfs to the few lords. In later times, colonialism funnelled wealth away from the locals to the colonists. A particular example was King Leopold II (1835-1909) of Belgium, who ruled over the Congo Free State from 1885 to 1908. He built his personal wealth through copper, ivory and rubber exports, supervised by a repressive police force that enforced local slave labour. A considerable but unknown proportion of the population were murdered or mutilated in the pursuit of Leopold’s wealth. (Bueno de Mesquita 2009)

Economic growth can occur where there are extractive economic institutions, provided there is centralization of power. It is in the interest of the exploiters to increase production for their own gain. A&R claim that this growth cannot continue for ever. It comes to an end “because of the absence of new innovations, because of political infighting generated by the desire to benefit from extraction, or because the nascent inclusive elements were conclusively reversed…” (p184) They thus predict that China’s growth will stall unless it manages somehow to transition to inclusive institutions (p442).

3. Extractive Political Institutions

Extractive economic institutions are set up by whoever it is that has political power. They will be better off if they can extract wealth from the rest of society and use that wealth to increase their power. “[They] have the resources to build their (private) armies and mercenaries, to buy their judges, and to rig their elections in order to remain in power. They also have every interest in defending the system. Therefore, extractive economic institutions create the platform for extractive political institutions to persist. Power is valuable in regimes with extractive political institutions, because power is unchecked and brings economic riches.” (p343)

4. Inclusive Political Institutions

Political institutions that distribute power broadly in society and subject it to constraints are pluralistic. Inclusive political institutions are those that “are sufficiently centralized and pluralistic.” (p81)

This agrees with the view of the American political scientist Bruce Bueno de Mesquita that one of the main factors in having benevolent government is the presence of a large coalition (which he calls the ‘selectorate’) of those who have a say in who rules. (Bueno de Mesquita 2009)
The Glorious Revolution in Britain of 1688 limited the power of the king and gave parliament the power to determine economic institutions. It opened up the political system to a broad cross-section of society, which was able to exert considerable influence over the way the state functioned.

Before 1688 the king had a ‘divine right’ to rule the state by law. Afterwards even the king was subject to the Rule of Law. “[The Rule of Law] is a creation of pluralist political institutions and of the broad coalitions that support such pluralism. It is only when many individuals and groups have a say in decisions, and the political power to have a seat at the table, that the idea that they should all be treated fairly starts making sense.” (p306)
Britain stopped censoring the media after 1688. Property rights were protected. Even ‘intellectual property’ was protected through patents, which enabled innovators and entrepreneurs to gain financially from their ideas. According to A&R it is no accident that the Industrial Revolution followed a few decades after the Glorious Revolution. (p102)

‘Inclusive political institutions’ are not the same thing as democracy. Great Britain after the Glorious Revolution was not a democracy in the modern sense. The franchise was limited and representation disproportionate. For instance, the constituency of Old Sarum in Wiltshire had 3 houses, 7 voters and 2 MPs. Not until 1832 did the franchise extend to 1 in 5 of the male population, and only in 1928 did all women get the vote. Similarly, the United States, a prosperous nation, did not grant the franchise to ‘all’ males until 1868, to ‘all’ females until 1920, and to all African Americans until 1965.

There are many examples of countries where democratic voting occurs but few, if any, inclusive political institutions exist. In such countries ‘democracy’ tends to be a conflict between rival extractive institutions.

According to A&R the reason the Middle East is largely poor is not geography. It is the expansion and consolidation of the Ottoman Empire and its institutional legacy that keeps the Middle East poor. The extractive institutions established under that regime persist to the present day. It is just different people running them.

5. Inclusive Economic institutions

“Inclusive economic institutions … are those that allow and encourage participation by the great mass of people in economic activities that make best use of their talents and skills and that enable individuals to make the choices they wish. To be inclusive, economic institutions must feature secure private property, an unbiased system of law, and a provision of public services that provides a level playing field in which people can exchange and contract; it must also permit the entry of new businesses and allow people to choose their careers.” (p74)

These features of society all rely on the state. It alone can impose the law, enforce contracts and provide the infrastructure whereby economic activity can flourish. The state must provide incentives for parents to educate their children, and find the money to build, finance and support schools.

Economic growth and technological change are what make human societies prosperous. But this entails what the Austrian-American economist Joseph Schumpeter called ‘creative destruction’. This term describes the process whereby innovative entrepreneurs create economic growth even whilst it endangers or destroys established companies. “[The] process of Creative Destruction is the essential fact about capitalism. It is what capitalism consists in and what every capitalist concern has got to live in.” (Schumpeter 1942)

A&R opine that the fear of creative destruction is often the reason for opposition to inclusive institutions. “Growth… moves forward only if it is not blocked by the economic losers who anticipate that their economic privileges will be lost and by the political losers who fear that their political power will be eroded.” (p86) Opposition to ‘progress’ comes from protecting jobs or income, or protecting the status quo.

“The central thesis of this book is that economic growth and prosperity are associated with inclusive economic and political institutions, while extractive institutions typically lead to stagnation and poverty. But this implies neither that extractive institutions can never generate growth nor that all extractive institutions are created equal.” (p91)

6. Critical Junctures

A critical juncture is when some “major event or combination of factors disrupts the existing balance of political or economic power in a nation.” (p106) Similar events such as colonization or decolonization have affected many different nations, but what happens to the society at such critical junctures depends on small institutional differences.

A hundred years before the Glorious Revolution, Britain was ruled by an absolute monarch (Elizabeth I), Spain by Philip II and France by Henry III. There was not much difference in their powers, except that Elizabeth had to raise money through parliament. Henry and Philip were able to monopolize transatlantic ‘trade’ for their own benefit; Elizabeth could not, because much of the English trade was carried by privateers, who resented authority. It was these wealthy merchant classes who played a major role in the English Civil War and the Glorious Revolution.

“Once a critical juncture happens, the small differences that matter are the initial institutional differences that put in motion very different responses. This is the reason why the relatively small institutional difference led to fundamentally different development paths. The paths resulted from the critical juncture created by the economic opportunities presented to Europeans by Atlantic trade.” (p107)


Criticisms

One of the difficulties with political and social theory is that once a formula has been hit upon, everything then becomes interpreted in the light of that formula. Once Marx had explained economics in terms of labour and its exploitation, there was no room for those who espoused that idea to see anything different. So extractive versus inclusive institutions could be just another seductive idea.

1. Economists Michele Boldrin, David Levine and Salvatore Modica make a similar point in their review (Boldrin, Levine & Modica 2012). They say that if we lack an axiomatic definition of what is ‘inclusive’ and what is ‘extractive’, independent of actual outcomes, then the argument becomes circular and subject to a selection bias. Some of A&R’s examples are “a bit strained”.

For example, after Julius Caesar established the ‘extractive empire’, the ‘fall of Rome’ did not occur for four centuries. The success of South Korea, Taiwan and Chile (which had non-inclusive political institutions but evolved into inclusive ones) might lead one to suppose that “pluralism is the consequence rather than the cause of economic success.” (The Anyanwu study mentioned above in connection with the modernisation theory did find a correlation between economic success and democracy in Africa. But they also found that the extent of oil reserves in a country tended to stop the development of democracy, which is what you would expect from A&R. I think there are cross-causative factors: the rise of the merchant classes in England was a major factor in the development of English politics, as A&R show.)

In the case of Italy the political institutions are the same in the north and the south, but the north is prosperous whereas the south is dependent on handouts from the north. BL&M acknowledge that the south suffers from economic exploitation (the Mafia), but this suggests that political institutions are only part of the story, since there is no national border. They also say there is a danger in using satellite photographs as economic evidence, as in this particular case “the poorest part of Italy is the most brightly lit.” The apparent brightness of parts of photographs depends on several factors, including the curvature of the Earth and where the satellite is with respect to the subject. The picture of Italy here shows the north (the Po Valley) as the most brightly lit.

Germany from the mid-19th century until the end of WW2 prospered under extractive institutions, and led the world in its chemical industry. It did have compulsory education, social insurance and an efficient bureaucracy, but it could hardly be thought of as inclusive. Nazi Germany invented and produced the first jet planes and rockets. The “brief period of inclusiveness, the Weimar Republic, was an economic catastrophe.”

Again, the Soviet Union “did well under extractive communist institutions,” but floundered after a coup d’état established inclusive political institutions.

According to BL&M, Zimbabwe is a disastrous case of moving towards more inclusive institutions by extending the franchise to a wider population and lifting trade restrictions. (I find it difficult to believe that Zimbabwe can be regarded as consisting of inclusive political and economic institutions).

BL&M suggest that A&R focus on what happens within nations, when a great many developments depend on what happens between nations, not least invasions and war. BL&M perceive that many historical crises, including the current crisis in Greece, stem from debt, yet A&R do not mention this. The French Revolution and the rise of Nazism came from debt crises, as did the English Civil War.

BL&M argue against A&R’s stance that the intellectual property rights brought in after the Glorious Revolution were one of the spurs for the Industrial Revolution. They show that patents were barriers to progress. They are passionate advocates of liberalizing copyright, trademark and patent laws, which they see as the enemy of competition and ‘creative destruction’. (Boldrin & Levine 2008) I have sympathy with this view, but that’s a different story for another day. (See also Hargreaves 2011.)

What BL&M’s cases seem to suggest is that we need stricter criteria for ‘inclusive’ and ‘extractive’. These nations were inclusive in some respects and extractive in others. It is difficult to decide which were, or are, the most pertinent factors.

A further complication is the passage of time. How long before an ‘inclusive’ or ‘extractive’ feature starts to make a change to the society? A&R do not suggest that prosperity manifests immediately or immediately disappears when a society transitions from one to the other.

2. One of the principal proponents of the geography theory is Jared Diamond, a professor of geography at the University of California, Los Angeles. He acknowledges that inclusive institutions are an important factor (perhaps 50%) in determining prosperity, but not the overwhelming one (Diamond 2012). He favours historically long periods of central government and geography as major factors. He also makes the point that whether each of us as individuals becomes richer or poorer depends on several factors, including “inheritance, education, ambition, talent, health, personal connections, opportunities and luck…” So there is no simple answer to why nations become richer or poorer.

3. William Easterly, a professor of economics at New York University, complains that A&R have “dumbed down the material too much” by writing for a general audience. They rely on anecdotes rather than rigorous statistical evidence (when “the authors’ academic work is based on just such evidence” ). So the book “only illustrates the authors’ theories rather than proving them.”

Conclusion

All three of these critical reviews acknowledge that Why Nations Fail is a great book. It should be read by anyone with an interest in politics.

Apart from the central thesis outlined above, A&R provide many examples and great historical detail. This alone makes the book a good read, even if you have philosophical aversions to its conclusions.

There is no simple solution to the problem of failed states but at least a correct diagnosis might lead to a greater percentage of success. Such explanations as ‘geography’, ‘culture’ and ‘historical accident’ do not offer much hope. Imposing ‘democracy’ on states that are anarchic or repressive does not seem to have worked so far, though it might form part of a solution once the system that has kept the nation repressed has been remedied.

You might think that the people who are in charge of states that extract wealth from their populations and gather power to themselves are psychopaths. They probably are. But the system usually predates the person who takes power: it has existed for a considerable time, or is easily adapted to the purpose, and it tends to persist longer than any individual. There are more than enough psychopaths around to engineer a revolution or coup that puts them in charge when they see the advantages that may accrue. So getting rid of a dictator is only likely to replace him with another one. Where it does not, the likely consequence is the de-centralisation of the state into warring factions.

A&R make the point that “avoiding the worst mistakes is as important as – and more realistic than – attempting to develop simple solutions.” (p437)

References

Acemoglu D & Robinson JA (2013) Why Nations Fail: The Origins of Power, Prosperity and Poverty. Profile Books

Anyanwu JC & Erhijakpor AEO (2013) Does Oil Wealth Affect Democracy in Africa? African Development Bank

Boldrin M & Levine DK (2008) Against Intellectual Monopoly. Cambridge University Press

Boldrin M, Levine D & Modica S (2012) A Review of Acemoglu and Robinson’s Why Nations Fail

Bueno de Mesquita B (2009) Predictioneer. The Bodley Head (published in the USA as The Predictioneer’s Game)

Hargreaves I (2011) Digital Opportunity: A Review of Intellectual Property and Growth (report commissioned by the UK government)

Schumpeter JA (1942) Capitalism, Socialism and Democracy. Harper & Brothers

Taylor F (2013) The Downfall of Money: Germany’s Hyperinflation and the Destruction of the Middle Class. Bloomsbury

Don’t lose your mind for Utopia

by Michael Davidson

Thomas More

Thomas More (1478-1535), Lord Chancellor to Henry VIII (1491–1547) of England, wrote the book ‘Utopia’[1] first published in 1516. The book describes a fictional island and its politics and customs. The word is derived from the Greek ou = not and topos = place, hence utopia = no place. There is also the Greek eu = good which sounds similar, so utopia = good place (the current meaning). It is not clear whether More was presenting this mythical island as the perfect state or whether he was saying no such place could stably exist. Given the political climate of the time he was probably wise to be equivocal on the matter. He eventually lost his head anyway.

There is no private property or money on Utopia. All produced goods are stored in warehouses where people get what they need. All property is communal so houses are periodically rotated between citizens. All meals are communal. There are no private gatherings. All wear similar woollen garments. Premarital sex is punished by enforced lifetime celibacy. Adultery and travel within the island without a passport are both liable to be punished by enslavement.

You might not think that this would be a pleasant place to live, but there has been at least one attempt to implement such a society (Michoacán, Mexico circa 1535) and More was revered by Lenin for promoting the “liberation of humankind from oppression, arbitrariness, and exploitation.” [2]

Plato

Thomas More mentions Plato (427-347BC) favourably, and was obviously well acquainted with Plato’s Republic,[3] which is arguably the first attempt to design a ‘perfect state’. In Plato’s republic there are three classes of citizen: the rulers, the military and the workers (merchants, carpenters, cobblers, farmers and labourers). The rulers are the philosophers (those devoted to reason); the military (called Guardians) are the spirited or ambitious; and the rest are those who know only their desires. The rulers rule with absolute power, exercise strict censorship so that only good and true ideas prevail, and ensure by appropriate education that they are succeeded by like-minded philosophers. All citizens know their place in society and may not change it, for to do so would be to rebel against the institutions.

For the rulers and the potential rulers, family life would be abolished in favour of communal living. All promising children who showed spirit or reason (from whatever class – though Plato advocated a eugenic program of mating the ‘best men’ with the ‘best women’) were to be removed from their families to be educated as potential rulers. Their training would be in gymnastics and military music until the age of twenty, then mathematics and astronomy for ten years, followed by a thorough study of Plato’s philosophy. Those that didn’t quite make it through the course at any stage were to be assigned to the military. By the time they had successfully finished this study they would be over 50, and would have developed such a devotion to Plato’s philosophy that they would rule wisely, out of a sense of justice and in recompense for the superb education the state had provided for them. Since the rulers are just and good, and have absolute power, there is no need for laws or votes.

Thomas Hobbes

More and Plato were idealists who believed in worlds beyond this one. But totalitarian states can also be based on a materialist view of Man. Thomas Hobbes (1588-1679) in his book Leviathan (1651) regards the State as something like an artificial man: “the sovereign is the soul, the magistrates are artificial joints, reward and punishment are the nerves, wealth and riches are the strength” and so on.

Hobbes thinks that ‘in the state of nature’ Man is or would be in a perpetual state of turmoil. Without “a common power to keep them all in awe”, there would be a war of “every man against every man.”[4] The solution is for men to surrender their liberty to a sovereign power. It does not much matter whether the sovereign power is a monarchy, an aristocracy or a democracy. The essential point according to Hobbes is that the sovereign must have absolute power. Only in this way can the populace have a secure and orderly existence.

Such perfect societies would not be so bad if they were confined to books, but every so often societies built on similar lines spring into being, often as the result of a revolution or coup in the name of some dream. Though these societies tend not to last long, reversing the process towards a more libertarian one is often painful. It is not usually possible to impose democracy on what was previously a dictatorship. Although democracies may be born in a coup, they also evolve, as is evident in the many different versions of democracy that exist in the world today.

John Locke

Freedom of speech, freedom of enterprise, rule of law, property rights and the ability to remove unpopular governments from power without disrupting society are characteristics of democracies. In popular parlance only voting is seen as distinguishing democracy from other systems. But there is a lot more to it than that.

The above characteristics are attributed in no small part to John Locke (1632–1704), who published Two Treatises of Government in 1689,[5] which seeks to throw light on the basis of political authority. Locke does not reckon much to Hobbes’ absolute sovereign power. He sees the original ‘state of nature’ as happy and tolerant. The State is formed by a social contract which entails a respect for natural rights, liberty of the individual, constitutional law, religious tolerance and general democratic principles. The various institutions form a system of checks and balances. A government must be deposed if it violates natural rights or constitutional law. The state is concerned with procuring, preserving and advancing the civil interests of the people: life, liberty, health and property, through the impartial execution of equal laws. These principles were eventually enshrined in the constitutions of many modern democracies as ‘self-evident’. The history of the world shows there was nothing much about them that was evident before Locke. The whole idea of ‘human rights’ (i.e. those rights arising from being a human being, as opposed to sovereign rights, marital rights etc.) stems from this era and can be attributed in no small part to Locke.

Montesquieu

Democracies depend for their equilibrium on many interdependent and independent institutions. The doctrine of the ‘Separation of Powers’ due to Montesquieu (1689–1755) was based at least partly on his observation of Locke’s England. This had recently become a constitutional monarchy through the “Glorious Revolution”(1688) which installed William of Orange and his wife Mary on the throne with increased parliamentary authority. According to Montesquieu political liberty is a “tranquillity of mind arising from the opinion each person has of his safety.” In England this was obtained through the separation of the Legislative, Executive and Judicial branches of the administration.[6] The merging of these powers into one body would be a recipe for tyranny, he said. The separation of powers was a major consideration in the drafting of the US Constitution (1788).

According to Montesquieu democracy can be corrupted not only where the principle of equality of all citizens does not exist but also where the citizens fall into a spirit of ‘extreme equality’, where each considers himself on the same level as those who are in charge. People then want to “debate for the senate, to execute for the magistrate, and to decide for the judges. When this is the case virtue can no longer subsist in the republic.”[6] The ideal of a ‘Free Press’ exists so that corruption in high places can be exposed, but it can deteriorate into this ‘extreme equality’. This is shown by the recent scandals in England, where certain sections of the press have represented themselves as the conscience of the nation in all matters political and judicial whilst at the same time having so little respect for the truth in high-profile cases like that of Christopher Jefferies¹ and engaging in criminal activity like phone hacking².

How to Recognise Philosophical Nonsense


Philosophy is our ideas about morality, politics, how we find out about the world in which we live, and consequently how we think the world works.

Unfortunately, philosophers do not agree on what the basic ideas in these fields are, nor have they agreed on any principles whereby they or we can distinguish good from bad philosophy.

When we pick up a philosophical text, is there a message in there that will enlighten or educate us? The philosopher is often pushing a world view and gathering whatever he or she considers to be evidence to bolster it.

Philosophers, like the rest of us, are susceptible to reaching a consensus that ‘everybody knows’ which later turns out to be false. Great fashions in thought have lasted for centuries before philosophers abandoned them. In recent times there have been claims that philosophy is dead and that science can take its place. Such claims are themselves inherently philosophical.

Books written by professional philosophers are usually difficult to understand. Why are they so hard?

My book considers the various schools of philosophical and scientific thought and demystifies the arguments in a way that most people can understand. My view is an empirical one, grounded in physics, biology, neuroscience, psychology, psychotherapy and so on, but I recognise that philosophical viewpoints underpin these subjects.

I am not dogmatic that mine is the correct view: other books, with titles like “How the Mind Works”, “How the Brain Makes Up Its Mind” and “Consciousness Explained”, push particular views. There is not enough evidence yet to make such statements, either as conclusions or as book titles. You can make up your own mind.

It is difficult to judge whether a particular philosophical position will eventually prove to be true. But it is possible, with some uncertainty, to measure how readable and understandable a text is.

Psychologists have researched what makes texts simple or difficult to read. The obvious factors are sentence construction and vocabulary. Simple measures of these are sentence length (in words) and word length (in syllables). A measure which uses these easy-to-count features of a text is the Flesch-Kincaid Grade Level, which is worked out as follows:

grade = 0.39 × (average words per sentence) + 11.8 × (average syllables per word) − 15.59

Microsoft Word is happy to calculate this for us (menu: tools/spelling and grammar).
The calculated grade is the US school grade, ie the number of years of schooling required to understand the text; the corresponding reading age is roughly six plus the grade.

There are a number of similar measures of school grade using slightly different formulae.   Various web sites will calculate these for any supplied text.   The agreement between the various measures is only about ±3, so they can only be taken as a guide.   It is possible to write philosophical nonsense that gives a good readability score!
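The grade formula is simple enough to compute directly. Here is a minimal sketch in Python, assuming a crude vowel-group syllable counter (real readability tools use dictionary-based syllable counts, so results will differ slightly from Word's):

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of consecutive vowels (minimum one syllable).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Flesch-Kincaid Grade Level from average sentence length (in words)
    # and average word length (in syllables).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    avg_sentence_len = len(words) / len(sentences)
    avg_word_len = sum(count_syllables(w) for w in words) / len(words)
    return 0.39 * avg_sentence_len + 11.8 * avg_word_len - 15.59
```

Very short words in short sentences can even drive the grade below zero, which is why such formulae are only a rough guide.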

Here are the measures of readability (average from 5 different measures) for sample texts of approx 1500 words by a few selected philosophers:

This text 11
Plato (428BC-348BC) 11
Richard Dawkins (b1941) 11
My book “Rethinking the Mind” 12
John Searle (b1932) 13
John Locke (1632-1704) 13
Aristotle (384BC-322BC) 14
Benedict Spinoza (1632-1677) 14
Thomas Hobbes (1588-1679) 15
Dan Dennett (b1942) 15
Thomas Reid (1710-1796) 15
Friedrich Hayek (1899-1992) 16
Georg W F Hegel (1770-1831) 16
Immanuel Kant (1724-1804) 17
John Stuart Mill (1806-1873) 17
David Hume (1711-1776) 17
Robert Almeder (b1939) 18
Rene Descartes (1596-1650) 21

The aim of my book has been to make philosophy understandable to people educated to 12th grade.

The first step in reaching a balanced and consistent personal philosophy that one can live by is to approach philosophical texts with sure principles for evaluating them.   I put forward a set of such principles in chapter 1.   You can download it for free at the top of this page.

Armed with these principles you can not only understand the arguments that have occurred but also evaluate their merits.

You will then be able to see through many of the facile psychological, philosophical and political arguments that are put forward in the media.   The counter-arguments have been locked away in obscure and impenetrable texts, as the table above hints.

Begin your journey to Understanding. Buy the first of the three volumes here:

Do we have Free Will? And why it matters to you.

You might think that after several thousand years of debate we have exhausted all the arguments as to whether we have Free Will, or whether our actions are caused by prior events. So say the protagonists on both sides of the argument: but they still argue! Now there are some new approaches that could throw light on the problem.

Free Will is the idea that we are able to choose between alternative courses of action and actually cause something to happen. For example, I can decide to lift my arm and I consider this to be something I could have decided to do or not.

Determinism is the idea that all events are necessary effects of earlier events: future events are as fixed and unalterable as past events.

Determinism is not quite the same as ‘fatalism’. Fatalism is the doctrine that what is going to happen is going to happen regardless of what you do. For example, you will die of a heart attack on such and such a date, regardless of changing your diet, exercising, medical intervention and so on. Determinism does not predict necessary future outcomes; it merely states that whatever the outcome turns out to be it was the result of prior natural causes.

This idea of determinism is sometimes held to come from Isaac Newton’s discoveries of the laws of motion and gravity. These led us to the idea of a ‘clockwork universe’. The French mathematician Pierre-Simon Laplace (1749-1827) claimed that if we knew all the laws of nature and the position of all the particles in the universe at a particular instant we could know the future (and the past) precisely. He did not, though, say how we could start to verify this.

In fact the idea of determinism is not recent: it has roots in ancient Greek philosophy and came through various brands of Christianity before Newton.

It is only in the last few years that we have realised that Newton’s laws do not imply a clockwork universe. In certain circumstances these laws cause chaotic behaviour. The mathematician James Lighthill (1924 – 1998) even apologised to the lay community for mathematicians giving a false impression for 250 years. (Lighthill 1986)

In the 20th century Newtonian physics was displaced by Quantum Mechanics which showed that determinism of the kind envisaged by Laplace is false. For instance it is not possible to predict when an individual atom of radium will emit an alpha particle and become an atom of radon. All that can be predicted is what proportion of a certain mass of radium will have turned into radon in a certain time.
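The half-life law makes this statistical claim concrete: for a large mass we can predict the decayed proportion precisely, even though no individual atom's decay can be predicted. A minimal sketch, assuming the approximate 1600-year half-life of radium-226 (the function name is my own):

```python
HALF_LIFE_RA226_YEARS = 1600.0  # approximate half-life of radium-226

def fraction_decayed(t_years, half_life=HALF_LIFE_RA226_YEARS):
    # Fraction of an initial mass that has decayed (here, radium turned
    # into radon) after t_years: 1 - (1/2)**(t / half_life).
    return 1.0 - 0.5 ** (t_years / half_life)
```

So after one half-life half the mass has decayed, after two half-lives three quarters, and so on; which particular atoms decayed remains unpredictable.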

One philosophical response to quantum mechanics is to insist that indeterminism is true: so our actions must be random, and we don’t cause them anyway. The effort here seems to be to deny free will regardless.

You might think that a determinist would necessarily shun the idea of free will and personal responsibility, since our actions are all the product of physical brain activity over which the self (if it exists) has no control. Those who believe that determinism and free will are mutually exclusive are known as ‘incompatibilists’.

Those who believe that free will can be reconciled with determinism are called ‘compatibilists’ and according to the contemporary philosopher John Searle (b1932) this is the majority view among philosophers.

Incompatibilists who believe in determinism are known as ‘hard determinists’.

There is no word (as yet) for people who believe both that determinism is false and that free will does not exist (‘randomists’, perhaps?).

One hard determinist is the neuroscientist Colin Blakemore (b1945): “… all those things that you do when you feel that you are using your mind (perceiving, thinking, feeling, choosing, and so on) are entirely the result of the physical actions of the myriad cells that make up your brain.” Consequently, “It makes no sense (in scientific terms) to try to distinguish sharply between acts that result from conscious intention and those that are pure reflexes or that are caused by disease or damage to the brain.” It seems to follow that “the addict is not ill and is surely not committing a crime simply by seeking pleasure.” (Blakemore 1988)

Another hard determinist, or perhaps a randomist, since he allows the influence of random events in biological development and behaviour, is the biologist Anthony Cashmore. Even if quantum theory eventually shows that determinism is false “it would do little to support the notion of free will: I cannot be held responsible for my genes and my environment; similarly I can hardly be held responsible for any [random] process that may influence my behaviour.” (Cashmore 2010)

Whether or not determinism is true there are philosophers who believe that free will is impossible on purely logical grounds. The philosopher Arthur Schopenhauer (1788-1860) said “A man can surely do what he wants to do. But he cannot determine what he wants.”

Carrying on this theme the philosopher Galen Strawson (b1952) believes that what one wants is “just there, just a given, not something you chose or engineered – it was just there like most of your preferences in food, music, footwear, sex, interior lighting and so on… [Wants] will be just products of your genetic inheritance and upbringing that you had no say in… you did not and cannot make yourself the way you are.” (Strawson G 2003) If you can make yourself the way you are then you must have some nature that enables you to do that; if you can make that nature then you must have that ability built in and so on for an infinite regress. Since there is an infinite regress the idea of free will must be false.

Determinism is of course also tied to an infinite regress which is only terminated by the idea of the ‘big bang’ (but what caused the big bang?).

There are philosophers such as Ayn Rand (1905 – 1982) who believe, contrary to Strawson, that Man is a being of self-made soul. (Rand 1966)

Jean-Paul Sartre (1905 – 1980) claimed we have free-will whether we like it or not: “We are always ready to take refuge in a belief in determinism if this freedom weighs upon us or if we need an excuse.” (Sartre 1956)

The free-will determinism debate is anchored in fixed metaphysical positions which are then dressed up in complex and seemingly incontrovertible arguments.

Compatibilism regards ‘free will’ not as independent agency but rather as the feeling of independent agency. Thus a person acts freely when they do what they wished to do and feel they could have done otherwise. One of the earliest compatibilists was Thomas Hobbes (1588 – 1679): “…from the use of the words free will, no liberty can be inferred of the will, desire, or inclination, but the liberty of the man; which consisteth in this, that he finds no stop in doing what he has the will, desire, or inclination to do.” (Hobbes 1690) On this view a person does what they wish to do, and does not do what they do not wish to, unless coerced by acute discomfort, threat or torture.

For the compatibilist the wish is determined by the genetic makeup and life history of the person, nature plus nurture, so free will is just being able to act as one wishes without coercion. However, the person, according to determinism, has no power to change his or her future whether he or she is coerced or not. So the feeling of having been able to have done otherwise than what he or she did must be a delusion. Thinking freely must also be an impossibility. In particular, the espousal of the doctrine of determinism must have been determined, and those who defend the opposite, non-determinism, must have been similarly determined.

This leads to an endless debate between non-determinists who believe they can induce the determinist to make a non-determined decision and determinists who believe they can determinedly box in the non-determinist to see his impotence. John Searle was evidently once asked, “If someone could unequivocally prove determinism, would you accept it?” to which Searle replied, “Are you asking me to freely accept or reject such a proposition?” He points out that if “you go into a restaurant and they give you a menu and you have to decide between the veal and the steak. You cannot say to the waiter, ‘Look, I’m a determinist. Que sera sera’ because even doing that is an exercise of freedom. There is no escaping the necessity of exercising your own free choice.” (Searle 2000)

In other words whether we have free will or not, it is a difference that does not make a difference.

Incompatibilists who reject determinism but accept free will are called Libertarians. Libertarianism is the theory that, despite what has happened in the past, and given the present state of affairs and ourselves just as they are, we can choose or decide differently than we do, and so act to make the future different.

The idea is that the future normally consists of several alternatives and one has the power to choose freely which alternative to pursue.

A modern libertarian is the former New South Wales Supreme Court Judge, David Hodgson (1939-2012). He accepts that some combination of deterministic laws and quantum randomness is one form of causation. But he insists there is another kind of causation operating in the conscious decisions and actions of human beings, and perhaps also of non-human animals, ie ‘volitional causation’ or ‘choice’. He suggests that physical law does not necessarily imply determinism, ie a number of possible futures may all be consistent with physical law. He grants that the choices a person might come to may partly be the result of unconscious reasons and motives codified in the neural mechanisms. But the function of consciousness is to “allow choice from available alternatives on the basis of consciously felt reasons …the rationality and insight of normal adult human beings, even though far from complete or perfect, is generally sufficient for them to be considered as having free will and responsibility.” (Hodgson 1999)

The motive of both libertarians and compatibilists seems to be to justify holding people morally responsible for their actions. The libertarian might also claim that if we are not free agents then there is no basis for morality at all. The fear is that if moral responsibility is a prerequisite for guilt, blame, reward and punishment, and no one can do anything other than what they do, then no one should be rewarded or punished just as hard determinism seems to imply. Some hard determinists claim that reward and punishment is justified on the grounds that people do respond to reward and punishment in a determined way. But this leads to the view that the rewarders and punishers do what they do without grounds or justice, whereas the rewarded or punished are suckers, taken in by the authority of the judgers, continuing to believe in their guilt or worth. (Warnock 1998)

Compatibilists hold that even though people cannot do anything other than what they do, they are nevertheless morally responsible. There is an argument from Donald MacKay (1922-1987) which shows that even if there is a Laplacian demon or God who knows all about the state of my brain and even if He claims to be able to predict my every action I can have no reason to believe any of His predictions (which necessarily must include His knowledge of whether I believe the prediction or not). As I do not know whether He has predicted I will believe or not, He has given me no grounds for believing the prediction or not. (McIntyre 1981)

So according to MacKay, even if the universe is determined the self must regard itself as an agent capable of moral choices and act accordingly. Determinism makes no difference to how we conduct our lives.

Philosophers of a determinist persuasion have stuffed the self into a variety of strait-jackets in an attempt to avoid the dreaded idea of the soul. Personal experience must be denied or at least proscribed at the risk of introducing personal agency. The idea of a responsible self is opposed by the idea of scientific explanation and prediction. On the other hand philosophers of a libertarian conviction try to find in science evidence that the world is not ‘causally closed’. This could allow free will and justify the retention of our jurisprudence, against the revisionist urgings of those determinists who feel all punishment is unjust.

Peter Strawson (1919-2006) thinks that the metaphysical dispute between the compatibilists and the incompatibilists is ill-framed. It can be resolved if each side would relax a little. The compatibilist normally portrays jurisprudence as an objective instrument of social control, excluding the essential element of moral responsibility. The incompatibilist is appalled that if determinism is true then the concepts of moral obligation and responsibility really have no application, and the practices of punishing and moral condemnation etc are really unjustified. (Strawson P 1962)

But both sides, says Strawson, neglect the fact that “it matters to us [a great deal] whether the actions of other people – and particularly of some other people – reflect attitudes towards us of goodwill, affection, or esteem on the one hand or contempt, indifference, or malevolence on the other …The human commitment to participation in ordinary inter-personal relationships …is too thoroughgoing and deeply rooted for us to take seriously the thought that a general theoretical conviction might so change our world that, in it, there were no longer any such things as inter-personal relationships as we normally understand them… The existence of the general framework of attitudes itself is something we are given with the fact of human society. As a whole, it neither calls for, nor permits, an external ‘rational’ justification.”

According to Strawson, determinism does not entail that anyone who caused an injury was ignorant of causing it or had acceptable reasons for reluctantly going along with causing it. Nor does it entail that nobody knows what he’s doing, or that everybody’s behaviour is unintelligible in terms of conscious purposes, or that everybody lives in a world of delusion, or that nobody has a moral sense, which is what would be required if determinism was to be at all relevant. Compressing Strawson’s argument down from his 11,000 words: if determinism is true this would imply that our nature includes the concept of moral responsibility that we apply in our jurisprudence. It would not be rational, even if determinism is true, to change our world to dispense with our moral attitudes.

Nicholas Maxwell recasts the problem from ‘free-will versus determinism’ to ‘wisdom versus physicalism’. (Maxwell 2005) For of all the various constructions that could be placed on the term free-will he considers that the one most worth having is not the ‘capacity to choose‘ but rather, ‘the capacity to realise what is of value in a range of circumstances‘ (in both senses of the word ‘realise’ ie: apprehend and make real). Secondly he characterises physicalism as “the doctrine that the universe is physically comprehensible.” It is not determinism but the idea that the universe is understandable that characterises physicalism. The problem of free will then comes down to how can that which is of value associated with human life (or sentient life more generally) exist embedded in the physical universe? In particular how can understanding and wisdom exist in the physical universe?

Both Peter Strawson’s and Nicholas Maxwell’s reformulations of the free-will debate appear to be compatibilist with respect to moderated concepts of free will and determinism. Anything that weakens fundamentalist views ought to be welcomed, though how these views can be taken forward into empirical investigation is not apparent.

I think that one of the difficulties with the debate on free will is what it means to talk about ‘moral responsibility’. The usual interpretation of this concept is that when someone has done something reprehensible we hold them to account: we blame them for some situation and punish them. Blame is the attempt to induce shame in the offender so as to inhibit the activity. The dictionary definition of ‘responsibility’ is only vaguely related to this scenario. Responsibility (Latin respondeo, ‘to respond’) is “the quality or state of being able to respond to any claim or duty.” Thus a responsible person can set in place those procedures necessary to prevent harm; if he has done wrong he can act to put the situation right; if some situation arises that is perceived as morally wrong he can take the requisite actions. Irresponsibility is where one seeks to evade one’s duty by excuses and inaction. Those who claim that no one is responsible for anything should be asked what they are ashamed of.

It seems to me that what is worth having for one’s self and for people in the society at large is this ability to respond to situations (to take ‘responsibility’) and do whatever is necessary in the circumstances we find ourselves. This means that responsibility is tied in closely with wisdom: it is responsible to acquire wisdom, it is wise to act responsibly.

Whether we have ‘free will’ in some ultimate sense or whether our actions are ultimately ‘determined’ is a metaphysical matter. Such concerns are junior to the fact of ‘moral responsibility’ which we can (hopefully) exercise regardless of our metaphysical leanings.

Libertarianism does not entail the idea that decisions are divorced from circumstances. It does presuppose, I believe, the ability to predict the future with some degree of confidence. “Able to choose otherwise in the same circumstances” restricts the possibilities for ‘free will’ by demanding that free will means nothing more than caprice. Responsible action requires gathering the information relevant to the decision at which time the decision may become ‘necessitated’ by what one now knows. This does not mean that that information caused the decision or that one is relieved of the responsibility for that decision.

Scientific investigation of the questions of free will and compatibilism is difficult in principle because they are metaphysical issues that science cannot address directly.   The particular side of the debate that people take would seem to depend on introspection of their decision making processes.   For much of the 20th century psychological investigation of introspective accounts was considered worthless.   So there is very little research on the subject.

There are however questions related to the metaphysical problem of free will that can be investigated empirically. For instance, the question of whether one’s attitude to the question of free-will affects one’s moral sense has been investigated.

In one experiment 119 undergraduates were randomly assigned to one of five groups to answer the same set of 15 standard reading-comprehension, mathematical and reasoning problems. (Vohs & Schooler 2008) Participants were told they would receive $1 for each problem they correctly solved. In three of the groups participants marked their own answers and paid themselves after which they shredded their answers. This gave ample opportunity to cheat. The other two groups had no opportunity to cheat. The five groups were treated slightly differently.

The three cheating-possible groups were given a series of 15 statements which they were supposed to think about for one minute each.

One group were given statements that were pro-determinism such as “a belief in free will contradicts the known fact that the universe is governed by lawful principles of science” and “Ultimately we are biological computers – designed by evolution, built through genetics, and programmed by the environment“.

Another group were given statements that were pro-freewill such as “I am able to override the genetic and environmental factors that sometimes influence my behaviour” and “Avoiding temptation requires that I exert my free will.”

The third group were given neutral statements such as “Sugar cane and sugar beets are grown in 12 countries.”

One of the two no-cheating groups was also given the pro-determinism statements to study before doing the test. The other was given the free-will statements. So this gave two groups of interest that could cheat – one primed with determinism, one primed with freewill; and three control groups to act as a ‘base line’. The average reward for the group primed for determinism that were able to cheat was $11 ± 1 whereas the other four groups each obtained approx $7 ± 1 (with non-significant variation).
It thus appears that the spreading of deterministic views is liable to increase modest forms of unethical behaviour, a result significant at the 1% level. Whether this generalises to more serious offences, and whether a belief in determinism may compensate for these minor offences with an increased compassion for the less well off and a decreased desire for revenge, is not known.
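As a rough illustration of the quoted significance level: treating the reported ±1 figures as standard errors of the group means (an assumption on my part; the paper reports fuller statistics), a simple two-sample z-test on the $11 versus $7 difference comes out below 1%:

```python
import math

# Back-of-envelope check, ASSUMING the reported +/-1 figures are standard
# errors of the group means (illustrative numbers, not the paper's analysis).
mean_det_cheat, sem_det_cheat = 11.0, 1.0  # determinism-primed, able to cheat
mean_other, sem_other = 7.0, 1.0           # approximate level of the other groups

z = (mean_det_cheat - mean_other) / math.sqrt(sem_det_cheat**2 + sem_other**2)
p = math.erfc(z / math.sqrt(2))  # two-tailed p-value under a normal model
```

Under these assumed numbers z comes out around 2.83 and p around 0.005, consistent with the reported significance at the 1% level.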

Nevertheless it seems that the question of free will is not just philosophical but is of great interest in jurisprudence, as libertarians such as David Hodgson claimed.

So do we have free will? Well, if this experiment generalises we’d better believe it.


Blakemore C (1988) The Mind Machine BBC Books pp 7, 270, 170

Cashmore AR (2010) The Lucretian Swerve: The biological basis of human behaviour and the criminal justice system Proc Nat Acad Sci USA vol 107(10) p4499-4504

Hobbes T (1690) Leviathan chapter 21

Hodgson D (1999) Hume’s Mistake Journal of Consciousness Studies vol 6 no 8-9 p210

Lighthill J (1986) The Recently Recognised Failure of Predictability in Newtonian Dynamics Proceedings of the Royal Society of London A 407: 35-50.

Maxwell N (2005) Science versus Realization of Value, not Determinism versus Choice Journal of Consciousness Studies vol 12 no 1 p53

McIntyre JA (1981) MacKay’s Argument for Freedom Journal of American Scientific Affiliation 33 (Sept) p169-171

Rand A (1966) Philosophy and a Sense of Life The Romantic Manifesto Signet p 28

Sartre J-P (1956) Being and Nothingness: An essay in phenomenological ontology (transl HE Barnes) New York: Philosophical library p78

Searle JR (2000) Consciousness Free Action and the Brain Journal of Consciousness Studies vol 7 no 10 p11

Strawson G (2003) The Buck Stops – Where? (Interview with T Sommers) The Believer (March 2003)

Strawson P (1962) Freedom & Resentment Proceedings of the British Academy vol 48 p1-25

Vohs KD & Schooler JW (2008) The Value of Believing in Free Will: Encouraging a belief in determinism increases cheating Psychological Science vol 19(1) p49

Warnock M (1998) An Intelligent Person’s Guide to Ethics Duckworth p 92

Machine Consciousness: What is it like to be a computer?

by Michael Davidson

(response to the article Machine Consciousness: Fact or Fiction?  by Igor Aleksander available at )

The eminent AI researcher Igor Aleksander tackles the question as to whether a machine could be conscious in the above article posted in February 2014.

Aleksander assumes that being conscious is an advantage to an organism such as a human being.   Consciousness therefore has a function.

This is a step forward from some philosophical positions:

a)   consciousness has no function (epiphenomenalism – originating with Thomas Huxley in the 19th century [Huxley 1912] and espoused typically by some neuroscientists and psychologists today [Soon et al 2008; Wegner & Wheatley 1999]).

b)   “psychology must discard all reference to consciousness… [must] never use the terms consciousness, mental states, mind, content, introspectively verifiable, imagery and the like.” [psychological behaviourism – Watson 1913].

c)   any statement involving mind words (such as consciousness, intentions) can be paraphrased without any loss of meaning into a statement about what behaviour would result if the person considered happened to be in a certain situation. [philosophical behaviourism – Ryle 1949].

d)   consciousness does not exist [Eliminative Materialism eg Churchland 1981].

e)   the self does not exist [New Scientist 23 Feb 2013] see my response here.

Aleksander believes that consciousness can be understood in ‘scientific’ terms, though what he means by ‘scientific’ is not explained (that would require several thousand words on its own and is not a subject devoid of controversy itself).   I suspect when he says ‘scientific’ he means physicalist since the core – metaphysical – assumption of most AI is that all things can ultimately be explained through the theories of physics and computation.

It is Aleksander’s contention that it is not possible to know that people other than ourselves are conscious because we have no tests to tell us, we simply believe they are because they are human.  (Or is it that we only believe they are ‘human’?).  ‘Non-scientific’ ordinary humans simply believe things based upon assumptions.   He invites us to believe that ‘tests’ – by which I presume he means ‘scientific’ tests – could confer upon us some kind of certainty and proof.

This seems to miss the important point that scientific theories are built up from observations by humans.   A theory is not necessarily true; it is just the best idea we have which

1) explains the observations leading to some understanding
2) predicts new observations that can be tested and may be found to be true (it’s the observations that are true not necessarily the theory) and
3) enables us to (ultimately) control the phenomena in question.

The observations are repeatable by anyone who treads the same path of learning.  Thus we can build jumbo jets and hadron colliders.

Aleksander quotes from the report of a conference of neurologists, computer scientists and philosophers on machine consciousness (Swartz Foundation 2001): “We know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artefacts designed or evolved by humans.”   Maybe so, but we don’t know of any fundamental law that allows it either.   Nor do we know, if it be allowed, how consciousness manifests from the material of brains.   Metaphysics creeps easily into science at the fringes.

We have had several revolutions in science when theories that seemed to explain everything in their sphere of enquiry were found to be flawed.   The classic example is Newton’s ‘Laws’ which were held to be metaphysically ‘true’ for 250 years.   Newton’s Laws could not be ‘proved’ but we certainly believed them then – and we believe them now in the context for which they were formulated.   Newton’s metaphysical assumptions (absolute space and time ‘flowing’ evenly) and the metaphysical conclusions derived from his works (the ‘clockwork universe’) have been overturned.

The idea that humans are conscious can certainly be subject to qualitative tests. Apart from the obvious observations of whether a person is asleep or awake, we can see to some degree whether a person is alert and aware of his or her surroundings.   The theory that that person is conscious explains his or her bearing and demeanour, enables us to enter meaningful conversation, enables us to discuss what we simultaneously perceive or simultaneously imagine, cooperate in joint ventures and so on.   It is difficult to imagine how science could be conducted if ‘scientists’ were not conscious.

Evidently most people do not doubt their own consciousness and if there are indeed no means to determine whether other people are conscious it is only a small step from there to solipsism – the view that the world and other minds exist only in one’s own mind.  This is contrary to the physicalist position that only physical things exist.

Aleksander’s first step is to define consciousness (a notoriously difficult task).   You can exclude too much or include too much.  You can limit it to simply the difference between a person who is conscious and one who is unconscious, concentrating on anaesthetics and their antidotes.   Or you can broaden it to personal identity – the self – moral intuitions and even ‘cosmic consciousness’.   You can also divide the concept in various ways –

‘phenomenal consciousness’ (awareness of here and now) and ‘social consciousness’ (which presupposes concepts, abstraction and language) [Guzeldere 1995]  or

‘core consciousness’ (awareness of here and now) and ‘extended consciousness’ (an elaborate sense of self) [Damasio 1999] or

‘access consciousness’ (the contents of the mind accessible to thought, speech and action) and ‘phenomenal consciousness’ (what it is like to be me) [Block 1994]
(‘What it is like’, which I also allude to in my title, is a reference to the seminal article by Thomas Nagel “What is it like to be a bat?” [Nagel 1974])

Here is Aleksander’s definition:
consciousness is a “collection of mental states and capabilities that include:

1. a feeling of presence within an external world,
2. the ability to remember previous experiences accurately, or even imagine events that have not happened,
3. the ability to decide where to direct my focus,
4. knowledge of the options open to me in the future, and
5. the capacity to decide what actions to take.”

If a machine endowed with language could report similar sensations then, Aleksander reckons, we would have as much reason to assume that it is conscious as we have with another human being.   I suspect the five points are framed in such a way as to allow the construction of an artefact that could be said to be ‘conscious’, thus satisfying the metaphysical first principle that humans are machines.

Aleksander has built a machine, ‘VisAw’, which he claims satisfies points 1, 2 and 3.

The machine is a set of artificial neural networks with the inputs and outputs displayed on a computer screen.   There is a picture of the screen in the referenced article.

VisAw is based on the architecture of the part of the human brain called the extrastriate cortex – ie Brodmann areas 18, 19 and 37.   K Brodmann (1868-1918) studied the cortex microscopically and in 1909 published a map dividing it into numbered areas of differing structure.   The functions of the cortex are only loosely related to the Brodmann areas.   The parts of the cortex involved in vision are labeled V1, V2, V3, V4 and V5.   The striate cortex (V1, located in B17) is so named because it is striped, or striated.

[Figure: brain map showing the Brodmann areas involved in vision. The two hemispheres of the brain are similar.]

Simplistically and briefly, V1 maps to the retina and detects edges; V2 ‘fills in’ illusory edges; V3 is concerned with motion; V4 is concerned with spatial frequency, shape and colour and is affected by attention; V5 is concerned with the perception of motion and the guiding of some eye movements (the ‘picture’ of our surroundings is built up from many snapshots from the central sensitive area of the retina).   (Damage to one or other of these areas of the brain results in a characteristic neurological disorder such as not being able to perceive motion).

VisAw models V3, V4 and V5 at least to some extent.

Let’s look at Aleksander’s points more closely:

1) a feeling of presence within an external world.

Applying this to humans, most people will certainly regard it as true of themselves; but how, Aleksander asks, do we know it of others?   One test is to ask them whether they feel they are in an external world.   A reasonable test, you might think, but not a 100% reliable one.   The following simple program passes the test, yet few would consider its answer to reflect any internal feeling.

# echoes a canned answer with no inner feeling behind it
question = input()
if question == "Do you feel you are present in an external world?":
    print("of course")

You might think this example too ridiculous to take anyone in, but simple conversational programs have done just that: ELIZA [Weizenbaum 1976] fooled some people into revealing intimate secrets, and some practicing psychiatrists believed that ELIZA could develop into an automatic form of psychotherapy.   It is only when the mechanism of the program is revealed that the magic disappears.   Here’s an example:

Person: “My dog’s mother died recently.”
ELIZA: “Tell me more about your mother.”
(ELIZA recognises nothing in the person’s statement apart from the word ‘mother’.)
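The trick is easy to reproduce. Here is a toy keyword matcher of my own in Python (not Weizenbaum’s actual code) that behaves the same way:

```python
# A minimal ELIZA-style responder: scan for a trigger keyword and emit
# a canned template, understanding nothing else in the sentence.
RULES = [
    ("mother", "Tell me more about your mother."),
    ("dream", "What does that dream suggest to you?"),
]

def eliza_reply(utterance):
    lowered = utterance.lower()
    for keyword, template in RULES:
        if keyword in lowered:
            return template
    return "Please go on."  # fallback when no keyword matches

print(eliza_reply("My dog's mother died recently."))
# The 'mother' rule fires even though the statement is about a dog.
```

Exposing these few lines dispels any impression that the program understands its interlocutor.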

Aleksander contends that by suitable manipulation “the content of a machine’s ‘mind’ can be made transparent and observable, [so] attributing consciousness to it may be no more outlandish than attributing consciousness to a living creature.”   Obviously my little program would fail this test.   In the case of a neural network the workings are relatively obscure, since they do not depend on rules written by a programmer, but they are not necessarily any more magical.   The display of a message “I am imagining” does not presuppose any ‘I’ attached to it.   If we expose the workings of VisAw it is not clear that there are any feelings connected with them, so it is difficult to see how we would know that the machine has any feelings at all.   Leibniz made the same point: “Perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions.   And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill.   That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.” (Leibniz 1714)

From what I can tell, VisAw ‘remembers’ and ‘recognises’ two-dimensional pictures of human faces, whereas humans and other living creatures exist in three dimensions.   According to the psychologist JJ Gibson, perception depends on the creature being able to move in the real world, and so build up a three-dimensional model of the world in which it is located and moves, rather than a world which rotates around the creature as if on a TV screen. [Gibson 1986]

2. the ability to remember previous experiences accurately, or even imagine events that have not happened.

It could be argued that a DVD recorder has the ability to ‘remember’ audio-visual experiences, but it seems unlikely that such a machine is conscious.   Thus displaying the contents of VisAw’s memory does not demonstrate consciousness.   Nor, I would contend, does the recombination of several parts of previous experiences demonstrate imagination.   Imagination is creative, not merely reconstructive.   A display on the computer screen saying “I am imagining” does not constitute imagining.

On the other hand humans don’t remember accurately anyway.   A robot that was similar cognitively to a human but with an ‘accurate memory’ would probably illustrate the phenomenon of the ‘uncanny valley’.   This refers to the revulsion humans typically experience when a robot is visually almost indistinguishable from a human being.   Such robots are experienced as distinctly ‘creepy’.

The criterion seems somewhat divorced from consciousness as such.   People with serious damage to the hippocampus (the part of the brain associated with certain aspects of location and memory) typically cannot retain new experiences for more than a few minutes (though they can still remember events from before the injury).   Such people can nevertheless learn skills such as playing the piano, but cannot remember actually learning to do so; yet they are definitely conscious, according to the people who meet them.

3. the ability to decide where to direct my focus.

This seems to presuppose ‘free will’, although many modern philosophers consider free will an illusion and argue that all our actions are determined by prior causes – a consequence of the predominance of the metaphysics of physicalism.   So I suppose the machine must simply say that it has this ability for us to grant that it has it (just as humans are supposed to).   The question of ‘free will’ is another thorny issue that would require several thousand words to summarise.

In so far as VisAw actually models the extrastriate cortex and produces something which seems to work in the way the cortex does, this is a big achievement.  But the extrastriate cortex is concerned with vision, and although vision is arguably a central part of consciousness (tell that to a blind man) it is not the whole of it.

Neuroscientists have not come out and claimed that the centre of consciousness is in the extrastriate cortex.   Some have identified the thalamus as the centre of consciousness, since most of the sensory nerves converge on it.   Others have suggested B40 is the seat of the sense of self; B24 the seat of free will; B9, B10, B11 and B12 the centre of the executive functions.   Others have said that consciousness is a function of the entire brain or even the whole body.

It is too much to require that tentative steps toward what might become ‘machine consciousness’ should demonstrate the whole phenomenon immediately.   But I suspect that even a partial demonstration is some way off.

The Turing Test, suggested by Alan Turing in 1950, was to test whether a computer and a human being could be distinguished when the only medium was text messages.   Turing estimated that a suitable program could be written in about 3000 man-years by about the year 2000, and that it would leave an average interrogator with no more than a 70% chance of making the right identification after five minutes of questioning. [Turing 1950]   By contrast, the program for the NASA Space Shuttle took about 22,000 man-years to develop, and I have not yet heard a claim that the Shuttle is conscious.   Optimism is endemic in artificial intelligence research.

Arguably the program IDA [Franklin 2003], mentioned by Aleksander in his article, has passed the Turing test in the very narrow domain of allocating sailors to new assignments by communicating in natural language by e-mail.    IDA does not understand the sailors’ communications in the way a human would: it matches the content of the incoming e-mail to one of a few dozen templates, eg “please find job”.   IDA would fail on general knowledge or anything outside its templates.   Yet ‘machine consciousness’ advocates claim it is conscious.

As a tougher test for machine consciousness I propose the ‘Reid Test’.   The Scottish Enlightenment philosopher Thomas Reid (1710-1796) defined common sense as “that degree of judgment which is common to men with whom we can converse and transact business.” (Reid 1785)   Consciousness and common sense are not quite the same concept, but I believe consciousness is a prerequisite for common sense: anything that demonstrates common sense must be conscious.   I contend that consciousness requires the abilities to perceive (external objects in 3 dimensions and one’s location in the world), understand (words and abstract concepts), imagine (conceive novel situations), communicate (in some natural language) and act on the environment.   Consciousness without emotion would seem paradoxical (the uncanny valley again), for emotion reveals the degree of involvement of the subject with the object.   Clearly the scope of these abilities would be limited in the initial stages of development of any conscious machine.   In addition the machine must be open to internal inspection so that any ‘magic’ is exposed: history shows that humans can be deceived by determined trickery.

Animals other than humans show abilities in these domains to the point where we believe them to be conscious.   For example dogs can remember the layout of their environment, understand a limited number of commands and the task at hand, imagine certain situations and behaviours (think sheep dogs), and bark and whine in appropriate places.   They obviously display emotions.   The behaviour of trained chimps and orang-utans with sign language is even more impressive.   What we don’t have are scales of achievement in these domains that would allow us to measure the degree of consciousness.   It seems to be an all-or-nothing attribute that is confused with and by other attributes such as intelligence, social context and trained response on the part of the animal, and by empathy on our part.   Consciousness itself must be subject to more investigation before we can definitively ascribe consciousness to lower animals such as bees.

Developing a conscious machine would require a great deal of effort in physics, computer science, psychology, neuroscience, philosophy and so on. It would require demonstrations, not mere assertions. And it would require the good luck of finding that metaphysical reality actually allows machine consciousness.

So to answer Aleksander’s query: is machine consciousness fact or fiction?
For the moment and the foreseeable future: fiction.   But keep trying.


Block N (1994) Consciousness, in A Companion to the Philosophy of Mind, ed S Guttenplan, Blackwell

Churchland PM (1981) Eliminative Materialism and the Propositional Attitudes, Journal of Philosophy vol 78 no 2 section 1

Damasio A (1999) The Feeling of What Happens, Heinemann p16

Franklin S (2003) IDA: A Conscious Artifact?, Journal of Consciousness Studies vol 10 no 4-5 p47-66

Gibson JJ (1986) The Ecological Approach to Visual Perception, Lawrence Erlbaum Associates

Guzeldere G (1995) Problems of Consciousness: A Perspective on Contemporary Issues, Current Debates, Journal of Consciousness Studies vol 2 no 2 p118

Huxley T (1912) Method and Results, Macmillan p240, p243

Leibniz G (1714) Monadologie sec 17, transl R Latta

Nagel T (1974) What Is It Like to Be a Bat?, Philosophical Review vol 83 no 4 p435

Reid T (1785) Essays on the Intellectual Powers of Man, Essay 6 Chapter 2 (p229)

Ryle G (1949, 2000) The Concept of Mind, Penguin

Soon CS, Brass M, Heinze H-J & Haynes J-D (2008) Unconscious Determinants of Free Decisions in the Human Brain, Nature Neuroscience 11, p543-545

Swartz Foundation (2001) Can a Machine Be Conscious?

Turing A (1950) Computing Machinery and Intelligence, Mind 59 (236) p433-460

Watson JB (1913) Psychology as the Behaviorist Views It, Psychological Review vol 20 p158-177

Wegner D & Wheatley T (1999) Apparent Mental Causation: Sources of the Experience of Will, American Psychologist vol 54 no 7 p480-492

Weizenbaum J (1976) Computer Power and Human Reason, Penguin p188

Memories of Historical Sexual Abuse

by Michael Davidson

A number of allegations of historical sexual abuse have recently reached the courts in the UK. This follows revelations of sexual abuse by the celebrity DJ Jimmy Savile, who died in 2011. After his death there were 450 complaints against Savile, including 34 rapes and 28 sexual assaults on children who were under 10 years of age at the time, some offences dating back to the 1950s. Savile was associated with numerous charities and good causes during his lifetime, all of which are now embarrassed by their association with him.

The publicity surrounding this scandal has encouraged people to come forward with allegations concerning sexual misconduct by other celebrities and non-celebrities many years ago, some of which are now proceeding through the courts with attendant publicity. I do not envy the jurors their task of deciding on the guilt or innocence of the defendants in these cases. The defence is often that the alleged perpetrator was not in the stated place at the stated time, or had never met the alleged victim, or the sexual relations were consensual, so it boils down to one person’s word against another’s. Clearly, rape and sexual assault where proved should be punished according to the law. But imagine for the moment that you are suddenly confronted by a policeman who states that an allegation, or worse several allegations, of sexual assault perpetrated perhaps decades ago have been made against you. How to defend yourself? Unless you kept a detailed diary which says where you were and who you were with at the time in question, and can then call on witnesses or other documentation, you are in a sticky situation. You can easily start to doubt your own memory.

The key questions are “How reliable is memory?” and “How can memories be assessed for truth?”

Even if we were able to report precisely what we see, our accounts of what occurred some time ago rely on memories that are subject to error in many ways. The psychologist Daniel Schacter (Ref 1) has listed seven ‘sins’ to which our memories are prone:

  • 1) transience
    forgetting things gradually over time
  • 2) absentmindedness
    forgetting because of not paying attention to things that we should have
  • 3) blocking
    the temporary inability to remember something that is known, when you need it (it may pop into consciousness some time later)
  • 4) misattribution
    assigning a memory to the wrong source, such as attributing someone’s statements to another
  • 5) suggestibility
    developing false memories for events that did not happen
  • 6) bias
    changing past events in support of current attitudes and beliefs
  • 7) persistence
    remembering past episodes that we would prefer to forget.

The phenomenon of false memory is regarded as particularly problematical for any notion of objective reporting (Hall, McFeaters & Loftus Ref 2).   In a well known experiment on the reliability of eye-witness testimony (Loftus & Palmer Ref 3), 45 students were each shown seven short film clips of car accidents. The students were asked to describe the accident they had just seen and then answer a number of questions. The 45 students were divided into 5 groups, each answering a slightly different set of questions. The key difference was the question: “About how fast were the cars going when they xxx into each other?”, where xxx was one of the verbs “smashed”, “collided”, “bumped”, “hit” and “contacted”. When the word was “smashed” the mean estimated speed was 41 mph; when it was “contacted”, 32 mph, with the other verbs somewhere in between (“hit” gave 34 mph). This result could be due to distortion of the memory by the verb used, or to a distorted report based on what the student thought the expected answer was.

In a follow-up experiment 150 students were shown a short film containing a 4-second multiple-car accident. The students were divided into 3 groups and asked several questions: one group was asked the ‘smashed’ question, another the ‘hit’ question, and the third (a control group) was not asked about speed at all. A week later all 150 students were asked a number of further questions, including the critical one: “Did you see any broken glass?” (there was none in the film). Here is the result:

  • Broken glass seen
    smashed: 16 (32%); hit: 7 (14%); control: 6 (12%)
  • Broken glass not seen
    smashed: 34 (68%); hit: 43 (86%); control: 44 (88%)

Considering that the difference in response appears to have been produced by one word, in one question among several, asked one week earlier (χ² = 7.78, df = 2, significant at the 5% level), this calls for some explanation. The ‘reconstructive hypothesis’ is that two types of information go to make up the memory of an event: the information obtained from the perception of the event itself, and information supplied after the event (in this case the suggestion of ‘hit’ or ‘smashed’). These may become integrated into a single memory. When the question about broken glass is asked, the subject who thought he saw a smash rather than a mere hit reasons that there must have been glass, and adds it to the memory.
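For readers who want to check the arithmetic, the chi-square statistic for the table above can be computed in a few lines of plain Python (the counts are from the experiment; the code itself is my own illustration):

```python
# Chi-square test of association for the broken-glass table.
# Each group of 50 students splits into (saw glass, did not see glass).
observed = {
    "smashed": (16, 34),
    "hit":     (7, 43),
    "control": (6, 44),
}

def chi_square(table):
    """Pearson chi-square statistic for a table of (yes, no) counts per group."""
    yes = sum(y for y, _ in table.values())   # row total: saw glass
    no = sum(n for _, n in table.values())    # row total: did not
    total = yes + no
    stat = 0.0
    for y, n in table.values():
        col = y + n                           # group size (50 in each group)
        for obs, row_total in ((y, yes), (n, no)):
            expected = row_total * col / total
            stat += (obs - expected) ** 2 / expected
    return stat

print(round(chi_square(observed), 2))  # 7.78
```

With 2 degrees of freedom the 5% critical value is 5.99, so the association between the verb used a week earlier and ‘seeing’ broken glass is statistically significant.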

The phenomenon that memory can be changed after the fact is not confined to episodic memory. There is evidence that the brain converts short term memory into long-term memory by a hypothetical process known as ‘memory consolidation’. Experiments with conditioning in different animals as divergent as bees, snails and rats show that consolidated memories are not fixed for all time, but can become malleable when they are reactivated. This is known as ‘reconsolidation’ and it only occurs in a ‘window’ of time after the reactivation of the memory.

The Russian physiologist Ivan Pavlov (1849-1936) developed the idea of ‘conditioning’ around 1900, mainly through his work on the salivary response of dogs to food.   As a physiologist Pavlov was interested in the chemistry of digestion, and was collecting saliva from dogs as they were presented with food.   The salivation response is automatic and not learned.    After he had rung a bell just before giving a dog food a few times, he found that the bell on its own would cause the saliva to flow.   Pavlov called the stimulus that caused the physiological response on its own (the presentation of food) the ‘unconditional stimulus’, because it always gave rise to the ‘unconditional response’ or ‘unconditional reflex’ (salivation).   The ‘conditional stimulus’ was the originally neutral stimulus (the bell) which, through enough pairings with the unconditional stimulus (food), came to elicit the same response (the ‘conditional response’ – salivation).   In other words, the animal had learnt a connection between the two stimuli and responded in the same way to either.   Pavlovian conditioning of a fear response gradually diminishes when the conditional stimulus is no longer paired with the unconditional stimulus (a process known as ‘deconditioning’), but the fear response is liable to return later (so-called ‘spontaneous recovery’).
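The acquisition and extinction Pavlov observed can be sketched numerically with the Rescorla–Wagner update rule, a later formalisation of conditioning (the model, the learning rate and the trial counts here are my own illustration, not anything in Pavlov’s work):

```python
# Rescorla-Wagner sketch: the associative strength v of the bell moves
# toward 1 when bell and food are paired (acquisition) and back toward
# 0 when the bell occurs alone (deconditioning).
def run_trials(trials, v=0.0, alpha=0.3, lam=1.0):
    """Return the history of v; each trial is True if food follows the bell."""
    history = []
    for food in trials:
        v += alpha * ((lam if food else 0.0) - v)  # step toward the outcome
        history.append(v)
    return history

acquisition = run_trials([True] * 10)                      # bell + food
extinction = run_trials([False] * 10, v=acquisition[-1])   # bell alone
```

In this sketch extinction drives the associative strength back towards zero; notably, the simple rule has no mechanism for spontaneous recovery, which is one reason extinction is thought to be new learning laid over the old rather than erasure of it.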

An experiment to investigate the reconsolidation window in humans was performed in 2009 in which the unconditional stimulus was an electric shock and the conditional stimulus an image of a coloured square. The ‘fear’ stimulated by the coloured square was measured by the change in the electrical resistance of the palms. The experiment found that if the deconditioning is performed inside the reconsolidation window (say 10 minutes after a ‘reminder’ conditional stimulus) there does not appear to be spontaneous recovery even a year later. On the other hand where the deconditioning occurred outside the reconsolidation window (say 6 hours later) spontaneous recovery occurred the following day and a year later (Schiller et al Ref 4).   It might be thought that the deconditioning process would itself establish a reconsolidation window, but this does not appear to be the case. The conclusion is that the original fear memory has been changed.

In later experiments, Elizabeth Loftus attempted to create completely false memories in 24 individuals aged 18 to 53. The subjects were given a booklet containing three one-paragraph accounts of events in their childhood as recounted by a parent or older sibling. Into the booklet was inserted a plausible but false event that the subject had been lost in a shopping mall for an extended period and had been comforted by an elderly woman before finally being reunited with the family. The subjects were asked what they remembered about each event. About two thirds of the factual events were recalled immediately on reading the accounts and a quarter of the subjects partially or fully ‘remembered’ the false event (p < 1%). “Statistically, there were some differences between the true memories and the false ones: participants used more words to describe true memories and they rated the true memories as somewhat more clear. But if an onlooker were to observe many of our participants describe an event, it would be difficult indeed to tell whether the account was a true or false memory.” (Loftus Ref 5)
As far as I know no attempt has been made to establish any personality or character differences between the people who did not ‘get lost in the mall’ and those who did. The former outnumber the latter by a factor of 3 to 1. Personality may be a factor but it seems that the more trusted the source of the false information the more likely that a false memory is implanted. For instance, a contrived photograph of a childhood flight in a hot-air balloon gave rise to a false memory of such an event in 50% of 20 subjects aged 18 to 28 (Wade et al Ref 6).

In the last decade there have been several attempts to distinguish true memories from false ones, using EEG, PET and MRI but although group differences have been found there is no reason yet to modify Loftus’ 1997 conclusion: “Without corroboration, there is little that can be done to help even the most experienced evaluator to differentiate true memories from ones that were suggestively planted.”

There is every reason to be cautious about first-person reports, particularly reports of satanic and sexual abuse ‘just remembered’; but if we were to dismiss memory altogether, much more would be lost than mere reminiscence. Indeed, science itself would collapse.

Some memories are particularly vivid – the so-called ‘flash bulb memories’. These are formed when particularly surprising and emotional events are apparently etched into our minds: when you first heard that John F Kennedy had been assassinated (if you are that old), that Princess Diana had died, that the twin towers of the World Trade Center in New York had collapsed, and so on. The examples just mentioned are of international importance, but such memories need not be: hearing of a rape, or of the death of a close relative, can produce them too. But are these memories as accurate and as detailed as we think?

I remember being in a cinema watching a Japanese film, ‘Close to Life’ by the director Kurosawa, when the lights went up and the cinema manager came in to say that Kennedy had been assassinated. I only know the name of the film now because I identified it from the plot some years later. I remember that one of my two friends there with me immediately left; I do remember their names. The lights went down and the film continued. I can see the manager now, but not in any photographic detail. I could not say at what point in the film the interruption came, nor what I or my friends were wearing at the time. In my mental image there is nothing much to distinguish this cinema from any other, though I have the impression of a number of rows of a certain width in front of and behind me; I could not count them. Although I knew the cinema manager at the time, I do not remember his name, nor could I say with any certainty what he was wearing. Such memories do not appear to be as accurate as we think they are.

The psychologists Neisser and Harsch (Ref 7) interviewed 106 college students less than 24 hours after the Challenger disaster, and again after 2½ years. The later recollections were much less detailed than the originals. Neisser thinks the persistence and clarity of such memories is due to the frequent rehearsal they receive after the event, rather than to the original impact. Flash bulb events link our own personal life stories to those of our friends and acquaintances; or, as Neisser puts it, to “history”. Flash bulb memories are prone to confusions, omissions and the insertion of people who were not there.

People who survive traumatic events sometimes develop the condition known as Post Traumatic Stress Disorder (PTSD), which sometimes arises a considerable time after the traumatic event. PTSD entails such things as disturbing and recurring flashbacks, avoidance of reminders of the event, and high levels of anxiety. Surely these kinds of memory are more reliable? In a 1990s study, 59 Gulf War veterans were asked about their war experiences a month after their return and again 2 years later (Southwick et al Ref 8). After 2 years, 70% recalled at least one traumatic event that they had not mentioned before. This does not necessarily mean the memories were confabulated, but those recounting the most ‘new’ memories also reported the most PTSD symptoms, which suggests to some that the veterans were attributing symptoms of depression and anxiety to a memory given new significance, or even unconsciously fabricated.

Victims of rape and other traumas are often offered psychotherapy (ie counselling and talking cures). There are somewhere between 400 and 800 different brands of ‘psychotherapy’, depending on which list you consult, some of a bizarre nature and dubious efficacy. That there are so many versions is testimony to the lack of good theory based on evidence. Some have extensive training periods and/or accreditation procedures, and/or are backed by some academic background, and/or are government sanctioned, and/or are heavily promoted.

However, the recurring recommendation that psychotherapies be licensed and validated by the government has little going for it. In view of the wide definition of psychotherapy [HH Strupp defines it as “the systematic use of a human relationship to effect enduring changes in a person’s cognition, feelings and behaviour” (Ref 9)], it is difficult to separate ‘psychotherapy’ from those social interactions in which the government currently has, and should have, no business. See Dawes (Ref 10) for the American experience of licensing psychologists, from the point of view of a professor of psychology. The usual rationale for such licensing is the protection of the public from charlatans, and quality assurance of the techniques. According to Dawes, licensing is more oriented to protecting the status and income of practitioners; it does little to protect the public, and instead sanctions the practice of dubious procedures such as alien abduction ‘therapy’, the application of invalid diagnostic tests such as the Rorschach, and public recognition as an ‘expert witness’ in court.

Psychoanalytically-oriented therapists think that the reason an incident apparently causing PTSD caused no problem for a period of time is that the individual repressed the memory (pushed it into his or her ‘unconscious mind’) and was unable to recall it. The repression is expressed in unhealthy emotions and behaviour; when the memory is recovered, the individual is restored to ‘health’. This idea gave rise to ‘recovered memory therapy’, in which therapists sought to uncover repressed memories of traumas of all kinds. Memories of ‘sexual abuse’, ‘satanic rites’ and ‘ritual murders’, not to mention ‘alien abductions’, were recovered in the 1980s and 1990s which owed more to the imagination of the therapists than to the experiences of the individuals. A number of high-profile court cases in which fathers were wrongfully imprisoned for sexual abuse of their daughters, based on memories ‘recovered’ by hypnosis and suggestion, instilled caution in the courts, at least for a while, if not in the therapists.

Furthermore, those who were subjected to therapy of this kind were evidently more upset afterwards than before, and the therapists may well have created the very PTSD they were supposed to be treating. See McNally (Ref 11) for accounts of these false memory cases. One of the more remarkable (for outsiders) was the case of a man whose daughters recovered ‘memories’ of him having abused them. In an “intensive quasi-hypnotic interrogation”, the man himself recovered memories of having raped his children repeatedly, of having led a satanic cult for nearly 20 years, and of involvement in the sacrificial murder of hundreds of babies. He confessed to the crimes and was jailed, despite there being no evidence of missing babies, bodies or a satanic cult. Evidently patients who recover memories of ritual abuse often develop PTSD during the course of therapy – reported rates range from 28% to 100%. Survivors of ‘recovered memory therapy’ are often intent on revenge against their alleged abusers. The problem with ‘recovered memory therapy’ was that the ‘memories’ were false.

It is not just ‘psychotherapists’ who can instil false memories. Leading questions by social workers and police can contaminate and corrupt children’s (and adults’) memories. This evidently occurred in the notorious cases of sexual abuse ‘epidemics’ in Scotland in 1991 (BBC News Ref 12) and in England in 1987 (Pragnell Ref 13).

Not all psychological investigation of memory is negative in the sense that it throws doubt on the validity of memory, but care has to be taken not to make inadvertent suggestions. Prompted by the challenges mentioned above, Geiselman and Fisher (Ref 14) developed a means of improving the validity of recall, for example in forensic investigations, called the ‘cognitive interview’. The technique is based on the idea that a memory consists of many different elements, and that the more context that can be recalled or reinstated, the more reliable the recall is likely to be. (This, evidently, is the factual basis of the old Hollywood joke in which a person can only recall certain information when drunk. One experiment asked deep-sea divers to recall a list of words given to them either under water or on land: words learned under water were best recalled under water, and words learned on land were best recalled on land.) Secondly, memory can be retrieved in several ways, so what is not retrieved by one means may be retrieved by another. The technique includes four instructions that interviewers can use to get more reliable accounts from witnesses:

  • (1) Try to picture in your mind the circumstances that surrounded the crime event, including what the environment looked like, and also think about your feelings and reactions to the event.
  • (2) Report everything that you can remember; do not leave anything out of your description, even things you may consider unimportant.
  • (3) Report the events in different orders: forward, backward, or starting from the middle.
  • (4) Try to recall the different perspectives you may have had during the event, or think about what some other prominent person at the event would have seen.

In addition to these general instructions, the cognitive interview also contains specific prompts to facilitate recall of particular kinds of information (eg ‘Did he remind you of anyone you know? If so, why?’; ‘Was there anything unusual about the voice? What were your reactions to what was said?’). Evidently, in laboratory experiments this technique produced 25%-30% more facts than standard interviews without increasing the number of false details, and it may decrease the contaminating effect of misleading post-event information. Field studies with police interviewers trained in the technique showed a 47% increase in information gleaned.

Now that police interviews are routinely videoed and the videos shown to the jury, the jury should be able to assess to what degree the interrogation was in line with the cognitive interview technique. Unfortunately, juries can only judge cases on the evidence before them, and unless evidence such as that discussed here is presented to them, the average juror will be ignorant of it.

It is not sufficient to accept an allegation of rape or sexual abuse without obtaining such information as time and place, details of how the offence was perpetrated in all their embarrassing detail, the size and shape of the offender’s member, and so on. Is the account consistent from telling to telling, or does it show evidence of successive ‘embroidery’?

The difficulty of assessing the truth of a ‘recalled event’ by witnesses has given rise to the idea that lies can be detected by physiological measurements, as used in the ‘lie detector’ or ‘polygraph’. The theory behind the polygraph is that a deceptive answer to a pertinent question causes an emotional response, such as fear of detection or heightened arousal, which shows up in the physiological recordings. Although the result of a polygraph test appears to be a purely physiological measurement, in fact the result is a product of the examinee’s motivation, the interrogation technique and the interpretation of the physiological measurements, as well as the physiological effects themselves [Orne et al Ref 15]. Accordingly, polygraph operators generally demonstrate to the subject how the polygraph can detect emotional responses, and instil a belief that the polygraph can detect lies. Faced with such an apparently infallible witness, many interviewees confess, believing resistance is useless. On the other hand, accused persons sometimes volunteer for polygraph tests, believing their innocence can be proved by their physiological responses.

Physiological responses to the various questions can vary according to whether the examiner is friendly or aggressive, and whether the examiner is acting for the prosecution or the defence. Sometimes the examiner concludes that the subject is deceptive because of suspected ‘counter-measures’ during the session. The subject’s psychological profile, such as his attitude to lying or his thoughts concerning the alleged offences, is also pertinent. The result of a polygraph examination thus depends on a number of factors apart from the actual physiological responses, and results are therefore difficult to replicate. Scientific opinion on the accuracy of the polygraph in detecting lies is generally unenthusiastic [Fienberg et al Ref 16]. There is no direct causative chain that leads from lying to the physiological responses: the responses can be caused by factors other than lying, and it is therefore impossible to decide on the basis of a physiological response that a lie has occurred.

In most cases there is no independent measure of deception, so the incidence of false positives and false negatives cannot be ascertained. In one experiment a number of students were given information on a forthcoming test. When wired up to what they believed were lie-detectors but were in fact dummies, 13 out of 20 confessed to receiving the information, whereas only 1 in 20 confessed when not wired up [Quigley-Fernandez & Tedeschi Ref 17]. Even spectacular true positives like this do not prove the effectiveness of polygraph testing per se. Also, false confessions do occur [Meyer & Youngjohn Ref 18]. A confession by a naïve subject is no doubt counted as a success for the polygraph, but this success does not automatically carry over to the case where a more sophisticated, and possibly trained, subject deliberately sets out to evade detection. Polygraph evidence is not admissible in UK criminal courts.
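As an aside, the confession rates reported above (13 of 20 when wired up versus 1 of 20 when not) are lopsided enough to pass a conventional significance test. Here is a minimal illustrative sketch of a one-sided Fisher exact test on those counts, using only the Python standard library; the framing as a 2×2 table is my own, not part of the original study:

```python
from math import comb

# Reported counts (Quigley-Fernandez & Tedeschi): of 20 students wired to a
# dummy 'lie detector', 13 confessed; of 20 not wired up, only 1 confessed.
confessed_wired, n_wired = 13, 20
confessed_plain, n_plain = 1, 20

total = n_wired + n_plain                            # 40 students in all
total_confessed = confessed_wired + confessed_plain  # 14 confessions in all

def fisher_one_sided(k_obs, row_total, col_total, grand_total):
    """P(k >= k_obs) under the hypergeometric null of no group effect."""
    denom = comb(grand_total, row_total)
    p = 0.0
    for k in range(k_obs, min(row_total, col_total) + 1):
        p += comb(col_total, k) * comb(grand_total - col_total, row_total - k) / denom
    return p

p = fisher_one_sided(confessed_wired, n_wired, total_confessed, total)
print(f"one-sided p = {p:.2e}")   # well below 0.001
```

The tiny p-value confirms the difference between the groups is far beyond chance; the point in the text stands, of course, that this shows the power of the *belief* in the machine, not of the machine itself.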

The perceived infallibility of technology is carried over even more convincingly into brain scanning techniques, so-called ‘brain fingerprinting’. In this technique the subject is hooked up to an EEG (electroencephalograph) that records electrical potentials at the scalp. The potential considered significant is a positive potential that occurs roughly 300 milliseconds after a stimulus that the subject recognises. This P300 potential is thought to occur where the subject recognises the stimulus as familiar or meaningful, but not otherwise. A number of pictures, phrases or words, including some relevant to the enquiry, are shown to the subject and the P300 potential looked for. When a P300 response occurs on a picture of (say) the murder weapon, and the subject would otherwise have no knowledge of it, this could be taken by a jury as an indication of guilt. But brain fingerprinting only reveals what information is stored in the subject’s brain; it does not show how or why the information got there [Farwell Ref 19]. The selection of the various phrases and pictures is therefore critical. The degree to which such memory traces are reliably indicated under conditions where memory is subject to the seven sins mentioned above requires investigation. According to Farwell, no questions are asked or statements made during the test, so it is not in any sense detecting ‘lies’. In the case of an alleged rape, the intent of the parties, which may be the vital piece of evidence, is not revealed. Brain fingerprinting therefore has limitations, and is only one more piece of evidence to be weighed by the jury, if indeed such evidence is produced and admissible. The method of producing such evidence must also be subject to scrutiny to prevent abuse and even ‘fitting up’. In addition, the reliability of brain fingerprinting has been seriously questioned [Rosenfeld Ref 20].

As far as I know, no brain fingerprinting evidence has been produced in a British court, nor is it likely to be in the near future. There are also ethical considerations in the use of such techniques, for example: does an individual have a right to their own thoughts? Under what circumstances should such a right be waived? What level of certainty is required before a person’s statement is classified as a lie on the basis of the output of a machine?

At the time of writing, a prominent case of alleged rapes and sexual assaults by a celebrity has been resolved with a ‘not guilty’ verdict. This and other such trials come at a high cost to the taxpayer, not to mention the stress to the accused, the witnesses and their families. The onus is therefore on the Crown Prosecution Service to make sure there is a reasonable chance of conviction. It need not be the case that every prosecution brought by the CPS results in a guilty verdict, but it is up to the CPS to vet the police evidence and assess the reliability of witnesses. There are standard psychological techniques to measure to what degree a witness is suggestible or indeed fantasy-prone [Gudjonsson Ref 21]. The main use of this kind of evidence in court has been in assessing whether a confession by a defendant was the result of suggestion and coercion; such evidence has resulted in the reversal of ‘guilty’ verdicts in several high-profile cases. It would be too much to suggest that all witnesses be subjected to such tests, but in historical abuse cases with little or no evidence beyond the witness testimony, the CPS should consider using the tests to assess the reliability of witnesses. If the main prosecution witness proves reliable under such tests, the CPS would strengthen their case. In the contrary eventuality, the CPS would save a great deal of taxpayers’ money and not lose face by bringing a weak case.

What do you think?




[Ref 1] Schacter D L (1999) The seven sins of memory: Insights from psychology and cognitive neuroscience American Psychologist vol 54 p182-203
[Ref 2] Hall DF, McFeaters SJ & Loftus EF (1987) Alterations in Recollection of Unusual and Unexpected Events Journal of Scientific Exploration, Vol 1 (1) p3-10 available at
[Ref 3] Loftus EF & Palmer JC (1974) Reconstruction of automobile destruction: An example of the interaction between language and memory Journal of Verbal Learning and Verbal Behaviour vol 13 p585-589
[Ref 4] Schiller D, Monfils M-H, Raio C, Johnson DC, LeDoux JE & Phelps EA (2010) Preventing the return of fear in humans using reconsolidation update mechanisms. Nature vol 463 (8637) p49-53
[Ref 5] Loftus EF (1997) Creating False Memories Scientific American vol 277 no 3 p70-75 available at
[Ref 6] Wade KA, Garry M, Read JD & Lindsay DS (2002) A picture is worth a thousand lies. Psychonomic Bulletin and Review vol 9 p597–603 available at
[Ref 7] Neisser U & Harsch N (1992) Phantom Flashbulbs: false recollections of hearing the news about Challenger in Winograd E & Neisser U (eds) Affect and Accuracy in Recall: studies in Flashbulb memories Cambridge p9-31
[Ref 8] Southwick SM, Morgan CA, Nicolaou AL & Charney DS (1997) Consistency of memory for combat related traumatic events in veterans of Operation Desert Storm American Journal of Psychiatry vol 154 p173-177 abstract available at
[Ref 9] Strupp HH (1986) The non-specific hypothesis of therapeutic effectiveness: a current assessment American Journal of Orthopsychiatry vol 56 (4) p513-520
[Ref 10] Dawes, RM (1994) House of Cards: Psychology and Psychotherapy Built on Myth New York: The Free Press p133-177
[Ref 11] McNally RJ (2003) Remembering Trauma Belknap Press chapter 8 p240-246
[Ref 12] BBC News (1991) “1991: Orkney ‘abuse’ children go home”. On This Day 4 April 1991. available at
[Ref 13] Pragnell C (2002) The Cleveland Child Sexual Abuse Scandal: An Abuse and Misuse of Professional Power available at
[Ref 14] Geiselman RE & Fisher RP (1988) The Cognitive Interview: An Innovative Technique for questioning witness of crime Journal of Police and Criminal Psychology vol 4 (2) p2-5
[Ref 15] Orne MT, Thackray RI & Paskewitz DA (1972) On the detection of deception: A model for the study of physiological effects of psychological stimuli in Greenfield NS & Sternbach RA (eds) Handbook of Psychophysiology New York: Holt, Rinehart and Winston
[Ref 16] Fienberg SE, Blascovich JJ, Cacioppo JT, Davidson RJ, Ekman P, Faigman DL, Grambsch PL, Imrey PB, Keeler EB, Laskey KB, McCutchen SR, Murphy KR, Raichle ME, Shiffrin RM, Slavkovic A & Stern PC (2003) The Polygraph and Lie detection National Academies Press p288
[Ref 17] Quigley-Fernandez B and Tedeschi JT (1978) The bogus pipeline as lie detector: Two validity studies Journal of Personality and Social Psychology 36 p247-256
[Ref 18] Meyer RG & Youngjohn JB (1991) Effects of feedback and validity expectancy on responses in a lie detector interview Forensic Reports 4 p235-244
[Ref 19] Farwell LA (2004) PBS Innovation Series – Brain Fingerprinting: Ask the Experts available at
[Ref 20] Rosenfeld JP (2005) Brain Fingerprinting: A Critical Analysis The Scientific Review of Mental Health Practice vol 4 no 1
[Ref 21] Gudjonsson GH (1984) Interrogative Suggestibility: Comparison between ‘False Confessors’ and ‘Deniers’ in Criminal Trials. Med Science Law vol 24 no 1 p56-60 available at

Common Sense and Reid’s Razor

by Michael Davidson

One of the problems in philosophy is how to set standards by which philosophical theories might be evaluated. Various philosophers have proposed razors, ie philosophical principles that can be used to cut away and discard other philosophical principles and ideas. Perhaps the best known are Ockham’s Razor and Hume’s Fork. Another razor is ‘common sense’. This term has unfortunately become a pejorative in some philosophical and ‘scientific’ circles, along with ‘folk psychology’, which means explanations of the behaviour of people in terms of their beliefs and goals, rather than in ‘scientific’ terms such as neurophysiology. One objection to the term ‘common sense’ is that though everyone presumes to have it, it seems to be remarkably absent in others, and it lacks any agreed definition.

Another objection is that our best physics (ie quantum mechanics and relativity) “defies common sense” and since this physics describes how the world is, common sense can and should be discarded as inconsistent with reality. The astrophysicist John Barrow, for example, defines common sense as

“a description that crystallises from what is already known, and implies a certain unwillingness to admit any change, …the implication being that any deviation from it would be uncommonly senseless”. (Barrow 1988)

Similarly the biologist Lewis Wolpert contends

“that ‘natural’ thinking – ordinary day-to-day common sense – will never give an understanding about the nature of science. Scientific ideas are with rare exceptions counter-intuitive: they cannot be acquired by simple inspection of phenomena and are often outside everyday experience. Secondly, doing science requires a conscious awareness of the pitfalls of ‘natural’ thinking. For common sense is prone to error when applied to problems requiring rigorous and quantitative thinking; lay theories are highly unreliable.
. . .I would almost contend that if something fits in with common sense it almost certainly isn’t science. The reason is that the way in which the universe works is not the way in which common sense works.
. . . common sense thinking is quite unsatisfactory [for science]. It is quite different from scientific thinking lacking the necessary rigour consistency and objectivity.
. . . One of the strongest arguments for the distance between common sense and science is that the whole of science is totally irrelevant to most people’s day to day lives.” (Wolpert 1992)

This view seems to promote the alienation of science from the rest of human activity. Even if science is antithetical to ‘common sense’, it does not follow that because something is contrary to ‘common sense’ it must be true. In another part of the book, Wolpert states that he is ‘a common-sense realist’, ie “I believe there is an external world which I share with others and which can be studied,” but holds that his philosophical position is irrelevant to his scientific activities.

In both these examples ‘common sense’ seems to mean the frequent notions that people have about the world, particularly in those areas in which they have little or no experience. Appeals to common sense certainly ought to be more careful and more specific about what particular principle is held to be ‘sensible’ and ‘common’. But neither ‘frequent beliefs’ nor ‘common sense’ is a single principle.

The foremost ‘common sense’ philosopher was probably the Scottish philosopher Thomas Reid (1710 – 1796). Reid acknowledged Francis Bacon and John Locke as his predecessors. Reid’s standing as a philosopher diminished at the hands of Kant, who was dismissive of common sense as

“but an appeal to the opinion of the multitude, of whose applause the philosopher is ashamed, while the popular charlatan glories and confides in it.” (Kant 1783)

The most well-known ‘common sense philosopher’ of the twentieth century was G E Moore (1873-1958), through his defence of common sense in an essay.(Moore 1925) Karl Popper also claimed that his philosophical theories did not clash with common sense nor with science (Popper 1983). Commitment to ‘common sense’ is of course no guarantee of having it.

Defining what we mean by common sense is not easy. Generally we define it by those things which are not it, for example when someone does something we consider stupid. If we look up the two words in the dictionary we find:

common: shared (as in ‘common property’); open or free to all (as in ‘common land’); frequent (as in ‘a common event’); inferior (as in ‘a common dwelling’).
sense: sensation (as in ‘sense of pain’); understanding (as in ‘it makes sense’); sound reasoning and judgement (as in ‘to speak sense’); opinion or sentiment (as in ‘the sense of the meeting’).

With four possible meanings each of ‘common’ and ‘sense’, (plus some minor meanings) we have plenty of room for misunderstanding and misrepresentation. The form of common sense that is usually dismissed as useless is ‘frequent opinions’ or ‘frequent notions’. Many frequent notions are false and misleading. That is no news. A different reading of the term ‘common sense’ is “the power of reasoning and judgement possessed by people in general and open and free to all.” Very often when this power is brought to bear on frequent notions they are seen to be the falsehoods they are. What is missing from frequent false notions is exactly this power. Obviously I do not mean to say that all mankind are capable of discourse on quantum mechanics; but I do mean to say that most people given the inclination, aptitude and application would be able to appreciate the subject. The sciences are built on this power presumed to exist for all. Not everyone can be an Einstein and trail blaze, but we assume that everyone with an inclination can follow. In the sciences, at least, there are no private lines to God.

‘Common sense’ is for reasons mentioned an unfortunate term. A possible alternative term is ‘good sense’ which perhaps escapes the connotation of “everybody knows (that which turns out not to be so)”, and perhaps sees the term more as an epistemological one than a metaphysical one. It is difficult to see how any principle can stand once we throw out good sense as a starting point.

Reid defined common sense as “that degree of judgement which is common to men with whom we can converse and transact business” (Reid 1785a). I have no quarrel with this definition. To avoid the unfortunate connotations of the term ‘common sense’, I refer to this as ‘Reid’s Razor’. Take a philosophical or scientific principle that is being applied to a particular situation, and ask yourself whether you would be able to converse rationally and transact business with a person, assuming that principle governed the situation or persons involved. If not, dismiss the principle as erroneous, or at least treat it as deeply suspect. For example, suppose someone proposes that things-as-they-appear-to-be are not things-as-they-really-are. I do not think I would buy a used car from this man.

Thomas Reid (1710 – 1796)
Reid goes on to list a number of principles that he holds to be contingently true (but not necessarily true all the time) (Reid 1785b):

1) Everything of which I am conscious really exists.
2) The thoughts of which I am conscious are the thoughts of a being which I call myself, my mind, my person.
3) Events that I clearly remember really did happen.
4) Our personal identity and continued existence extends as far back in time as we remember anything clearly.
5) Those things that we clearly perceive by our senses really exist and really are what we perceive them to be.
6) We have some power over our actions and over the decisions of our will.
7) The natural faculties by which we distinguish truth from error are not deceptive.
8) There is life and thought in our fellow-men with whom we converse.
9) Certain features of the face, tones of voice, and physical gestures indicate certain thoughts and dispositions of mind.
10) A certain respect should be accorded to human testimony in matters of fact, and even to human authority in matters of opinion.
11) For many outcomes that will depend on the will of man, there is a self-evident probability, greater or less according to circumstances.
12) In the phenomena of Nature, what happens will probably be like what has happened in similar circumstances.

According to Reid, anyone who doubts these principles will be incapable of rational discourse, and those philosophers who profess to doubt them cannot do so sincerely and consistently. Each of these principles, if denied, can be turned back on the denier. For example, although it is not possible to justify the validity of memory (3) without reference to premises that rest on memory, to dispense with memory as usually unreliable is just not philosophically possible. Reid qualifies some of these principles as not applying in all cases, or as the assumptions that we presume to hold when we converse, which may be contradicted by subsequent experience. For instance, with regard to (10), Reid believes that most men are more apt to over-rate testimony and authority than to under-rate them, which suggests to Reid that this principle retains some force even when it could be replaced by reasoning.

I endorse Reid’s principles as normally true and as what we must assume to be true to engage in argument and discussion. But, as Reid acknowledges, not all may be true all the time. I thus see Reid’s principles as epistemological rather than metaphysical. Psychologists might point to such things as optical illusions, false memory, attentional blink, hallucinations and various other interesting phenomena which might throw some doubt on some of Reid’s assertions. But these are nonessential modifiers that, if entertained as falsifications of these principles, would lead to the collapse of all knowledge. Very few philosophers have failed to acknowledge that the senses can deceive us or that reason is fallible, but to say the senses consistently deceive or that reason is impotent is too big a sacrifice. That the senses can deceive and reason is fallible is good reason to be cautious in our conclusions, but not a good reason to dispense with observation and reason altogether. Whereas it may be true that

“we are the sort of creatures who cannot help but believe some things that are false,” and “There seems to be no guarantee that our epistemic capacities really give us access to the world,” (Jamieson 1991)

it does not follow that everything we believe is false, nor that our most cherished beliefs – 1 to 12 above – are entirely false.


Barrow J (1988) The World within the World, Oxford University Press, p54
Jamieson D (1991) Method and Moral Theory in Singer P (ed) A Companion to Ethics Blackwell p 476-487
Kant I (1783) Prolegomena to any Future Metaphysics transl J Fieser p6 available at
Moore GE (1925) A Defence of Common Sense in Muirhead JH (ed) Contemporary British Philosophy (2nd series), available at
Popper K (1983) Realism and the Aim of Science Routledge p131
Reid T (1785a) Essays on the Intellectual Powers of Man, Essay 6 Chapter 2 available at (p217)
Reid T (1785b) Essays on the Intellectual Powers of Man, Essay 6 Chapter 5 ibid p240-51
Wolpert L (1992) The Unnatural Nature of Science Faber & Faber pp xi,11,12,16,106,487

The Self: The greatest trick your mind never played

by Michael Davidson

The 23 February 2013 special issue of the New Scientist featured a series of short articles entitled “The Self: the greatest trick your mind ever played”. These provide various arguments as to why ‘the self’ – ie ‘you’ – is deluded into thinking it exists when in fact it doesn’t.

The first thing to note is that of the seven small articles, two are by the philosopher Jan Westerhoff and three are written or partially written by “New Scientist consultants”. Two other authors are named but their credentials are undisclosed. I will only comment here on Westerhoff’s articles.

Westerhoff briefly reviews some of the philosophical arguments that tell us that ‘the self’ is an illusion. Either ‘the self’ is like
1) a continuous thread which “runs through every single moment of our lives, providing a core and a unity for them.” or
2) it is just “the continuity of overlapping mental events” – somewhat like a rope which has no single fibre running the length of the rope but still holds together as a rope.

The argument against the first model is that what we normally consider to be ‘the self’ undergoes numerous changes through our lives – such as “being happy or sad, being able to speak Chinese, preferring cherries to strawberries, even being conscious.” The continuous self, then, is “so far removed from everything constituting us that its absence would scarcely be noticeable.”

The argument against the rope model is that there is no constant part therein that we can identify with.

Since these two models are flawed we are urged to conclude that a continuing ‘self’ is really an illusion. The simpler course is to think of better models. The fact that our moods, tastes, abilities etc change through our lives does not seem to be a great strike against a simple entity – attribute – ability model. The puzzle – if it be a puzzle – of how an entity can change its attributes and remain the same entity has come down to us from the ancient Greeks as the paradox of the ‘Ship of Theseus’. The idea is that as the various planks that make up the ship are replaced because of decay, it is possible eventually that no part of the original ship remains. Is it then the same ship? Such paradoxes are thus not confined to ‘the self’.

Westerhoff states that even if there is no continuing self, perhaps there is a self in the here and now – ie the self is where all the senses come together.

But this apparency of the unity of conscious experience can be disrupted, as seen in an experiment which is described but not referenced. This experiment was first reported by Kolers & von Gruneau (1). In it two coloured dots (one red, one green) are displayed on a screen in quick succession in different locations. According to the article, the dot appears to move steadily between the two locations and to change colour somewhere in between. This immediately presents a paradox, since the observer cannot know the second colour until the second dot appears, yet the apparent spot changes colour before it arrives at the second location. The apparency of motion caused by a succession of closely related images is called the ‘beta phenomenon’ and is the fortunate basis of the multi-billion dollar movie industry.

The colour changing illusion is not seen by everyone. I personally do not see it and psychologist Nelson Cowan (2) reports that he sees the moving object as colourless.

But this experiment is apparently seen as of great significance in destroying the idea that ‘the self’ is the confluence of the senses whereby we get a whole view of the world – what Aristotle termed ‘the common sense’. There are innumerable other optical illusions which show that our perceptions are modulated by our sensory apparatus, but these evidently do not have the same significance. It is no surprise that the senses can be deceived by sufficiently unusual and specifically engineered events, but this does not mean that our perceptions are always wrong. Indeed, if they were not usually good enough we would have no criteria for determining when they deceive, and human beings would have been winnowed by natural selection millennia ago. We can usually recognise illusions as illusions.

A third aspect of ‘the self’ which needs to be explained (away) is the apparency of conscious will, ie that we are agents – “the thinker of our thoughts and the doer of our deeds.” The empirical finding on which this argument relies is an experiment by Wegner & Wheatley in 1999 (3). Approximately 50 volunteers were asked to move a cursor around a screen by controlling a mouse and to stop the cursor on one of about 50 small images. The mouse was shared with an accomplice of the experimenters via an ouija-board arrangement, and unbeknownst to the volunteer the accomplice was instructed to force a stop on particular objects at particular times. The volunteers were asked to stop on those same objects and then to rate how far they were actually involved in causing the stop (from 0% = ‘I allowed the stop to happen’ to 100% = ‘I intended to make the stop’). What was varied was the interval between the moment the object was named to the volunteer and the forced stop. When the object was named between 1 and 5 seconds before the stop, the volunteers rated their intention at 60% ± 5%, as opposed to 45% ± 5% when the word was presented 30 seconds before or 1 second afterwards. The result is significant at the 5% level, though ratings of 45% and 60% indicate that the participants were not particularly certain either way. The Westerhoff article, however, gives no hint of this uncertainty, implying a much more definite result than is in fact the case.
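For readers who want to see where ‘significant at the 5% level, but only just’ comes from, here is an illustrative two-sample z calculation on the reported means. It assumes (my assumption, not stated in the summary above) that the ± 5% figures are standard errors of the group means:

```python
from math import sqrt, erf

# Mean intention ratings as summarised above (Wegner & Wheatley 1999)
mean_primed  = 60.0   # object named 1-5 seconds before the forced stop
mean_control = 45.0   # object named 30 s before, or 1 s after, the stop
se_primed    = 5.0    # assumed here to be standard errors of the means
se_control   = 5.0

# Standard error of the difference between two independent means
se_diff = sqrt(se_primed**2 + se_control**2)
z = (mean_primed - mean_control) / se_diff

# Two-sided p-value from the normal distribution, via the error function
p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
print(f"z = {z:.2f}, p = {p:.3f}")   # z ≈ 2.12, p ≈ 0.034
```

On this reading the result squeaks under the 0.05 threshold rather than demolishing it, which is precisely the uncertainty the New Scientist treatment glosses over.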

Wegner & Wheatley concluded that conscious will is only an apparency – similar to magic in that the magician presents an apparent causal sequence whilst the real sequence is something else. People do what they do not through conscious will but as the result of various genetic, unconscious, neural, cognitive, emotional, social and possibly other causes. This (according to Wegner & Wheatley) is the ‘core assumption‘ of psychological science. Evidently the conclusion follows from the assumption rather than the empirical facts.

The conditions of the experiment are such that the volunteer thinks that the experimenter’s accomplice is a volunteer like himself. It is not known how he views the other person: as a competitor, a team player, a spectator, etc. The trials in which the accomplice did not manage to force the stop on the chosen object at the desired time (between 22% and 47% of trials for any one participant) were excluded from the analysis. So there are a number of question marks over what this experiment means. The philosopher Eddy Nahmias (4) comments: “When all is said and done, Wegner has offered no evidence or arguments against this proposal: certain brain processes have the property of being consciously represented to the agent as mental states we describe as beliefs, desires, intentions and actions (for instance, my brain is currently going through the processes which I experience as something like ‘I think this proposal makes sense’, ‘Type out the words, “This proposal makes sense”’, and so on). How it is that these brain processes have these experiential properties is currently a mystery… But if these processes did not have their representational properties then they would not have the causal powers they have.” I comment further in chapter 12 of my e-book “Rethinking the Mind”.

In a separate article, “When are you? – you are being tricked into thinking you live in the present”, Westerhoff describes an experiment by Eagleman & Sejnowski (5). Five participants sat in front of a computer screen on which a small ring moved around a large circle at one revolution per second. When the ring was at a certain position in its trajectory (9 o’clock), a white disk was flashed at the 9 o’clock position ± 7°, and the subjects were asked to indicate whether the flash was above or below the ring when it was perceived. In this way an estimate could be made of how far the flash was displaced from the perceived position of the ring at the time of the flash. Each participant perceived a displacement of around 5°. When the direction of travel of the ring was reversed at the time of the flash, the percept was displaced by a similar amount in the new direction of travel. By changing the time at which the reversal takes place, up to 80 milliseconds after the flash, the perceived position of the flash can be manipulated. This indicates that “the percept attributed to the time of the flash is a function of the events that happen in the ~80 millisecs after the flash.”
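To put these numbers on a common scale: at one revolution per second the ring sweeps 0.36° per millisecond, so the angular displacements and time windows in the experiment are interconvertible. A back-of-envelope conversion (my arithmetic, for illustration only):

```python
# The ring moves at 1 revolution per second (Eagleman & Sejnowski, as above)
degrees_per_ms = 360.0 / 1000.0   # 0.36 degrees of arc per millisecond

# A perceived displacement of ~5 degrees corresponds to this much
# of the ring's travel time:
lag_ms = 5.0 / degrees_per_ms
print(f"5 degrees = {lag_ms:.1f} ms of ring travel")   # ~13.9 ms

# Conversely, the ~80 ms post-flash window over which the percept
# can be manipulated corresponds to this much arc:
window_deg = 80.0 * degrees_per_ms
print(f"80 ms = {window_deg:.1f} degrees of arc")      # 28.8 degrees
```

So the displacement itself amounts to roughly 14 milliseconds of ring travel, while the manipulable window is several times larger; both are small fractions of a second, which is worth bearing in mind when assessing the conclusions drawn from them below.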

According to Westerhoff, “All this is slightly worrying if we hold on to the common-sense view that our selves are placed in the present. If the moment of time we are supposed to be inhabiting turns out to be a mere construction, the same is likely to be true of the self existing in that present.” This seems a drastic conclusion for the sake of 80 milliseconds. High-class athletes find it difficult to react to the starting gun in less than 180 milliseconds, but athletes are not considered to be zombies as a result. Similarly, the apparent fact that our present moment is stretched over 80 milliseconds does not entail that there is no ‘self’.

The whole approach seems to be an example of what Francis Bacon (1561-1626) identified as the human habit of accepting data that agrees with one’s own prejudices, or those of one’s philosophical school, and rejecting data that disagrees. This was demonstrated in an experiment (6) in which 24 proponents and 24 opponents of capital punishment were shown the results of two studies offering evidence for and against capital punishment as a deterrent. Since the studies were fictional (unbeknownst to the 48 students), they could be ‘engineered’ to produce positive or negative findings from exactly equivalent protocols, and could be presented in either order to eliminate sequence effects. Needless to say, the subjects judged the studies confirming their view to be more scientifically valid than those opposing it. Furthermore, they were more convinced that their view was correct after reviewing both pieces of evidence than before, regardless of the order in which the two studies were viewed (p < 0.001).

This series of short articles in the New Scientist holds to a materialist philosophy as its starting point. Since materialist notions fail to find any reason why there should be a ‘self’, the ‘self’ must be denied as an illusion or delusion. Yet all forms of materialism have philosophical difficulties, as do all forms of non-materialism, such as dualism. What is required is a neutral approach that takes the empirical data, forms theories of limited applicability, and uses them to predict and control phenomena in that area, gradually widening the theories.

Assuming the answer to the mind-body problem in advance (materialism), and limiting the search to only the empirical data that seems to confirm this philosophical view, is not only self-confirming but anti-scientific.


1. Kolers PA & von Grünau M (1976) Shape and Colour in Apparent Motion. Vision Research vol 16 p 329-335.

2. Cowan N (1995) Attention and Memory: An Integrated Framework. OUP p 237.

3. Wegner D & Wheatley T (1999) Apparent Mental Causation: Sources of the Experience of Will. American Psychologist vol 54 no 7 p 480-492.

4. Nahmias E (2002) When Consciousness Matters. Philosophical Psychology vol 15(4) p 527-554.

5. Eagleman DM & Sejnowski TJ (2000) Motion Integration and Postdiction in Visual Awareness. Science vol 287 p 2036.

6. Lord CG, Ross L & Lepper MR (1979) Biased Assimilation and Attitude Polarisation: The Effects of Prior Theories on Subsequently Considered Evidence. Journal of Personality & Social Psychology vol 37 p 2098-2109.