Discovered: the secret of morality

Michael Cook
24 Sep 2013
Reproduced with Permission
MercatorNet

It's too easy to discuss specific ethical issues: What are the facts? What are the principles? How should they be applied? Boring.

Let's switch to metadiscourse, i.e., grand theories of life, the universe and everything. Let's begin with a sweeping generalisation: the great challenge of civilisation is to turn selfish, passionate, greedy, lustful savages into law-abiding citizens. The best philosophical minds have pondered how to achieve this, beginning with Aristotle and Plato.

That we haven't advanced much beyond the barbarity of the Peloponnesian War, which scarred the psyches of the Greek philosophers, is obvious. Just watch the evening news about al-Shabaab terrorists running amok in a Kenyan shopping mall or the madman who killed a dozen people in Washington DC last week.

However, after 2,500 years of philosophical stumbling, psychologists at Harvard University and the University of California, Santa Barbara, are confident that they have more or less wrapped the problem up.

Their solution? Just have more confidence in science. It's an astonishingly daring claim. But Christine Ma-Kellams, of Harvard, and Jim Blascovich, of UCSB, write confidently in a recent issue of the journal PLOS ONE that their experiments are "the first of their kind to systematically and empirically test the relationship between science and morality".

Ma-Kellams and Blascovich tested their hypothesis with social priming, a technique which studies how sensory cues unconsciously affect attitudes and behaviour. This is a field which has led to interesting results: people who see an American flag are more likely to vote Republican; thinking about old people makes people walk more slowly; a rubbish-filled environment makes people receptive to racist thoughts...

In this case, the researchers found that priming students' minds with scientific words made them think more ethically.

This is an issue of great interest, of course, to parents, teachers, policemen and politicians. How can we be sure, for example, that doctors will not defraud the health departments, abuse patients, traffic in babies, or euthanase the elderly without their consent? If it is as easy as buying doctors copies of the Feynman Lectures on Physics or some other classic of the scientific method, we've got this morality business done and dusted.

But before rushing off to place an order with Amazon, why don't we apply a bit of common sense?

First of all, what did Ma-Kellams and Blascovich actually find?

They divided their students into two groups and told them to compose sentences using sets of words. The first group was given the words "logical, hypothesis, laboratory, scientists, and theory"; the second, random words like "more paper it once do".

Then both groups were given quizzes on their moral outlook. One was a scenario about date rape; another asked them to rate their relative interest in altruistic activities like giving blood against selfish fun stuff like going to a party; and another was an economics game to see if they would give money away.

Who "endorse[d] more stringent moral norms and exhibit[ed] more morally normative behavior"? The students whose minds had been salted with scientific terms. They were chaster, more altruistic and more generous. Ergo, the researchers write, "the study of science itself -independent of the specific conclusions reached by scientific inquiries - holds normative implications and leads to moral outcomes."

If only it were that easy!

I have my quibbles with the way the experiment was designed. But the core problem with this study is what the authors mean by morality. All they tested, really, is their subjects' moral aspirations, not their moral deeds. Switching back to our metadiscourse for a moment, the story of civilisation is not getting people to talk the talk, but to walk the walk. Morality is about deeds, not words.

Thinking about science may enable educated people to engage in sophisticated moral discourse; it doesn't necessarily make them more moral.

Another issue is what the authors mean by science. In a brief and confused account of the history of science in their introduction, they write, "we contend that there is a lay image or notion of 'science' that is associated with concepts of rationality, impartiality, fairness, technological progress, and ultimately, the idea that we are to use these rational tools for the mutual benefit of all people in society".

There is a lot of truth in this, of course, but one could substitute the word "religion" for "science" and many religious people would agree with it. The authors seem to have modelled their idea of a good society on an ideal promoted by Judaeo-Christian values.

In any case, the authors of the PLOS ONE article have a very short memory.

It was only two years ago that one of the world's experts in social priming, the Dutch psychologist Diederik Stapel - who did many experiments similar to this one - was exposed as a massive fraud who had simply made up a substantial portion of his research results. His behaviour was so scandalous that it has put the whole discipline of social priming under a very dark cloud. "I see a train wreck looming," wrote Nobel laureate Daniel Kahneman in an open email to psychologists who work in social priming: "your field is now the poster child for doubts about the integrity of psychological research".

Exposure to the scientific method didn't make Mr Stapel (he was stripped of his doctorate) more ethical, not by a long shot. Curiously, Stapel is not referenced in the PLOS ONE article.
