Wednesday, 19 October 2016

The Moral Fuzziness Of The Moral Calculus



Here’s a moral question for you to consider (based loosely on a real-life scenario from 1884 known as the lifeboat case: The Queen vs Dudley and Stephens). Four students named John, David, Derek and Alan are stranded on a boat with no food in the middle of the ocean after their engine fails.

The students call for help, but their rescue crew won’t get there in time, meaning they look set to die of starvation. There is a revolver in the cabin, and a possible solution to save three of the four students - one student is shot in the head (for a painless death) and then eaten by the other three, ensuring survival until the rescue crew arrives. Would it be morally wrong to kill one to save the other three (the famous trolley problem poses a similar dilemma)?

The answers to that question tend to depend on how the person to be shot and eaten is chosen. If you asked 100 people whether it would be right for David, Derek and Alan to gang up and nominate John to be killed, most would feel uncomfortable with this scenario. If, however, you asked 100 people whether it would be right for the students to put the four names in a hat and then randomly pick a student to be sacrificed for the greater utility of the other three, many would feel much better about that scenario (as I argue in this blog post, in scenarios like the trolley problem we should pretty much always take the consequentialist action).

One of the many interesting things about moral dilemmas is that they let us draw the important distinction between what we ‘should’ do and what we ‘would’ do. If forced to choose between pushing a button that kills your one child or pushing a button that kills 500 randomly chosen strangers, you probably should save the 500, but many probably wouldn’t. If faced with a screaming child on the top floor of a burning three-storey house, we probably should go up and try to save the child, but ‘should’ and ‘would’ are two different things. It’s always easier to say what you ‘should’ do than to be sure of what you ‘would’ do.

Moral philosophy enables people to proclaim (and often exaggerate) their own probity in defining what is theoretically the right course of action - but, as the old maxim goes, actions speak louder than words - and real life can evidently open up a gulf between what we should do and what we end up doing. If it didn't, there would be no wrongdoers in the world.

Another thing to consider is this: does acknowledging the right thing naturally mean that one must always do the right thing? I'm not sure it does. For example, in the trolley problem, could it be the right thing to push the fat man to save the five, yet not morally essential to do so if one's personal deontology trumps the consequentialism of the act? It's possible. Furthermore, are there actions for which both possible outcomes are moral, or both immoral? It is conceivable that both pushing the fat man and not pushing him can be argued to be moral - and, I suspect, immoral too.

What we often find objectionable is when we think people are being used as a means to an end. But I've often thought the whole notion of means and ends to be limited - after all, what are ends to some consequentialists could be regarded as means by other consequentialists. And clearly an end isn't always an end in itself - it is frequently a means to a further end, which is a means to yet another end, and so the story goes on.

There is another reason why what we should do and what we would do don't always translate into a personal moral philosophy: most people don't really live their lives with an underpinning moral philosophy, they only think they do. That is to say, we humans tend to place more emphasis on the theory of moral systems than our actions actually reflect. We talk so often about creating moral rulebooks or philosophies, but they don’t seem to have much influence on our behaviour.

Perhaps the theories are like maps we consult every now and then, and our daily lives are like trips around a territory with which we are habitually familiar. And just maybe that is actually good for the emotions – after all, a human who followed a book, or a rigid set of instructions, or a mathematical formula, or advice from a guru every time he was looking to do the right thing would be rather too much like a robot.

We are not like robots, of course - we are much more dynamical, and hence much more inconsistent because of it. The trouble with the dynamical approach, I find, is that for people who are not very proficient or balanced it can lead to a life in which they scarcely have anything of moral value at all. They end up selfish, parochial and so unaccustomed to the pursuit of human excellence that they let themselves down and cause upset to others.

What we actually find is that humans tend to oscillate between what we might call "working out our actions from emotional desire one situation to the next" and what we might call "having a firm and consistent set of moral values" - and we are frequently trying to find the right balance between the two, which is why you'll often notice that some of our strengths lead to faults, and some of our weaknesses lead to positive qualities. In some cases the two are quite inextricably entangled. Or to paraphrase W.H. Auden: if you take away some of our devils, you take away some of our angels too.

The danger, as always, is that those who live by their own self-regulated moral standards tend to set those standards comfortably low - thereby making it harder to fall short of them. The primary thing that reawakens our sense of our own wrongness every day, every hour perhaps, is the wrong we do in spite of knowing full well that these things are wrong, even if it is just mild solecisms like selfishness, bad temper and uncharitableness.

For example, I have written a whole book on morality - a very long one too - covering just about every aspect of morality through the lens of philosophy, economics, psychology, religion and science. But even with all that knowledge and understanding I am, and continue to be, well short of where I want to be morally, and every day is a reminder of how much better I can be.

By no stretch of the imagination am I one of the world's worst people - but most of my badness is not contingent on any lack of knowledge, any logical inconsistency or any lack of symmetry - it's to do with the fact that our daily goings-on give us continual reminders of how much better we can be. The knowledge, logical consistency and symmetry that increase our morality simply make us aware of how, on a daily basis, we can do so much better.

On that spectrum our general human behaviour is bell-curve shaped. That's why for most of us our failings are ones that we humans have made largely socially acceptable due to their commonality - failings like being too impatient with someone, acting a bit too selfishly, not speaking to someone nicely because we are a bit tired and grumpy, failing to take the time to show care for people we haven't seen for a while, not volunteering more of our time for good causes, being unwilling to give up more money to worthwhile things, and things of that nature.

There is no real moral philosophy in the equation - most of us are very rarely in positions in which we get to see how we'd react to those age-old moral dilemmas. And most of us are very rarely in positions in which we are under great pressure to be amazing human beings, or ensnared in situations in which we are terrible - our lives are played out as a kind of weighted average of the whole spectrum, in which we rarely allow ourselves the self-rebuke or the self-congratulation of extreme badness or extreme goodness.

