Thursday, 28 February 2013

Why The Judicial System Is Flawed

The jury in the trial of Vicky Pryce has been discharged at Southwark Crown Court after failing to reach a majority verdict regarding whether she was coerced by the odious former Liberal Democrat MP Chris Huhne into accepting his speeding penalty points.  This has elicited a hugely overblown response about whether juries are, on the whole, fit for purpose.  This topic was brought up on last week’s Question Time, where the likes of Peter Hitchens and Michael Heseltine, among others, debated this issue.  The trouble is, they focused only on the things that probably aren’t making juries unfit for purpose (age, employment, academic qualifications), and ignored the things that do make them less fit than they should be.

The most important thing about jurisprudence is justice.  Justice occurs when innocent people are acquitted and guilty people are convicted.  Anyone hoping for justice hopes that the system in place is most conducive to that outcome.

I think there are primarily two things wrong with the judicial system – which means that if you were to find yourself on trial for a crime you didn’t commit, your hopes for a just outcome, in which the decisions made by the jury are consistent with the true facts surrounding the trial, would not be matched by the present system as well as could be expected. Were you to be an innocent person on trial, or a victim of crime sitting in the court hoping the person that offended against you was found guilty, you’d want everything about the system to bring about the highest probability that the right decisions are made.  Here’s why you don’t have that at the moment.

The first, and, I think, most important, is that it is unbelievably inefficient to shield jury members from knowledge of relevant information, because it impairs their ability to judge the case with optimum effectiveness.  This issue should be redressed immediately – it is the biggest thing wrong with jurisprudence.  What I find ironic is that the argument against giving the jury all the background information (that such knowledge will potentially bias the jury) is precisely the reason they should be given it, because any reasonable person should want all relevant information to be brought to bear in the courtroom.  At the very least, jurors should have the chance to decide whether this background information is relevant or not.

If a man has been accused of threatening a neighbour with a shotgun, I’d want the jury to know whether he has a history of similar behaviour, what sort of person he is, what sort of record his lawyer has, and information of that kind.  The concept of shielding inquirers from information is alien to every other formal evidence-based system; the work of scientists, political groups, police officers and building surveyors would suffer immeasurably if they had part of their investigative data withheld from them, so why on earth should we do it in a court of law when justice and people’s futures are at stake?  At election time in politics we want the electorate to be as well informed as possible; in the biology lab we want the researchers to be apprised of as much information as possible.  It is truly unsatisfactory to expect (and wish for) members of the jury trying to get justice in the courts to remain ignorant of the important details, when many of those details are so relevant to the probability of the defendant’s guilt or innocence.

The compromised admissibility of evidence hinders justice in every way – and with some irony, the reason people give in support of it betrays a misunderstanding of the very thing we are trying to enhance – decisions based on explicit and accurate information.  Just as in science and police investigations, the information being omitted is important in building a clear probability perspective of the situation.  A man on trial accused of threatening behaviour with a shotgun is much more likely to be guilty if he has previously threatened three other people with a shotgun.  Yet the courts would rather you didn’t know this if you’re on that jury – which means the courts must favour a serial shotgun offender having a greater chance of being acquitted.  Apparently thinking this way is frowned upon by the courts, because they don’t want juries to come into the courtroom with any biases.  I presume the courts must think that the police arrest civilians in a completely random fashion.
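The probability point can be made precise with Bayes' theorem. Here is a minimal sketch in Python; the 40% and 5% likelihoods (and the 50/50 prior) are invented purely for illustration, not real crime statistics:

```python
# Toy Bayesian update: how much should knowledge of prior offences
# shift a juror's probability of guilt?  The likelihoods below are
# invented for illustration, not real crime statistics.

def posterior_guilt(prior, p_history_if_guilty, p_history_if_innocent):
    """P(guilty | has prior similar offences), via Bayes' theorem."""
    numerator = p_history_if_guilty * prior
    evidence = numerator + p_history_if_innocent * (1 - prior)
    return numerator / evidence

# Start the jury at 50/50, and assume a guilty defendant is far more
# likely to have a history of similar offences (40%) than an innocent
# one (5%).
updated = posterior_guilt(0.5, 0.40, 0.05)
print(f"P(guilty | prior shotgun offences) = {updated:.2f}")  # → 0.89
```

On those assumed numbers, a fact the courts routinely withhold moves the probability of guilt from 50% to nearly 89% – which is exactly the kind of shift the "it would bias the jury" objection is worried about.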

The second fault is less severe, but still an example of something that is not ideal – it’s to do with the configuration of the jury group.  The 12 jury members would be more efficient if they were made up of 3 groups of 4 rather than one group of 12.  The reason being, you want the conformity levels to be at a minimum – and there is a lot more conformity in one large group.  In a group of 12 you’ll almost always find a few more prominent members are able to influence the less prominent and less confident members.  In three groups of 4 this would diminish greatly, and each member would by and large be less pliable and more confident and competent in his or her involvement.  So, if you’re innocent, on trial, and desperate that the decisions made by the jury are consistent with the true facts, you’d be much better off with three groups of 4 than one group of 12.
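The conformity claim can be played with in a toy simulation. Everything here is an assumption made for illustration – the 70% juror accuracy, the 25% conformity rate, and the idea that conformists simply copy one prominent member – but it does show the mechanism: three small groups dilute the influence of any one dominant voice:

```python
import random

random.seed(0)  # reproducible runs

def majority_correct(group_sizes, p_correct=0.7, conform=0.25, trials=20000):
    """Toy model of jury conformity.  Each juror independently reaches
    the right view with probability p_correct, except that a fraction
    `conform` of jurors abandon their own judgement and copy their
    group's most prominent member (who is no more accurate than anyone
    else).  Returns how often the combined panel's majority verdict is
    correct.  All parameters are invented for illustration."""
    hits = 0
    for _ in range(trials):
        votes = []
        for size in group_sizes:
            leader_view = random.random() < p_correct
            for _ in range(size):
                if random.random() < conform:
                    votes.append(leader_view)                  # conformist copies the leader
                else:
                    votes.append(random.random() < p_correct)  # independent judgement
        hits += sum(votes) > len(votes) / 2
    return hits / trials

one_group = majority_correct([12])
three_groups = majority_correct([4, 4, 4])
print(one_group, three_groups)  # the three small groups come out ahead
```

The reason three groups win in this model is that a single wrong-headed prominent member can drag a quarter of a 12-person panel with him, whereas with three independent leaders that risk is spread.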

So a trial in which the lawyers and the three groups of 4 jury members know all the background information is, I think, a much more efficient system than the one we currently have.  Only when we’ve sorted those two issues out should we start discussing comparatively trivial issues like the age, employment and academic qualifications of jury members.  To focus on those and ignore the other two big flaws is a bit like turning up to a large house on fire and trying to retrieve all the furniture instead of trying to rescue the people trapped inside.

Thursday, 21 February 2013

Thou Shalt Not Inherit

There's been a lot making the headlines recently about our (largely incompetent) coalition Government's proposals for stealthily getting more out of the taxpayer.  The dishes of the day are currently centred on inheritance tax and mansion tax, as well as gimmicky taxes like fizzy drinks tax, fatty food tax, booze tax and all that malarkey (a subject which deserves a Blog of its own one day). I emailed my local MP to ask her to speak on behalf of the coalition, and was told:

"It is not true to suggest the inheritance tax amounts to double taxation. The wealth in most houses has never been taxed because it is largely in the form of unrealised capital gains."

Who is feeding MPs this kind of nonsense?  It's simply not true. The wealth in most houses has already been taxed at least once, because taxation occurred when the money was earned to pay for the house. You can't even begin to grasp what's wrong with the Government's reasoning until you understand this simple example.  Jack earns £100, makes some fruitful investments, and leaves £100 million in unrealised capital gains to his progeny.  If Jack paid 50% income tax on that £100, then he invests only 50% as much, earns 50% as much, and leaves his progeny only half as much as he would have done with no taxation.  This is simple economics - taxing Jack 50% on his £100 eventually costs his progeny £50 million.
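The Jack example is just linear scaling, which a few lines of Python make explicit (the million-fold growth factor is the illustrative number from the example above, not a prediction):

```python
# Jack's bequest scales linearly with whatever survives the income tax,
# so a 50% tax at the earning stage halves the inheritance.  The
# million-fold growth factor is purely illustrative.

def bequest(initial, income_tax_rate, growth_factor):
    """What the progeny receive when tax is taken at the earning stage."""
    invested = initial * (1 - income_tax_rate)
    return invested * growth_factor

untaxed = bequest(100, 0.0, 1_000_000)  # £100 grows to £100 million
taxed = bequest(100, 0.5, 1_000_000)    # the post-tax £50 grows to £50 million
print(untaxed - taxed)                   # the £50 million cost to the progeny
```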

You see, when I probed my MP about her error, she emailed with a confused response -

"If I understand you correctly, Mr. Knight, you’re saying that if a multi-millionaire died tomorrow and left his gains to his family, they shouldn’t have to pay inheritance taxes because he already paid a few thousand on the original income?"

Here's the problem - I'm not arguing against a few thousand, I'm explaining that the taxation already amounts to a lot more than a few thousand. My MP is failing to grasp that a “few thousand” paid many years ago is equivalent to far more than a few thousand today. If Jack has a converter machine that magically makes several thousand pounds from 1987 into several million present day pounds, then taxing him several thousand pounds in 1987 is the equivalent of taxing him a few million pounds in the present day. So it’s quite misleading to say he’s only paid a few thousand in taxes - his net contribution is much more.  Moreover, here is another damaging consequence of these kinds of taxes - savers and investors will be deterred from saving and investing, which is contrary to the ethos of the present Government.  Not all will be deterred, but some will, and those will be important players in the economy.

The real problem with this issue is one that I haven't seen any politician addressing.  Although, I understand why - MPs are elected with mostly no prior knowledge of the kind of economic thinking needed to tackle these issues, so it's hardly surprising they (and their advisors) don't.  They may know economists who could advise on this, but I doubt they’d listen because good economic advice is often contrary to the Government's aim – which is to get as much money as possible without obtaining it at the cost of being found out and unelected. 

Here's how the situation should be looked at. The argument isn’t about how much to tax, or about who should pay those taxes - it is primarily about the most efficient way to raise those taxes. Taxing earnings once at a higher rate is more efficient than taxing them twice later at two lower rates, because the latter distorts the desire to save*, it makes people circumspect about investing, and it encourages people to over-consume in the here and now.  Remember, saving now is consumption in the future.

A similar level of stability is required in the consumers’ market if the economy is to achieve the right balance in encouraging sensible consuming and mobility.  That is why consumption goods are taxed at the same rate, not different rates.  Try to imagine the kind of consumers’ world we'd live in if different products were taxed at different rates. If you tax two different goods at two different rates, you get a new budget line with a slope that differs from the present one. Call the optimum point along it X. If instead you tax the goods at the same rate while holding Government revenue fixed, you get a new budget line with the same slope as the original, also going through point X. Because indifference curves cannot cross, the optimum along this new budget line must sit on a higher indifference curve than the one through X (you can draw the diagram yourself), which is why consumers are better off with an equal taxation consumer policy**.
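The diagram can also be checked numerically. The sketch below assumes Cobb-Douglas preferences (u = √(xy)), an income of 100 and pre-tax prices of 1 – all invented to make the geometry concrete – and confirms that an equal-rate tax raising the same revenue leaves the consumer on a higher indifference curve than a differential tax:

```python
from math import sqrt

# Assumed setup: Cobb-Douglas utility u(x, y) = sqrt(x * y), income of
# 100, both goods priced at 1 before tax.  With these preferences the
# consumer spends half of income on each good.

INCOME = 100.0

def utility(x, y):
    return sqrt(x * y)

# Differential tax: a 100% tax on good x only.
s = 1.0
x_diff = (INCOME / 2) / (1 + s)     # 25 units of x
y_diff = INCOME / 2                 # 50 units of y
revenue = s * x_diff                # 25 raised for the Government

# Equal-rate tax t on both goods raising the same revenue:
# t * INCOME / (1 + t) = revenue  =>  t = revenue / (INCOME - revenue)
t = revenue / (INCOME - revenue)    # works out at 1/3
x_uni = (INCOME / 2) / (1 + t)      # 37.5 units of each good
y_uni = x_uni

print(utility(x_uni, y_uni), utility(x_diff, y_diff))  # 37.5 beats ~35.36
```

Same revenue for the Government, strictly higher utility for the consumer under the equal-rate scheme – which is the indifference-curve argument in numbers.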

Returning to efficiency – the consideration of efficiency is important.  The notion that it is more “efficient” to tax earnings than to tax investment income is ridiculous if it means mega-rich people can earn hundreds of millions on their investments and pay too little tax or no taxes at all.  That's an argument in favour of taxing investment income and labour income as two wings of the same bird that lays the golden egg.  As you can gather, either way causes problems.  At one extreme, earnings that brought the investments on which taxes are already paid (a house is an investment, don't forget) are being taxed again via the route of inheritance.  At the other extreme, a man can switch income to investment and avoid paying tax on income. Here's how that's done. Suppose there is a business that earns £100,000 per year after paying all expenses except income. The business operator could pay himself a salary of £100,000 and the business would record zero earnings, or he could pay himself zero earnings and the business would earn £100,000, which he could then distribute surreptitiously to himself as “investment income.”  Under those conditions, investment income is not double-taxed when compared with labour income.

Even if capital gains taxes were capped at 1%, income subject to those taxes would be taxed at a higher rate than ordinary wage compensation. That’s because capital gains taxes (as well as other taxes on capital income) are surcharge taxes, assessed on top of the tax on compensation. An illustration will explain why.  Jenny and Jack each work a day and earn £1. Jenny spends her £1 right away. Jack invests his £1, waits for it to double, and then spends the resulting £2. Let’s see how the tax code affects them. First add a wage tax - Jenny and Jack each work a day, earn £1, pay 50p tax and have 50p left over. Jenny spends her 50p right away. Jack invests his 50p, waits for it to double, and then spends the resulting £1.

What does the wage tax cost Jenny? Answer: 50% of her consumption (which is down from £1 to 50p). What does it cost Jack? Answer: 50% of his consumption (which is down from £2 to £1). In the absence of a capital gains tax, Jenny and Jack are both being taxed at the same rate.  Now add a capital gains tax, let’s say 10% - Jenny and Jack each work a day, earn £1, pay 50p tax and have 50p left over. Jenny spends her 50p right away. Jack invests his 50p, waits for it to double, pays a 5p capital gains tax, and is left with 95p to spend. What does the tax code cost Jenny? Answer: 50% of her consumption (which is down from £1 to 50p). What does the tax code cost Jack? Answer: 52.5% of his consumption (which is down from £2 to 95p).

So there you have it: A 50% wage tax, together with a 10% capital gains tax, is equivalent to a 52.5% tax on Jack’s income. In fact, you could have achieved exactly the same result by taxing Jack at a 52.5% rate in the first place: He earns £1, you take 52.5% of it, he invests the remaining 47.5p, waits for it to double, and spends the resulting 95p.
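The Jenny-and-Jack arithmetic can be verified line by line:

```python
# The wage tax and capital gains tax from the example, composed in code.

WAGE_TAX = 0.50   # 50% on earnings
CGT = 0.10        # 10% on capital gains

def jack_spending(wage_tax, cgt):
    after_wage_tax = 1.0 * (1 - wage_tax)   # 50p left from the £1 earned
    doubled = after_wage_tax * 2            # the investment doubles it to £1
    gain = doubled - after_wage_tax         # the 50p capital gain
    return doubled - cgt * gain             # 95p after the 5p CGT

spend = jack_spending(WAGE_TAX, CGT)
effective_rate = 1 - spend / 2.0            # measured against the untaxed £2
print(f"{spend:.2f} {effective_rate:.3f}")  # → 0.95 0.525

# The same 95p falls out of a single 52.5% tax at the earning stage:
one_shot_spend = 1.0 * (1 - 0.525) * 2
print(f"{one_shot_spend:.2f}")              # → 0.95
```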

Why is this so hard for so many supposedly intelligent people running our country to understand? Just like in the above illustration, people tend to look at a wealthy businessman with a £1 million capital gain on his investment, and they forget that were it not for wage taxes, he would have invested twice as much and earned a £2 million capital gain. In that sense, the capital gain is taxed in advance.  Who you want taxed the most is one thing – but there’s no room for rational debate about the impact of the tax code, which is a matter of simple arithmetic.  The arithmetic shows quite clearly that anyone who pays taxes on capital income is effectively paying at a higher total tax rate than anyone who doesn’t.

To be fair, it is awkward suggesting that people who inherit wealth should have the right to lifetime tax-free income, but it's also disingenuous to suggest that people are inheriting this money tax free - they are not.  If you inherit £500,000 from a relative, what is being forgotten is that your relative paid tax on that £500,000 (let's say 50% for simplicity's sake). As a consequence you inherited £500,000 instead of £1 million, which means £500,000 of income tax has effectively been paid before your capital gains even begin to be taxed.

I must end by saying, when Mr Pomsonby-Smythe leaves his £10 million fortune to his layabout son who has never worked a day in his life, I, as a caring member of society, am quite comfortable with a system that sees over £4.5 million of that money given to the Government and spent on things like health, schools, transport and social services for the elderly.  But on the other side of the coin, I dislike the sense of entitlement that Governments feel they have over people's equity, because that equity is the leftovers of a sum that has already been taxed by the Government.

A further issue is that not everyone worth, say, £500,000 is in the same position.  In London you can find someone worth half a million who has four children to send to university, a high council tax bill, and generally high expenditure, and you can find someone else worth half a million who dosses about all day wasting it on drugs, booze and yuppie parties. That's why I think we shouldn't have a blanket threshold - we should assess each situation on a case by case basis.  Yes, that will require extra resources for those actuarial studies, but I have an easy way to pay for it – we can do so by drastically reducing the number of MPs and the number of local councillors who are ornaments for every district and county council in the land - that is just pure waste through and through.  I think there are at least 250 MPs too many, and 50-60% too many local councillors, all of whom are sucking expenses funds out of central services and every district and county council.  Let’s drastically reduce those, and spend the money on being more resourceful with taxation.  There - problem solved.

* Incorporating a consumption tax on top of an earnings tax is equivalent to raising the earnings tax, whereas incorporating a capital gains tax or an inheritance tax on top of an earnings tax is not equivalent to raising the earnings tax, since it brings about a disincentive to save.

** Point of note, though, this model relies on indifference curves and therefore applies only to consumption goods, not capital gains, income tax or other non-consumer taxes.

Wednesday, 13 February 2013

Freakonomics Exhumed: Making Decisions By Tossing A Coin

As many of you who've had the pleasure will know, there is an excellent book by Steven D. Levitt and Stephen J. Dubner called Freakonomics, in which the authors apply economic theory and research to many fascinating social situations.  This is the sort of book that will show, for example, how the legalisation of abortion in America went on to reduce crime rates 15-20 years later in the states in which it happened, due to the fact that the future criminals were being aborted instead of being born.

I noticed on the Freakonomics webpage that they are now having fun with social science by researching how people get on when they flip a coin to decide on a course of action.  Whether it's deciding to move house, switch jobs, ask a girl out, change cities, ask for a divorce, go to a concert, or take up a sport or hobby, they want to know how it will turn out for you if you made the decision by flipping a coin instead of carefully thinking it through.

When reading Freakonomics a few years ago, I seem to recall Levitt and Dubner showing that companies that hire big reputation consultants and sports teams that hire prestigious coaches get from their investments no better results than those of a random coin toss.  I guess their current coin-toss research will eventually reach fruition by giving us some indication of whether or not our carefully thought out choices work out better for us than those made by the random walk of coin tossing.

Recording whether our strategic and so-called 'well thought out' decisions return a better than 50/50 payout could turn out to be valuable, or at the very least edifying. One note of caution though - it won't be entirely random, because although one could make a lot of life decisions using a random walk model, the decisions one chooses to submit to the random walk model mostly aren't random.   Whether the protagonist flips a coin to decide whether to shout at his neighbour or, say, flips a coin to decide whether to buy her some flowers or lend her a copy of Freakonomics is a choice that is nothing like as random as the coin toss itself.
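As a sketch of what the researchers would see if 'careful' decisions really did succeed no more often than chance – the equal 50% success rates below are an assumption built into the illustration, not a research finding:

```python
import random

random.seed(42)  # reproducible illustration

def success_rate(p_success, n_decisions=10_000):
    """Fraction of decisions that 'work out', each succeeding
    independently with probability p_success."""
    return sum(random.random() < p_success for _ in range(n_decisions)) / n_decisions

coin_flip = success_rate(0.5)   # decisions delegated to the coin
deliberate = success_rate(0.5)  # 'well thought out' decisions with the same odds
print(coin_flip, deliberate)    # statistically indistinguishable
```

If deliberation carries no edge, the two samples differ only by noise – which is exactly the null hypothesis the coin-toss study would be testing against.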

My other issue with the experiment is that in a great many cases I don't think there is a reliable way to know what constitutes a good outcome.  Good compared to what?  In such an experiment the good is being weighed against the weight of the other alternative - but one can never know how that would have turned out.  Say Mrs Jones from London flips a coin, and on landing a tails decides to go to America for her holiday instead of Canada.  She has no way of knowing whether the results of the coin toss turned out better or worse, because she doesn't have the Canada trip with which to make the comparison.  Most of your life decisions may have turned out better or worse if you'd picked an alternative decision, but in most cases you can't know whether the grass is greener on the other side or whether it's better the devil you know, irrespective of whether you decided by coin flip or careful consideration.

I do, however, agree with the authors that many of the results of our so-called 'well thought out' decisions are, in fact, just as random as the toss of a coin (this has been shown to be true in sports, management, consultancy and the stock exchange, to name but four examples).

Hopefully, though, you won't flip a coin to decide whether or not to carry on reading my blog - otherwise my readership might shrink by 50%.

Sunday, 10 February 2013

Shrinking The 'God Of The Gaps' To Zero

When it comes to issues poured into the ‘God of the gaps’ melting pot - faith, reason, evidence, science – why is it that so many people get it wrong?  As any inveterate viewers of the BBC's Sunday morning show The Big Questions will know, it is easy to become perpetually frustrated by the poor intellectual calibre of the contributors to the debate.  But this problem extends much further.  Those contributors are only doing on a small scale what those who influence them are doing on a much wider scale.  Too many people just seem so unable to grasp a few simple notions related to knowledge, faith and gaps.  How foolish to say that science is now providing us with, or pointing us towards, knowledge that we once thought was supernatural, as if that is a point against the supernatural.  It isn’t – it is only a point against the error of not paying enough attention to the fact that scientific progress will inevitably bring fresh knowledge and understanding*.  It is obvious that a person who tries to introduce God or the supernatural into an investigation that can (and probably will) be explained in the future by science is acting foolishly and irresponsibly.  The trouble is, most theists do realise this, so the many atheists’ continual allusion to this fault only shows them to be very incapable of seeing the wider landscape.

Here’s how the landscape should be viewed, irrespective of whether you’re a theist, agnostic or an atheist.  The natural world of chemistry and biology provides no means by which one can justifiably claim with any authority that God or the supernatural exists.  Every time someone points to a good or positive part of reality and says that’s good reason to believe in God, one can switch it around and point to a cruel or negative part of reality and say that’s good reason to not believe.  Things won’t improve until people start to realise that nuanced empirical investigation gives no general indicator of God or the supernatural, much less a fait accompli justification for belief or unbelief.  That is why it comes down to personal faith – and why the poets like Blake, Eliot, Tennyson, Donne, Herbert and Rossetti express sentiments that are beautifully suggestive rather than empirically compelling. 

But what is faith?  First I’ll tell you what it isn’t – it isn’t the silly caricature many atheists make it out to be when they say it is opposed to reason or evidence – that’s just buffoonery.  Faith is the position whereby one looks at the whole of nature and trusts that God is the Creator.  If you want to see the real gravitas of faith, it is utterly erroneous to look at nature and ask if there is any evidence for God from within that natural perspective, because all you’re doing is assessing the situation by treating God as though He shows Himself only through radical breaks in normalcy. 

When you hear people talking about the knowledge they have, you should realise that they are commenting on lots of links in a chain of observed instances of cause and effect.  One may easily justify believing A because of B, and B because of C, and so on, but at some point we reach a terminus where justification for that chain of beliefs halts and no further connectivity is retained.  The most logical thing to conclude is that the terminus is reached with the origins of the universe.  With our current understanding, the big bang is an obvious epistemological place at which one can plant one’s flag of ignorance in the ground.  Our epistemological trail stops dead at physics, leaving us only mathematics and our ability to imagine what, if anything, lies beyond physics (and maybe even beyond mathematics too).  Either you can find faith or not, but don’t get drawn into the world of crass distortions – it will only make you look incompetent.

To intelligent theists and atheists there should be no ‘gaps’ at all – because faith isn’t like looking for water in the Atlantic ocean, it is about trusting that the whole story is part of a grand narrative.  For me, there are no gaps – because I never tried to erroneously fill them with supernatural explanations in the first place – I was always happy to let science take care of the physical parts of reality, and at the same time have faith that that physical reality is part of a grander cosmic narrative over which God has ultimate control. 

Sunday, 3 February 2013

If We’re Going To Discriminate, Let’s Do It Properly

There’s talk at the moment in Westminster, and this morning on BBC’s The Big Questions, about whether the police should recruit more black officers through positive discrimination.  This has thrown up other issues about whether minority groups (Muslims and black youths, specifically) are unfairly targeted in situations like airport searches and police stops on the street.  Naturally a few people who feel discriminated against have made a lot of noise this week.  Who is right?

It might be easier to show who is right by showing who is wrong.  When it comes to positive discrimination, it seems quite clear to me that both groups have got the situation entirely backwards.  Those who condemn discrimination and the supposed undermining of civil liberties by arguing that the police should positively discriminate in favour of more black people are missing the fact that their proposal is simply another kind of discrimination with the signs reversed.  I’m against this kind of positive discrimination because, quite simply, you cannot artificially smooth the path for one group (whether it be for more black officers in the police force or more women in Parliament, or whatever) without artificially hindering the path of the rest of the group (or groups) that fall outside of the purview of the group for whom you are trying to positively discriminate. 

What the minority groups should be asking is whether the low numbers of black police officers are due to other factors that are not being considered properly (that’s a future Blog perhaps).  That is to say, you would think someone pretty idiotic if they said that the primary reason that there are so few female garage mechanics or female bricklayers is that women are being discriminated against.  Jobs should be awarded on two things: on merit (skills, experience, personality, enthusiasm) and on the basis that certain groups of people do actually want these jobs.  If most women don’t want to be bricklayers, and if most police officers are white due to merit, desire, or some other reason, then this needs to be acknowledged before anyone makes an automatic assumption of unfair discrimination.

What about Muslims at airports?
Now to the people who are arguing that stopping a disproportionate number of Muslims at airports or stopping a disproportionate number of black youths for police searches is discrimination – I’m afraid they have got their reasoning entirely backwards too.  I make no comment here about whether targeting Muslims and black youths is preferable to random distributions, but let’s get the facts straight – it is not unfair discrimination, it is the opposite of unfair discrimination.

When it comes to who is statistically most likely to provide the biggest terrorist threat in an airport (or any public place), you know which group it is - it is fundamentalist Muslims.  Sure, most Muslims aren’t terrorists, but that’s irrelevant – the relevant thing is that most dangerous terrorists are Muslims.  You can be politically correct and insist that people at airports are searched in an entirely random fashion, but then you are unfairly discriminating against the vast majority of groups who statistically pose virtually no terrorist threat.  Those who say that targeting one specific group (even if they are the most likely group) is undermining civil liberties have missed the most important points.  In the first place, in net terms, detaining 15 Muslim men is no more of an infringement than detaining 15 passengers randomly selected, because in each case 15 people are being detained.  But in the second place, detaining 15 randomly selected passengers instead of a high-probability group is much more of an infringement of civil liberties, because if you’re going to detain 15 people, you should at least detain 15 people who are statistically more likely to be terrorists.  If you are one of the many in the randomly selected group that is statistically almost certainly not going to be a terrorist then you have been unfairly discriminated against.  The airports will have discriminated unfairly in order to assent to a spurious adherence to political correctness, and that is not a good thing.

Now, just because you detain certain targeted people, that doesn't mean you cannot treat them fairly and with respect.  If most of the terrorists likely to blow up your plane happen to belong to the same faith as you, then being detained and questioned is a burden that Islam pays.   Not only is this inevitable, it is actually prudent if you want people to do their jobs more efficiently.  If you want to be safer on planes or in tube stations, you want the group most likely to put you and your children at risk to be the ones targeted – you don’t want the authorities to waste time detaining and searching your grandmother Betty or your aunt Doris.

The same applies to keeping our streets safe.  Statistics show that black youths commit 65% more crimes than whites, which means that it is unfair discrimination to choose a random sample of 100 people rather than a larger proportion of black youths.  To see this logic in more explicit terms, let’s pick a really extreme hypothetical to make the point clearer (for those who need it).  Consider that around 95% of youth crime in London comes from gangs on estates in rough boroughs of London.  Suppose that in wanting to tackle youth crime with strict adherence to political correctness the police decided they would randomly search 1000 people all over London (including, say, those in Kensington, those queuing up for the London Eye, those outside Harrods, etc) rather than randomly searching 1000 people in gangs on estates in rough boroughs of London.  It is obvious that here the level of unfair discrimination would be higher not lower, because just like in the airport, in that 1000 sample space you're targeting people who are some of the least likely to be complicit in youth crimes.  If the police are spending valuable time and resources searching your grandmother Betty or your aunt Doris when they go out to the bingo and the supermarket, then they are not searching youths who are statistically more likely to be committing crimes while Betty is on her way to bingo and Doris is on her way to buy her soup and bread at Sainsbury’s.

For those who are still somewhat ill at ease that this is how the world works, and that many Muslims with no intention of terrorism will be detained at airports, the individuals in question could easily be compensated for their time.  Let me put it another way; would you pay an extra £1 added on to your plane ticket price to drastically reduce the chances of your plane having a terrorist on board?  I’ll bet most people would – I know I would.  According to the airport statistics, 68,068,304 passengers visit Heathrow every year – so let’s round that down to a simple 60 million.  If the airline (and all other airlines) added £1 to every flight ticket and called it a ‘Terrorist prevention surcharge’ then Heathrow and other airports connected to those Heathrow flights would have £60 million per year to spend on compensating the people being detained and searched, and on staff wages to employ people to do the detaining and searching.

Let’s say it would cost £10 million per year in wages and additional personnel costs – that leaves £50 million in compensation money, which means you could give every detainee £25 compensation for a ten-minute search and afford to search two million people per year, which works out at nearly five and a half thousand people per day.  Such measures would act as a huge disincentive for would-be terrorists to even try to board planes, would create £10 million worth of jobs, and would make every passenger feel much safer on their travels.  Better that than the current politically correct policy of pretending that Muslim men in their twenties are no more of a threat than your grandmother Betty or your aunt Doris.
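The arithmetic above is easy to sanity-check. A few lines of Python, using the rounded figures from the text, confirm the numbers:

```python
# Sanity check of the surcharge arithmetic, using the rounded
# figures from the text above.
passengers_per_year = 60_000_000           # ~68 million, rounded down
surcharge = 1                              # £1 per ticket
revenue = passengers_per_year * surcharge  # £60 million per year

wages = 10_000_000                         # assumed staffing cost
compensation_pot = revenue - wages         # £50 million left over

payout_per_search = 25                     # £25 per ten-minute search
searches_per_year = compensation_pot // payout_per_search
searches_per_day = searches_per_year / 365

print(searches_per_year)         # 2000000
print(round(searches_per_day))   # 5479 -- "nearly five and a half thousand"
```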

Friday, 1 February 2013

What Would Happen If We Rewound Evolution?

The famous palaeontologist Stephen Jay Gould once pondered what the results would be if we wound back the clock of evolution half a billion years and observed the trajectory.  In his book Wonderful Life Gould argues that if we were given the chance to "rewind the tape of evolution and play it again" we would find ourselves living in a world populated by a different kind of animal kingdom, maybe one without primates (and, as a corollary, without humans).  This hypothesis (often called the 'contingency' hypothesis) is based on the observation that fitness for conditions in any current climate does not ensure long-term survival, because conditions change rapidly, and the things that had good evolvability at time x might not have good evolvability at time y. 

Arguing to the contrary, Simon Conway Morris in his book The Crucible of Creation propounds the view that humans are the unique yet intended goal of evolution.  While Conway Morris does not want to imply that evolution is driven by mysterious goal-directed forces like supernatural intelligence, he appeals to the argument of 'convergence' to contend that, at least in its broad outlines, the outcome of evolution is predetermined.  Convergence occurs when two lines of evolution independently develop the same or very similar structures, such as when reptiles and mammals independently evolved a fish-like body plan (as per the ichthyosaur and the whale respectively).

To a great degree the path of evolution is one of contingency: mutations are random and cannot be predicted with foresight, and sexual reproduction combines genes randomly.  By ‘mutations are random’ we mean that specific mutations do not arise in response to the selection needs of an organism or population.  These random mutations, along with gene flow, genetic drift and natural selection, are the mechanisms that facilitate evolutionary change.  Natural selection – the mechanism by which the fittest genes survive – is the main driving force that underpins evolution*.

To ask whether Stephen Jay Gould is right in his view that if we rewound all those mutations, migrations, genetic drifts and natural selection, and played the tape anew, nothing close to human beings would be likely to evolve again, we first need to consider the tape we are rewinding.  Gould need not think of the ‘rewinding’ of evolution purely in terms of the hypothetical, because once we think of evolution as being made up of constituent parts, we can observe that parts of the tape have been replayed many times, because different species have evolved independently in similar ecological conditions.  Eyes and wings, for example, have evolved separately many times over.  So we can make a reasonable estimate that a rerun of evolution would bring about eyes and wings, because they are such useful traits to acquire.  We have evidence, then, that once evolution reaches a certain level of productivity across those vast ecological niches, some traits seem highly likely to evolve again. 

In the sense being described, it is hard to justify thinking of eyes and wings as mere products of contingency; we ought to consider that the system of evolution has something more powerful underpinning it.  If understood in the right way (and there are many wrong ways to understand this) the powerful underpinning it has is 'progress' through a ratchet mechanism.  People often hate the term 'progress', but if it is taken only to mean the direction that organisms tend to take, and the increased complexity that will occur with ratchet mechanisms, then (as long as we are mindful of that caveat) by progress we can mean something like “successfully surviving and reproducing”.  What I think is true about evolution is that this progress is obviously not some accidental act of serendipity that keeps popping up randomly.  Why did wings evolve independently in birds, bats and insects?  Why has echolocation sprung up in bats, birds and dolphins?  Why has antifreeze evolved independently in both the Arctic and the Antarctic?  Why are eyes virtually everywhere you look (pun intended)?  It’s not just random chance – it is because progress (where progress = successfully surviving and reproducing) is built into the system.  In one sense the whole system is about survival, so through an implicitly human lens we can look at the evolutionary story and see animals getting good at surviving and reproducing.  
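That ‘built-in progress’ claim can be illustrated with a toy simulation (not a biological model – the trait values, mutation size and number of generations here are all arbitrary choices of mine). Five independent ‘replays’, each with a different random history of mutations, converge on much the same trait value, because while the mutations are random, selection is not:

```python
import random

def evolve(seed, optimum=0.75, generations=2000):
    """Toy hill-climber: mutations are random, selection is not.
    A mutation survives only if it moves the trait closer to the
    optimum set by the 'environment'."""
    rng = random.Random(seed)
    trait = rng.random()                     # arbitrary starting point
    for _ in range(generations):
        mutant = trait + rng.gauss(0, 0.01)  # random mutation
        if abs(mutant - optimum) < abs(trait - optimum):
            trait = mutant                   # selected
    return trait

# "Replaying the tape" with five different random histories:
runs = [evolve(seed) for seed in range(5)]
print([round(t, 2) for t in runs])  # every run ends near 0.75
```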

But here’s the most impressive thing about evolution: the constituent parts of biological evolution tend towards ratchet-type progression, but not by having any direction towards better outcomes – this mechanism occurs purely through the self-organisation of the individual units (think of an ant colony as a great example).  Let's look at self-organisation in more detail.

Self Organisation
In self-organising systems each constituent obeys some basic rules determining its wider interaction with other constituents of the system, resulting in highly ordered behaviour in biological evolution.  The best human analogue I know is the economy, which looks after itself in a similar way, but in this case the self-organising units are human beings obeying some basic rules of supply and demand, which determine the wider self-organising economic system.  This is what Adam Smith’s term ‘the invisible hand’ means in economics.  For Smith, the invisible hand acted as a social mechanism that channelled collective objectives toward meeting the needs of the people who made up a society, by ensuring competition between buyers and suppliers, which channels the profit motive of individuals into providing products that society desires at prices rarely far above cost.  What was soon observed – and it makes the argument for laissez-faire economic philosophy a strong one – is that markets automatically channel self-interest toward socially desirable ends.  I don’t mean to suggest that the results of an open market economy are all due to skill – there is a lot of luck too.  But what the invisible hand does is enable people to be skilful and lucky. 
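Smith’s mechanism can be caricatured in a few lines. This is a Walrasian price-adjustment sketch rather than anything from Smith himself, and the demand and supply curves are invented purely for illustration – but it shows a price settling at equilibrium with nobody setting it:

```python
def adjust_price(price=10.0, rounds=100, k=0.1):
    """Toy 'invisible hand': nobody computes the equilibrium;
    the price simply rises when demand exceeds supply and
    falls when supply exceeds demand."""
    for _ in range(rounds):
        demand = 100 - 4 * price   # invented linear demand curve
        supply = 2 * price         # invented linear supply curve
        price += k * (demand - supply)
    return price

# Equilibrium is where 100 - 4p = 2p, i.e. p = 100/6 ≈ 16.67
print(round(adjust_price(), 2))  # 16.67
```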

Have you ever seen the wonderful synchrony of birds flying in a large flock, each perfectly attuned to the flying patterns of the rest?  Like the ant colony, that is a beautiful example of a self-organising structure in which each individual contributes to the whole while looking after its own interests.  I have a friend who is a computer programmer; he can use complex systems theory to simulate this flying, using basic rules determining how each individual of the flock responds to its neighbours.  The most notable thing is that one doesn’t need any governing central planning principle to underpin the system – all that’s needed is the procedure by which each bird looks after itself, and those beautiful flight patterns emerge in collective form. 
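Such simulations are usually some variant of Craig Reynolds’ ‘boids’ model: three local rules (cohesion, alignment, separation) and no central controller. Here is a bare-bones sketch – the rule structure is the standard one, but all the constants are arbitrary choices of mine rather than canonical values:

```python
import math, random

def step(boids, dt=0.1):
    """One update of the flock. Each boid steers toward the others'
    centre (cohesion), matches their average velocity (alignment),
    and moves away from very close neighbours (separation)."""
    new = []
    for x, y, vx, vy in boids:
        others = [b for b in boids if (b[0], b[1]) != (x, y)]
        n = len(others)
        cx = sum(b[0] for b in others) / n    # centre of mass
        cy = sum(b[1] for b in others) / n
        avx = sum(b[2] for b in others) / n   # average velocity
        avy = sum(b[3] for b in others) / n
        sx = sum(x - b[0] for b in others if math.hypot(x - b[0], y - b[1]) < 1.0)
        sy = sum(y - b[1] for b in others if math.hypot(x - b[0], y - b[1]) < 1.0)
        vx += 0.01 * (cx - x) + 0.05 * (avx - vx) + 0.1 * sx
        vy += 0.01 * (cy - y) + 0.05 * (avy - vy) + 0.1 * sy
        new.append((x + vx * dt, y + vy * dt, vx, vy))
    return new

random.seed(1)
flock = [(random.uniform(0, 10), random.uniform(0, 10), 0.0, 0.0)
         for _ in range(20)]
for _ in range(200):
    flock = step(flock)   # ordered motion emerges from local rules only
```

No bird knows the flock’s shape; each only reacts to its neighbours, yet the collective stays together and moves coherently.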

This is the kind of mechanism that drives evolution.  Organisms in biological evolution lock in the beneficial changes, whereby the ratchet mechanisms of natural selection facilitate successful dynamic structures that have good reproductive qualities.  These structures (like eyes, wings and echolocation) are selected because they have the adaptable qualities needed for self-maintenance against the tumult of a world in restless change.  This is what is meant by organisms being adapted to their ecological niche – obviously evolving antifreeze in the bloodstream is far more beneficial in the Arctic than it is in Africa.  In generalised terms, what makes this model representative in complex-systems theory is that the same procedure is observed in everything from physics to information theory, right through to biological evolution and the economy.  We have mathematical systems that generate algorithms encoding successful models of the world, thereby allowing constituent parts to anticipate aspects of the wider organisation of the system, but without any central planning from on high.

Prediction and Patterns
Clearly when we talk about predictability in evolution, and whether there is any inevitability in the system, the answer is: it all depends on what we’re trying to map.  Mathematical systems that are random or chaotic on the whole can give rise to pockets of order – it just depends on the resolution of the parts on which we zoom in.  For example, here is a very evident pattern we can observe in human history.  When social and technical advance feeds population growth and population growth feeds social and technical advance, we find a kind of predictable pattern that gathers mass, size and momentum.  To see why recent progress arrived in step with population increase, think about how the world has gone for the past 200,000 years.  For the first 199,800 of those years we had low global populations, and humans lived in meagre conditions, with much primitivism, low life expectancy and high infant mortality.  Until recently in our 200,000-year history we lived in pretty poor circumstances, just above the subsistence level.  Then a couple of hundred years ago something changed.  People started to become more scientific, more empirically minded, richer, and populations began to increase more rapidly (it’s still going on)**. 

But how reliable is that pattern?  Overall, not that reliable – particularly if we want an indication of the supposed outcomes of replaying the tape of evolution and asking if humans will come about again.  Not only does the evolution of life run concomitantly with external factors such as meteorites hitting the earth, volcanic explosions, severe droughts and ice ages, where the fittest do not always survive – human behaviour is chaotic enough not to be predictable at a wider level.  By chaotic I don’t mean erratic (although it is erratic too) – I mean simply that (as with the weather) the initial conditions at any one time are not conducive to long-term predictions, because there’s no telling what technology humans will discover in the future, what global conditions will fall upon us, or which of the many possible man-made catastrophes we might engender. 

Just as people in the 17th century couldn’t have foreseen what would emerge as the result of the industrial revolution, we cannot foresee what will happen in the next few hundred years.  Given what we’ve seen in the last century, it is perfectly reasonable to predict that humans have found the method to lock in progressive techniques, and that (save for any unforeseen catastrophes) that progression will continue.  But we cannot know for certain, because the broader picture is beyond the immediacy of our sphere of local and present forecasting (predicting societal change is a bit like predicting the weather – we can only do short-term forecasts).  With short-term forecasts we can use economic equations to predict some economic parameters and show a trend towards short-term progression (short terms can exceed our lifetime), but, as is the case with the weather, the chaos makes long-term predictions unreliable.

Here evolution and society follow a similar heuristic – and that can be expressed with the model of computational irreducibility devised by Stephen Wolfram.  To see what X will do, one has to "run" through the full computational nexus of the system – there is no more succinct analytical method of finding out what biological evolution or a society will do than running the full computational simulation of that system, and that will take as long as the whole thing took (in the case of evolution, around four billion years).
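Wolfram’s stock example of computational irreducibility is the Rule 30 cellular automaton: as far as anyone knows, the only way to learn what the pattern looks like after n steps is to compute all n steps. A minimal sketch:

```python
def rule30(cells):
    """One step of Wolfram's Rule 30: each cell's next state is
    left XOR (centre OR right), with the edges padded with 0s."""
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

# No shortcut: to know the state after 20 steps we must run all 20.
row = [0] * 20 + [1] + [0] * 20   # a single 'on' cell in the middle
for _ in range(20):
    row = rule30(row)
print(''.join('#' if c else '.' for c in row))
```

Despite the triviality of the rule, the pattern it produces is complex enough that no closed-form equation is known to predict it – you just have to run it.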

You may have heard of non-linear feedback systems.  What happens in the case of societies is this: social and technological advance and a rising population feed off each other, whereby once the ratchet mechanisms lock into place, social and technical advance feeds population growth and population growth feeds social and technical advance.  Of course, the ratchet effects of locking into place don’t have to come in quick fait accompli bursts – they can be gradual; over the course of about 8,000 years we had creative peaks in the form of previously unseen social and technical developments like the agricultural revolution, the emergence of writing, philosophy, science, the printing press, mechanical engines and electricity, all of which helped bring about increasing technological accomplishment, increases in prosperity and scientific paradigm shifts. 
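That feedback loop can be caricatured with two coupled difference equations (the growth coefficients here are invented purely for illustration): population grows faster when the technology level is higher, and technology grows faster when there are more people to innovate:

```python
def feedback(pop=1.0, tech=1.0, steps=50):
    """Toy non-linear feedback: each variable's growth rate depends
    on the other's level, so growth compounds once both are moving."""
    history = [(pop, tech)]
    for _ in range(steps):
        pop  *= 1 + 0.01 * tech   # technology feeds population growth
        tech *= 1 + 0.01 * pop    # population feeds technological growth
        history.append((pop, tech))
    return history

hist = feedback()
gains = [b[0] - a[0] for a, b in zip(hist, hist[1:])]
print(gains[-1] > gains[0])   # True: each step's growth exceeds the last
```

Run forward, the curve looks flat for a long stretch and then bends sharply upward – the same shape as the 200,000-year pattern described above.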

So what we have is a system of order and progression, but also a system of chaotic perturbation that makes the overall system pretty intractable, and not amenable to succinct equations that consistently map societal change and evolution to predictable mathematical analysis.  Whether it be the Cambrian explosion, the industrial revolution, or the FOXP2 gene mutation implicated in the human capacity for language, one can never know when the favourable junction of conditions will come into place and bring forth new horizons and vistas of potential change – which is why I agree with Wolfram that we are dealing with computationally irreducible systems here.

So in the most important sense, Stephen Jay Gould was right that if we rewound the tape of evolution we couldn’t expect to see the same outcomes, much less believe that humans are inevitable.  But Simon Conway Morris is also right that despite the overall chaotic perturbations that underpin the system, there are framework structures that are simply the best for certain adaptive purposes.

Any animal needing to survive a freezing climate, any vertebrate needing to swim in the water, and any creature needing to fly is going to evolve in the same general direction, simply by being successful in passing on genes that favour survival.  The reason is that the genetic constraints imposed on developmental pathways, together with the demands of the environment, restrict the evolutionary routes available.  So if we reran the tape, the details would be different, but the framework would probably be more or less the same.

 * A note about randomness and chance: there is a certain chance element in the process – as I said, mutation is a process of random chance – but it is only random with respect to improvement.  Mutations enrich the gene pool whose diversity natural selection acts upon.  Natural selection is the non-random survival of randomly varying genetic codes – survival which depends upon their phenotypic effects, as expressed through embryogenesis, which determine survival or non-survival.  Survival and reproduction depend upon the passing on of the genetic code of instructions that built these organisms, equipped them and made them conducive to survival and reproduction.

** This progression can be explained by a simple rule of thumb – people innovate, improve and provide answers to problems, and the more people, the more innovation, improvement and problem-solving.  The more ideas, and the more people to share those ideas with, the more humans prosper.  It’s no coincidence that each half-century has been progressively better than the last, and that the most recent times have been more globally prosperous than any time in history.  That’s largely because we have 7 billion people on the planet – more ideas, more innovation, better technology, improved economic freedom, peak human liberation, and more global communication and potential to help the neediest.  When the world has 8 billion people it will be even more prosperous; when it has 9 billion, more prosperous still.  It’s no coincidence – the burst in population over the past 200 years has been the primary cause of our burst in prosperity (200 years is only 0.1% of 200,000 years).  That peak population and peak progression both fall within 0.1% of the entire human timeline suggests that there is a clear pattern at work.