Tuesday, 23 October 2018

On Evolution & Random Walk


In one of my most popular papers, I wrote about how the universe is governed by a biased random walk, giving us a Divinely choreographed mathematical skew that can create enough order to facilitate stars, evolution and life. Many of you have asked whether, in that case, biological evolution is also governed by a biased random walk. Actually, nobody has asked that - but it is exactly the sort of interesting and intelligent question a reader on the ball would have asked, because it’s precisely the right question to be asking.

As a reminder, a random walk describes a path derived from a series of random steps in a mathematical search space - so, for example, whether a drunk man staggers left or right with each step of the walk is entirely random (a 50% chance of each), as is his final destination after N steps. On this model, if you mapped the drunkard at point A and tried to predict his position after, say, 100 steps, you could not deterministically predict his final location.
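That unpredictability is easy to demonstrate in a few lines of Python - a minimal sketch of my own (the function name and step count are purely illustrative):

```python
import random

def drunkard_walk(steps):
    """Simulate an unbiased 1-D random walk: each step is left (-1)
    or right (+1) with a 50% chance of each."""
    position = 0
    for _ in range(steps):
        position += random.choice((-1, 1))
    return position

# Two 100-step walks from the same starting point A (position 0)
# will generally end in different places - the final location is
# not deterministically predictable.
print(drunkard_walk(100), drunkard_walk(100))
```

Run it twice and you will almost certainly get different endpoints each time - that is the drunkard’s walk in miniature.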
 
So let’s ask the question, then: is biological evolution governed by a random walk process? The answer is yes and no, but mostly yes. If all the constituent parts in evolution’s mathematical space (which I’ve called ‘morphospace’) are randomly walking through evolution with their distinct genetic drift and mutations, contingency says that, like a group of drunkards walking independently, we would expect them to arrive randomly at different evolutionary endpoints. However, evolution is not a purely random walk process - and there are two reasons why: one is fairly simple, and the other is pretty complex. Let’s start with the simple one, as discussed by Richard Dawkins in his book The Blind Watchmaker.

“I don't know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. Let us limit the task facing our monkey somewhat. Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence 'Methinks it is like a weasel', and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence?”

That describes what evolution would be like if it was a random walk process. But of course, it isn’t, as Dawkins is happy to acknowledge:

“We again use our computer monkey, but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before ... it duplicates it repeatedly, but with a certain chance of random error – 'mutation' – in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase METHINKS IT IS LIKE A WEASEL. The sequences progress through each generation:

Generation 01:   WDLTMNLT DTJBKWIRZREZLMQCO P

Generation 02:   WDLTMNLT DTJBSWIRZREZLMQCO P

Generation 10:   MDLDMNLS ITJISWHRZREZ MECS P

Generation 20:   MELDINLS IT ISWPRKE Z WECSEL

Generation 30:   METHINGS IT ISWLIKE B WECSEL

Generation 40:   METHINKS IT IS LIKE I WEASEL

Generation 43:   METHINKS IT IS LIKE A WEASEL”
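Dawkins’ computer monkey is simple enough to reconstruct. The sketch below is my own Python approximation of the idea - the mutation rate and brood size are my assumptions, not Dawkins’ published parameters, and this version keeps the parent phrase each generation, which makes the cumulative, ratchet-like behaviour explicit:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 capitals plus the space bar

def mutate(parent, rate=0.05):
    """Copy the parent with a small chance of a random error at each position."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def score(phrase):
    """Number of positions that already match the target phrase."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def weasel(brood=100):
    phrase = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while phrase != TARGET:
        generation += 1
        # Breed mutant copies and keep whichever 'progeny' (or the parent)
        # most resembles METHINKS IT IS LIKE A WEASEL - cumulative selection.
        phrase = max([phrase] + [mutate(phrase) for _ in range(brood)], key=score)
    return generation

print("Target reached in", weasel(), "generations")
```

Running it, the target is reliably reached within at most a few hundred generations, whereas single-step selection - typing all 28 characters at random and hoping - would be expected to take on the order of 10^40 attempts.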

What this describes is what is called a ratchet effect (cumulative selection) where beneficial traits lock into place, rather like how card players get to keep their favoured cards after each shuffling of the deck. To expound on this, evolution requires four fundamental things to underpin the system.

1) Variation: there is variation in traits.

2) Inheritance: these variations can be passed on to offspring.

3) Differential survival (and reproduction): given the reproductive potential of most organisms, a population should be able to grow - though this is not always what happens, of course, because not every individual survives to reproduce.

4) Natural selection: individuals with heritable traits that make them more likely to survive and reproduce tend to pass on their genetic material.

The above example of METHINKS is not precisely illustrative of how natural selection works; rather, it illustrates how cumulative selection can lead to rapid change over a relatively short period of time. The analogy was used to answer the criticism that there has not been sufficient time for particular structures to evolve by ‘random chance’. The analogy shows that random variation can lead to rapid organisation of structure, provided that there is selection for the structure. The analogy defends the ‘rapid’ part, not the ‘selection’ part.

Suppose a specific complex sequence, such as just the letters that make up the word ‘METHINKS’, corresponds to something complicated, like a human eye. The chance of hitting a sequence such as ‘METHINKS’ by fortuity alone is very small - 1 in roughly 209 billion (that is, 1 in 26 to the power of 8, for this 8-letter sequence drawn from an alphabet of 26 letters). Similarly, conjuring up a human eye out of nothing also has a vanishingly small probability; it might as well be zero. But, as I said, this is a poor analogy for evolution, because evolution acts as a ‘ratchet’: when a correct letter clicks into place, it stays there (as indicated by capital letters), so the target phrase can be achieved in far fewer attempts, say 40:

1) 'sgfcsgo' ...

10) 'fETopcrS' ...

20) 'xETrINsS' ...

30) 'METoINKS' ...

40) 'METHINKS'
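Both numbers can be checked directly. The sketch below computes the one-shot 1-in-26^8 probability, then simulates a toy ‘perfect ratchet’ of my own devising - one that locks each correct letter in place, which is deliberately stricter than real selection but captures the lock-in idea:

```python
import random

TARGET = "METHINKS"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

# Single-shot chance: drawing all 8 letters correctly at once.
print(f"1 in {len(ALPHABET) ** len(TARGET):,}")  # 1 in 208,827,064,576 (~209 billion)

def ratchet_attempts():
    """Redraw only the letters that have not yet clicked into place."""
    guess = [random.choice(ALPHABET) for _ in TARGET]
    attempts = 1
    while "".join(guess) != TARGET:
        guess = [g if g == t else random.choice(ALPHABET)
                 for g, t in zip(guess, TARGET)]
        attempts += 1
    return attempts

print("Ratchet hit METHINKS after", ratchet_attempts(), "attempts")
```

The ratchet typically lands on METHINKS within a hundred or so attempts - the same ballpark as the 40 in the illustration - rather than hundreds of billions.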

Now the question is, doesn't there have to be an intelligence to compare the target sequence 'METHINKS' against the sequence that evolution is trying out, or in real terms, the comparing of the 'proto-eye' to the target eye that is evolving? Well, in evolution, intermediates give advantages, and when those advantages accumulate, like in poker when you keep the cards you need for a good hand and toss out the bad ones, more sophisticated survival parts are created.
 
By this model above, the first attempt corresponds to being totally blind. The 10th try might correspond to a patch of photosensitive cells, so the organism can know if it is light or dark. The 20th try might correspond to ridges forming around these cells, so they are in an indentation, and the shadows of the ridges could give some information about which direction the light is coming from. The 30th try could correspond to the ridges starting to close up, so the light comes only through a small hole, so that the organism has much better information about the direction of the light, like a pin-hole camera. The last, 40th try, could correspond to a lens forming over this hole, which focuses light onto the photosensitive cells, resulting in a high quality image.

The point is that 1% of an eye is better than no eye, and 50% of an eye is better than 20% of an eye, and so on. At all stages, this extra light information available to the organism improves its survival value, and so the genes for making 1%, or 20%, or 80%, or whatever, are preferentially passed on to future generations. So it’s not as if an intelligence compares 20% of an eye to a complete human eye and says ‘ahh, this is better than its cousin, with 15% of an eye; I will let it pass on its genes for making this eye’. It is simply that when a predator comes along, the individual with 20% of an eye will see it before its cousin does; the cousin will get eaten and not pass on its genes for making the ‘inferior’ 15% of an eye, but the 20%-of-an-eye individual will pass on its genes. Of the offspring, some might have 19% of an eye, others 21%. Then the 21% individuals will be more likely to survive, and their offspring might have 22% of an eye, and so on, all the way from humble beginnings until a complete, complicated and accurate eye is formed. The target sequence above merely corresponds to something that aids differential reproduction.

Having established all that, here is where we get to the more complex considerations, because underneath all that is a highly complex mathematical picture, which gives us another way to consider random walk. What’s happening in biological evolution underneath that layer is that there is a huge biochemical morphospace with a connected structure through which evolution’s reducible complexity can traverse. Take, for example, irreducible complexity and reducible complexity - they refer to the arrangement of stable organic structures in evolution’s ‘morphospace’, but they cannot primarily be understood at the level of the organism, because morphospace is not an adaptive landscape in which we visualise the relationship between genotypes (or phenotypes) and reproductive success, and model fitness as the height of the landscape. Morphospace is a mathematical model of the form of connectivity between patterns - so a reducibly complex morphospace means that the biological structures that populate the evolutionary landscape form a connected group.

You may think of the system as being like a gigantic sponge made up of very tiny fibrils that connect the evolutionary structure together. If the connection has no broken-off parts, then the random walk of evolution can move across the whole structure. In fact, this is a particularly good illustration because sponges are composed entirely of mobile cells which can move about between different layers of tissue and reallocate themselves to take on different tasks. Sponges have totipotency, which, as you may know, is the ability of a single cell to divide and produce all the differentiated cells in an organism. This allows any fragment of a sponge to regenerate into a self-sustaining organism.

So biological evolution is random walk, but as I said at the start, it is not entirely random walk. Firstly, because the ratchet effect locks beneficial mutations in place, but secondly because although the biochemical engine of evolution is underwritten by probability envelopes concerning whether a particular genetic trait will be passed on to subsequent organisms - and that, at least in terms of the mutations themselves, does approximate random walk - there are sufficient constraints on the system to bias the model in favour of order.

If we take an evolutionary starting point and then generate a random walk on the organism, then probability favours random walk statistics (in formal terms, a Gaussian probability distribution across the search space) - a bell-shaped probability curve with no evolutionary biases. In the actual evolutionary landscape, though, what happens is random walk plus incremental variants in the search space; that is, we see a bias in the system that conforms to the ratchet mechanism of natural selection’s operation for fitness.
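The contrast between the two regimes can be made concrete with a quick numerical sketch (the step counts and the 60/40 bias below are arbitrary assumptions of mine): an ensemble of unbiased walks centres on its starting point, while even a small per-step bias drags the whole distribution steadily in one direction.

```python
import random
import statistics

def endpoint(steps, p_right=0.5):
    """Final position of a 1-D walk stepping right with probability p_right."""
    return sum(1 if random.random() < p_right else -1 for _ in range(steps))

N, STEPS = 2000, 100
unbiased = [endpoint(STEPS) for _ in range(N)]
biased = [endpoint(STEPS, p_right=0.6) for _ in range(N)]  # a 60/40 skew

# The unbiased ensemble averages close to 0; the biased ensemble drifts by
# roughly steps * (2 * p_right - 1) = 100 * 0.2 = 20 positions.
print(round(statistics.mean(unbiased)), round(statistics.mean(biased)))
```

Even a 10-point tilt per step is enough to relocate the whole bell curve - which is the sense in which a biased random walk stops looking like chance.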

What you have to remember is that by the time we get to the level of order in the universe that contains our planet’s chemical substrate, the majority of the cosmological groundwork has already been done. It’s a bit like showing a movie at the cinema: your viewing pleasure is the tiny end part that succeeds all the planning: the screenplay, the casting, the rehearsals, the production, and the filming that went into making the movie. In mathematical terms, biological evolution is like showing an ingenious movie at the cinema that God has already written and produced - what we see is the most accessible elements of a complex creation process that involved ingenious twiddling of the mathematical laws to eventuate in a biochemical random walk substrate on which biological life can flourish.

In terms of evolutionary genetics and inheritance, our movie-watching lens of analysis shows a probabilistic search space of numerous configurational possibilities which generates successful survival machines, with genes using bodies as vehicles for propagation, many of them outliving their hosts by millions of years. Evolution, then, has an isotropic random walk directionality, but a relatively constrained search space in terms of the four fundamental underpinnings I mentioned earlier (variation, inheritance, differential survival and natural selection).

So, in simpler terms, going back to our group of drunkards on a random walk: if they all live in the same apartment block, and the neighbourhood in which they are walking has enough limitations in its road and path structure, and they meet enough friends along the way to gently nudge them back on course, then both of those things mean that they have a much better chance of all arriving back at the same place.
 
The kind of biological evolution we see from those cinema seats can only work if the randomness of the mutations plays out within a very small probabilistic search space - and the groundwork for this was already done when the blueprint for the universe was written into the laws of physics. By the time the second law of thermodynamics gets scrambled into action we have an intricately directed form of entropy: where biology is organised under the constraints of the information implicit in its machinery, and at the same time still remains within the ordinances of the second law of thermodynamics.

Wednesday, 17 October 2018

The Mathematical Bias Theory Redux: Why There Probably ‘IS’ a God – in 20 Steps


This was written about 8 years ago, and published as a Blog post about 6 years ago. It is the summarising thesis that makes up a whole book of material I've written on God, Philosophy and Mathematics. Today's Blog post is the newer, redux version - published today because it contains a few additional analogies and clarification points that should supplement the original work and amplify its key tenets.

Look around and you’ll see a plethora of dubious theology and pseudo-science centred on Creationist ideas and Intelligent Design movements. I reject Creationism and Intelligent Design as being fabrications of the truth. What’s often missed, though, is that real, authentic science gives (I think) some kind of exhibition to the Divine Cosmic Mathematician behind the law and disorder of the cosmos. So here are a few ‘back of the envelope’ style jottings on why I think there almost certainly is a God. I’ve decided to call it The Mathematical Bias Theory: Why There Almost Certainly ‘IS’ a God – in 20 Steps.

Introduction
In science we don’t start by assuming we have all the answers on a plate ready for easy consumption – we spend our time bringing together information and ideas on how to assess variable and diverse protocols, and we work hard to bring them into exquisite theoretical descriptions.  To this end, and through a particular lens, science is descriptive inasmuch as it is about deciphering mathematical patterns that are imposed upon the substance of the cosmic order.  There are many facets to this deciphering that remain too complex or too multi-dimensional for a full cognitive purchase, particularly when we talk of the deeper scientific questions.  Even the complex patterns generated through our observations in, say, quantum mechanics are so complex that they only permit statistical descriptions.  So, naturally, statistical descriptions are human constructions that approximate a reality ‘out there’ – and they can only be considered accurate to the extent that we can formally conjecture about them and create models and labels to communicate them. 

Various proposals have been put forward by physicists as descriptions of nature: there are speculations about string theory, M-theory and other conjectures about multi-dimensionality; conjectures that at sub-Planckian levels the universe has no dimensions at all and is just an arrow line.  We’ve had different conjectures about what spacetime is, the nature of gravity, non-linearity effects in spacetime, a geometric duality that reverses linear dimensions and undermines spacetime, theories about the true nature of particles and waves, or differing kinds of energy and mass relative to differing speeds – the list goes on.  All those examinations have one thing in common – they won’t tell us if there is a Divine Cosmic Mathematician underpinning it all, because they are heuristics that deal only with the descriptive aspects of nature’s law and disorder.  As the last few thousand years of science have shown, our heuristics are almost always subject to augmentation with further knowledge and technology.  Most importantly, though, descriptive science cannot eliminate the burden of contingency related to the ‘Why is there Something?’ question. 

I’ve said all of the above for one good reason; grand theories that explain reality will not take the form of descriptive physics – they will take the form of a conflation of mathematics and philosophy, because both those subjects bootstrap our physical descriptions - and physical descriptions of reality are not complete, as they only simulate possibility.

What I’m going to say isn’t one bullet-pointed proof of God – it is a picture of a worldview that suggests to me that the cosmos is designed by a Cosmic Mathematician.

1) If mathematics is the language to describe the signature of God as some sort of Cosmic Mathematician, then nature can be modelled by some sort of mathematical template or blueprint that deals purely with constitution in numbers. This is because, when seen through the mind of omniscience, nature as we know and engage with it must be amenable to statistical description if concepts like laws, patterns and information are to mean anything. For this reason, given that at a simple human level mathematics is the language we use to embed conceptual reality into patterns of description, I can conceive that a complete and totalising description of reality in the mind of omniscience will be (at least in one form) a complete set of information that consists of mathematical pattern storage.

2) As nature is reducible to bits of information, then to an omniscient mind (with no gaps in knowledge) the whole cosmic spectrum of law and disorder is computable - so even though omniscience has other forms of complex conception that we cannot grasp, we know of at least one way to describe nature in that way – a description of pure pattern storage.

3) From points 1 and 2, a fairly obvious corollary follows. Nature provides us with a form that is descriptively compressible. But descriptive science can only compress so far - we reach a point at which our road to compression hits a conceptual brick wall. Further, even compression doesn't tell the full story because each physical compression requires an algorithm, so the ultimate compression of our cosmos must involve algorithmic precursors to enable compression, so whichever way we look at it, we seem to be faced with a reality that ‘just is’ – and that seems like a miracle. 

Note: Mathematical compression is not to be confused with the reducible complexity found in the material substrate.

4) Given that information is measured using probability, and probability doesn’t have a negative value, the formal structures of data-compressing equations and algorithms must always return ‘something’, not ‘nothing’ – so we can’t reach the point of reducing or compressing the universe out of existence or ‘to nothing’ any more than we could compress one of Hooke’s springs to zero. 

Note: Most complex forms can only be converged upon algorithmically if either the algorithm is executed for a period of time far beyond the capacity of any human or computational machine or if we had access to the initial precursory conditions.  There’s no way of escaping it - given what we know of our universe, those precursory algorithms would have to be alarmingly complex if they underwrite our cosmos, because we know how alarmingly complex our cosmos is, even in its most elemental statistical descriptions.

5) When it comes to ultimate explanations, complexity only comes from precursors that are also at the upper end of the complexity spectrum. That’s not to say, in the simple physics of our universe, that with a long execution time simplicity cannot produce complexity, because it can - but that’s not a satisfactory ‘ultimate’ explanation, because it fails to eliminate the burden of contingency, and it doesn’t leave us with a plausible ‘just is’ closure – it only relates to mathematical patterns ‘within’ physics. An algorithm posited as an ultimate explanation must be scrutinised to provide a reason why it exploits a principle that is algorithmically ordered at all - and so we are left with an endless regress of ‘why?’

6) At the heart of “something” the mathematical configurations must be complex, because at every instance we are always left with complex brute fact algorithms. At the very least we know that any bootstrapping algorithms must have complex blueprints because we know for a fact that this universe has an incredibly complex blueprint. In fact, the algorithm that underwrites the cosmos may well be as long as the cosmic data itself, and that won’t just pop up out of nothing.

Note: Considering the patterning view of randomness - it is a dynamic that produces a sequence, and this could be anything from a book of random numbers, through a computer printout, to the heads/tails sequence of coin tossing. Hence, if we have a coin that we continue to toss, then as far as the patterning notion of randomness is concerned, the eventual sequence of heads and tails stretches out to produce a pattern drawn from a denumerable (in other words, ‘countable’) set of possibilities. We know that the sequence generated by the coin tossing will assume one value taken from that countable set – it’s just that we don’t know which one! The patterning view of randomness sees the ‘to-be-actualised’ possibility as simply an unknown pattern stretching out into the future before us. This is configurational randomness; it is a rigorous mathematical description of what our intuition tells us are ‘untidy’ and complex sequences of 1s and 0s. So, a configurationally random sequence is a particular class of pattern.
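The coin-tossing picture can be played with directly: generate one realised heads/tails sequence (one member of that countable set of possibilities) and compare how compressible it is against a tidy pattern of the same length. In the sketch below I use zlib compression as a crude stand-in for pattern-compressibility - my own illustrative proxy, not a formal measure:

```python
import random
import zlib

# One realised coin-toss sequence - a single pattern drawn from the
# countable set of all 2**N possible sequences of length N.
N = 10_000
tosses = "".join(random.choice("HT") for _ in range(N))
ordered = "HT" * (N // 2)  # a maximally tidy pattern of the same length

# A configurationally random sequence resists compression;
# the tidy pattern collapses to almost nothing.
print(len(zlib.compress(tosses.encode())),
      len(zlib.compress(ordered.encode())))
```

The random sequence compresses poorly while the ordered one shrinks dramatically - which is exactly the ‘untidy versus tidy pattern’ distinction the note describes.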

7) Given the foregoing, the universe and all its laws are bootstrapped by complex algorithms, and as complex algorithms of that order will not just pop up, nor are they intelligible at all unless they are reified on an up and running sentience, there seems to be a senselessness without a mind to reify them, because patterns are meaningless without a mind to interface with them. What this hints at is that the universe is endowed with a network of computation that is itself only intelligible if 'mind' is at the core of that intelligibility. It appears very plausible that complex sentience bootstraps the kind of universe we find ourselves in, and in an extraordinary way, our minds make everything intelligible by reifying those concepts. To that end, the relationship between mind and mathematics can be regarded as being extraordinarily ‘hand in glove’.

8) In our universe of compact and neat physical laws we can conceive of a type of data compression, because it is the ordered patterns that make it amenable to compression.  However, like all data compression, there comes a point when no further compression can take place, so we are left with this problem of what I call an ‘is-ness’ that just won’t go away and cannot be removed from the burden of contingency. That is to say, I’ve said that those compact and neat physical laws in our universe cannot be compressed to a mathematical zero, but what of the patterns that provide the compression for those laws? 

9) In the morass of disorder, those highly compressed, compact and neat physical laws would be highly unrepresentative patterns in that configurational system. Pictured mathematically, what we have is a mathematical generating system that generated mathematical configurations tending towards maximum disorder, yet embedded a constraint on itself to produce the order of stars, planets, life, and minds that would go on to understand concepts (including those of God) with high-level self-awareness.  It would not be unreasonable to construe this as giving exhibition to conscious sentience creating and sustaining the cosmos in its vast mind.

Note: In trivial and simplistic form, most of these algorithms are counting operations which involve systematically sifting through a search space of all possible permutations of characters.  Whichever way we look at it, whether from a theistic or naturalistic perspective, knowledge of the entire cosmos would bring with it a system with a permutation of characters that effectively holds the data describing our cosmos – and this will consist of a finite map of information, with a scale of order and disorder, and this is what we are looking at here.

10) Logical incompressibility has to do with equations and algorithms used for data compression. Although in mathematical terms it is true that physics effectively defines a set of stable structures in morphospace (where morphospace = the richly ordered mathematical configurations that facilitate biochemical life) - with the random walk of morphospace and physics, combinatorial space has a huge class of possible configurations which simulate many other alternative possibilities embedded in the mathematical potential. 

Note: Combinatorial space is the level at which something is computationally complex – and hereby refers to the space of possibilities that are unconstrained by an evident set of mathematical laws and constants.  

11) Clearly, amongst the class of every cosmic possibility, the overwhelming number of configurations in the cosmos tends towards more disorder than order. Inside that class, the complex ordered configurations of life have a vanishingly negligible representation (1 in 10 to the power of many, many trillions).

12) Even if we are the only life in the entire cosmos (that is doubtful), and our history does appear somewhat cosmically fortuitous, this outcome (when compared to the vast number of possible configurations that tend towards disorder) is a vast over-representation of an otherwise very unrepresentative class of configuration in probability terms. So instead of wondering why the universe seems so life-unfriendly, the question is rather; why does the cosmos have this extraordinary mathematical bias that allows it to facilitate any kind of order at all?

13) We know that the universe is expanding. Expansion is part of a universal principle running from the maximum order of the big bang (zero entropy) to greater and greater trends towards disorder as spatial expansion increases; with increased velocity space, maximum entropy is everything we should expect from an expanding random walk universe. Yet against all odds we have a biased random walk universe capable of creating living things with low entropy. And the astonishing thing is this: the only time in the history of an unbiased random walk universe that we should expect to see the sort of low entropy we see in systems like life, or even planets and stars, is at the point of the big bang. After the initial expansion of spatial space and increase in velocity space in an unbiased random walk universe, we should see less and less chance of ordered microstates like stars and planets with every passing moment. But the mathematical equations that govern our universe are not the same equations that govern random walk; and that is the mathematical bias that shows us that Cosmic Sentience is behind the equations.

An analogy: suppose, by way of analogy, that the most ordered state of the universe, the point of the big bang, represents a book in its most ordered state - let’s say Shakespeare’s play Hamlet. In this state the book is at its most ordered in terms of grammar, text and thematics. Now imagine all the text of Hamlet gets plugged into a computer program that breaks it apart into a less and less ordered state, and does so at an accelerated rate too. So after the first few minutes, acts and paragraphs have been separated, sentences have been broken into pieces, and the thematics have become more and more fragmented. Then the disordering process accelerates, so that after a while longer mere words are broken up into letters, and the book becomes a random configuration of letters, maxing out at a level of disorder that contains no structure of words whatsoever.

This is essentially what the universe is set up to do after the big bang - as matter stretches out, and the combinatorial search space of possibilities for order becomes more and more diffuse, the properties of hydrogen and helium become less and less like acts and paragraphs, and more and more like fragments of words. Like the text of Hamlet subjected to the computer program that increases disorder, there is less and less likelihood of the kind of order you see in stars, planets and life. The second law of thermodynamics allows for local decreases in entropy as long as those decreases are balanced out by an increase in entropy somewhere else: but the otherwise unbiased nature of random walk appears here to be not unbiased at all but, in actual fact, a biased random walk - which is remarkable beyond remarkable. In fact, if you don’t understand how remarkable it is, you don’t understand it at all. It would be a bit like running the computer program with the Hamlet text, increasing the disorder of the linguistic configurations to blow apart all the acts, paragraphs and sentences retained in the configuration, and then finding somewhere down the line that the program had thrown up whole paragraphs of Macbeth, Othello and Romeo and Juliet. The universe’s biased random walk behaves like a computer program that started with Hamlet, began to break up the text into greater and greater disorder, but has throughout the execution time been manipulated by instructions in the program to obtain formations of entirely new Shakespeare plays later on in the process. That is the bias in the universe.
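The disordering program in the analogy can be mimicked in a few lines: take a line of ordered text and apply more and more random transpositions, watching the structure dissolve (I use a single stand-in line rather than the whole of Hamlet, purely for illustration):

```python
import random

text = list("TO BE OR NOT TO BE THAT IS THE QUESTION")

def disorder(chars, swaps):
    """Apply random transpositions - each extra pass breaks up more structure,
    while the raw 'matter' (the multiset of letters) is conserved."""
    chars = chars[:]
    for _ in range(swaps):
        i, j = random.randrange(len(chars)), random.randrange(len(chars))
        chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

# More swaps = more entropy: words fragment, then vanish entirely.
for swaps in (0, 5, 50, 500):
    print(swaps, disorder(text, swaps))
```

Note what never happens in this unbiased version: the shuffling never spontaneously assembles a new sentence. That absence is the whole point of the biased-random-walk contrast above.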

Note: I must bring to attention one common misconception about how nature behaves with regard to thermodynamics.  The second law of thermodynamics says that in a closed system disorder increases with time, but some people would likely disregard the possibility of the huge mathematical constraints I am talking about by pointing out that, amongst the tendency towards disorder, when one bit of the system becomes quite ordered there will be an exhaust of disorder elsewhere to offset the decrease in entropy, and that the overall effect still produces higher disorder. This is akin to saying that, because there is an overall increase in disorder to compensate for the pockets of order, the postulation of a mathematical bias is somehow relegated to the realms of pure speculation.  In a moment we will see why this isn’t true.

14) Why do we have a universe with laws that ought to tend heavily towards maximum disorder (and do in most cases)? The reason is fairly straightforward: at an atomic level, thermal energy diffuses in a way that arranges mass in random motions, causing an increase in disorder. But that doesn’t mean that increasing entropy necessarily corresponds to increased complexity due to the random arrangements of mass and energy – in fact, it is a mistake to equate simplicity with disorder, because there is a vast degree of complexity in highly disordered random systems: so much so that they contain vast numbers of cases that are not amenable to a simple mathematical description. 

Note: When it comes to means and frequencies in mathematics, even a highly disordered system is configurationally complex in that it contains a lot of complex data. Even in the evolution of life, in the mathematical sense phenotypical organisms are configurationally not maximally ordered or disordered, and this means that high order doesn’t necessarily entail high complexity. 

15) We have a universe with laws that ought to tend heavily towards maximum random walk disorder, but that also contains an astonishing mathematical skew in its emergent order towards stars, galaxies and planets, and the eventual facilitation of genetic algorithms, conservation of sequence and function in biology, and the maximisation of fitness of organisms. This alone tells us that such a constraint is provided by the physical regime of laws.
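The contrast between an unbiased and a skewed random walk is easy to see numerically. Here is a minimal Python sketch of the drunkard's walk from the introduction, with and without a slight skew (the bias of 0.55 is purely an illustrative assumption):

```python
import random

def walk(steps, p_right=0.5, seed=0):
    """Final displacement of a 1-D random walk.
    p_right is the probability of stepping +1 (0.5 = unbiased)."""
    rng = random.Random(seed)
    return sum(1 if rng.random() < p_right else -1 for _ in range(steps))

# Average displacement over many independent drunkards: unbiased walks
# hover near zero, while even a small skew drifts steadily one way.
unbiased = sum(walk(100, 0.50, s) for s in range(1000)) / 1000
biased   = sum(walk(100, 0.55, s) for s in range(1000)) / 1000
```

With a 55/45 skew the expected drift after 100 steps is about +10 steps, whereas the unbiased walkers average out near zero - a small per-step bias compounds into a decisive direction.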

Note: Obviously the distribution of energy in the universe doesn't tend towards maximum disorder - if it did there would be no thermal energy and chemical energy to produce stars, planets and life. Once we get to the stage of the emergence of biochemical life - the point at which organisms begin to evolve and eventually pass on their genetic material - we can say that the active information in the laws of physics has leaped over a significant hurdle, because bit by bit evolution achieves progression through a system of cumulative ratchet probabilities.

16) We can see that, in less localised terms, the laws of physics impose order on the universe: the physical model has many possible states, but regulating laws limit those possibilities. In actual fact, the question of why we have any order at all in the universe is a very worthwhile one, so the fact that we have order of a magnitude that produces stars and planets is quite astounding. The problem is that most people think of stellar explosions and the seemingly happenstance formation of the planets in our solar system, see a pretty chaotic and disordered mess, and thank their lucky stars (literally) that we ever got here at all. On one level this is acceptable, but in truth the sort of mathematical skew I am talking about makes even that seemingly fluky activity incredible. Here's why. Given that the universe should be tending heavily towards maximum disorder, even something like the emergence of stars requires an extraordinary restriction on the laws - a cosmic bottleneck that eliminates virtually every other possible state so that such a facilitation can occur. Without this cosmic frontloading, this kind of order should never occur, because a universe without a mathematical bias would run down to maximum disorder very freely.

Note: As I've said, once we get to the earth's biochemistry the appearance of another bottleneck is easier to reconcile, because with biochemistry the severe constraint on the space of possibilities has already limited the options and produced an increased ratchet probability, where the statistical weighting favours life rather than maximum disorder.

17) As an illustration concerning combinatorial space, consider that quantum mechanics is all about measuring probabilities. In quantum mechanics the wavefunction is a single-valued function of position and time, and it is a model we use to assign a probability of finding a particle at a particular position and time. Even for one particle we must take the complex conjugate, because specifying the real physical probability of finding the particle in a particular state involves a fairly broad search space. Search space is best seen as a metaphor for representing the probability amplitude for finding a particle at a given point in space at a given time. Now imagine the complex permutational variables in a cosmos that has been expanding for nigh-on 14 billion years. That is a lot of information, and an incredibly vast search space of possibilities. For the conditions of any order to be met, the laws of nature must preclude so many degrees of permutation (trillions of trillions) that, far from physical probability being evenly diffused throughout nature, it must be heavily constrained in favour of non-maximal disorder (let alone biochemical life) - to the extent that, whether one believes in a personal God or not, the fact that the cosmos looks blueprinted for life is impossible to deny.

18) As a second illustration, consider in biology one of those tiny gradual steps up evolution's mount improbable. Yes, it's an accumulation of bit-by-bit selection, but that doesn't tell the whole story - for even one very simple beneficial mutation, just one small step in a long evolutionary history, is itself woven into a huge fabric of other possibilities. It is one tiny part of an incredible bias that drives the laws of physics towards life - a bias already embedded in nature, and required to severely reduce the size of the cosmological search space by providing what seem, to all intents and purposes, carefully blueprinted generating algorithms that produced an information-rich universe set up for life.

Note: What the second law of thermodynamics does is produce a random walk across all possible states, settling on the state that the statistical weight skews it towards. In other words, the second law facilitates a migration based on huge statistical weighting, whereby the skew directs a system towards its most probable state. As a simple illustration, if I turn on a gas flame, the statistical weighting does not favour the heat all staying in the same region; it favours diffusion into colder regions away from the flame's output. The reason is that the colder regions provide far more microstates (that is, possible combinatorial arrangements) for the energy to occupy than the hottest regions, so the heat tends towards the regions with the greatest possible search space.
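The statistical weighting in the flame illustration can be made concrete by counting arrangements. Here is a toy sketch in Python (treating the system as 100 indistinguishable energy quanta shared between the flame region and everywhere else is an illustrative simplification):

```python
from math import comb

# Number of microstates (combinatorial arrangements) in which n of N
# energy quanta remain in the region near the flame, the rest having
# diffused into the colder surroundings.
N = 100
ways = {n: comb(N, n) for n in range(N + 1)}

most_probable = max(ways, key=ways.get)   # the spread-out, even split
all_in_flame  = ways[N]                   # all heat staying put: 1 way
```

The even split (n = 50) can be realised in roughly 10^29 ways, while "all the heat stays at the flame" can be realised in exactly one way - that overwhelming statistical weighting is the skew the second law's random walk settles on.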

19) On a greater scale, what we are seeing is thermodynamically optimal diffusion under the constraints of the laws of physics, and as a mathematical pattern we see this right through nature's blueprint. Many make the mistake of saying, 'Well, in a universe the size and age of ours we are bound to have the occasional cosmic fluke that then goes on to produce stars and (if we're really lucky) planets and life.' But such a claim shows a misjudged understanding of the subject at hand, because the vanishingly few ways of locking in to order are not to do with serendipitous moments of cosmic fortuity that just happen to throw up the odd fluke; they are to do with the enormous unlikelihood of having laws that constrain a system enough to produce anything other than maximum disorder - that is what is so remarkable.

20) I fancy that a universe without a designer would be nothing like the universe we see – it would be maximally disordered and we would not have ever been born to talk about it, because a physical regime where disorder is unconstrained by a mathematical bias wouldn’t produce any biological evolution at all.

Wednesday, 10 October 2018

The Financial Crisis 10 Years On - And Still The Myths Perpetuate


It's roughly the ten-year anniversary of the financial crash of 2008 - a topic about which plenty of people have opined as though they are experts, when in fact they are utter novices (see link at the bottom of the page). One misconception still doing the rounds is the notion that bailouts prevent the loss of value. This is a sketchy understanding, because the loss of value occurs when people fudge things up with bad sub-prime mortgage loans - a bailout does not prevent the loss of value; it simply shifts the loss from stockholders and creditors to taxpayers.

The other common confusion I keep reading, in relation to financial crash references, is the dubious connection made between interest rates and the amount of money in circulation (see my sidebar on 'interest' for further reading). What connects interest rates with money in circulation is the rate of change of the circulated money, not the aggregate supply itself, which has a much more negligible effect. The problem (or one of the problems) with an artificially increased money supply is that when prices rise above supply and demand levels, interest rates are inflated too, because lenders need to be compensated for the fact that they will be paid back in pounds or euros or dollars worth less than the ones at the time the loan began.
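That compensation for expected inflation is conventionally captured by the Fisher relation between nominal and real interest rates; here is a quick sketch in Python (the 2% real return and 5% expected inflation are purely illustrative figures):

```python
def fisher_nominal_rate(real_rate, expected_inflation):
    """Exact Fisher relation: (1 + i) = (1 + r)(1 + pi),
    where i is the nominal rate, r the real rate, pi expected inflation."""
    return (1 + real_rate) * (1 + expected_inflation) - 1

# A lender wanting a 2% real return who expects 5% inflation
# must charge about 7.1% nominally to be made whole.
i = fisher_nominal_rate(0.02, 0.05)
```

The often-quoted shortcut i ≈ r + pi is just the first-order approximation of this product.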

It is easy to see why if you understand that interest rates are not the price of money. If interest rates were the price of money then more money added to circulation would lead to lower interest rates. But it does not, because when prices are high, your money is not worth as much. The interest rate is the price of borrowing, not the price of money. If lots of people have wealth they would be willing to lend out, that should keep the price of borrowing lower - but this isn’t to do with the aggregate of the money supply in circulation. During the financial crash the mortgage packages were worth less than the buyers thought. This led to a vertiginous drop in the money supply, as banks collapsed and depositors went broke or withdrew their funds from the banking system.

Further reading: previously I’ve blogged in more detail about the financial crisis and the many misunderstandings surrounding it - see here.

Friday, 5 October 2018

Party Conference Lament


Having watched all the party conferences, and listened to the squalid way that politicians speak untruths about reality, newer readers who haven’t trawled my archives (yet!) might appreciate a summary of the basic problems that underlie these political promises - promises that only gain traction because of the wilful ignorance of most of the electorate, who lap up this stuff like dogs lap up water from their drinking bowl.

The primary problem is the base fallacy that economies do not suffer terribly when politicians impede functioning markets. The reality is, they do! And almost every citizen's well-being is negatively affected by it. Politicians impeding functioning markets - whether by socialism, communism, theocracy, military dictatorship, political hegemony, or some combination of them - encroach on the market's efficiency, and obstruct the price system's ability to allocate capital and labour to their most valued areas of society.

Instead of being driven by supply and demand and consumers’ revealed preferences, a top down interference in the economy is driven by the short-sighted whims of politicians, their self-serving manipulations of the system, and their Promethean fantasies that they know how to manage an economy better than individuals engaging in voluntary, mutually beneficial exchanges. This is the story of Cuba, North Korea and Venezuela, just as it was the story of the Soviet Union and East Germany - the effects of which are still felt in those countries today.

The free market is, by a long way, the most effective and efficient way for societies to increase their health, wealth, well-being and standards of living. And what's even more remarkable is that any individual agent in a marketplace - from the highest earners to the lowest - can only make themselves better off by serving the needs of others and adding value to society.

It’s not as though it’s especially difficult to see the extent to which political interference prevents the efficient functionality of the price system. Contrast the fruitful, dynamic, highly competitive industries of food, clothing, cars and digital technology with the bureaucratic, inefficient, uncompetitive, overly-regulated health, education and housing industries.

The party conferences differ slightly in content, but their overarching theme is the same: more state interference, sold under the pretext that it's exactly what we need. Those who can see through it are painfully aware that what is on offer is a bit like a parent offering to increase the food supply of her morbidly obese teenager.

Monday, 1 October 2018

On The Nature Of Time: How It Informs & How It Beguiles


Time is one of the most intriguing aspects of the reality we inhabit - it informs us and it beguiles us, often simultaneously. In this blog post I will explain both. I will begin with how it informs us. The first thing to say is that time is not just a concept in our head (although it is certainly that too) - it is a fundamental part of the reality we study empirically. Time informs us of the age of just about everything, from the age of the universe (about 14 billion years), to the age of the earth (about 4.6 billion years) and the age of life on earth (about 3.8 billion years). 

Whether it is radioactive analysis regarding rates of decay, the potassium-argon method of dating, geologic observation, isotopic analyses, or paleontological evidence, the age of the earth is measured using reliable multivariate evidence gathering (those same methods work equally well with other astronomical objects too - meteorites, specifically - that formed out of the same material as the earth did). The science is reliable and robust.
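As a rough sketch of how decay-based dating works in principle (the fraction remaining is a made-up illustrative input, not a real measurement; potassium-40's half-life of roughly 1.25 billion years is used for the example):

```python
from math import log

def radiometric_age(parent_fraction_remaining, half_life_years):
    """Age of a sample from the fraction of parent isotope remaining,
    via the decay law N/N0 = (1/2)**(t / t_half)."""
    return -half_life_years * log(parent_fraction_remaining) / log(2)

# Hypothetical sample in which exactly half the parent isotope has
# decayed, dated against potassium-40's ~1.25-billion-year half-life:
age = radiometric_age(0.5, 1.25e9)   # one half-life: 1.25 billion years
```

Real potassium-argon dating measures the accumulated argon daughter product rather than the remaining parent directly, but the underlying exponential decay law is the same.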

On top of that, we have a wealth of material on the unification of genetic and paleontological data for creating phylogenetic trees in biological evolution. The human genome project has provided us with accurate information about the history of evolution through analysis of DNA and the chain of events which led to our species and other primates. The dates all complement the overall picture, from studies of taxa and the many ancestral lineages (although with cladistic 'hierarchies' or 'trees' it can in some cases be difficult in practice to place species in their correct topological relations, even though the theoretics are robust), to translocation of individual genes - we even know at which point in the tree of evolution the chromosomal fusion occurred in our ancestors for Homo sapiens to evolve.

The upshot is, time is a very real and epistemologically robust phenomenon. The notion of an arrow of time - time as a one way forward direction - has its root in the second law of thermodynamics. And at the Einsteinian level of physics, the more intense the gravitational field, the slower time passes. In other words, an atomic clock on top of the Eiffel tower will show a slightly faster time than an atomic clock at the bottom of the Eiffel tower due to the earth's gravitational effects on time.
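The Eiffel Tower comparison can be estimated with the standard weak-field approximation for gravitational time dilation, in which a clock raised by height h runs faster by a fraction of roughly gh/c². A small Python sketch (the 300-metre height is a round illustrative figure):

```python
# Weak-field gravitational time dilation: fractional rate gain ~ g*h/c^2.
g = 9.81           # surface gravity, m/s^2
c = 299_792_458    # speed of light, m/s
h = 300            # height difference in metres (roughly the Eiffel Tower)

fractional_gain = g * h / c**2                              # ~3.3e-14
seconds_per_year = fractional_gain * 365.25 * 24 * 3600     # ~1 microsecond
```

So the clock at the top gains only about a microsecond per year on the clock at the bottom - yet modern atomic clocks resolve this difference easily.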

To give you an idea of how absurd it is to argue against time being a real, measurable thing: modern atomic clocks are so precise that they keep time to within a margin of error of about 1 second every 30 million years. The gulf between people who argue for a young earth and those who accept the multitude of repeated, verifiable scientific measurements of time is about as wide as can be.

To put that into perspective: if Jack wrongly believes the universe is 6,000 years old, and Jill rightly believes it is 14 billion years old, then Jack is wrong by an astronomical six orders of magnitude. To understand what a whopping error this is, an order of magnitude is a factor of ten - 10 to the nth power, so 10^6 is 1,000,000 - and the universe is literally about 2.3 million times older than Jack thinks it is.
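The arithmetic behind that comparison, sketched in Python:

```python
from math import log10

jack = 6_000            # years: the young-earth estimate
jill = 14_000_000_000   # years: roughly the measured age of the universe

ratio  = jill / jack    # how many times older the universe really is
orders = log10(ratio)   # the size of Jack's error in orders of magnitude
```

The ratio comes out at about 2.33 million, i.e. a little under six and a half orders of magnitude.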

Time, Einstein and relativity
I said that time provides us with a reliable dating metric, but I also said that it is an utterly beguiling phenomenon - and that is what we'll turn to next. The first key thing to understand is that the nature of time is very much bound up in our perceptions of reality from a given vantage point. When asking "What is reality?", Einstein showed with his special relativity that we are as well to ask "What is reality for me, and under what conditions am I experiencing it?". I do not mean to suggest that all the things we call objective facts are to be swallowed up and swiftly replaced by some form of woolly relativism - I mean merely that reality is perspectival, because facts about reality depend on the perspective of the individual, and time is no different.

Reality cannot be confined to a single perspective, because for a human being, reality changes when perspective changes. Here’s an example of how the mind and the universe harmonise to confound our intuition. Our intuition tells us that an object cannot be both 2 metres long and 3 metres long at the same time - it seems logically impossible, particularly as Euclidean geometry forbids it, and because we intuit ‘facts’ such as ‘a line connecting two points can only have one length’ as being concrete truths about nature. 

But Euclidean geometry is an intuitive part of our brain that does not apply at the deeper cosmological level where other geometries can be used. If every truth or fact or observation remains true relative to the mind’s frame and reference point at any given moment, then it stands to reason that there are going to be a number of lenses through which we view reality, just as there are a number of ways to describe the world using ordinary language.

Once we learn to describe reality through the lens of various perspectives, we see how an object can be both 2 metres long and 3 metres long at the same time. Much of this is thanks to Einstein, who showed that the faster an object moves, the more compressed it becomes along its direction of motion - so if the object were travelling at a substantial fraction of the speed of light, someone moving at the same speed could measure it as 3 metres in length, whilst an outside observer would measure it as 2 metres in length.

(Note: the length of an object in uniform relative motion, as measured by an observer at rest, is less than its rest length - the Lorentz-FitzGerald contraction. In its original ether-theoretic formulation: a material body moving through the ether with a velocity v contracts by a factor of √(1 − v²/c²) in the direction of motion, where c is the speed of light in a vacuum.)
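Plugging numbers into that contraction factor reproduces the earlier 2-metre/3-metre example; a small Python sketch:

```python
from math import sqrt

def contracted_length(rest_length, v_over_c):
    """Lorentz-FitzGerald contraction: L = L0 * sqrt(1 - v^2/c^2)."""
    return rest_length * sqrt(1 - v_over_c**2)

# For a 3-metre rod to measure 2 metres, the contraction factor must be
# 2/3, which requires v = c * sqrt(5)/3, i.e. about 74.5% of light speed.
length = contracted_length(3.0, sqrt(5) / 3)   # ~2.0 metres
```

So the "paradoxical" dual length is just the same rod viewed from two different inertial frames.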

What all this means is that Euclidean geometry is a good approximation to reality which doesn't, on first inspection, appear to be violated, but can be shown to be with further knowledge of different lenses on reality. That doesn't alter the fact that the shortest distance between two points is a straight line, any more than the fact that a wall is made up largely of empty space alters the fact that if I kick it hard I will hurt my foot. This is the key to perspectivalism: one approximation to reality is not necessarily made wrong because it is violated in some other aspect of reality, because it all depends on the perceptual lens of the beholder.

That was an example of how the reality of space changes for a human when perspective changes. Now consider how the reality of time changes for a human when their perspective changes. In physics, time is defined in terms of gravitational and electromagnetic vibrations, because vibrations have a wavelength in terms of a space metric. The term spacetime is so named because it consists of three dimensions of space - which in simple terms one may think of as up/down, left/right, and forward/back - and a single dimension of time, represented in a four-dimensional manifold (this is known as Minkowski space).

In physics, space and time are a continuum, yet we experience them as separate, and psychologically distinct. Time is dependent on physical concepts in order to gain cogent meaning, because time is also a measurement construct, and a useful one when it has real events to bookend a discrete package of moments between one event and another. If I say "10 minutes has passed", it doesn't mean as much as saying "10 minutes has passed since I woke up at 7am this morning" - I immediately link the past to the present. 

Time itself does not seem to exist as an event; it merely helps us "locate" events and order in the universe. The same is true of space: if I say "I'm five metres away", it would be meaningless unless I linked it to another previously established point in space, with specific reference to at least one of the 3 space dimensions. If I say "I'm five metres away from the tennis court net", we now have two established points in space with which to give the five metres context. Entropy is closely linked with time (especially its direction), because entropy increases with time. That is a statistical law which tells us that you can turn an egg into an omelette, but never an omelette into an egg.

Even though space and time are, to us, discretely packaged together, in physics the time component of the distance measured in spacetime is different from the space components. We are able to move freely in the three dimensions of space (up-down, forward-backwards, left-right), but we can't move freely within the linear constraints of time.

Now things get really interesting…
To show perspective at its most potent, let's return to spacetime for another fascinating duality of perspective. With his special relativity, Einstein showed that if one measures time and space using electromagnetic phenomena (such as when light bounces between mirrors) then, due to c (the constancy of the speed of light), time and space are mathematically interlocked in Minkowski space. This entails that observers moving at different velocities can measure different distances, times, and orderings of events (related by the Lorentz transformation).

(Note: This also explains our earlier example of how, from different perspectives, an object can be both 2 metres long and 3 metres long. It all depends on the perspective, just as the slowness of a 3-hour wait in a hospital waiting room depends on the perspective of the person waiting.)

With Minkowski space we have the interlocking of the derivative physical quantities such as energy, force, momentum, and mass, with special relativity showing that the concept of time depends on the spatial reference frame of the person making observations. I call this the inertial reference frame effect, because there is a kind of ‘frame’ impact on states of mind at different speeds. That's also why, once we plug the mathematics into the physics, we find there is no true single frame to describe physical reality. Here’s an illustration that should help:

Imagine a human observer (A) travelling in a spaceship at moderate speeds, and another observer (B) travelling in a spaceship at a speed close to the speed of light. Special relativity provides equations that quantify the extent to which observer B experiences the contraction of space and the slowing of time. The faster B travels, the more slowly time passes for him. From his own first-person perspective, observer B is totally unaware of this contraction of space and slowing of time. The reason is that B's spaceship has contracted with him, so any attempt he makes to measure the contraction would fail, because any object he might use as a standard of length has also contracted. The same is true of his time: he would observe no peculiarity, as all the standards he could use to measure time would be slowed - including, presumably, his brain processes, so there would be no subjective sense of time being slowed.
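The extent of B's slowed time is given by the Lorentz factor; a quick Python sketch (the 99%-of-c figure is an illustrative choice):

```python
from math import sqrt

def gamma(v_over_c):
    """Lorentz factor: clocks in the moving frame run slow by 1/gamma
    as judged from the 'stationary' frame."""
    return 1 / sqrt(1 - v_over_c**2)

# Observer B cruising at 99% of c: each of his seconds spans roughly
# seven of A's, though B notices nothing odd about his own clocks.
dilation = gamma(0.99)
```

At 99.99% of c the factor climbs past 70, and it diverges without bound as v approaches c.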

If a human could get to c (light speed), he would have converted all available motion through time into motion through space; he can't go any faster. Naturally in real life terms, this is impossible for a human, but obviously it is not for a photon. In theory, if you could shrink your consciousness into a photon you would be timeless. Point of note on this: when observer B looks back at observer A he sees an exactly similar situation: observer A’s spaceship appears to have shrunk to nothing in the direction of motion with all processes in observer A’s spaceship appearing to come to a standstill.

(Note: A photon exists at all points along its trajectory at once, and only for literally an instant - a subjective duration of zero seconds.)

The only way we could record how time passes from the frame of reference of a photon is to be in that reference frame - but technically, of course, it is impossible for consciousness to reach c. As far as we can tell, consciousness is time-dependent: we require awareness of the passage of time from moment to moment in order to experience the feeling of being conscious. And yet time is just another direction of motion, so when you hit c you stop moving in the time 'direction' (so to speak).

I've already said that as one approaches c, every conceivable standard in one's frame shrinks so that one has no way of detecting a change, because the shrinkage of the moving frame is relative - relative to the standards of the "stationary" frame. The stationary aspect is important because, of course, from your reference frame there is no way to tell whether it's you that is moving near c, or whether you are stationary and something else is moving past you at c.

Given the foregoing observations, as you approach c, an observer at rest relative to you will see your time slow down, but you will also see their time slow down (this mutual slowing is what underlies the famous 'twin paradox'). But it is not so much of a paradox really, because the time dilation doesn't happen to you in your own frame; it happens to you as seen from another reference frame. Yet it is still very real, because in relativity the observations from all reference frames are equally real for the person doing the observing, even though each differs from the others when seen in overarching terms.

This is the peculiar truth attached to human perceptions of reality: if you plug the speed of light into the equation for time dilation you end up dividing by zero, which doesn't really result in infinity - it just leaves us with a hypothetical nonsense. The outcome in relativity is the same: either the time dilation is infinite at c, or time doesn't apply to motion at c. There have been lots of tests that confirm time dilation (using pairs of atomic clocks), as well as confirming that time has a real status when twinned with space. The atomic clocks are quite incredible: the caesium standard they are built on oscillates around 9.2 billion times per second, and the measured time differentials are always to the exact degree predicted by Einstein.

Another test comes from the many satellites we use - they wouldn't function properly without allowances for relativity, because they rely on exactly timed signals being sent. This is delicate and precise work, because the satellites are moving very fast; they have been tested many times, and again they match Einstein's predictions. In the four-dimensional manifold of Minkowski space, time is motion, so at c the direction of time ceases.

Time as a psychological force
Einstein once said: "Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. That's relativity!" We know from our own everyday experiences that this is true. Observing a kettle boil for three minutes will seem longer than having a three-minute chat with a fascinating person. When watching an exciting 90-minute film at the cinema, time will go much quicker than when watching a dull 90-minute film. Time is perspectival at a psychological level.

Have you ever noticed that when you turn to look at a clock the first second seems to take a bit longer to arrive? You are actually genuinely experiencing the delay, due to the brain editing your own perception of time. This is because when you turn your head to look at the clock, the brain extends what you see backwards in time by about 50 milliseconds, back to the moment before you turned your head, which is why the first second appears fractionally longer.

The mental representation of time is contingent upon the person observing it, and it seems to me that this is because time is a fundamental concept that cannot be broken down into smaller pieces, or constructed out of other concepts, or turned on itself for analysis. Of course one can create concepts to practically engage with time - seconds, minutes, hours, days, months and years - but they are only fictional constructs that help us order our world.

Time, like personality itself, cannot be turned on itself and separated from the context of the mind that hosts it. This is because trying to define temporal things with anything other than concepts related to temporality is impossible, just as trying to define mental things with anything other than concepts related to mentality is impossible. Once we hit that discrete barrier of personal conception, we know that time out there (like, say, the decay of a carbon atom) isn't time like the experience of watching a kettle boil - it is a definition of a concept based on quantitative measurements and/or personal experience.

Don’t misunderstand. The quantitative measurements in quantum theory enable us to define time in terms of vibrations of the caesium standard, but that is only because we compromise external reality in order to have a clear and linear notion. It’s not just the caesium standard - you can add to that virtually every model or metaphor we use to decipher reality – our entire set of descriptions of reality are partial and compromised, because they are implicitly human, and as a consequence, explicitly analogical and metaphorical (as are most of the rest of our descriptions).

Just as in quantum theory the more thinly spread the wavelength the less precise the specifics, so it is with time: one only gets a precise but narrow perspective by focusing on a single moment, while the wider one frames the context to get a clearer overall picture, the less one zooms in on the detail.

The next oddity of time
This is where we reach a hugely significant truth about the mind in relation to the time-dependent nature of consciousness itself - that is to say, the flow of time is essential for consciousness to sustain self-awareness. Given that consciousness (or brains) cannot achieve infinite energy, travelling at c is impossible - but what is clear is that the dialectic between the physical and the theoretical is also hugely prohibitive to clear-cut communicable facts, and that leaves us with a brute mystery of reality that we'll probably never wholly solve. This is because from whatever inertial reference frame one measures, there is a true result relative to the reference point of the one doing the measuring - so we have a relativity of simultaneity that defies our intuitive thinking.

In other words, something has got to give here, because the theoretical equations lead to a physical conclusion that cannot be realised in real life. Theoretically time ceases at c, which means you'd stop aging - and yet consciousness is needed for time to be perceived, and consciousness cannot get to c, because accelerating a body with mass to that speed would require an infinite amount of energy.

So which is real: the practical limit on the physics of the body, or the infinities the equations embed in the reality? I think the best way to answer is to say that the physical truths are accurate through the physical lens, and the theoretical parts (zeros, infinities, and difficult-to-interpret negative numbers) will, when pursued, tap us into the more complex reality of mathematics that exists over and above the physical universe. This is a problem I attempted to solve here (and I did so, at least to my satisfaction - although it is only a snippet of a whole book I have written on the subject).

To keep it as simple as possible, here's another example of something theoretical that does not cross over into the practical or physical. Think of Hooke's law of spring compression, which states that the force needed to extend or compress a spring is proportional to the distance of that extension or compression. In theory one can use a variation of it to predict zero or even negative lengths under certain compressive forces. But even a child knows that, in practice, a physical spring cannot be compressed to zero or to negative lengths.
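A sketch of that point in Python (the spring's rest length and stiffness are made-up illustrative numbers):

```python
def spring_length(rest_length, stiffness, compressive_force):
    """Naive application of Hooke's law, F = k*x: the spring's length
    after a compressive force is applied. The formula happily returns
    zero or negative lengths that no physical spring could reach."""
    return rest_length - compressive_force / stiffness

# A 10 cm spring with k = 200 N/m under 30 N of compression:
# the formula says -5 cm - theoretically valid, physically absurd.
naive = spring_length(0.10, 200.0, 30.0)
```

In reality the linear law breaks down long before that point, as the coils close up and the spring's stiffness soars.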

We define things in human terms by using human constructs, so length is defined in relation to other objects and sizes. What the above shows is that we are forced to use hypothetical inferences, because in nature consciousness cannot reach c and springs cannot compress to zero - and if they could, very weird things would happen that run counter to our intuition. Even as one theoretically approaches c, where every conceivable standard in one's frame shrinks, it is still only true to say that any such contraction would be contraction with respect to a relationally defined stationary frame - the contraction is relative to other reference frames.

Here's another one to consider: take the example of gravitational singularities in black holes. Say an object approaches the black hole; it would be torn apart by tidal forces. As the debris approached the event horizon, time would pass much more slowly for it than for an outside observer. From the outside observer's perspective the debris then appears trapped in a 'forever falling' situation, never quite making it past the horizon, because the time for anything to happen at the event horizon approaches infinity as a limit - even though, in its own frame, the debris crosses the horizon in finite time.
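For scale, the size of that event horizon comes from the standard Schwarzschild radius formula, r_s = 2GM/c², for a non-rotating black hole; a quick Python sketch using the Sun's mass:

```python
# Schwarzschild radius: the event horizon of a non-rotating black hole.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458        # speed of light, m/s
M_sun = 1.989e30       # mass of the Sun, kg

r_s = 2 * G * M_sun / c**2   # horizon radius for one solar mass
```

A black hole with the Sun's mass would have a horizon only about 3 kilometres in radius - the entire mass crushed inside a sphere smaller than a city.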

This shows another clear problem between the theoretical and the actual; terms like 'event horizon' and 'infinity', or even 'time coming to a standstill at c', cannot really be literally true in a finite spatio-temporal cosmos. They are only useful as theoretical approximations, not as actual facts about reality. Although current physics does throw up infinities - in gravitational singularities and in renormalisation in quantum field theory - the latter involves the hypothetical subtraction of one infinity from another, so it is applicable as a concept of ideals rather than as a practically comprehensive way of dealing with nature.

Imagination is a way of leaving reality behind - but there are some interesting overlaps between reality and imagination, as the imagined is part of reality at a different conceptual level. Infinities may be thought to be a place where the real and the imagined lock horns, but a proper understanding of the real nature of mathematics helps square this circle.