Thursday, 19 December 2024

33 Blind Mice and Eternity

 

With the rise of digital connectivity, attempted arguments for and against theism are more widespread than ever before, and frequently conducted with philosophical sophistication, confidence and enthusiasm. But while it's good to embark upon such ontological ventures of the mind, one should proceed with prudent caution. Imagine, if it were possible, that 33 blind mice formed a symposium to discuss the nature of human civilisation. We might be impressed with their efforts to understand us through their mouse-like limitations, but we'd know the size of the gulf between what we know and experience and what they could possibly understand.

I think this is the kind of humility (and more) we ought to employ when we try to comprehend things that belong to God’s realm, beyond creation. Things related to the first cause, the nature of possible creations, or the very essence of existence itself in relation to God’s eternal power and wisdom, are interesting topics on which to speculate - but the gulf between what we humans can comprehend and the actual reality of God's existence and Divine workings is unimaginably vast. We may only reasonably approach such matters with a humble curiosity and utmost reverence, recognising that our insights can only ever be, at best, mere shadows of the truths and causes beyond our comprehension.


Sunday, 8 December 2024

The Cult's Loyalty Snare


One of the strangest things about humans is the phenomenon whereby cults often grow stronger, not weaker, after their beliefs, predictions, assumptions, and leaders all turn out to be wrong. We can justifiably hope to live in a world in which the opposite is true, but evidence shows we do not*. The primary reasons are the threefold old favourites of cognitive dissonance, sunk costs, and in-group tribal identity and association. 

Cognitive dissonance is well studied, and we know that people experience mental discomfort when confronted with evidence that contradicts their deeply held beliefs. In a cult, members invest a lot of emotional, social, and sometimes financial energy in their beliefs. So, when a cult's claims fail, its members experience strong cognitive dissonance. People also feel more committed to beliefs and decisions as they invest more into them - whether it's time, money, relationships, or personal identity - and after sinking those costs into the group, rejecting it and admitting they were wrong may feel like losing a heavy investment. To avoid this sense of loss, members might double down on their beliefs, and even intensify their involvement in the group, reinforcing their commitment and loyalty.

Cults often provide a powerful sense of community and identity, which can be especially important for people who felt isolated before joining. When faced with a crisis of belief, members may cling to their group even more strongly to preserve their sense of belonging and identity. Leaving the group could mean losing this social support system and experiencing isolation, which many members would rather avoid.

You have to remember, too, that whichever cult you're in – Jehovah's Witnesses, Scientology, Just Stop Oil, Answers in Genesis (to name but four) - the golden rule of cult deception is that you don't know you're in a cult. Cults often operate by encouraging members to distance themselves from non-believers, which limits access to outside opinions and evidence that might otherwise challenge their beliefs. This isolation makes it easier to control information and, in turn, to reinterpret or ignore contradictory evidence. Leaders can reinforce the idea that doubting or leaving the group would mean betrayal, disloyalty, or even punishment, which keeps members aligned even when beliefs are under strain.

And don't forget too that, at this stage, cult members are riddled with confirmation bias - they are so heavily primed to seek or interpret information in a way that confirms their existing beliefs that they habitually interpret events in a way that upholds the underlying narrative, even if the outside world has consistently shown it to be wrong. I've actually seen documentaries where failures and refutations are enthusiastically reinterpreted as steps toward a grander, impending moment, which heightens members' commitment to prepare or sacrifice even more.

The paradoxical nature of belief reinforcement through disconfirmation – driven by these toxic combinations of cognitive dissonance, sunk cost fallacy, social isolation, and commitment to a group identity - must qualify as one of the strangest of all human phenomena.

And, alas, one of the most stultifying things about being in a cult is that cults manipulate from the top down in a way that starves the members of the freedom to fulfil their potential. Membership of a cult is predicated on the understanding that none of its members are freely encouraged to be the best they can be in life – instead they are merely encouraged to serve the cult's agenda; a dynamic that can usually only occur through repression and indoctrination.

*See the studies of Festinger, Riecken, Schachter, Stark, Weber and Landes for further reading (and Asch’s infamous conformity experiments are interesting in the above context too).


Friday, 6 December 2024

Offensive By Proxy Offence

 

One of the biggest societal blots of the modern age is the act of choosing to be offended on behalf of other people who aren't themselves offended. It's what I call the societal blight of proxy offence.

Here, I think, is the right way to think about offence. Offence isn't really given - it is taken by the person who chooses to take it - so if you personally choose to be offended at something, that's on you, and others are free to decide how they feel about you. Sometimes people may feel like your choice to be offended is a reasonable one - and on those occasions, your offence might improve their future behaviour and make them more mindful of their conduct. Sometimes they may find your choice of being offended pitiable, and tell you you're being unreasonable.

But none of this happens to the same extent when someone is offended on other people's behalf - they've simply chosen to be the kind of citizen that gets on most people's nerves - and their only in-crowd consists of other people widely considered to be equally annoying and victim-seeking. I'm not, of course, talking about those who courageously stand up for a good cause for the underdog. Proxy offence-seeking is different. In most cases (there are always exceptions), those who habitually choose to be offended on other people's behalf only serve to inflate the reality of what reasonable people are likely to personally find offensive; they distort behavioural signals about what balanced individuals ought to find acceptable; and they help create a society of people trapped in a gilded cage of self-imposed inadequacy, where the cage door is bolted shut from the inside, and where people become weaker and more and more over-sensitive, utterly unable to cope with other people's ideas, opinions, lifestyle choices, tastes and freedoms.


Thursday, 5 December 2024

Science & Climate Change - Closing Thought: Myopic Fears, Transformative Solutions

 

The climate issues we've been discussing this week involve a complex set of considerations - assessing trade-offs, allocating resources, forecasting, and so on - and one thing we know for sure from history is that present-day analyses that fail to factor in this complex suite of considerations are woefully sub-standard, leaving their protagonists ill-equipped to make prudent decisions and sensible forecasts.

Here's what else we know. We know that virtually every time we've tried to solve problems without foresight for how we are growing in knowledge and enhancing our capabilities, we make errors. For example, in the late 19th century, Manhattan faced a serious "horse problem." Horses were the primary means of transportation, pulling carriages, wagons, and streetcars, and their population created significant issues: massive amounts of manure; filthy, smelly streets; the problem of disposing of dead horses; and the spread of disease due to the unsanitary conditions. What those concerned didn't foresee was the advent of a transformative technology: the automobile. By the early 20th century, motor vehicles began to replace horses, effectively solving the horse problem in a relatively short period. By the 1920s, cars had become the dominant mode of transportation, and horses were largely relegated to recreational or ceremonial roles.

Paul Ehrlich's 1968 book The Population Bomb predicted that overpopulation would lead to mass starvation and societal collapse due to insufficient food supply - a prediction myopic about advances in agricultural technology, including high-yield crops, synthetic fertilisers, and modern irrigation, and about how they would dramatically increase food production and ensure sufficient food for growing populations in many parts of the world. And we all know about Malthus's similarly miscalculated prophecies of doom in the late 18th century, inaccurately predicting that population growth would outpace food production, leading to global famine and societal collapse.

There are so many more examples of this kind. People have thought there was a potential wood crisis without understanding the transition to coal (and, later, to oil and electricity). People have feared urban darkness by failing to see the transition from oil lamps and candles to gas and electric lighting. People have been concerned about potential telecommunications saturation as population increased, with not enough insight into how a digital telecommunications revolution would make communication scalable on a global level. Early computer scientists worried that the memory storage and processing power of early computers could never scale to meet the needs of more complex tasks - overlooking the invention of transistors, followed by integrated circuits and modern semiconductor technology, which enabled exponential growth in computing power.

These examples, and countless more, reflect exactly what is inadequate about extreme environmentalism, which continues to spread unremittingly like an unhealthy social contagion. It's not that we need speculative faith that the trajectory of technological and scientific innovation will simply eliminate all problems that once seemed insurmountable. But the balance of analysis and reaction is way too far on the myopic side - failing to account for the transformative power of present breakthroughs and the potential of future breakthroughs - largely because the whole thing has been politicised and manipulated to pay scant regard to them. These truths don't serve politicians' interests well, they don't make splashy headlines for the media, they don't enable the narcissism of virtue-signalling, and they are thinly spread, so they are harder to apprehend for people whose considerations and agendas lack sufficient balance, perspective, wisdom and historical knowledge.

Wednesday, 4 December 2024

Science & Climate Change Part III: Understanding the Limits of Climate Models in Risk Assessment


Following on from part 1 and part 2 in this series, let's conclude by exploring climate models and risk assessment. On the physical nature of climate change, some scientists argue that climate modelling should be trusted because it is specific and can point to physical laws that are currently observable and constant. Alas, this is only partially true - but even if it were wholly true, that still would not justify such confidence that the world's extreme and hugely costly reactions to climate change are sensible, balanced and well-conceived. Just because a model relies on physical laws doesn't mean it has far-reaching predictability. The specific weather on any given day relies on physical laws, but it does not have far-reaching predictability. The predictions are relatively short-term; in issues surrounding the perturbations of the environment, short-term predictions are not very reliable antecedents for long-term outcomes. Climate change science suffers from the same problem. Trying to rely on long-term predictions by extrapolating current patterns would be a bit like a man from another planet visiting earth for the first time in January, measuring the temperature in Trafalgar Square every day from January 1st through to August 1st (watching it increase over the months from freezing up to 28°C), and hypothesising that by December the temperature in Trafalgar Square will be 40°C. But I don't offer that illustration just in terms of future problems - it's current and future problems plus current and future solutions. Once you factor responsive and pre-emptive human innovation into the equation, the model is not as unyielding as most environmentalists assume with their narrow projections.
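As a crude sketch of the visitor's error (with made-up monthly figures purely for illustration), a naive straight-line fit of the January-to-August trend produces exactly this kind of absurd December forecast:

import numpy as np

# Hypothetical monthly mean temperatures (degrees C) for Trafalgar Square,
# January through August - illustrative numbers only.
months = np.arange(1, 9)                      # Jan = 1 ... Aug = 8
temps = np.array([0, 2, 6, 10, 14, 18, 24, 28])

# Fit a straight line to the January-August trend...
slope, intercept = np.polyfit(months, temps, 1)

# ...and naively extrapolate it to December (month 12).
december_forecast = slope * 12 + intercept
print(f"Naive December forecast: {december_forecast:.1f} C")
# Prints roughly 44 C - wildly wrong, because the linear model
# ignores the seasonal cycle that actually governs the system.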

Furthermore, focusing solely on the situation from a purely physical perspective is not helping the so-called climate scientists' cause. No one disputes that the underlying physics behind any purported climate changes gives us empirical objects of study - and few deny that changes will occur, and that there will be problems to solve. But climate change considerations must give more emphasis to how humans will respond to those changes. The environmentalists' fear - that the rate of temperature change and its impact on ecosystems, societies, and economies will outpace the ability of ecosystems and human systems to adapt - is highly likely to be completely backwards. What we are actually dealing with is slow, gradual change in temperature (call it x), and a rapid rate of change and adaptability from human ingenuity and natural scientific and technological advancement (call it y). Most environmentalists fear x is fast and y is slow, when the reality is almost certainly that x is slow and y is fast.

Yes, it is almost certainly true that climate change is in part anthropogenic, but most of what we've done industrially and technologically has been to the huge benefit of the human race, not least in the way in which the industrial revolution and the consequent progression-explosion of the past 200 years have increased standards of living, life expectancy, prosperity, well-being, knowledge, and the many other qualities that benefit the human race. Don't forget that our global emissions in the past century have been part of the very same scientific and industrial advancements that have facilitated this extraordinary human progression. To criticise our innovations as being environmentally detrimental is a bit like criticising a vegetable patch for ruining perfectly good soil, or criticising medicine for ruining perfectly good plants.

Professor Richard Tol (do Google his work - there's plenty of it) has perhaps done the most of anyone I've researched to show that when you factor in the economic, the ecological, the humanitarian and the financial considerations, there is an overall positive effect from climate change. He arrived at this conclusion after analysing 14 different studies of the effects of future climate trends. One of Professor Tol's key findings is that climate change would be beneficial up to 2.2˚C of warming from 2009 (when his paper was written). Some say those temperatures may not be reached until the end of the century, some say even longer. The IPCC predicts we will reach that temperature increase by 2080. This means that, far from there being a so-called 'climate emergency', even in the worst-case scenario, global warming could continue to be of net benefit for another 60 years. And even if it is the case that global warming will only benefit us for another 60 years (assuming current conditions), then the people who will have to deal with it in 2080 will be about nine times as rich as we are today (assuming economic growth continues on its present trajectory), and more scientifically and technologically advanced than we can possibly imagine. While I'm encouraged by Richard Tol's research, I actually think he slightly underestimates the grounds for optimism by making an understated assumption himself. He talks of global warming possibly being a problem by the time the planet undergoes 2.2˚C of warming (in 2080) without paying enough regard to just how much better equipped we'll be in 60 years from now to tackle the problems perceived in 2009 (or even today).
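For the curious, here's a quick back-of-envelope sketch of the "nine times as rich" arithmetic - the growth rates are my own illustrative assumptions, not figures from Tol or the IPCC:

# Back-of-envelope check on "about nine times as rich" by 2080.
# Assumption (mine, for illustration): per-capita income compounds
# at a constant real rate g from 2024 to 2080.
years = 2080 - 2024                  # 56 years
target_multiple = 9.0

# Growth rate needed to reach 9x in 56 years: (1 + g)^56 = 9
g = target_multiple ** (1 / years) - 1
print(f"Implied annual real growth: {g:.1%}")    # about 4.0%

# The multiple produced by a more conservative 2.5% growth rate:
print(f"At 2.5% growth: {1.025 ** years:.1f}x")  # about 4.0x

In other words, the "nine times" figure quietly assumes roughly 4% annual real growth; a more conservative rate still leaves the people of 2080 several times richer than us.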

This has always been a strange solecism from climate change alarmists too: look at how the world has gone from 1924 to 2024. Nobody sane thinks that the world's population hasn't benefited immensely from industrial progression and technological advancements alongside a changing climate during the past 100 years. Given that we are richer and more advanced today than in 1924, it's absurd that so many people are unconvinced that the world's population will benefit immensely from industrial progression and technological advancements alongside a changing climate in the next 100 years. Moreover, given that we in 2024 have most of the advancements needed to solve the majority of the economic problems people in 1924 faced, we should be more confident of having similar capacities 60-100 years hence, given that we are starting from an even stronger place, and that we have far more people on the planet to help solve the problems that might arise. We seem drastically unfair on ourselves when it comes to forecasting our ability to work together to solve complex problems.

The climate change alarmists' assumption is that because climate change is an emergency, we should be risk-averse, and risk-aversion here means spending more money and resources on tackling climate change in the here and now. But this is faulty reasoning, because risk-aversion should primarily focus on the world’s biggest risks - and the biggest risk of all is not that future (richer) generations will be born into a warmer climate, it is that present (poorer) people are going to be born in a poverty-stricken state where they can’t afford access to cheap, necessary, dependable energy. The way to be rationally risk-averse is to help poorer people become more prosperous - not adopt short-sighted climate change policies that make energy unaffordable for those that need it most.

Here I refer you to a passage about risk in my previous series on climate change risk:

“Risk is assessing the potential costs with known probability. Uncertainty, on the other hand, is not knowing the probability, which means an inability to calculate a risk. If I have to draw a Jack, Queen or King card from a 52 card deck to win £1,000,000 or else die, that is a ‘risk’ because I can calculate the probability (12 in 52). On the other hand, if I have to draw a Jack, Queen or King card from an unspecified pile of cards, and I don't know how many are missing from the pack, then I have ‘uncertainty’. I cannot calculate the probability of drawing a picture card because I don't know if any picture cards have been removed.

Let me make it even clearer with an illustration. Suppose there is a pile of 99 cards - all of which are either a Jack, a Queen or a King, and all three cards are represented. You know that 33 of the cards are Jacks, but you don't know the ratio of Queens and Kings in the remaining 66 cards. You can choose from two scenarios:

Scenario 1: You win £1,000,000 if you draw a Jack, and nothing if you draw a Queen or King. 

Scenario 2: You win £1,000,000 if you draw a Queen, and nothing if you draw a Jack or King.

Which scenario would you prefer? Due to scarcity of information there really is no way to know which scenario is preferable because you don't know the ratio of Queens and Kings - you only know there are 33 Jacks. If you choose Scenario 1 you know you have a 1 in 3 chance of £1,000,000. If you choose Scenario 2 you don't know what chance you have because you don't know how many of the remaining 66 cards are Queens - there could be as few as 1 or as many as 65. Scenario 1 offers you a risk; Scenario 2 offers you uncertainty.

The climate change assessments are generally more like Scenario 2 than Scenario 1 - they involve uncertainties where drastically little is understood about the probability. It was important to mention that before we got under way with the series. In the next part I will look at how mindful we should be of future generations, and what we owe them.”
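To make the quoted risk/uncertainty distinction concrete, here's a minimal simulation sketch of the two card scenarios; the Queen/King split is a parameter precisely because, in the story, you don't know it:

import random

def play(n_queens, target, trials=100_000):
    # Build the 99-card pile: 33 Jacks plus an unknown Queen/King split.
    deck = ["J"] * 33 + ["Q"] * n_queens + ["K"] * (66 - n_queens)
    wins = sum(random.choice(deck) == target for _ in range(trials))
    return wins / trials

# Scenario 1 (bet on a Jack): ~1/3 regardless of the hidden split - a risk.
print(play(n_queens=10, target="J"))   # ~0.33
print(play(n_queens=60, target="J"))   # ~0.33

# Scenario 2 (bet on a Queen): anywhere from ~1% to ~66% - uncertainty.
print(play(n_queens=1, target="Q"))    # ~0.01
print(play(n_queens=65, target="Q"))   # ~0.66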

Rising tides sinking some boats?
Let's now focus on one of the other main messages of the environmentalists - that even small increases in temperature in the next 100 years are going to be disastrous for people living in coastal areas (about 650 million people, according to a BBC report in 2019). Alas, this prophecy of doom is a presumption they never attempt to justify. Whatever science tells us about the changing climate, the future is far too complex for anyone to know the magnitude of the effect of those changes, how future humans will be equipped to deal with them, and who will be better and worse off. Anyone who tells you otherwise is either mistaken or lying (or perhaps a bit of both).

Suppose the world gets a little warmer in the next 100 years, as predicted. Through today's lens of analysis, it's expected to have a net negative effect on places like Ethiopia, Uganda, Bangladesh and Ecuador. But no one talks about the net positive effect it could have in regions of Russia, Mongolia, Norway and Canada, where inhabitants are subjected to harsh winters. But even that's too simplistic, because you then have the unenviable task of considering what future Norway or future Bangladesh will be like compared to now, and of undertaking a separate measurement of forecasted temperature increase alongside perceived impact at any given time. This is not a method of analysis that we can undertake right now - and it is something that seems to be almost entirely missing from the climate discussions.

Not only is measuring forecasted temperature increase alongside perceived impact at any given time very complex, it's almost certain to be short-sighted and hasty. The China of 1965 would have been very poorly equipped to deal with a metre of rising tide compared with the China of now or of the future, which could pay for it with loose change. In every recent decade, global warming has produced both negative and positive externalities, and future global temperatures are too hard to predict in terms of whether or not longer growing seasons and milder winters produce a net cost to the world.

All that said, let's be generous to the environmentalists and declare that their spectre is wholly accurate (against what my own reasoning says) - that increases in temperature in the next 100 years are going to be disastrous for people living in coastal regions. What might they still be overlooking? Currently, about 71% of our world's surface area is ocean, and that figure could rise by half a percentage point if the ice caps melt substantially in the next few decades. Humans have done pretty well in the past few hundred years adapting their industry in a world in which 71% of the surface is ocean - so it shouldn't be so hard to believe that people in the future, with more money, greater knowledge and better technology, will find it within their grasp to adapt to a world in which 71.5% of the surface is ocean.

Not convinced? OK, let's take a worst-case scenario - that all of the 650 million people living in coastal regions are going to be negatively affected by rising sea levels in the next hundred years. A few key facts: firstly, almost all of those 650 million people won't be alive in 100 years, and during that time they and their descendants will have had the capacity to move inland or make the necessary infrastructural changes in response to the very gradual increase in sea levels. With slow, gradual changes, 100 years is a long time to make adjustments, especially in a future in which everyone is richer than now and more technologically astute. Remember, environmentalists fear x is fast and y is slow, when the reality is almost certainly that x is slow and y is fast.

We are not sure how many of the 650 million people (and more, factoring in population increase and migration to cities) will be affected by rising sea levels, but here's what we do know. If moving inland or making the necessary infrastructural changes would be costly, not moving inland and not making the necessary infrastructural changes will be a lot costlier. It's one thing to discuss the costs of moving inland and making the necessary infrastructural changes and weigh them up against all the benefits and the future capabilities of dealing with such things - but it's quite another thing to warn about staying in coastal areas and getting washed away, because that's just not going to happen.

If some relatively short-term extreme changes are the price that future unborns have to pay for living in such a prosperous world (and it's still a big IF), then it is certain that those future unborns will pay those costs, and almost certainly a lot more easily than we can pay them. If rising oceans and dealing with the consequences are not the price that future unborns have to pay, either because we are burning almost no fossil fuels in the future (which is highly likely to be the case) or because climate alarmists have got their predictions wrong, or because future humans have technology that easily helps them adapt to the gradual changes (which is almost certainly going to be the case), then the alarmism has been absurdly wasteful and largely unnecessary, because global market innovation is already doing about as much as it can, and will continue to do so.

The environmentalists frequently seem to be confused by a base rate fallacy regarding what they are doing. Even if we ignore the fact that this level of uncertainty is not an obvious call to action (and we shouldn’t ignore that, but we will for simplicity’s sake), and the fact that these reactionaries have no real clue of the appropriate measure of range of possible outcomes against range of possible actions, they are utterly confused by the concept of ‘doing’. They peddle the narrative along the lines of ‘What we should be doing’ when really they mean ‘What we should be doing now’. And I’m not saying that everything we are doing is reactionary – we are making some terrific progress on a whole range of innovations to help make us greener – but doing reactionary things now for projected future scenarios is hasty and presumptuous because time is inevitably going to reduce the cost of dealing with the problems (because we’ll be richer, and with better technology, and have more information and understanding).

The fact that uncertainty will decrease over time, and our knowledge, resources and richness will increase over time, is an argument that, relative to our abilities, the problem will get smaller not larger, and our ability to manage it will get better not worse. If you don't believe me, and still think we need immediate action otherwise it'll be too late, you only need remember that this has been said for every decade for at least the past five decades, and with every passing decade we have gained in understanding, reduced our uncertainty, made humanity better off, reduced poverty, increased global trade and prosperity, become greener, and enhanced our technology - and this in spite of the extreme environmentalists, not because of them.

So many people are getting taken in by the doomsday eco-fundamentalism, on the pretext that 'we have a climate emergency' (or worse, 'the end is nigh') is a consensus view among climate scientists. Climate scientists are experts at understanding the climate (the clue's in their job title) and the problems we are facing, but they are not economists, so they are unlikely to present the full menu of considerations. Climate scientists can tell us about the relationship between our activities and global warming, and they can tell us about how different levels of carbon emissions in the near future are likely to impact on climate change (to a degree, pun intended). But the climate change situation is not simply a matter for the physical sciences; it's largely a matter for economics.

Science is the systematic study of the physical environment within nature. Economics is the science of allocating resources efficiently amidst competing preferences. Science tries to tell us which challenges a region of the Middle East might have to face if the planet is n degrees warmer in 20 years' time. Economics tries to consider the future resources and technology available to change human behaviour in the region. Science tells us what might happen to our ocean levels. Economics tries to consider how our coastal regions will adapt to those changes. Politicised climate science focuses largely on the costs of climate change, and is wilfully myopic when it comes to trade-offs. Economics focuses on the costs and benefits of climate change, and on the complex trade-offs that have been made over the past 150 years of humanity's great material enrichment and unprecedented rise in living standards.

Climate scientists speak of future problems with scant regard for how innovative, collaborative future humans will be economically, technically and scientifically equipped to solve those problems. Isolated, reactionary appeals to the expert climate science consensus are anaemic appeals, because climate science consensus on its own is too narrow a perspective, one that neglects many of the most relevant tenets of the analysis. Let's have more gratitude and more humility - and we can work together to solve these problems with more balance, and less extremism. Imbalanced extremism almost never acts as a force for good, or as a vehicle for efficient problem-solving.


Tuesday, 3 December 2024

Science & Climate Change Part II: There Is No Single Scientific Method

 

In the previous blog post, I introduced a poser to show an easy way to establish basic scientific principles. Here, I want to say that despite there being good and bad ways to make inferences, the idea of there being a single ‘scientific method’ is problematic, as the complexity of scientific inquiry cannot be encapsulated by one method or definition. That is to say, the idea that any human mental processes can encapsulate a singular method for science is a faulty one.

Last time, we applied modus ponens (affirming the antecedent) and modus tollens (denying the consequent) to explain why Team B’s method is more reliable and why modus tollens can be effective in some cases. Now, consider the affirming the consequent method:

If P, then Q.
Q.
Therefore, P.

While this logical form is a fallacy, science sometimes uses it to form hypotheses - yet it's precarious without corroboration. For example:

If a carbon atom has 4 valence electrons, then it can bond with up to four other atoms at the same time.
A carbon atom can bond with up to four other atoms at the same time.
Therefore, a carbon atom has 4 valence electrons.

While it is a fact that a carbon atom can bond with up to four other atoms at the same time because it has 4 valence electrons, this kind of hypothesis would not qualify as a scientific theory were it stated as a one-off isolated hypothesis. To see why, let's use a comparable example:

If it is midnight, then my watch will say it's midnight.
My watch says it's midnight.
Therefore, it must be midnight.

In isolation, this kind of fallacious thinking would apply to science too – one claim by one scientist is inadequate to the task of a reliable inference; we need corroboration from repeated sources, because if stated in isolation there may well be something (as yet unknown) that falsifies the proposition that "a carbon atom can bond with up to four other atoms at the same time because it has 4 valence electrons", just as discovering my watch has stopped gives me reason to doubt whether it really is midnight.

But equally, suppose my watch says it's midnight, and my neighbour's watch also says it's midnight – I have a much better reason to think it's midnight, as two watches in two side-by-side houses both stopping at midnight is far less likely. The more verification I get when I look at other people's watches and see they say midnight, the stronger the corroboration, and the stronger the justification for my belief.
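A minimal sketch of that corroboration arithmetic, with made-up numbers purely for illustration:

# Suppose (illustratively) any given watch has stopped with probability
# 0.01, and a stopped watch reads "midnight" with probability 1/720
# (one of 720 minute positions on a 12-hour face).
p_false_midnight = 0.01 * (1 / 720)

# Probability that ALL n independent watches falsely read midnight at
# once - it shrinks multiplicatively with each corroborating watch.
for n in (1, 2, 3):
    print(n, p_false_midnight ** n)

Each independent watch multiplies down the probability that the agreement is misleading, which is why corroboration strengthens the inference so quickly.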

That is what science is all about. Science uses arguments at the singular level, but they become stronger with further corroboration. We infer to the best possible explanation, where a theory best explains the facts in front of us. Of course, if scientific theories are inferences to the best explanation, then the best explanations will also have good predictive success rates. But context must be established in order for this to happen. For example, the earth can appear both spherical and flat, depending on the context from which it is seen. When I place a spirit level on my concrete driveway, I see it as flat, but if I view the earth from space then it is spherical. It would be a fallacy to measure my driveway as flat and infer that the whole earth is also flat.

The strength of science is found in inferences based on repeated corroboration, but there is no single scientific method from which we obtain this. While we often use the term 'scientific method' for ease (by which we mean testable, repeatable, verifiable, and predictable), it is difficult to justifiably claim science to be amenable to some kind of singular ‘scientific method’. It does little good to simply say that the value of science is in an observation being testable, repeatable, verifiable and predictable, because science is only one particular lens of reality, which by itself has no singular ‘method’. 

Even if we ignore for a moment the fact that science is limited to only a scientific lens of reality, there are many philosophical questions attached to the process of an observation being testable, repeatable, verifiable and predictable. How does one confirm that one’s observations are sufficiently free from psychological bias to be balanced? How does one decide what is testable? At what level does repetition confirm the validation of an observation? How can prediction be informative without a philosophical framework to police our concepts? What links our methods of experiment with the complexity of data? How do we account for the fact that different levels of reality are attached to different levels of physical behaviour in nature? How does the overarching narrative of our interpretation of reality align with the lenses of reality to which science is amenable?

It’s not that these questions can’t be answered; it’s that there is no singular ‘method’ by which they are all answered, as humans have such a complex nexus of perceptions and conceptions, and require many lenses of interpretation with which to understand the world.

Having built the foundations in parts 1 and 2, we are now ready for a meatier part 3, which is up next: Science & Climate Change Part III: Understanding the Limits of Climate Models in Risk Assessment

Monday, 2 December 2024

Science & Climate Change Part I: How To Do Science

 

In my book The Science & Economics of Climate Change: How the Mechanisms and Physics of the World Really Work, I have a bulky section on what many think of as “the science of climate change”. When you hear environmentalists – from the serious scientists, to the raving climate alarmists – you’ll often hear them say "The science is clear on this" or "We have to listen to what the science is telling us and act urgently". And when they say those words, they are signalling to you that they haven’t got a thorough grip on the matter, because no one who understood the complex, multivariate analyses required for a subject like climate change would ever use the term THE science.

How to do science
Regarding the methods associated with science, let me start with a poser. You've been tasked with hiring a team for a science project as yet unknown to you. You have 2 teams of eight from which to choose (call them Team A and Team B), and you know only one thing about the teams. Here's what you know.

Both teams worked independently on the same project: an investigation into seventy deaths that occurred in a factory on one day last September. Everyone in the building was found dead on the floor, and both teams knew that the factory had recently started to use a new chemical X, but they weren't sure for how long. Team A and Team B were each given 35 of the dead bodies and access to the entire factory, and were tasked with finding the cause of the 70 deaths. Here's what they did differently:

Team A gathered evidence A, B, C, D, E and F and sat down together in a room discussing all the possibilities. After some deliberation, they hypothesised that methyl isocyanate had escaped into the factory's atmosphere and caused the deaths. After this, they did a post-mortem on the 35 bodies to see if the cause of death matched their hypothesis. The 35 bodies they tested all had methyl isocyanate poisoning.

Team B did the post-mortems first, found the cause of death to be methyl isocyanate poisoning, and then gathered evidence A, B, C, D, E and F to see if the evidence matched their post-mortem conclusion. It did. The evidence suggested that methyl isocyanate had escaped into the factory's atmosphere and caused the deaths.

That's all you know about the teams; you have a project coming up, and you need to hire Team A or Team B. Which do you choose to hire?

Conclusion..............

To see which team is a better candidate for your hiring, we need to consider how science works at a general level - and for that, we should start with two kinds of inference: modus ponens and modus tollens. For those unfamiliar with the terms, modus ponens means 'method of affirming', and modus tollens means 'method of denying'. They are rules of inference, where if the premises are true, we can reach a logical conclusion and make inferences about how the world is. We say an argument is valid when it is not possible for the conclusion to be false if the premises are true.

With modus ponens, if we know that P is true, and we know that P implies Q, we can infer that Q is true. So for example:

If today is Tuesday, then tomorrow will be Wednesday

Today is Tuesday

Therefore tomorrow will be Wednesday

With modus tollens, if we know P implies Q, and we know that Q is false, we can infer that P is not true. So for example:

If today is Tuesday, then tomorrow will be Wednesday

Tomorrow will not be Wednesday

Therefore today is not Tuesday

In the first case, we are affirming the antecedent, and in the second case, we are denying the consequent. That is to say, with modus ponens we are affirming that today is Tuesday, meaning tomorrow is Wednesday. With modus tollens we are denying that tomorrow will be Wednesday, meaning today is not Tuesday.

Now, if all argument forms were as valid as that, and if all premises were as unambiguous as that, then there would be no fallacies committed. But that isn't the case, because some argument forms are faulty, and some premises are ambiguous. Consider this common type of error:

If I have flu, then I'll have a runny nose.

I have a runny nose

Therefore I have flu

This is an unreliable argument, because my runny nose may be caused by something else (like hay fever) that is not flu. Or consider this:

If it is midnight, then my watch will say it's midnight

My watch says it's midnight

Therefore it must be midnight

Here we get into difficulties again, because just because my watch says it's midnight doesn't mean it is midnight. My watch might have stopped at midnight, and it may actually be 1am. We can also see a logical fallacy when we consider an inference in the negative sense:

If it is raining, then the street is wet.

It is not raining

Therefore the street is not wet.

Clearly, this is also unreliable, because even though it is not raining, that does not prove that the street will not be wet. It could be that the street is wet due to a burst pipe, or snowfall, or because it was washed by the council.
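Before returning to the two teams, here's a minimal sketch (the code and naming are mine, purely illustrative) that verifies by brute-force truth table which of the four argument forms above are valid:

from itertools import product

def valid(premises, conclusion):
    # An argument is valid iff no assignment of P and Q makes every
    # premise true while making the conclusion false.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda p, q: (not p) or q   # truth-function for "if P then Q"

# Modus ponens: P->Q, P, therefore Q - valid
print(valid([implies, lambda p, q: p], lambda p, q: q))           # True
# Modus tollens: P->Q, not-Q, therefore not-P - valid
print(valid([implies, lambda p, q: not q], lambda p, q: not p))   # True
# Affirming the consequent: P->Q, Q, therefore P - invalid
print(valid([implies, lambda p, q: q], lambda p, q: p))           # False
# Denying the antecedent: P->Q, not-P, therefore not-Q - invalid
print(valid([implies, lambda p, q: not p], lambda p, q: not q))   # False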

So let's look at the above example with our two science teams. We can see that Team A used the modus tollens method whereas Team B used the modus ponens approach. Here's a reminder:

TEAM A: Team A gathered evidence A,B,C,D,E and F and sat down together in a room to discuss all the possibilities. After this, they did a post mortem on the 35 bodies to see if the cause of death matched their hypothesis.

TEAM B: Team B did the post mortems first, found the cause of death as methyl isocyanate poisoning, and then gathered evidence A,B,C,D,E and F to see if the evidence matched their post mortem conclusion.

Personally, I would hire Team B, because the modus ponens method is a better method for confirming a hypothesis in science (modus tollens is better for falsification). Team A made themselves candidates for the affirming-the-consequent fallacy by gathering evidence A, B, C, D, E and F and hypothesising that methyl isocyanate had escaped into the factory's atmosphere. Here they made an educated hypothesis and turned out to be right, but their method may not have accounted for the other possible conclusions that could be reached from evidence A, B, C, D, E and F.

Team B's modus ponens approach means they began with the 'method of affirming', by doing the post-mortems first and finding the cause of death to be methyl isocyanate poisoning. They then gathered evidence A,B,C,D,E and F to see if it matched their post mortem conclusion, which I think is a better way to make inferences. This approach builds a solid foundation by grounding hypotheses in observable evidence. On the other hand, Team A’s approach, where they formed a hypothesis before examining the bodies, had the potential to run into problems if the post-mortem results contradicted their hypothesis. While this approach can work if the evidence supports the initial hypothesis, it is prone to confirmation bias and could lead to ignoring contradictory evidence. Given the foregoing, Team B's approach demonstrates a more reliable method of scientific inquiry, and they are the Team I’d advise hiring.

There are cases when the modus tollens approach gets you to the right answer more quickly. If the hypothesis is that methyl isocyanate escaped into the factory atmosphere and caused the deaths of the 70 people, they could use modus tollens to frame the logic as follows:

P1: If methyl isocyanate escaped into the factory atmosphere, then all the victims' post-mortems should show evidence of methyl isocyanate poisoning.

P2: The post-mortems do not show evidence of methyl isocyanate poisoning.

Conclusion: Therefore, methyl isocyanate did not escape into the factory atmosphere, and it is not the primary cause of the deaths.

This would open the team up to other lines of enquiry.

In conclusion, modus ponens and modus tollens are essential tools in science, each suited to different stages of investigation. Team B's modus ponens approach is ideal for confirming hypotheses based on solid evidence, reducing the risk of confirmation bias. Meanwhile, Team A's modus tollens approach is useful for quickly eliminating incorrect hypotheses.

For the factory investigation, where reliable confirmation of facts is key, Team B's method is preferable. Their approach builds conclusions more efficiently on observable data, making them the better choice for future scientific inquiries that require careful validation - and in seeing that, we have shared a good insight into some of the basics of science.

Stay tuned for part 2: Science & Climate Change Part II: There Is No Single Scientific Method

Sunday, 1 December 2024

Hell And God’s Love


Some of the New Testament language used about hell is unnerving. But I assume God inspired text that reflects hell as an undesirable outcome, as a potential state that is compatible with God’s love and justice, and as a final state that it is in our control to avoid.

If we have the freedom to control our own ultimate destiny, and if every one of God’s relationships with humans is grounded in God’s love for us, then the corollary seems to be that within the total expressions of God’s love for humanity, there is the inclusion of the liberty for us to self-choose hell as our final state, if we so wish.

That’s pretty profound, especially when you consider how we are also grounded in a striving for cosmic justice. It seems that, in the end, those who self-choose hell will be able to have no justified objections about the ultimate justice of their finality. A creature that chooses to place itself in rejection of God’s love, grace, mercy and justice may have no more natural finality than hell, whatever hell turns out to be.

One way to think more tolerantly about the idea of hell is to think that whatever hell is in terms of punishment must be ultimately good for creation as a whole, because everything God does for creation is good. When Satan and his army of demons face their ultimate punishment, I assume most Christians think that those punishments are a positive thing in the creation story, just as a village that locks away a local serial murderer is good for the remaining inhabitants.

Perhaps if we can reconcile the punishment of the dark forces as being good for the creation story as a whole, it might be conceivable that the state of hell for some of the baddest people in creation is good for creation as a whole, especially if hell is a state of self-choosing.

It’s profound to think that when the time comes, everyone in heaven and everyone in hell will be bowing before God and recognising Him as Lord, to His glory (Philippians 2:10-11), although under very different circumstances, of course.

Thursday, 28 November 2024

Voyage Through The Cosmos: Science’s Grand Expedition

 

One of the most elegant aspects of science is how, with mathematical modelling, we can infer universal truths from limited data. Often direct measurement isn't possible, but mathematics provides a reliable and consistent framework for exploring and understanding complex systems. Some things are obviously objective. Take, for example, Newton's F = ma - it is a fact (in macroscopic systems) whether you're a man in Nepal or a woman in Sweden. The reason is that it has nothing to do with subjective opinion: force is equal to the time derivative of momentum, so the relationship between an object's mass m, its acceleration a, and the applied force F is F = ma wherever you are in the world (assuming Euclidean space).

Suppose a crank in Chile decided to opine something different about the quantitative calculations of dynamics, and he came up with a different, but provably wrong, idea about how velocities change when forces are applied. It's an opinion to which he can claim no justifiable entitlement, because objective facts about reality transcend the culture and geography in which they are discovered. The theory of evolution by natural selection is always based on factual accounts of billions of years of biochemical history, irrespective of whether you live in England, India or Brazil.

We know that the nuts and bolts of creation are assessed objectively because the language of mathematics reveals objective truths about physical reality. For example, in the early 1800s, astronomers set out to improve the tables of predictions for planetary positions they had created from Newtonian mechanics by calculating the orbits of planets in relation to their neighbouring planets. At the time, Uranus was the farthest known planet, but its calculations were proving to be inconsistent with those of the rest of the planets in our solar system. One suggestion made by several astronomers was that perhaps Uranus's behaviour showed that Newtonian mechanics is not universal. But with further mathematical calculations, a better proposition was offered; one that demonstrated the predictive power of mathematics.

By taking the discrepancies in the orbit of Uranus, astronomers were able to speculate about the possibility of a further planet, and use Newtonian predictions to calculate the likely size and location of a possible adjacent planet that would explain the anomalies in Uranus's behaviour. Using Newtonian mechanics, we could predict the potential whereabouts of a further planet (which we would later call 'Neptune') simply by assessing the behaviour of Uranus, and that is what we did.

For another way to capture the essence of how mathematics allows us to extrapolate from what we can measure to what we cannot directly observe, let’s return to Newton’s second law of motion:

“A body experiencing a force F experiences an acceleration a related to F by F = ma, where m is the mass of the body.”

Force is equal to the time derivative of momentum, so the relationship between an object's mass m, its acceleration a, and the applied force F is F = ma, where acceleration and force are vectors, and the direction of the force vector is the same as the direction of the acceleration vector. This enables us to make quantitative calculations of dynamics, and measure how velocities change when forces are applied.
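As a tiny worked sketch of that vector relation (the numbers are chosen arbitrarily, purely for illustration):

import numpy as np

# F = m*a as a vector relation: given a force and a mass, recover the
# acceleration, which points in the same direction as the force.
m = 2.0                           # kg
F = np.array([6.0, 0.0, -4.0])    # N, arbitrary direction
a = F / m                         # m/s^2
print(a)                          # [ 3.  0. -2.]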

Newton's laws were formulated from observations made on local objects - for example, dropping objects from high places, calibrating the acceleration of gravity for a falling object, observing motional trajectories, and looking at planetary positions. Newton's laws are formulated as universal statements, yet we infer their universality from what we observe locally (although this isn't an irrefutable claim).

When Newton gave the formula for gravitational force, he claimed the law to be true for any two masses in the universe. But what warrants that leap of induction, and how would one develop certainty about the universality of it? For example, one doesn't directly observe the force of gravity between the earth and the moon - it is evidenced from things like tidal effects, lunar orbit and satellite measurements. Yet we gain evidence for scientific statements that are universal and cannot be measured directly. Mathematical models rely on established principles and constants (like the gravitational constant) that have been empirically derived.

What is required is a description in terms of mass and distance – this gives us the force. However, we cannot directly measure the force between the earth and any object that we cannot weigh on a scale. We can weigh any easy-to-handle localised object (a football, a snooker ball, a cannonball, etc.) and determine the attractive force of gravity between the earth and that object, but we cannot do this with the moon. What we can do, in the absence of being able to hold the moon in our hand, is work it out with simple mathematics, inferring the force through indirect measurements and applying Newton's laws to predict trajectories.

F = G*M(moon)*M(earth)/distance^2 where G is the gravitational constant, 6.67x10^-11 m^3/kg/s^2, the mass of the earth is 6x10^24 kg, the mass of the moon is 7.3x10^22 kg, and the distance between them is about 3.84x10^8 m.

The force can be worked out because gravitational attraction decreases with the square of the distance: measure the distance between the earth and the moon (it varies, but the average is approximately 239,000 miles), measure the earth's radius, and divide the earth's radius into that distance - the moon sits roughly 60 earth radii away, so gravity at the moon's distance is weaker than at the earth's surface by the square of that figure (about 3,600). Using mathematics we have accomplished something that we couldn't achieve with physical testing.
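Plugging the quoted figures straight into the formula - a minimal sketch using only the values given above:

# F = G * M(moon) * M(earth) / distance^2, with the values quoted above.
G = 6.67e-11      # m^3 kg^-1 s^-2
m_earth = 6e24    # kg
m_moon = 7.3e22   # kg
d = 3.84e8        # m, average earth-moon distance

F = G * m_earth * m_moon / d**2
print(f"{F:.2e} N")   # about 2.0e20 newtons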

Not only does a scientific theory work best when, in the Popperian sense, it is formulated such that it produces highly falsifiable implications; one must also be able to distil from the theory a vast nexus of predictability – in the case of Newton's laws, a web of implications about the behaviour of all masses under forces, including gravity*.

Given that we can distil from this theory a vast nexus of predictability, we can infer this web of implications from the mathematics underpinning the law – we do not have to put a body in various regions of space and repeat-test the theory. We cannot, of course, put the moon on a set of scales, but we do not have to - there are easier methods. We know that the only possible closed orbit under Newton's laws is an elliptical one, and we also know that the stronger a body's gravity, the faster a satellite must move to maintain a given orbit. So by sending a satellite to orbit the moon, we can measure the moon's mass quite accurately – something Newton couldn't have done in his day, of course.
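Here's a minimal sketch of that "weighing by orbit" idea; the orbital radius and period below are illustrative stand-ins, not mission data:

import math

# For a circular orbit, T = 2*pi*sqrt(r^3 / (G*M)), so measuring a
# satellite's orbital radius r and period T lets us solve for the
# moon's mass M.
G = 6.67e-11    # m^3 kg^-1 s^-2
r = 1.84e6      # m - roughly 100 km above the lunar surface
T = 7070.0      # s - an orbital period of about 118 minutes

M = 4 * math.pi**2 * r**3 / (G * T**2)
print(f"{M:.2e} kg")   # about 7.4e22 kg, matching the figure quoted earlier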

Nowadays, we can even work out the effect the moon has on our seas and calculate its mass from that, though there would be greater margins of error if this were the only method we had. In the past, the moon and the earth were closer together (they are moving apart at a rate of about 3cm per year), so the gravitational force would once have been much stronger. Simple calculations tell us that, if that were the case, the tides would have been higher than they are now. Once again, levels of consistency are found in such theorising; geologists frequently find fossilised tidemarks demonstrating that tides were higher in the past – and of course, subject to other earthly consistencies, future tides should become lower as the earth and moon separate further.

Given the position of an orbiting body at two points in time, Newton's laws will also tell us where that object will be at any point in the future. The better a theory, the greater its predictive value, in so far as it produces accurate and useful forecasts that one can anticipate, test and then verify or falsify. With theories such as motion, gravity and evolution, our predictions are always confirmed with localised evidence and simple mathematical equations. In the case of Newton, all the orbits we observe are forbidden from departing from the predictions and implications of his laws.

However, Newton's laws did run into trouble in the late 1800s, when Maxwell's theory of electromagnetism was propounded, describing all electromagnetic phenomena and predicting the existence of electromagnetic waves. The electromagnetic field is a field that exerts a force on charged particles. Naturally, the presence and motion of such particles affects the outcome. Once it was discovered that a changing magnetic field produces an electric field, and that a changing electric field generates a magnetic field, we had discovered electromagnetic induction - a discovery that laid down the foundations for the vast array of electronic innovations (generators, motors and transformers) that followed.

Again, the predictive value here is essential - there must be uniformity and regularity for such endeavours to occur. The theoretical implications of electromagnetism brought about the development of Einsteinian relativity, from which it was evident that magnetic fields and electric fields are convertible with relative motion – that is, the perception of these fields changes depending on the observer's frame of reference, with electric fields transforming into magnetic fields and vice versa depending on the relative motion of the observer and the source. This allowed us (among other things) to correctly predict how the force required to accelerate a particle grows without bound as it approaches the speed of light (and it led us further to the knowledge that space-time does not quite correspond to our Euclidean intuitions, nor to our intuitive view of past, present and future).

The electromagnetic force is one of the four fundamental forces of nature - the electromagnetic field exerts a fundamental force on electrically charged particles. Add to this the other fundamental forces - the strong and weak nuclear forces (the former is what holds atomic nuclei together) and the aforementioned gravitational force - and we have the four fundamental forces of nature, from which all other correlative forces (friction, tension, elasticity, pressure, Hooke's law of springs, etc.) are ultimately derived. Aside from gravity, the electromagnetic force governs all the phenomena encountered in daily life - all the objects we see in our day-to-day lives consist of atoms containing protons and electrons, on which the electromagnetic force is acting. The forces involved in interactions between atoms can be traced to the electric charges occurring inside the atoms.

But even though Newton's laws were improved upon, they were still good approximations to reality. Newton's universe was a mechanical universe, which has been supplemented by the likes of Maxwell, Einstein, Schrödinger, and Heisenberg, who themselves laid down the foundations for all the 20th-century physics and cosmology that was to come. Newton appeared to be right for over three hundred years, but 19th-century discoveries caused us to reassess his theories and, in this case, augment them.

The two main measures we have of a theory's veracity are the ability to make accurate predictions from it, and the localised evidence for it. As Newton has shown us, all scientific theories are only approximations of what is really at the heart of a complex nature. Approximations are not necessarily inaccurate; they are simplified models that apply under certain conditions. Newton's laws still work in non-relativistic situations (that is, at speeds much less than the speed of light), but Einstein's theories work for both non-relativistic and relativistic situations. Einstein, Maxwell, Schrödinger, Heisenberg and every subsequent physicist and cosmologist owe Newton a great debt - we are observing that science is progressive and that theories are there to be built upon.

Once a theory is reached that reconciles quantum mechanics and general relativity, we may find that Einsteinian relativity in its current form comes to be viewed as inadequate. Just as special relativity provided a framework that included both Newtonian mechanics (as an approximation at low speeds) and Maxwell's equations, demonstrating how they coexist in a broader relativistic context, so too a future theory (perhaps even a theory of multiple dimensions) will almost certainly resolve the current tension between quantum mechanics and general relativity. Any such unifying theory would have to be consistent with the separation of scales between the two ends of the spectrum - the quantum effects that mostly concern the "very small" (that is, objects no larger than ordinary molecules) and the relativistic effects that mostly concern the "very large". Aspects of string theory, superstring theory and quantum gravity suggest progress, but the grand theory that unifies quantum mechanics with general relativity eludes us at present.

Applying this to epistemology, we can see that all these theories provide provisional approximations of nature that produce highly accurate and useful predictions, but that by themselves they do not encapsulate a self-consistent whole. Newton's approximations are excellent at slow speeds, but as we approach the speed of light the discrepancies start to show, and we require what is called a "relativistic correction" to Newton's predictions. And although we do not accept scientific theories as the final word on a cosmic universal truth, the fact that mathematics lets us test their implications across the whole universe means that, in an important sense, we possess greater degrees of certainty about the universal levels than we do about many local ones.
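
The size of that correction is easy to see numerically. This short sketch computes the Lorentz factor, the standard multiplier by which relativistic predictions depart from Newtonian ones (the sample speeds are illustrative):

```python
# The Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2): the multiplier by which
# relativistic predictions depart from Newtonian ones. It diverges as v -> c.
import math

def gamma(beta):
    """beta is the speed as a fraction of the speed of light, v/c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

for beta in (0.001, 0.1, 0.5, 0.9, 0.99, 0.999):
    print(f"v = {beta:>6}c   gamma = {gamma(beta):10.4f}")
```

At everyday speeds gamma is indistinguishable from 1, which is precisely why Newton's approximations serve so well locally.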

If we were merely non-mathematical creatures relying on local evidential observations, the best we could do is simple deduction with some further intrepid attempts at induction. But with laws, axioms and the vast nexus of contingency that is woven into the mathematical fabric, we can construct grand theories at the universal level that we know will apply at the local level too. Because of our mathematical fecundity we have made predictions about, and found consistency in, masses, motion and forces, and we are as certain about these as we are about most localised discoveries.

There is an important balance to be struck between the broad applicability and predictive power of universal laws and the localised contexts in which they operate. For example, Newton's law of universal gravitation applies to all objects - it applies equally to planets, apples and snooker balls. But when observing a snooker ball on a table, local factors such as friction, air resistance, and imperfections in the surface of the baize can introduce uncertainties. These local conditions can make precise predictions about the ball's motion more challenging than applying the general law to predict gravitational interactions between celestial bodies. The cosmological model of the Big Bang provides a framework for understanding the cosmic narrative of the universe as a whole, predicting phenomena like the cosmic microwave background radiation and the large-scale structure of the universe. But at the local level, when trying to model the formation of a specific star or planet, numerous local variables (such as the presence of nearby stars, gas density, and magnetic fields) can introduce complexities and uncertainties that unsettle the potential accuracy of predictions. Likewise, the laws governing radioactive decay are consistent and can be applied with high accuracy over large populations of atoms (for example, in calculating the half-life of a particular isotope), but predicting the exact moment when a specific atom will decay is inherently uncertain due to quantum mechanics. These local uncertainties do not detract from the reliability of the universal laws; they simply illustrate how local predictions can be less reliable despite the overall framework being robust.
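
The radioactive decay case is simple enough to simulate. Here is a hedged sketch (carbon-14's half-life is used purely as an illustrative number):

```python
# Universal law, local uncertainty: each atom's decay time is a random draw,
# yet the population-level decay curve is highly predictable. Carbon-14's
# half-life is used purely as an illustrative number.
import math
import random

half_life = 5730.0                   # years
tau = half_life / math.log(2)        # mean lifetime

# One specific atom: inherently unpredictable.
print(random.expovariate(1 / tau))   # a single random decay time, in years

# A large population: the surviving fraction after one half-life reliably
# converges on 0.5, within sampling error.
n = 100_000
survived = sum(random.expovariate(1 / tau) > half_life for _ in range(n))
print(survived / n)                  # ~0.5
```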

Science may not provide us with all the answers, but its rewards are evident in the human progress it has ushered in; science by its very definition should always lead to progression, and every Kuhnian paradigm shift ought to qualitatively supersede the last. It is easy to look back into history and be under the illusion that many of these advancements were quick and easy, but they were not. Einstein's relativistic refinement of classical mechanics, needed to accommodate Maxwell's electromagnetism, was anything but swift, yet the retrospective viewpoint may give us the illusion that these transitions were smooth. When one thinks of the many other transitions - not just from Newton, Maxwell and Faraday to Einstein, Schrödinger and Heisenberg, but from the Ptolemaic cosmological view to the Copernican view; from classical mechanics to quantum mechanics; from Becher's phlogiston theory to Lavoisier's oxygen theory of combustion and caloric theory of heat, right through to the science of thermodynamics; from Lamarckian inheritance to Darwinian natural selection and the reconsideration of Lamarck's ideas in 'epigenetics', which identifies possible inheritances of acquired traits - what these shifts (and many others) ought to tell us is that we are always in transition and ought to be prepared for black swans and new knowledge that will augment our present foundations.

Tuesday, 26 November 2024

Four Steps To Sharpen Your Critical Thinking


As uncomfortable as it is, I'm afraid it is true to say that most people are not very competent at critical thinking. Even people I love and value dearly frequently make attempts at critical thinking that range from somewhat inadequate to absolutely hopeless. If you're curious about where you stand on this matter, I can offer some quick-fix methods for sharpening your critical reasoning skills in a matter of minutes.

If you get these first two steps right, it’ll eliminate maybe 80-90% of your errors:

Step 1
Define your terms precisely, and be clear about what you’re trying to conclude or understand. Unless you get step 1 right you can’t be sure you’re on the path to understanding the situation or solving the right problem. 

Step 2
It's essential to understand that an argument becomes irrational not simply by containing fallacies but by relying on them to reach its conclusion. If a person's argument can stand on its own without the fallacious reasoning, then the argument is rational even if fallacies are present - the conclusion itself is not dependent on those errors in reasoning. This matters because in many discussions people inadvertently include fallacious reasoning, yet their overall argument remains logically sound without it. A rational belief or argument, when stripped of its fallacies, should still have evidence or logical reasoning that makes it probable or plausible. Conversely, an argument that collapses when its fallacies are removed is highly likely to be defective, reflecting an irrational position. Awareness of this is an effective way to discern whether an argument is grounded in logic or only appears plausible to you due to fallacies you're not noticing. Rationality hinges on whether the conclusion can stand on evidence and reason without the fallacies, and this distinction helps avoid what's called the 'fallacy fallacy' (the mistaken belief that an argument containing any fallacy is thereby entirely invalid). Here are a couple of examples.

Example 1
Argument: Exercise leads to a longer life.

Fallacious reasoning: Everyone who lives past 90 exercises regularly.

Conclusion: This line of reasoning obviously involves a hasty generalisation, because it exaggerates the role of exercise in longevity without considering other factors. However, extensive evidence does support that regular exercise can improve lifespan and quality of life, meaning the argument is sound without needing to rely on this fallacy.

Example 2

Argument: Reducing government regulations helps businesses grow and stimulates the economy.

Fallacious reasoning: All regulations are bad because they restrict growth and freedom.

Conclusion: Not all regulations are bad; some help promote growth and freedom. However, targeted deregulation often correlates with reduced costs and increased flexibility for businesses, which can stimulate growth - so the argument can stand without the fallacious premise.

These examples show how fallacies can be present in an argument without making it irrational, as long as the argument’s core conclusions can stand without relying on these flawed reasoning paths. Conversely, an argument that collapses entirely without its fallacies (such as those using only exaggerated fears or appeals to emotion) would lack rational basis.
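
For the computationally inclined, the dependence test can be pictured as a toy procedure: flag the fallacious premises, remove them, and ask whether the conclusion still has support. This is only a schematic sketch - real arguments don't reduce to lists of premises, and every name and premise in it is illustrative:

```python
# A schematic toy, not a real logic engine: an argument 'depends on' its
# fallacies only if removing the fallacious premises leaves the conclusion
# unsupported. All names and premises here are illustrative.

def stands_without_fallacies(premises, still_supported):
    """premises: list of (text, is_fallacious) pairs; still_supported: a
    caller-supplied judgment on whether the remaining premises suffice."""
    sound = [text for text, fallacious in premises if not fallacious]
    return still_supported(sound)

# Example 1 above: drop the hasty generalisation and the conclusion
# still rests on independent evidence.
premises = [
    ("Everyone who lives past 90 exercises regularly", True),    # fallacious
    ("Large studies link regular exercise to longer lifespans", False),
]
print(stands_without_fallacies(premises, lambda ps: len(ps) > 0))  # True
```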

With a clear understanding of fallacies, let’s move to examining assumptions - which is another key part of critical thinking:

Step 3
Identify the fundamental assumptions underlying your initial beliefs or arguments, and question their validity. Assumptions are often invisible, but they usually heavily influence conclusions. By scrutinising and testing them, you reduce the chance of being blindsided by hidden biases, or of being led astray by faulty underlying assumptions.

Step 4
Do your best to present an honest, well thought through set of counter-arguments – as if you were arguing for the view you claim to reject. Not only does this add the finishing touches to steps 1-3, it also sharpens your critical reasoning skills, and invites you to either reconsider your own position, or strengthen it further if it can’t easily be counter-argued.

In summary, the core guidance is to define your terms and arguments clearly, understand that you’re in need of a re-think if the argument depends on fallacies to be convincing, look for underlying assumptions that might be skewing or undermining your argument, and be thoroughly cognisant of all the best counter-arguments to your position. By following these steps, you’ll develop a more rigorous approach to evaluating arguments. 


Sunday, 24 November 2024

A Profound Observation About Love

 

There's a really profound thought about love, from the psychologist Erich Fromm - he says:

“If I truly love one person I love all persons, I love the world, I love life. If I can say to somebody else, "I love you," I must be able to say, "I love in you everybody, I love through you the world, I love in you also myself.”

Fromm meant that to love someone fully is to embrace their interconnectedness with others, in recognition of a kind of shared humanity - which I agree is true, but I think it's even deeper than that. If it's true that "if we truly love one person then we love all persons", then the corollary gives us two even deeper truths: 1) that we have to love all persons before we can love our beloved in the way God intends, and 2) that if we don't love all persons, then we don't love our beloved in the way that God intends, which means we love them inadequately, and not at the level required to be the greatest blessing to them.

That is complex, and takes a lot of unpacking, but if it's correct - and I can conceive of the possibility that, at the Divine level, it probably is correct - then it's a simultaneously challenging and wonderful thing to comprehend. The capacity to love universally (to love one's neighbour as oneself) reflects emotional maturity in understanding that love is not confined to people who offer individual benefit, but is an expansive force that spreads across humanity, finds its origin in God, and is revealed through Christ. And one doesn't have the maturity or capacity to love a beloved as required by God unless one has the maturity and capacity to love universally.

In understanding what is required to truly love a beloved, one comprehends universal love regarding everyone's infinite value; and in comprehending universal love regarding everyone's infinite value, one understands Christ's call to love our beloved Divinely as He loves all of us. The truest love for a beloved grows from an ability to see them not only as an individual but as part of a greater humanity. And the more we embrace a Christ-like, universal love, the more completely we can love our beloved - not only through the deeply intimate connection of exclusive, romantic love, but also through their connection to everyone else in the world.

Wednesday, 20 November 2024

The Science of Your Political Soapbox

 

Another of the most important things we've learned in the past 20-30 years about humans as political thinkers is just how much the left and right are genetically predisposed to their beliefs. In other words, when someone annoys you with their dodgy, ill-conceived political opinions, seek solace in the fact that they often can't help it, because a significant part of what they think is likely to be ingrained in their mental hardware.

Our moral judgments arise from a set of psychological foundations shaped by evolution to help us thrive in social settings - and there is strong evidence that left and right wing adherents tend to prioritise these moral foundations differently. I said in a recent blog post that leftists tend to more strongly emphasise values like care/harm and fairness/reciprocity, while conservatives consider a broader array of moral considerations - adding loyalty, authority, and sanctity to their core concerns. Personality traits, such as openness to experience and conscientiousness, are partly heritable, and they also correlate with political orientation: openness to experience is associated with liberal or left-leaning views, and conscientiousness with conservative or right-leaning views. This is especially backed up by twin studies, which have shown that identical twins (who share the same genes, of course) are more likely to have similar political views than fraternal twins, who share only about half their genes (I say "about half" because the specific combination of DNA inherited by each sibling is random, which leads to slight variations around the 50% average). These findings suggest that genetics potentially accounts for around 30-40% of the variation in political attitudes, with environmental and cultural factors (like upbringing, life experiences, and social influences) making up the rest.
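
For a sense of where figures like 30-40% come from, twin researchers often use Falconer's classic formula, which estimates heritability as twice the difference between identical-twin and fraternal-twin correlations. The sketch below uses made-up correlations purely to show the arithmetic:

```python
# Falconer's classic twin-study estimate: heritability h^2 = 2 * (r_mz - r_dz),
# where r_mz and r_dz are trait correlations for identical (monozygotic) and
# fraternal (dizygotic) twin pairs. The correlations below are made up purely
# to show the arithmetic, not taken from any real study.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    return 2.0 * (r_mz - r_dz)

r_mz, r_dz = 0.62, 0.44  # hypothetical twin correlations in political attitudes
print(falconer_h2(r_mz, r_dz))  # 0.36 -> ~36% of variation, in the 30-40% band
```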

Don't get me wrong, political beliefs are not fixed and unchangeable - there is a complex interplay between genes and environment - and all dubious political views have the potential to be revised with better reasoning and stronger empirical analysis. But, given that these differences in moral priorities appear to have a heritable component, where genetics predisposes people to certain orientations and beliefs, it ought to make us wiser in how we discuss politics - and also encourage us to take the political rants we see with a huge pinch of salt. In fact, when we see our friends waxing lyrical about politics and social justice online, perhaps we can amuse ourselves with the thought that they may have as little control over these views as they do over their preference for spicy food or their fear of heights.
