The proposition is mathematically subtler than either side allowed. If A is more likely given B than given not-B, then, under standard probability assumptions, B really is more likely given A than given not-A - the direction of the inequality survives the flip. What does not carry over is the size of the probability: “B is more likely given A” does not mean “B is likely given A.” I think that distinction was at the heart of why Matthew and Matt talked past each other on this (see footnote*).
The footnote elaborates on why “Just because A is likelier given B than not-B, that doesn’t mean B is likelier given A than not-A” fails as a blanket claim, but it’s easy to think of very simple examples that warn against hastily flipping the magnitudes of conditional probabilities. For example, A (smoke) is far more likely if B (a house fire) happens, yet seeing smoke does not make a fire probable - it only nudges the probability upwards from a very low baseline, because house fires are quite rare and other sources of smoke are common. Or an even more everyday example: A (clouds) is more likely if B (it’s raining) is true, yet the presence of clouds does not make rain probable, because clouds often appear without rain. In both cases the evidence points in the right direction; it just may not point very far.
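To make the base-rate point concrete, here is a minimal sketch in Python - the numbers (a 1-in-1,000 chance of a house fire, and so on) are invented purely for illustration:

```python
# Toy Bayes update for the smoke/fire example.
# All numbers are illustrative placeholders, not real statistics.
p_fire = 0.001                 # prior: house fires are rare
p_smoke_given_fire = 0.95      # smoke is very likely during a fire
p_smoke_given_no_fire = 0.05   # barbecues, toasters, morning mist...

# Total probability of seeing smoke (law of total probability).
p_smoke = (p_smoke_given_fire * p_fire
           + p_smoke_given_no_fire * (1 - p_fire))

# Bayes' theorem: P(fire | smoke).
p_fire_given_smoke = p_smoke_given_fire * p_fire / p_smoke

print(f"P(fire)         = {p_fire:.4f}")              # 0.0010
print(f"P(fire | smoke) = {p_fire_given_smoke:.4f}")  # ~0.0187
```

The update goes in the right direction - seeing smoke multiplies the probability of fire almost twenty-fold - yet the posterior is still under 2%, because the prior was so low to begin with.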
Let’s move on to apply this to God. But first, here is an easy example from the dice-rolling scenario the debaters used. Imagine you’re rolling two fair six-sided dice. Let’s define two events: Event A is that the total of the two dice is 7, and Event B is that the first die shows a 3. Now, if you know the first die is a 3, the only way to get a total of 7 is for the second die to be a 4. Since the second die is fair, there’s a 1 in 6 chance of that happening. So the chance of getting a total of 7 given that the first die is a 3 is 1 in 6. Now suppose the first die is not a 3. That leaves 30 possible combinations (the first die could be 1, 2, 4, 5, or 6, giving 5 × 6 = 30 possible outcomes). Out of those, there are 5 that total 7: (1,6), (2,5), (4,3), (5,2), and (6,1). So the chance of getting a total of 7 given that the first die is not a 3 is 5 in 30 - also 1 in 6. This means the two events are independent: since A (getting a 7) is equally likely whether or not B (the first die being a 3) is true, knowing that A happened tells us nothing about whether B is more or less likely, and the probability of B stays at its prior of 1 in 6. Notice that the symmetry holds here too - equal likelihoods in one direction give equal likelihoods in the other, just as a strict inequality in one direction gives a strict inequality in the other.
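If you would rather not trust the arithmetic, a quick brute-force check in Python confirms it:

```python
from itertools import product
from fractions import Fraction

# All 36 equally likely outcomes of two fair six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event, given=lambda o: True):
    """Conditional probability of `event` given `given`, by counting outcomes."""
    pool = [o for o in outcomes if given(o)]
    return Fraction(sum(1 for o in pool if event(o)), len(pool))

A = lambda o: o[0] + o[1] == 7   # event A: the total is 7
B = lambda o: o[0] == 3          # event B: the first die shows 3

print(prob(A, given=B))                    # 1/6
print(prob(A, given=lambda o: o[0] != 3))  # 1/6
print(prob(B, given=A))                    # 1/6 -- unchanged from...
print(prob(B))                             # 1/6 -- ...the prior
```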
Now we’ve cleared that up, let’s go back to the initial question. John walks up to Jack and sees two dice showing double sixes on the table. Is it more probable that Jack placed them like that or rolled them? On the surface, the answer seems obvious - it’s more probable that he placed them. Here’s why. The probability of rolling two sixes in one roll is 1 in 36 (1/6 × 1/6). But if Jack intentionally placed them, the probability of seeing double sixes is 1, because he could simply set them that way (assuming that if he placed them at all, double sixes is what he would choose). So the evidence (double sixes) is 36 times more likely under the “placed” hypothesis than under the “rolled” hypothesis.
Doesn’t that mean we should believe Jack placed them? No, not quite. As any good Bayesian will tell you, you don’t get to a posterior belief without a prior. In this case, we must also consider: how likely is it in general that Jack would place the dice instead of rolling them? What if Jack is known to always play fair? What if he’s never been seen to manipulate dice, and we have no prior evidence suggesting deception? What if Jack is just a thoroughly honest and decent guy all round, with almost unimpeachable character and integrity? Then our prior probability that Jack placed the dice may be extremely low - say, 0.001. And despite the evidence (double sixes) being better explained by “placed,” the posterior probability may still favour “rolled” unless the likelihood difference is sufficient to overcome the low prior. Here, posterior probability means your updated belief in a hypothesis after taking the evidence into account: you need likelihood × prior (suitably normalised) to determine it. This is why Bayesian reasoning is so powerful - it respects both what you believed before and what you just learned, combining them to form a rational belief.
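Putting numbers on that (a sketch using the 0.001 prior assumed above):

```python
# Posterior probability that Jack placed the dice, given double sixes.
p_placed = 0.001                  # prior: Jack is famously honest
p_rolled = 1 - p_placed

p_evidence_given_placed = 1.0     # if placed, double sixes are certain
p_evidence_given_rolled = 1 / 36  # fair roll of two dice

# Bayes' theorem: likelihood x prior, normalised over both hypotheses.
numerator = p_evidence_given_placed * p_placed
p_placed_given_evidence = numerator / (
    numerator + p_evidence_given_rolled * p_rolled)

print(f"P(placed | double sixes) = {p_placed_given_evidence:.3f}")  # ~0.035
```

A 36-fold likelihood advantage lifts “placed” from 0.1% to about 3.5% - a substantial update, but nowhere near enough to overturn the prior, so “rolled” remains the rational verdict.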
As I’ve written in previous posts and articles, this is the right way to consider the probability of Christianity being true. The existence of a finely tuned, life-permitting universe has often been likened to seeing dice all land on six (although in this case, across an entire world of casinos). The physical constants appear finely calibrated, as though set by a cosmic Mind of astronomical purpose and intelligence. But we should proceed with caution there: even if this state of the universe is vastly more probable given the hypothesis of God than under a godless process, arriving at a reasonable posterior belief in God also requires a defensible prior - and specifying the Christian God requires a good deal more still.
You may recall the probabilistic model proposed in my Mathematical Bias Theory, in which I framed the universe’s apparent structure in terms of a “biased random walk.” As outlined in that blog post, imagine all possible physical configurations forming a vast probability space; under a neutral “random walk” through that space, the high order we see before us would be astronomically unlikely. But given a mathematical bias towards law-like coherence - a structure that systematically tilts the cosmic narrative toward order - such order becomes expected rather than freakish, and that bias, I argued, suggests our universe is the product of an ingenious Cosmic Mathematician (God).
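Purely as a toy illustration - the one-dimensional walk, the bias value, and the “ordered band” below are inventions for this sketch, not the actual model from that post - here is how even a modest bias changes what a random walk typically produces:

```python
import random

def walk(steps, bias=0.5, seed=0):
    """1-D random walk; `bias` is the probability of stepping towards 0."""
    rng = random.Random(seed)
    pos = 0
    for _ in range(steps):
        towards_zero = -1 if pos > 0 else 1
        pos += towards_zero if rng.random() < bias else -towards_zero
    return pos

def ordered_fraction(bias, trials=1000, steps=300, band=5):
    """Fraction of walks that end inside a narrow 'ordered' band around 0."""
    return sum(abs(walk(steps, bias, seed=i)) <= band for i in range(trials)) / trials

print(ordered_fraction(bias=0.5))  # neutral walk: most walks drift far from 0
print(ordered_fraction(bias=0.7))  # biased walk: almost all end near 0
```

A neutral walk wanders; a walk with a built-in pull towards the centre lands in the ordered band almost every time. The analogy is loose, but it captures the claim: bias, not luck, is what makes order typical.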
Apply that to the dice model above. Even if your prior for theism is low, the exquisite degree of structure, order, and life-permitting constants we observe - all of which, on this argument, are vastly more likely under theism than under atheism - should, just as with the dice, increase our posterior belief in God substantially. That would get us to deism or perhaps bare theism, but we must keep going to get to Christianity. To move from theism to Christianity we must go further along a Bayesian staircase - a sequence of probability updates, each supported by its own priors and evidential strength. Factor in the cumulative evidence for Christianity’s truth - Biblical evidence, Christ’s teachings, the credibility of the Resurrection, Biblical prophecies, miracles, healings, personal testimonies, to name just a few - and each piece incrementally increases the plausibility of Christianity, to the point where, I contend, we have every reason to believe it is true.
For some people, Christianity, like the hypothesis that Jack placed the dice, may begin with a low prior. But through the accumulation of evidence, each step increasing the probability that the hypothesis is true, it becomes more and more rational to believe Christianity is the truth. Bayesian reasoning provides a clear path to belief that is proportional to evidence - a climb up a probabilistic staircase, if you like, where every step is supported by reason, evidence, and explanatory power.
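In odds form, the staircase is just multiplication: posterior odds equal prior odds times the product of the Bayes factors for each piece of evidence. Here is a minimal sketch - and to be clear, the Bayes factors below are placeholder numbers chosen for illustration, not estimates of the actual evidential weights:

```python
def climb_staircase(prior_prob, bayes_factors):
    """Multiply prior odds by each Bayes factor, then convert back to a probability."""
    odds = prior_prob / (1 - prior_prob)
    for bf in bayes_factors:
        odds *= bf
    return odds / (1 + odds)

# Hypothetical Bayes factors for successive lines of evidence (illustrative only).
steps = [5, 3, 4, 2]
print(f"{climb_staircase(0.01, steps):.3f}")  # prior of 1% climbs to ~0.548
```

Each factor above 1 is a step up the staircase; whether the climb reaches rational belief depends, as always, on both the starting prior and the strength of each step.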
*Footnote: In case it’s still not clear, let’s break it down further. Remember, it went like this:
Matt: “Just because A is likelier given B than not B, that doesn’t mean B is likelier given A than not A.”
Matthew: “Problem, it does.”
Matthew was right about “more likely”: if A is likelier given B than given not-B, then B is likelier given A than given not-A, and Bayes’ theorem guarantees that the direction of the inequality reverses correctly every time (provided neither event has probability 0 or 1). Where he risked being misled is by the simple examples in which the numbers happen to align, making the two conditional probabilities look interchangeable in size as well as direction. That stronger symmetry doesn’t hold every time; Bayes’ theorem explicitly tells us that you cannot reverse the values of conditional probabilities without re-evaluating all the terms - the base rates especially. Mistaking the numerical alignment in a special case (like fair dice) for a universal rule is a common intuitive trap: it is easy to mistake a coincidence for a law. Bayes’ theorem governs how P(B given A) behaves, and it makes P(B given A) depend on the prior of B, not just on P(A given B). So we have to guard against misapplying conditional probability and assuming the implication carries equal strength in both directions - which it mathematically does not, except under special constraints. It’s a shame, because Matthew has the better arguments generally, but this confusion proved to be an impediment to Matt seeing the broader point - although, personally, I don’t think Matt Dillahunty is very open or particularly teachable.
Let me try to illustrate in simple terms. Take a fair 6-sided die. If I tell Matthew the number is 4, he knows it’s even (a probability of 1). If I say it’s not 4, it’s less likely to be even (2 in 5). Therefore, “even” is more likely when it’s 4 than when it’s not 4. And if I tell Matthew it’s even, there’s a better chance it’s 4 (1 in 3) than if I told him it’s odd (impossible). So the direction is symmetrical, and that much is no coincidence - Bayes’ theorem guarantees it. What fooled the exchange, I suspect, is the further assumption that the strength of the inference is symmetrical too. It is not: there are many situations where A is far more likely when B happens, yet B remains very unlikely even when A happens, as the smoke and cloud examples in the blog post above show. Conditional probabilities are not there to have their values flipped at will.
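You can verify those numbers directly (a quick sketch):

```python
from fractions import Fraction

die = range(1, 7)

def prob(event, given=lambda n: True):
    """Conditional probability over a fair die, by counting."""
    pool = [n for n in die if given(n)]
    return Fraction(sum(1 for n in pool if event(n)), len(pool))

even = lambda n: n % 2 == 0

print(prob(even, given=lambda n: n == 4))   # 1   -- even is certain given a 4
print(prob(even, given=lambda n: n != 4))   # 2/5
print(prob(lambda n: n == 4, given=even))   # 1/3 -- likelier than...
print(prob(lambda n: n == 4,
           given=lambda n: n % 2 == 1))     # 0   -- ...given odd
```

Both inequalities point the same way, as the theorem says they must; what differs is how far each one moves the probability.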
Edit to add: I think the distinction we must make is between more likely and guaranteed. The statement “the fact that X is more likely on Y means that the existence of X makes Y more likely” is true under standard probability assumptions (neither event having probability 0 or 1). This is because Bayes’ rule directly relates the two conditional probabilities: if observing Y increases the probability of X, then observing X also increases the probability of Y - though the strength of this effect depends on the base rates of X and Y. So, while this implies neither certainty nor causation, it does imply a directional probabilistic dependence, making the original statement valid in the probabilistic sense, but not logically or causally guaranteed.
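For readers who want the algebra, here is a minimal sketch of why the direction always survives the flip, assuming 0 < P(A) < 1 and 0 < P(B) < 1:

```latex
\begin{align*}
P(A \mid B) > P(A \mid \neg B)
  &\iff P(A \mid B) > P(A)
    && \text{($P(A)$ is a weighted average of the two)} \\
  &\iff P(A \cap B) > P(A)\,P(B)
    && \text{(multiply both sides by $P(B)$)} \\
  &\iff P(B \mid A) > P(B)
    && \text{(divide both sides by $P(A)$)} \\
  &\iff P(B \mid A) > P(B \mid \neg A)
    && \text{(the same weighted-average argument, applied to $B$)}.
\end{align*}
```

Note what the chain does not deliver: the size of P(B given A). That still depends on the base rates - which is why smoke rarely means fire, even though it always points towards it.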