One of the great insights of Bayes’ Theorem is the gradation of belief. This is in fact not how most people intuitively reason! Most people have this intuitive feeling of black-and-white, zero-or-one, believe-or-don’t-believe. When they’re thinking about something they *want* to believe, they think “Does the available evidence *allow* me to believe it?” and when they’re thinking about something they *don’t*, they think “Does the available evidence *force* me to believe it?”

But if you adopt a more consistent form of reasoning, one that follows Bayes’ rules, then that “binariness” disappears immediately. And one of the simplest and most direct consequences of that is that no proposition you reason about can ever become completely certain. In other words, there are (almost; I’ll get to it in a bit) no propositions $A$ such that $P(A|X) = 1$ or $P(A|X) = 0$.

To see that, let’s take a look at the product rule. For any two propositions $A$ and $B$, while reasoning on background information $X$:

$$P(AB|X) = P(A|BX)\,P(B|X) = P(B|AX)\,P(A|X)$$
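As a quick sanity check, the product rule can be verified numerically on a toy joint distribution (all the numbers here are made up purely for illustration):

```python
# Toy joint distribution P(A, B | X) over two binary propositions.
joint = {
    (True, True): 0.30,
    (True, False): 0.45,
    (False, True): 0.05,
    (False, False): 0.20,
}

p_ab = joint[(True, True)]                        # P(A·B|X)
p_b = joint[(True, True)] + joint[(False, True)]  # P(B|X), marginalizing out A
p_a_given_b = joint[(True, True)] / p_b           # P(A|B·X)

# The product rule: P(A·B|X) = P(A|B·X)·P(B|X).
assert abs(p_ab - p_a_given_b * p_b) < 1e-12
```

Any joint distribution would do; the identity holds by the very definition of conditional probability.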

Let’s show that taking any proposition to absolute certainty leads us to nonsensical results.

Suppose we want to reason about the propositions $A$ = “The sky is blue.” and $B$ = “I see the sky being green.”

If I’m absolutely sure that the sky is blue, that means $P(A|X) = 1$, right? Furthermore, there are a few other conclusions. Reasoning naïvely, if the sky is blue, I couldn’t possibly be seeing it green; conversely, if I see it as green, it couldn’t possibly be blue. Therefore, $P(B|AX) = P(A|BX) = 0$, and the product rule just says that $0 = 0$, and I learn nothing. Everything is undefined.

Let’s reason in a more mature way. It’s *not*, in fact, true that it’s impossible for me to see a green sky under the hypothesis that it’s blue, nor that it be blue under the hypothesis that I’m seeing it green. Maybe I’m wearing yellow-coloured glasses. Maybe I have a rare genetic condition. So let’s say that both $P(B|AX)$ and $P(A|BX)$ are nonzero but very small. In that case, we can use Bayes’ Rule:

$$P(A|BX) = \frac{P(A|X)\,P(B|AX)}{P(B|X)}$$

However, we can reexpress the denominator using the law of total probability:

$$P(A|BX) = \frac{P(A|X)\,P(B|AX)}{P(B|AX)\,P(A|X) + P(B|\bar{A}X)\,P(\bar{A}|X)}$$

If I’m positively certain that the sky is blue, then $P(A|X) = 1$ and $P(\bar{A}|X) = 0$, and the above is just:

$$P(A|BX) = \frac{P(B|AX)}{P(B|AX)} = 1$$

What does that mean? It means that, if you’re *absolutely certain* of a proposition, then nothing you could possibly observe would ever change your mind about it. You’d just find new explanations – maybe you developed a rare eye disease while you were asleep, maybe someone is pranking you, maybe you’re hallucinating – but you’d never ever conclude that you were wrong about the sky being blue. $0$ and $1$ are not probabilities, in the same way that $-\infty$ and $+\infty$ are not real numbers (in fact, that analogy is actually exact).
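The derivation above is easy to check numerically. Here’s a minimal sketch with invented likelihoods: seeing a green sky is very unlikely if the sky is blue, and fairly likely if it is not.

```python
def posterior(prior, p_b_given_a, p_b_given_not_a):
    """Posterior P(A|B·X) by Bayes' Rule with the denominator expanded:
    P(A|X)·P(B|A·X) / (P(B|A·X)·P(A|X) + P(B|~A·X)·P(~A|X))."""
    evidence = p_b_given_a * prior + p_b_given_not_a * (1 - prior)
    return p_b_given_a * prior / evidence

# A merely very confident prior gets dragged way down by the observation:
print(posterior(0.999, 1e-6, 0.5))  # roughly 0.002

# An absolutely certain prior is immune to the very same evidence:
print(posterior(1.0, 1e-6, 0.5))    # exactly 1.0, whatever the likelihoods
```

No matter how extreme the likelihoods are made, a prior of exactly $1$ never moves.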

So saying that something you believe has probability $1$ (or $0$) is equivalent to saying that nothing will ever change your mind. Now, I don’t know about you, but I personally would like there to be *some amount* of evidence that was enough to change my beliefs.
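The analogy with infinity can be made concrete by working in log-odds (evidence measured in bits): finite evidence shifts finite log-odds by a finite amount, but probability $1$ sits at $+\infty$, out of reach of any finite update. A small sketch:

```python
import math

def log_odds(p):
    """Log-odds of a probability, in bits: log2(p / (1 - p))."""
    if p == 0.0:
        return -math.inf
    if p == 1.0:
        return math.inf
    return math.log2(p / (1 - p))

print(log_odds(0.5))    # 0.0 bits: no opinion either way
print(log_odds(0.999))  # about 9.96 bits: strong, but finite
print(log_odds(1.0))    # inf: no finite pile of counter-evidence reaches it
```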

There *are* exceptions to that, however. The way Probability Theory as logic is defined, we have that $P(\text{True}|X) = 1$ and $P(\text{False}|X) = 0$. That is to say, the probability of a proposition that’s true has to be $1$, and the probability of a proposition that’s false has to be $0$. And there is a certain class of propositions that are “absolutely” true or false: tautologies and contradictions.

Contradictions are saying something you know to be false. From propositional logic, we have that $A\bar{A} = \text{False}$. That is, a proposition that says a thing and its negation is always false. Thus, they must have the same probability: $P(A\bar{A}|X) = P(\text{False}|X)$. But the product rule still applies, and so $P(\text{False}|X) = P(A\bar{A}|X) = P(A|\bar{A}X)\,P(\bar{A}|X) \leq P(\bar{A}|X)$. Since that has to be true for every possible proposition $A$, including ones for which $P(\bar{A}|X)$ is arbitrarily small, it follows that $P(\text{False}|X) = 0$: the probability of a contradiction is always $0$.

Conversely, a tautology is just reaffirming something you already know to be true. We know that $AA = A$. That is, saying the same thing twice doesn’t change anything. From that, $P(AA|X) = P(A|X)$. But the product rule applies, from which $P(A|X) = P(AA|X) = P(A|AX)\,P(A|X)$. And this also has to be true regardless of what proposition $A$ may be (whenever $P(A|X)$ is nonzero, we can divide both sides by it), which means $P(A|AX) = 1$: the probability of a tautology is always $1$.
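Both facts can be illustrated in a tiny “possible worlds” model, where a proposition is the set of worlds in which it holds, conjunction is intersection, and negation is the complement (the world weights are made up for illustration):

```python
worlds = {"w1": 0.2, "w2": 0.5, "w3": 0.3}  # weights sum to 1

def p(event):
    """Probability of an event (a set of worlds)."""
    return sum(weight for w, weight in worlds.items() if w in event)

def p_given(event, given):
    """Conditional probability P(event | given)."""
    return p(event & given) / p(given)

A = {"w1", "w2"}
not_A = set(worlds) - A

print(p(A & not_A))   # 0: a contradiction always has probability 0
print(p_given(A, A))  # 1.0: a tautology (A given A) always has probability 1
```

Whatever weights and whatever set $A$ you pick, $A \cap \bar{A}$ is empty and $A \cap A = A$, so these two results never change.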

So, in general, if the propositions you’re reasoning about include a contradiction, the whole conjunction will have probability $0$, whereas tautologies will be struck out and add nothing new.

But of course, in real life you don’t really ever reason about “pure” propositions. Your background knowledge will never include “The sky is blue.” or “Peano Arithmetic is sound.” It will only include things like “I believe the sky is blue.” or “It seems to be the case that Peano Arithmetic is sound.” or, in general, for any proposition , “I think is true (or false).” You’re always reasoning from inside your head. So even if you *do* observe something that looks like it ought to be logically forbidden by your background knowledge, that just means your background knowledge is, in fact, wrong.

In real life, $0$ and $1$ are not probabilities. Ever. That’s all.
