Bayes’ Theorem is all nice and dandy, but it may not necessarily be the best thing to work with. It’s quite simple:

$$P(A\mid B) = \frac{P(B\mid A)\,P(A)}{P(B)}$$

When laid out this way, what it says is that the probability of some proposition $A$ after you learn that $B$ is true equals its probability *before* you knew $B$ was true times a factor we could call the *impact* of $B$ on $A$:

$$P(A\mid B) = P(A)\,\frac{P(B\mid A)}{P(B)}$$

When explaining what the Bayesian meaning of evidence was, I mentioned the obvious thing that if $B$ is more likely when $A$ is true than when it is false, then it’s evidence for $A$, and it’s evidence against it otherwise.

However, we can look at it another way. If we expand $P(B)$ as explainable by a large set of mutually exclusive propositions $A_1, A_2, \dots, A_n$, then we can reexpress the theorem as

$$P(A_i\mid B) = \frac{P(B\mid A_i)\,P(A_i)}{\sum_j P(B\mid A_j)\,P(A_j)}$$

When you look at it like *that*, then the denominator becomes nothing but a normalising factor meant to guarantee that your probabilities sum to 1, and it’s the numerator $P(B\mid A_i)\,P(A_i)$ which does all the hard work. And then the term $P(B\mid A)$ gets two different names depending on which way you look at it.

To explain it, let’s suppose $A$ is some hypothesis we’re studying and $B$ is the data collected. Let’s rename them $H$ and $D$ accordingly. Rewriting the theorem:

$$P(H\mid D) = \frac{P(D\mid H)\,P(H)}{P(D)}$$

The term $P(D\mid H)$, when considered as a function of the data $D$ for a fixed hypothesis $H$, is generally called the *sampling distribution*. For example, suppose the data is a string of results of binary experiments and the hypothesis is that these results are in fact Bernoulli trials – which means that the probability that each trial will come out one way or another depends only on the specific way it can turn out, and not on previous trials. An example would be the tossing of a coin, and a possible hypothesis $H_p$ is that the coin turns up heads with probability $p$ and tails with probability $1-p$. Then, if we suppose that the data consists of a series of outcomes – for instance, HTTHH would mean there was one head, followed by two tails, followed by two heads – the probability that that data would be observed under hypothesis $H_p$ is $p^h(1-p)^t$, where $h$ is the number of observed heads and $t$ is the number of observed tails. Another way to write it would be $p^h(1-p)^{n-h}$, where $h$ is the number of heads and $n$ is the total number of tosses. In that case, then, for any given hypothesis $H_p$, the sampling distribution is a function of $h$ and $n$ while holding $p$ fixed.
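As a quick sketch (the function name is mine, not the author’s), the sampling distribution of such a Bernoulli sequence can be computed directly:

```python
# Sampling distribution of a string of Bernoulli trials: for a fixed
# hypothesis H_p ("heads with probability p"), P(D | H_p) = p^h * (1-p)^t.
def sampling_probability(data: str, p: float) -> float:
    """Probability of observing the outcome string `data` (e.g. "HTTHH")
    under the hypothesis that each toss lands heads with probability p."""
    h = data.count("H")  # number of observed heads
    t = data.count("T")  # number of observed tails
    return p ** h * (1 - p) ** t

# For a fair coin (p = 0.5), any specific sequence of 5 tosses
# has probability 0.5^5 = 0.03125.
print(sampling_probability("HTTHH", 0.5))
```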

However, if you look at $P(D\mid H)$ as a function of the *hypothesis* $H$ for some fixed dataset $D$, then it’s called the *likelihood* $\mathcal{L}(H)$, and although it’s numerically equal to the sampling distribution, it’s a function of the parametre space while holding $h$ and $n$ fixed. Unlike the sampling distribution, it’s *not* seen as a probability, but rather a numerical function that, when multiplied by some prior and a normalisation factor, becomes a probability. Because of that, constant factors are irrelevant, and any function $c\,\mathcal{L}(H)$ is equally deserving of being called the likelihood, where $c$ is a function exclusively of the data and independent of the hypotheses under consideration.

Now, if you take the ratio of $P(H\mid D)$ and $P(\lnot H\mid D)$, you get

$$\frac{P(H\mid D)}{P(\lnot H\mid D)} = \frac{P(D\mid H)}{P(D\mid \lnot H)}\,\frac{P(H)}{P(\lnot H)}$$

and the prior probability of the data, $P(D)$, drops out. We call the ratio $\frac{P(H)}{P(\lnot H)}$ the *odds* on the proposition $H$, written $O(H)$, and combining both equations we have:

$$O(H\mid D) = \frac{P(D\mid H)}{P(D\mid \lnot H)}\,O(H)$$

This form has a very nice intuitive meaning, and it’s better for calculating Bayesian updates. In it, if something has probability 0.5, then it has odds 0.5:0.5 or 1:1 (read one-to-one). If something has probability 0.9, then it has odds 9:1 (nine-to-one) and we know immediately that it’s nine times more likely to be true than to be false. Now, since $P(\lnot H) = 1 - P(H)$, the odds transformation is just $O = \frac{p}{1-p}$, and if I have the odds, I can transform them back into a probability by using $p = \frac{O}{1+O}$.
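A minimal sketch of the two conversions (helper names are mine):

```python
def odds_from_prob(p: float) -> float:
    """O = p / (1 - p): probability 0.9 becomes odds 9 (nine-to-one)."""
    return p / (1 - p)

def prob_from_odds(o: float) -> float:
    """p = O / (1 + O): the inverse transformation."""
    return o / (1 + o)

print(odds_from_prob(0.9))                  # nine-to-one, up to float rounding
print(prob_from_odds(odds_from_prob(0.5)))  # round-trips back to 0.5
```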

And that last term, $\frac{P(D\mid H)}{P(D\mid \lnot H)}$, is called the *likelihood ratio*. Using Yudkowsky’s example:

Let’s say that I roll a six-sided die: If any face except 1 comes up, there’s a 10% chance of hearing a bell, but if the face 1 comes up, there’s a 20% chance of hearing the bell. Now I roll the die, and hear a bell. What are the odds that the face showing is 1? Well, the prior odds are 1:5 (corresponding to the real number 1/5 = 0.20) and the likelihood ratio is 0.2:0.1 (corresponding to the real number 2) and I can just multiply these two together to get the posterior odds 2:5 (corresponding to the real number 2/5 or 0.40). Then I convert back into a probability, if I like, and get (0.4 / 1.4) = 2/7 = ~29%.
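The arithmetic in the example can be checked in a few lines (variable names are mine):

```python
# Die-and-bell update: posterior odds = prior odds × likelihood ratio.
prior_odds = 1 / 5            # 1:5 that the face showing is 1
likelihood_ratio = 0.2 / 0.1  # the bell is twice as likely given face 1
posterior_odds = prior_odds * likelihood_ratio          # 2:5, i.e. 0.4
posterior_prob = posterior_odds / (1 + posterior_odds)  # 2/7, about 29%
print(posterior_prob)
```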

Furthermore, if you have more than one hypothesis at stake – suppose that if face 1 comes up there’s a 20% chance of hearing the bell, if face 2 comes up there’s a 5% chance, and if any other face does there’s a 10% chance – then you can use extended odds to calculate your posteriors. In this case, before you throw the die, the prior odds are 1:1:4, and the extended likelihood ratio for hearing a bell is 0.2:0.05:0.1 or 4:1:2. If you throw the die and hear a bell, your posterior odds will be 4:1:8, and your posterior probabilities will be 4/13 ≈ 30.77% for face 1, 1/13 ≈ 7.69% for face 2, and 8/13 ≈ 61.54% for any other face. This is much easier than using Bayes’ Theorem directly.
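The extended-odds calculation is just elementwise multiplication followed by normalisation; a sketch (names mine):

```python
# Three mutually exclusive hypotheses: face 1, face 2, any other face.
prior_odds = [1, 1, 4]
bell_likelihoods = [0.2, 0.05, 0.1]  # P(bell | each hypothesis)

# Elementwise product gives the posterior odds (proportional to 4:1:8).
posterior_odds = [o * l for o, l in zip(prior_odds, bell_likelihoods)]

# Normalising turns the odds into probabilities: 4/13, 1/13, 8/13.
total = sum(posterior_odds)
posterior_probs = [o / total for o in posterior_odds]
print(posterior_probs)
```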

And a final way to look at probabilities is by using the evidence function, $e(H)$, which is measured in decibels and cashes out to:

$$e(H) = 10\log_{10} O(H)$$

Now, while this doesn’t keep the niceness of odds when dealing with more than two hypotheses, it has a few other advantages of perspective. The first is that it’s additive: if a given hypothesis has prior probability 0.01, or prior evidence of about -20dB, and you observe evidence that’s 1,000 times more likely when that hypothesis is true than when it’s false, the evidence shift is $10\log_{10}1000 = 30\text{dB}$ and the posterior evidence is $-20\text{dB} + 30\text{dB} = 10\text{dB}$, which is a posterior probability of about 0.91. As new pieces of evidence are added to the mix, we just add and subtract to the evidence thus far collected to arrive at our final conclusions.
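A sketch of the decibel bookkeeping, under the post’s definition $e = 10\log_{10} O$ (function names are mine):

```python
import math

def evidence_db(p: float) -> float:
    """Evidence in decibels: e = 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

def prob_from_db(e: float) -> float:
    """Invert: decibels -> odds -> probability."""
    o = 10 ** (e / 10)
    return o / (1 + o)

prior = evidence_db(0.01)      # about -20 dB
shift = 10 * math.log10(1000)  # evidence 1,000x more likely: +30 dB
posterior = prior + shift      # about +10 dB
print(prob_from_db(posterior))  # about 0.91
```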

The second niceness is the one that shows that 0 and 1 are not probabilities. How much evidence would it take to raise a hypothesis to certainty? As $P(H) \to 1$, the odds $O(H) = \frac{P(H)}{1-P(H)}$ grow without bound, so

$$e(H) = 10\log_{10}\frac{P(H)}{1 - P(H)} \to +\infty$$

And the symmetric argument shows that the evidence needed for negative certainty is also infinite. Just as positive and negative infinity aren’t real numbers but representations of the boundlessness of the reals, 0 and 1 are just representations of the extreme limits of perfect platonic certainty.

And the final interesting perspective is best seen when looking at a graph of evidence against probability:

*(Graph: $e(H)$ in decibels plotted against $P(H)$, diverging to $-\infty$ as $P(H) \to 0$ and to $+\infty$ as $P(H) \to 1$.)*

To get from 10% probability to 90% probability, you need about 20dB of evidence (the exact figure is $10\log_{10}81 \approx 19.1$dB). Well, “just” 20dB means evidence that’s roughly 100 times more likely when your hypothesis is true than otherwise. But look at the extremes of that graph. There are two singularities. The one close to 1 shows that, once you are very sure of your hypothesis, a given amount of evidence can change your mind only very little. The distance between 0.5 and 0.6 in terms of evidence is much, much less than the distance between 0.999 and 0.9999.
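These distances are easy to check numerically (function name is mine):

```python
import math

def evidence_db(p: float) -> float:
    """Evidence in decibels: e = 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

# From 10% to 90% is about 20 dB of evidence...
print(evidence_db(0.9) - evidence_db(0.1))       # ~19.1 dB

# ...but near the extremes each step costs far more:
print(evidence_db(0.6) - evidence_db(0.5))       # ~1.8 dB
print(evidence_db(0.9999) - evidence_db(0.999))  # ~10.0 dB
```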

The other singularity, however, shows one of the most important things in probability theory: the vast majority of the work needed when proving your hypothesis is in just *figuring out which hypothesis is the right one*. Think about a phenomenon you want to explain, like gravity. Think about all the possible hypotheses that could explain it. Little angels pushing massive stuff. A witch did it. Invisible ropes tied around every particle. A force. The curvature of spacetime. Take that last hypothesis alone. Saying “spacetime curves” is still a very general statement. How does it curve? What’s the degree of curvature? There’s a huge number of possible equations that could describe the geometry of spacetime under the influence of mass.

Hypothesis-space is gigantic; it’s so big you can’t even imagine it. The amount of evidence you need just to point somewhere in it, just to say “the right hypothesis looks like this” or “the right hypothesis is in this area,” is astoundingly huge. Do you see that? Once you’ve actually found the correct hypothesis, the work necessary to become reasonably sure of it is nothing; it’s negligible.

When asked what he would’ve said if, in 1919, Sir Arthur Eddington had failed to confirm General Relativity, Einstein famously replied, “Then I would feel sorry for the good Lord. The theory is correct.” But that’s not just arrogance (27 bits ≈ 80dB of evidence, translating Yudkowsky’s notation to ours). Just to pinpoint those particular equations as the ones that describe Nature, Einstein needed a lot of evidence. And he probably had much more than strictly required, too, because humans often underestimate the probability shift they should make in response to new evidence and update very inefficiently.

Just to find the right answer, you already need to work your butt off; otherwise, you’re jumping to an 80dB conclusion with much less than 80dB of evidence. The vast majority of the work is *there*. Incidentally, that’s why it’s so hard to find the elusive Theory of Everything: just to pinpoint the correct equations, we need a lot of work.

After you’ve found the correct hypothesis, confirming it is comparatively a piece of cake.
