Bayes’ Theorem

Bayes’ Theorem has many, many introductions online already. Those show the intuition behind using the theorem. This is going to be a step-by-step mathematical derivation of the theorem, as Jaynes explained it in his book Probability Theory: The Logic of Science. However, Jaynes himself skips a bunch of steps and doesn’t always make his reasoning as clear as it could be, so what I’m going to do here is elaborate on, expand, and explain his steps.

The maths can be quite complex, but I think anyone can follow the ideas. But still, maths cw! So, let’s go, shall we?

The Desiderata

What we want is to create a way to measure our uncertainty of propositions. Or rather, to measure their plausibility. We want to know exactly how sure we are that something is true. We won’t give many constraints, though. We’re trying to be minimal in our axioms here. So the first desideratum is

  • Degrees of plausibility are represented by real numbers.

We’re not going to say anything about any upper or lower bounds. We don’t know yet which real numbers should represent certainty. All we know is that real numbers must be used to represent our plausibility.

We will adopt a convention that says that a greater plausibility will be represented by a greater number. This isn’t necessary, of course, but it’s easier on the eye. We shall also suppose continuity, which is to say that infinitesimal increases in our plausibility should yield infinitesimal increases in its number.

And even this axiom is not incredibly intuitive. Sometimes you don’t even have any idea of how plausible a thing is. However, we’re trying to design an optimal method of reasoning, and I think this is a reasonable thing to expect. You frequently have to make decisions based on incomplete information, and there is some meaningful sense in which you think some states-of-the-world are more or less plausible than others, more or less likely to happen. So it’s that meaningful sense we’re trying to capture here.

  • Qualitative correspondence with common sense

This is a sort of catch-all axiom, and it’s very important. It’s the axiom that says that the meaning of (A|B) is “the plausibility of A, given that B is true.” The argument this proof will try to make is that certain things are desirable of an agent’s reasoning process, and that at the end, we’ll arrive at certain rules. Even if these desirable things can have multiple interpretations, we’re taking one that says that the proper meaning of something like (A|B) is that you have knowledge that B was observed, and that conditional on that knowledge, (A|B) is your plausibility for A. Under this interpretation, we do prove the reasoning rules, which means that violating those rules implies that you violated some of the desiderata.

For instance, we should expect that if we observe evidence in favour of something, that something should become more plausible, and vice-versa; that is, if some event B makes A more likely, then my plausibilities must satisfy (A|B) > (A) and also (\bar A|B) < (\bar A) .

So the way this axiom works is: sometimes I will invoke it, and justify it on something one would expect to be fairly reasonable assumptions for belief-updating. At the end, I will show that these assumptions pin the rules of probability down uniquely, and any agent that reasons in a way that’s not isomorphic to these rules will therefore necessarily be violating at least one of these assumptions.

An interesting feature of these desiderata is that time isn’t mentioned anywhere in them. Nor should it be! Your reasoning has to be time-independent and, in a certain sense, objective, and time doesn’t enter into it at all. These rules are about states of uncertainty conditional on knowledge, and thus your reasoning depends exclusively on your knowledge itself and not on when it was obtained.

  • Consistency

This is in fact a stronger claim than it looks. This system of measuring probability will have a bunch of properties which we label collectively “consistency,” namely the fact that two ways of arriving at a result should give the same result, every bit of information should be taken into account, and equivalent states of knowledge are represented by the same numbers.

An important point about this is that this assumption is about states of knowledge and not logical status. It may very well be that two propositions are logically equivalent or otherwise connected, but an agent is only constrained by that if they know about this logical link (as I discuss here and here).

And now, believe it or not… we’re done. This is enough for us to find Bayes’ Theorem.

The Product Rule

So, suppose we have three statements, A, B, and C. We want to find out the plausibility of (AB|C) . That is, after having found out that C is true (or equivalently, assuming that C is your background knowledge), how plausible is it that both A and B are true?

We can follow two paths to find that out. We can first reason about whether B is true, and then having accepted that figure out whether A is true; or we can reason about whether A is true, and then having accepted that figure out whether B is true. By the consistency desideratum, we need these two methods to yield the same result.

That result is the plausibility (AB|C) . So let us reason about this a bit. For this to be true, it is necessary, obviously, that B be true. Therefore, the plausibility (B|C) should be involved somewhere. Then, if B is true, it is also necessary that A be true, so (A|BC) should be there. But if B is false, then it doesn’t matter what we know about A, AB will be false. Thus if we reason first about B, the plausibility of A will only be relevant if B was found to be true, and we don’t need, after we have (B|C) and (A|BC) , the plausibility (A|C) . That would tell us nothing useful.

Also, since (AB|C) = (BA|C) , the above argument stays the same if we exchange B and A. Therefore, this whole thing boils down to the existence of some function F(x, y) , where x and y are plausibilities, that takes (B|C) and (A|BC) , or conversely (A|C) and (B|AC) , and returns (AB|C). In other words:

(AB|C) = F[(B|C), (A|BC)] = F[(A|C), (B|AC)]

You can also check, if you want, that this is the only form our function F(x, y) can take to return (AB|C), because any other combinations of (A|C) , (B|C) , (A|BC) , and (B|AC) will give you spurious results in some extreme cases, such as when A \rightarrow \bar B , or A = B , or A = C , or C \rightarrow \bar A , or stuff like that.

As a concrete example, suppose we suspect our function is actually of the form F[(A|C), (B|C)] . Suppose A = “The right eye of the person next to me is brown” and B = “The left eye of the person next to me is black.” Each can be very plausible on its own, so (A|C) and (B|C) can be arbitrarily high, but a person with one black and one brown eye is something very rare indeed, so no function of those two numbers alone can get the answer right. This is then an application of the common sense desideratum, and we end up with some function that has the form we defined above.
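To make the two-paths requirement concrete, here’s a minimal numeric sketch. It jumps ahead a little by using plain multiplication for F (which is, spoiler, one form that works), and the joint distribution is an invented toy example:

    # A toy joint distribution over the truth values of (A, B), given C.
    # The numbers are invented purely for illustration.
    joint = {(True, True): 0.3, (True, False): 0.2,
             (False, True): 0.4, (False, False): 0.1}

    def plaus_A(a):
        # (A|C): marginal plausibility of A, summing over B
        return sum(v for (x, _), v in joint.items() if x == a)

    def plaus_B(b):
        # (B|C): marginal plausibility of B, summing over A
        return sum(v for (_, y), v in joint.items() if y == b)

    # Path 1: reason about B first, then about A given B.
    path1 = plaus_B(True) * (joint[(True, True)] / plaus_B(True))
    # Path 2: reason about A first, then about B given A.
    path2 = plaus_A(True) * (joint[(True, True)] / plaus_A(True))

    assert abs(path1 - path2) < 1e-12  # both paths give (AB|C) = 0.3

Both orderings collapse to the same number, which is exactly what the consistency desideratum demands of F.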

Now, one of the aspects of the first desideratum was continuity. We wanted that arbitrarily small increases in plausibility meant arbitrarily small changes in the real number. But the same is valid for our operations: we want that arbitrarily small changes in (B|C) result in arbitrarily small changes in (AB|C) . Not only that, but we want that increases in (B|C) correspond to increases in (AB|C) , and decreases in (B|C) correspond to decreases in (AB|C) .

In mathspeak, that is to say, basically, that:

F_1(x, y)\equiv\frac{\partial F(x,y)}{\partial x}\geq 0

F_2(x, y)\equiv\frac{\partial F(x, y)}{\partial y}\geq 0

(We don’t need to assume differentiability, but our work will get easier if we do, and the same result will be achieved otherwise anyway, so might as well. We’ll also assume F’s second derivatives are continuous. This can be taken as a common-sense-axiom requirement of “well-behavedness” of the subjective plausibilities.)

Now, suppose I have four propositions, A, B, C, and D, and I want to find out (ABC|D) , which is the plausibility of A, B, and C, given that D is true. We know that ABC \equiv (AB)C \equiv A(BC) , so our function must have that:

\begin{aligned} (ABC|D) &= F[(BC|D), (A|BCD)] \\ &= F[(C|D), (AB|CD)] \end{aligned}

But we can just reapply this function because (BC|D) = F[(C|D), (B|CD)] and (AB|CD) = F[(B|CD), (A|BCD)] . Thus:

\begin{aligned}  (ABC|D) &= F\left[F\left[(C|D), (B|CD)\right], (A|BCD)\right]\\  &= F\left[(C|D), F\left[(B|CD), (A|BCD)\right]\right]  \end{aligned}

Eek. Looking at this on a computer screen gets really scary. Let’s make this easier to visualise by renaming our plausibilities:

  • x \equiv (C|D)
  • y \equiv (B|CD)
  • z \equiv (A|BCD)

In that case, then, we can rewrite the above as:

(ABC|D) = F[F(x, y), z] = F[x, F(y, z)]

Now things are starting to look neater. Our function must be associative. This is yet another expression of our desire that no matter which path we take to a result, we always get the same one. The order doesn’t matter, as long as you always consider everything you know and follow the rules neatly.
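Associativity is a real constraint, not a freebie: plenty of smooth, monotonic two-argument functions fail it. A quick sketch (the candidate functions here are my own picks, purely for illustration):

    import random

    def is_associative(F, trials=1000):
        """Check F(F(x, y), z) == F(x, F(y, z)) on random inputs."""
        for _ in range(trials):
            x, y, z = random.random(), random.random(), random.random()
            if abs(F(F(x, y), z) - F(x, F(y, z))) > 1e-9:
                return False
        return True

    print(is_associative(lambda x, y: x * y))        # True: the product passes
    print(is_associative(lambda x, y: (x + y) / 2))  # False: averaging fails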

To make the next steps even easier on our eyes, let’s add two new letters, u and v:

  • u \equiv F[(C|D), (B|CD)] = F(x, y)
  • v \equiv F[(B|CD), (A|BCD)] = F(y, z)

So our function is reduced to

(ABC|D) = F(u, z) = F(x, v)

Since those two expressions have to be identical for all values of their arguments, their first derivatives must be, too:

\frac{\partial F(u, z)}{\partial x} = F_1(u, z)F_1(x, y) = F_1(x, v) = \frac{\partial F(x, v)}{\partial x}

\frac{\partial F(u, z)}{\partial y} = F_1(u, z)F_2(x, y) = F_2(x, v)F_1(y, z) = \frac{\partial F(x, v)}{\partial y}

\frac{\partial F(u, z)}{\partial z} = F_2(u, z) = F_2(x, v)F_2(y, z) = \frac{\partial F(x, v)}{\partial z}

(The above is true because of the chain rule, and these F_1  and F_2  are the partial derivatives of F  as defined before.)

Let’s define a new function, G(x, y) :

G(x, y)\equiv \frac{F_2(x, y)}{F_1(x, y)}

So if we get rid of F_1(u, z) in the first two equations by dividing the second by the first:

(1)\ \ G(x, y) = G(x, v)F_1(y, z)

Now, look, the left side depends exclusively on x and y, which means the right side must necessarily be independent of z also. If we multiply both sides of the above equation by G(y, z) we get

(2)\ \ G(x, y)G(y, z) = G(x, v)F_2(y, z)

As we’ve seen, (1)’s right-hand side, G(x, F(y, z))F_1(y, z) , must be independent of z, and so this must mean that:

\begin{aligned} \frac{\partial}{\partial z}[G(x, v)F_1(y, z)] &= G_2(x, v)F_2(y, z)F_1(y, z)+G(x, v)F_{12}(y, z) \\ &= 0 \end{aligned}

However, if we take the partial derivative of the right-hand side of (2) with respect to y, we get:

\frac{\partial}{\partial y}G(x, v)F_2(y, z) = G_2(x, v)F_1(y, z)F_2(y, z)+G(x, v)F_{21}(y, z)

By Schwarz’ Theorem, we have that F_{12} = F_{21} . This means those two derivatives are the same, and therefore are both 0 . And this means that the left-hand side of (2) must be independent of y, too, which is to say that G(x, y)G(y, z) is independent of y.

Then we’re trying to find the most general solution to the above constraints. More specifically, we want the constraint “G(x, y)G(y, z) is independent of y” to be obeyed by the function G(x, y) whatever it may be. And the most general type of solution for that is

G(x, y) = r\frac{H(x)}{H(y)}

where r is some constant and H(x) is some function of x. We’re not specifying what it is; it’s not important right now. You can check that this form does the job: with it, G(x, y)G(y, z) = r^2H(x)/H(z) , which indeed contains no y. It can be shown that every solution must have that form, too. If we replace that in (1) and (2) we end up with these:

F_1(y, z) = \frac{H(v)}{H(y)}
F_2(y, z) = r\frac{H(v)}{H(z)}

Now, recall that v = F(y, z) . Thus we can take the differential of F(y, z) : dv = dF(y, z) = F_1dy + F_2dz . Replacing the two equations above in this differential form, we have that:

(3)\ \ \frac{dv}{H(v)}=\frac{dy}{H(y)}+r\frac{dz}{H(z)}

Now suppose we create another function:

w(x)=e^{\int\frac{dx}{H(x)}}

Then, integrating (3) term by term (each dx/H(x) integrates to \ln w(x) , by the definition of w, with the integration constants absorbed into w) and exponentiating, we arrive at:

w\left[F(y,z)\right] = w(v) = w(y)w^r(z)

And this holds, of course, for any two variables, not just y and z; the equality is general.
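To see this machinery on a concrete instance (my choice of functions, purely for illustration): take F(x, y) = xy . Then F_1 = y and F_2 = x , so G(x, y) = x/y , which matches rH(x)/H(y) with r = 1 and H(x) = x ; and w(x) = e^{\int dx/x} = x , choosing the constant of integration to vanish. A quick numeric check of w[F(y, z)] = w(y)w^r(z) :

    import random

    r = 1
    F = lambda x, y: x * y  # candidate product function; F_1 = y, F_2 = x
    w = lambda x: x         # w(x) = exp(integral of dx/H(x)) with H(x) = x

    for _ in range(1000):
        y, z = random.random(), random.random()
        v = F(y, z)
        assert abs(w(v) - w(y) * w(z) ** r) < 1e-12  # w[F(y, z)] = w(y) w(z)^r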

Now, remember all the way back, our associativity rule? F[x, F(y, z)] = F[F(x, y), z] ? Or, with our new variables, F(x, v) = F(u, z) . We can take the function w(x) of both sides: w[F(x, v)] = w[F(u, z)] . According to the rule derived above, this will give us

w(x)w^r(v) = w(u)w^r(z)

Reminding ourselves that v = F(y, z) and u = F(x, y) and replacing in the above, that gives us

w(x)w^r(y)w^{r^2}(z) = w(x)w^r(y)w^r(z)

It’s clear that the only way for us to have a nontrivial solution (i.e. w(\cdot) \neq 0 and w(\cdot) \neq 1 ) is to have r = 1 :

w\left[F(x,y)\right] = w(x)w(y)

And we can finally replace stuff there at the beginning:

w(AB|C) = w\left[F\left[(B|C),(A|BC)\right]\right] = w(B|C)w(A|BC)

And this is symmetrical with respect to A and B, of course.

Now, let’s do some work. First, suppose A is a direct logical consequence of some part of C, and that knowledge is also contained in C. That is, whenever C is true, so is A, and the agent knows this. That means (AB|C) = (B|C) because the plausibility of AB, given that you know C, is exclusively conditional on the plausibility of B, since A is certain when C is true anyway. Furthermore, since knowing C, A is certain anyway, we have that (A|BC) = (A|C) because no matter what one learns in addition to C, A is already certain. We can see these constraints as another aspect of the common sense axiom. When that is the case, then:

\begin{aligned} w(AB|C) &= w(B|C)w(A|BC) \\ w(B|C) &= w(B|C)w(A|C) \end{aligned}

It follows that this function w(\cdot) has to represent complete certainty with the value w(A|C) = 1 .

Now, suppose A is in fact impossible given C. That is to say, if we know C, we know that A cannot be true in any way. Then we have that (AB|C) = (A|C) , because the plausibility of AB given C is wholly determined by A given C. And as above, (A|BC) = (A|C) because once you know C, you know A is impossible, and therefore nothing else you learn will affect your plausibility of A. These constraints are also common-sense-axiom constraints. That means, then, that:

\begin{aligned} w(AB|C) &= w(B|C)w(A|BC) \\ w(A|C) &= w(B|C)w(A|C) \end{aligned}

And the values that satisfy this are either w(A|C) = 0 or w(A|C) = \infty . And indeed, both choices work: we can use a function w_1 with w_1(A|C) = 0 , or w_2(\cdot) = w_1(\cdot)^{-1} , and both w_1 and w_2 are as good as any other functions for our purposes. Therefore, we choose as a convention to have 0 \leq w(\cdot) \leq 1 , so that this w(\cdot) , whatever it is, increases with the plausibility of its statements, as we desired at the beginning.

Now… what exactly has all this work bought us?

We said that (A|C) is the real number associated with the plausibility of the proposition A, given that we know C is true. But we don’t need to work with it anymore because we have this weird function w(x) which has the rules defined above: w(\text{certainty}) = 1 , w(\text{impossibility}) = 0 and w(AB|C) = w(B|C)w(A|BC) = w(A|C)w(B|AC) . This function was defined with regards to a weird F(x, y) and G(x, y) and H(x) , which we didn’t in fact even define.

And guess what? We don’t need to. We can work solely and exclusively with w(x) , and it’s good enough to represent our knowledge. And your hunch is probably close to correct. Just wait until the next rule is defined, and we’ll be good to go.

The Sum Rule

Now, these propositions about whose plausibility we’re reasoning must follow, according to our original desiderata, common sense. Which means in particular that they must follow basic logic. So the proposition A\bar A (that is, A is true and A is false) must always be false, because no proposition can be true and false at the same time, and therefore w(A\bar A|C) = 0 ; conversely, the proposition A + \bar A (that is, A is true or A is false) must always be true, because every proposition must be either true or false, and therefore w(A + \bar A|C) = 1 .

Not only that, but there must be some relation between the plausibilities (A|C) and (\bar A|C) . And if we define

  • u \equiv w(A|C)
  • v \equiv w(\bar A|C)

then there must be some function S(\cdot) such that S(u) = v . Also, since the actual meaning of A is arbitrary and we can just define any proposition B such that B = \bar A , it must also be true that S(v) = u . That is, the function that relates w(A|C) and w(\bar A|C) is its own inverse: S[S(u)] = u , or S^{-1}(\cdot) = S(\cdot) .

Moreover, we need S(0) = 1 and S(1) = 0 , because since S[w(A|C)] = w(\bar A|C) , when (A|C) is certain (and w(A|C) = 1 ), (\bar A|C) must be impossible (and therefore w(\bar A|C) = 0 ), and vice-versa. The common sense axiom constrains us thusly.

Let’s take the plausibilities (AB|C) and (A\bar B|C) . By the product rule, we have that

w(AB|C) = w(A|C)w(B|AC)\\  w(A\bar B|C) = w(A|C)w(\bar B|AC)

for whatever propositions A and B. We can rewrite this last equation when w(A|C) \neq 0 :

w(\bar B|AC) = \frac{w(A\bar B|C)}{w(A|C)}

And by the definition of S(\cdot) we know that w(B|AC) = S[w(\bar B|AC)] , and thus

w(AB|C) = w(A|C)S\left(\frac{w(A\bar B|C)}{w(A|C)}\right)

And of course, since A and B are interchangeable in w(AB|C) , it must be the case that:

w(A|C)S\left(\frac{w(A\bar B|C)}{w(A|C)}\right) = w(B|C)S\left(\frac{w(B\bar A|C)}{w(B|C)}\right)

And the above has to hold whenever w(A|C) and w(B|C) aren’t 0 . In particular, it also has to hold when \bar B \equiv AD , where D is any other proposition you might like. And in that case, we have that A\bar B \equiv AAD \equiv AD \equiv \bar B , and that B\bar A \equiv \overline{(AD)}\bar A \equiv (\bar A + \bar D)\bar A \equiv \bar A . Replacing these:

w(A|C)S\left(\frac{w(\bar B|C)}{w(A|C)}\right) = w(B|C)S\left(\frac{w(\bar A|C)}{w(B|C)}\right)

And we apply the definition of S(\cdot) once again.

w(A|C)S\left(\frac{S[w(B|C)]}{w(A|C)}\right) = w(B|C)S\left(\frac{S[w(A|C)]}{w(B|C)}\right)

To make things easier to see, let’s name a few new symbols:

  • x \equiv w(A|C)
  • y \equiv w(B|C)

Then

xS\left(\frac{S(y)}{x}\right) = yS\left(\frac{S(x)}{y}\right)

(One might be interested in the fact that if we set y = 1 the above reduces to x = S[S(x)] , which agrees with what we’ve already discussed. Of course, this is necessary, and if it hadn’t been the case then some step in our derivation would have to have been faulty.)

And you see, what we want to find out here is the shape of this function S(x) . What it looks like. So it’s much like the function F(x, y) we had before, and like before, we won’t really care too much about its specifics. It will be a tool in helping us develop our rules.

So we have these constraints that must be obeyed by S(x) and we need to figure out a way to find it. One such way would be to study S(1 - \delta) as \delta approaches 0 . Let’s define a new function q(x, y) such that

\frac{S(x)}{y} = 1-e^{-q}

If we define, then, that \delta = e^{-q} , we can create yet another function J(q) :

e^{-J(q)}=S(1-e^{-q})=S(1-\delta)

As q approaches positive infinity, \delta approaches 0 , like we were hoping. So if we figure out how J(q) behaves in that case, we’ll figure out how S(\cdot) behaves at the edges (since S is its own inverse, its behaviour near one edge mirrors its behaviour near the other).

So first, let’s take the equation before the one above and invert it:

\frac y {S(x)}=\frac 1 {1-e^{-q}}

Now we can use a series expansion of \frac 1 {1 - x} around 0 :

\frac{y}{S(x)}=1+e^{-q}+O(e^{-2q})

That is to say that, when q is very large, if we approximate \frac 1 {1 - e^{-q}}  using 1 + e^{-q}  the error we’ll be making will be of order e^{-2q} . Since we’re taking q to be approaching infinity, this error is very close to 0 . Now we multiply both sides by S(x) :

y = S(x)+e^{-q}S(x)+O(e^{-2q})

You’ll see that I didn’t show S(x) multiplying the last term. That’s because in the interval of interest, we have that 0 \leq S(x) \leq 1 . Therefore, the error is still at most of order e^{-2q} . Now I’ll apply S(\cdot) to both sides:

S(y) = S[ S(x)+e^{-q}S(x)+O(e^{-2q})]

Let’s pause to explain this next step. We know that O(e^{-2q}) will get exponentially tiny as q gets close to infinity. But then, so will e^{-q}S(x) . Let’s do a bit of calculus:

\lim\limits_{d\rightarrow 0}\frac{f(x+d)-f(x)}{d} = f'(x)

This is the definition of a derivative. So, for sufficiently tiny d, we have that f(x+d)\approx f(x)+f'(x)d . And you know what’s sufficiently tiny? e^{-q}S(x) + O(e^{-2q}) is sufficiently tiny. So if we take f = S , evaluated at the point S(x) , with d = e^{-q}S(x) + O(e^{-2q}) , we can approximate our equation by:

S[S(x) + e^{-q}S(x)+O(e^{-2q})] \approx \\ S[S(x)] + S'[S(x)](e^{-q}S(x)+O(e^{-2q}))

Therefore:

S(y) \approx x+e^{-q}S(x)S'[S(x)]+O(e^{-2q})

So if you divide the whole thing by x you get:

\frac{S(y)} x = 1+e^{-q}\frac{S(x)S'[S(x)]} x+O(e^{-2q})

Do you remember that S[S(x)] = x , though? In that case, if we differentiate both sides, we get that S'[S(x)]S'(x) = 1 . Using this fact on the above:

\frac{S(y)} x = 1+e^{-q}\frac{S(x)}{xS'(x)}+O(e^{-2q})

With some manipulation we can make this better for our purposes:

\frac{S(y)} x = 1-e^{-q}\left(-\frac{xS'(x)}{S(x)}\right)^{-1}+O(e^{-2q})

Let’s invent a variable \alpha given by

\alpha \equiv \ln\left(\frac{-xS'(x)}{S(x)}\right)

With this variable we have:

\frac{S(y)} x = 1-e^{-(\alpha+q)}+O(e^{-2q})

So we have a few relevant equations we can use: the one above, and the following three:

xS\left(\frac{S(y)}{x}\right) = yS\left(\frac{S(x)}{y}\right)

\frac{S(x)}{y} = 1-e^{-q}

e^{-J(q)}=S(1-e^{-q})=S(1-\delta)

If we replace some stuff:

xS\left(1-e^{-(\alpha+q)}+O(e^{-2q})\right)=yS(1-e^{-q})

Now, by algebra and substitution:

\frac 1 y = \frac 1 {S(x)} \frac {S(x)} y = \frac 1 {S(x)}(1-e^{-q})

So if we divide the equation two steps above by y and replace that y^{-1} with this, we get:

S(1-e^{-q}) = \frac x {S(x)}(1-e^{-q})S(1-e^{-(\alpha+q)}+O(e^{-2q}))

But the thing on the left-hand side is just e^{-J(q)}  so:

e^{-J(q)} = \frac x {S(x)}(1-e^{-q})S(1-e^{-(\alpha+q)}+O(e^{-2q}))

If we take the natural logarithm of both sides we end up with:

J(q) = -\ln\left(\frac x {S(x)}\right)-\ln(1-e^{-q})-\ln[S(1-e^{-(\alpha+q)}+O(e^{-2q}))]

That last term, though… it’s not very helpful. However, we can use that trick with a differential from before, f(x+d)\approx f(x)+f'(x)d for sufficiently small d, where f is S , x is (1-e^{-(q+\alpha)}) , and d is O(e^{-2q}) :

\ln[S(1-e^{-(q+\alpha)})+S'(1-e^{-(q+\alpha)})O(e^{-2q})]

And if we go one step further and apply this method again on the logarithm, we get:

\ln[S(1-e^{-(q+\alpha)})] + \ln '[S(1-e^{-(q+\alpha)})]S'(1-e^{-(q+\alpha)})O(e^{-2q})

Using the chain rule:

\ln[S(1-e^{-(q+\alpha)})] +\frac {S'(1-e^{-(q+\alpha)})}{S(1-e^{-(q+\alpha)})} O(e^{-2q})

But the first term of the above is just -J(q + \alpha) , so what we have now is:

J(q) = -\ln\left(\frac x {S(x)}\right)-\ln(1-e^{-q}) + J(q+\alpha) -\frac{S'(1-e^{-(q+\alpha)})}{S(1-e^{-(q+\alpha)})}O(e^{-2q})

If we subtract J(q + \alpha) from both sides and multiply by -1:

J(q+\alpha)-J(q) = \ln\left(\frac x {S(x)}\right)+\ln(1-e^{-q}) +\frac{S'(1-e^{-(q+\alpha)})}{S(1-e^{-(q+\alpha)})}O(e^{-2q})

The last term, ugly as it is, goes to 0 at least as fast as e^{-q} when q approaches infinity; so does the next-to-last term. This means we can rewrite the above as

J(q+\alpha)-J(q) = \ln\left(\frac x {S(x)}\right)+O(e^{-q})

From that we conclude that J(q) has asymptotically linear behaviour, because other than that order-of-e^{-q} term, that difference is independent of q.

J(q) = a+bq+O(e^{-q})

Therefore:

J(q+\alpha)-J(q) = a+b(q+\alpha) - a - bq +O(e^{-q}) = b\alpha + O(e^{-q})

Which means that

b\alpha = \ln\left(\frac x {S(x)}\right)

and b is just some positive constant (it has to be positive because \alpha has to have the same sign as the thing on the right-hand side of the above). By the definition of \alpha :

b\ln\left(\frac{-xS'(x)}{S(x)}\right)=\ln\left(\frac x {S(x)}\right)

Or, in other words:

\left(\frac{-xS'(x)}{S(x)}\right)^b=\frac x {S(x)}

Now let’s define n \equiv b^{-1} . Then:

\frac {-xS'(x)}{S(x)}=\frac {x^n}{S(x)^n}

Rearranging those terms:

x^{n-1}+S(x)^{n-1}S'(x)=0

So we have a differential equation which we can solve:

x^{n-1}dx+S^{n-1}dS=0

\int S^{n-1}dS = -\int x^{n-1}dx

\frac{S^n} n = -\frac{x^n} n - k

(I’m writing the constant of integration as -k because that makes the signs in the next few lines come out neatly.)

S(x) = \left(-n\left(\frac{x^n} n + k\right)\right)^{\frac 1 n}

We have the boundary conditions S(0) = 1 and S(1) = 0 :

S(0) = \left(-n\left(\frac {0^n} n + k\right)\right)^{\frac 1 n} = (-nk)^{\frac 1 n} = 1

S(1) = \left(-n\left(\frac{1^n} n + k\right)\right)^{\frac 1 n} = \left(-1-nk\right)^{\frac 1 n} = 0

Both of them impose the same condition: k = -\frac 1 n . We have finally found our function S(x):

S(x) = (1-x^n)^{\frac 1 n}
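After that much asymptotic analysis, a numeric sanity check is comforting. This sketch (mine, not Jaynes’s) verifies, for several values of n, that S(x) = (1-x^n)^{\frac 1 n} meets the boundary conditions, is its own inverse, and satisfies the functional equation xS(S(y)/x) = yS(S(x)/y) on its valid domain (we need y \geq S(x) so that both inner arguments stay in [0, 1] ):

    import random

    def make_S(n):
        return lambda x: (1 - x ** n) ** (1 / n)

    for n in (0.5, 1, 2, 3):
        S = make_S(n)
        assert abs(S(0) - 1) < 1e-12 and abs(S(1)) < 1e-12  # S(0) = 1, S(1) = 0
        for _ in range(1000):
            x = random.uniform(0.05, 0.95)
            assert abs(S(S(x)) - x) < 1e-9  # S is its own inverse
            # pick y >= S(x) (small margin for float safety): arguments stay in [0, 1]
            y = random.uniform(min(S(x) + 1e-6, 1.0), 1.0)
            lhs = x * S(S(y) / x)
            rhs = y * S(S(x) / y)
            assert abs(lhs - rhs) < 1e-7    # the consistency equation holds

(Analytically, both sides of the functional equation reduce to (x^n + y^n - 1)^{\frac 1 n} , which is symmetric in x and y, so the check passing is no accident.)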

However, we found this function by assuming that B had a special value such that \bar B = AD for some D, and also that w(A|C) and w(B|C) are nonzero, which means that the above function is a necessary but maybe not sufficient condition for the consistency requirement given by

w(A|C)S\left(\frac{w(A\bar B|C)}{w(A|C)}\right) = w(B|C)S\left(\frac{w(B\bar A|C)}{w(B|C)}\right)

Let’s see what happens if we replace S in the above.

w(A|C)\left(1-\left(\frac{w(A\bar B|C)}{w(A|C)}\right)^n\right)^{\frac 1 n} = w(B|C)\left(1-\left(\frac{w(B\bar A|C)}{w(B|C)}\right)^n\right)^{\frac 1 n}

w(A|C)^n-w(A\bar B|C)^n = w(B|C)^n-w(B\bar A|C)^n

Now if we apply the product rule:

w(A|C)^n-w(A|C)^nw(\bar B|AC)^n = w(B|C)^n-w(B|C)^nw(\bar A|BC)^n

w(A|C)^n(1-w(\bar B|AC)^n) = w(B|C)^n(1-w(\bar A|BC)^n)

w(A|C)(1-w(\bar B|AC)^n)^{\frac 1 n} = w(B|C)(1-w(\bar A|BC)^n)^{\frac 1 n}

w(A|C)w(B|AC) = w(B|C)w(A|BC)

w(AB|C) = w(AB|C)

So this S(x) is in fact a necessary and sufficient condition for consistency and we’re done.

The Rules of Probability

If you recall the definition of S(x), we had to have S[w(A|C)] = w(\bar A|C) . Therefore:

w(A|C)^n + w(\bar A|C)^n=1

And it doesn’t matter what value of n we pick (as long as it’s positive). Furthermore, our product rule works equally well with that:

w(AB|C)^n = w(A|C)^nw(B|AC)^n = w(B|C)^nw(A|BC)^n

So whatever positive value n has, we can just define the function:

P(x) = w(x)^n

This function, then, is our probability function, with the properties

P(A|C) + P(\bar A|C) = 1

P(AB|C) = P(A|C)P(B|AC) = P(B|C)P(A|BC)
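And notice that this second property, rearranged, is the theorem this whole post is named after: dividing through by P(B|C) (when it isn’t 0 ) gives us Bayes’ Theorem,

P(A|BC) = \frac{P(A|C)P(B|AC)}{P(B|C)}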

And from those two properties the other sum rule is derivable:

P(A+B|C) = P(A|C)+P(B|C)-P(AB|C)
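That derivation is short enough to spell out. It uses nothing but the two rules above and De Morgan’s law, A + B \equiv \overline{\bar A\bar B} :

\begin{aligned} P(A+B|C) &= 1 - P(\bar A\bar B|C) \\ &= 1 - P(\bar A|C)P(\bar B|\bar AC) \\ &= 1 - P(\bar A|C)\left[1 - P(B|\bar AC)\right] \\ &= 1 - P(\bar A|C) + P(\bar A|C)P(B|\bar AC) \\ &= P(A|C) + P(\bar AB|C) \\ &= P(A|C) + P(B|C)P(\bar A|BC) \\ &= P(A|C) + P(B|C)\left[1 - P(A|BC)\right] \\ &= P(A|C) + P(B|C) - P(AB|C) \end{aligned}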

All from our simple desiderata from the beginning of the article. So, as discussed there, any agent that reasons in a way that’s not isomorphic to these rules is necessarily violating one or more of the presented desiderata; and conversely, any agent that follows them has a probability function. Neat, huh?
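As a closing sanity check, here’s a minimal sketch running the derived rules on an invented diagnostic-test example (all the numbers are mine, purely for illustration): a disease with a 1% prior, and a test that’s 90% sensitive with a 5% false positive rate.

    # The derived rules in action:
    # P(A|C) + P(not-A|C) = 1, and P(AB|C) = P(A|C)P(B|AC) = P(B|C)P(A|BC).
    p_disease = 0.01            # P(A|C): prior plausibility of the disease
    p_pos_given_disease = 0.90  # P(B|AC): test sensitivity
    p_pos_given_healthy = 0.05  # P(B|not-A C): false positive rate

    # Negation rule, then sum + product rules, give P(B|C) over both cases:
    p_healthy = 1 - p_disease
    p_pos = p_disease * p_pos_given_disease + p_healthy * p_pos_given_healthy

    # Bayes' Theorem: P(A|BC) = P(A|C) P(B|AC) / P(B|C)
    p_disease_given_pos = p_disease * p_pos_given_disease / p_pos
    print(f"P(disease | positive test) = {p_disease_given_pos:.4f}")  # ~0.1538

About 15%: a positive test on a rare condition moves you much less than the raw test accuracy suggests, and now we know the rules forcing that conclusion are the only consistent ones.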
