## What is evidence?

Simply put, evidence is any observation that changes your probability assignment for a given hypothesis. That’s pretty much it. We can further define evidence for a hypothesis as an observation that makes the hypothesis more likely, and evidence against a hypothesis as one that makes it less likely.

Generally, for some observation to affect your probability estimates of anything, there needs to be some cause-and-effect chain connecting the observation and the hypothesis. For instance, the hypothesis “It has rained.” and the observation “I see the street is wet.” are connected: it rains, the water makes the street wet, then photons coming from somewhere bounce off the street and hit your retinas, which send an electrical signal to your brain provoking you to make the observation.

That’s valid for anything at all. The chain can be even longer: I may call my friend and tell them that it has rained, and so their probabilistic assignment that it has rained will change, because there is a causal chain connecting the rain to them.

And that’s why other people’s information is evidence for stuff: because there is some cause-and-effect chain between the stuff and them telling you that the stuff is true.

Information travelling between brains amongst honest folk is evidence, just as observing the event itself is (although not quite as strong evidence, since the more steps there are in the chain, the less the observation should move your probability).
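A toy sketch of why longer chains carry weaker evidence: if we model each link in the chain as passing on a binary message and getting it right with some reliability (the numbers here are made up for illustration, not from the text), the probability that the report at the end of the chain matches reality shrinks towards 1/2 as links are added.

```python
def chain_reliability(link_reliability, n_links):
    """Probability that the end of an n-link chain reports the true state,
    when each link independently flips the message with prob. 1 - r.
    For a binary message this is (1 + (2r - 1)^n) / 2: the chance that
    an even number of flips occurred along the chain."""
    return (1 + (2 * link_reliability - 1) ** n_links) / 2

# Each link is 90% reliable; watch the evidence degrade with chain length.
for n in [1, 2, 5, 10]:
    print(n, round(chain_reliability(0.9, n), 4))
```

With one link you get the link's own reliability back (0.9); by ten links the report is barely better than a coin flip.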

And evidence has an interesting mathematical meaning, too. Suppose you have some hypothesis $H$, and some observation $E$. $E$ is evidence about $H$ if $P(H|EX) \neq P(H|X)$ (as usual, $X$ is your background knowledge). But what is $P(H|EX)$? By Bayes’ Theorem:

$P(H|EX) = \frac{P(E|HX)}{P(E|X)}P(H|X)$

So for $P(H|EX)$ to not be equal to $P(H|X)$, we must have that $\frac{P(E|HX)}{P(E|X)} \neq 1$ which automatically means that $P(E|HX) \neq P(E|X)$.
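The update rule above is easy to check numerically. Here is a minimal sketch using the rain/wet-street example, with illustrative probabilities I've chosen myself (they are assumptions, not from the text): expanding $P(E|X)$ over $H$ and $\neg H$, the posterior differs from the prior exactly when the likelihood ratio $P(E|HX)/P(E|X)$ differs from 1.

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|EX) = P(E|HX) * P(H|X) / P(E|X),
    with P(E|X) expanded as P(E|HX)P(H|X) + P(E|~HX)P(~H|X)."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Illustrative numbers (my assumptions):
p_h = 0.3          # P(H|X): prior that it has rained
p_e_h = 0.95       # P(E|HX): street is wet, given rain
p_e_not_h = 0.10   # P(E|~HX): street is wet anyway (sprinklers, etc.)

p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)   # P(E|X) = 0.355
print(posterior(p_h, p_e_h, p_e_not_h))      # posterior rises above the 0.3 prior
print(p_e_h / p_e)                           # likelihood ratio P(E|HX)/P(E|X) > 1
```

The posterior comes out to about 0.80: the wet street is more likely under rain than in general, so observing it raises the probability of rain.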

Now let’s characterise evidence for and against a hypothesis mathematically.

$P(H|EX) > P(H|X) \Leftrightarrow P(E|HX) > P(E|X)$

The meaning of the above is that $E$ is evidence for $H$ if and only if $E$ is more likely to be observed when $H$ is true than otherwise. Likewise:

$P(H|EX) < P(H|X) \Leftrightarrow P(E|HX) < P(E|X)$

$E$ is evidence against $H$ if and only if $E$ is less likely to be observed when $H$ is true than otherwise.
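Both directions of these equivalences can be verified with the same rain example (numbers again my own assumptions): a wet street is more likely under rain than in general, so it is evidence for; a dry street is less likely under rain than in general, so it is evidence against.

```python
def update(p_h, p_e_given_h, p_e_given_not_h):
    """Return (posterior P(H|EX), marginal P(E|X)) for a binary hypothesis."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e, p_e

p_h = 0.3  # prior that it has rained

# E = "the street is wet": more likely given rain than in general.
post_for, p_e = update(p_h, 0.95, 0.10)
assert 0.95 > p_e and post_for > p_h   # P(E|HX) > P(E|X)  <=>  posterior up

# E' = "the street is dry": less likely given rain than in general.
post_against, p_e2 = update(p_h, 0.05, 0.90)
assert 0.05 < p_e2 and post_against < p_h  # P(E'|HX) < P(E'|X)  <=>  posterior down

print(post_for, post_against)
```

In each case the inequality between the likelihood and the marginal probability of the observation goes the same way as the inequality between posterior and prior, just as the equivalences say.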

But Probability Theory doesn’t deal with hypotheses and observations. It deals exclusively with propositions. So in fact, what $H$ and $E$ actually mean propositionally, in the equations, is:

$H$ : “Hypothesis H is true.”

$E$ : “Observation E has been made.”

Therefore, the probabilities have different meanings:

$P(H|X)$ : “The probability that hypothesis H is true.”

$P(E|X)$ : “The probability that observation E will be made.”

$P(H|EX)$ : “The probability that hypothesis H is true, given that observation E has been made.”

$P(E|HX)$ : “The probability that observation E will be made, given that hypothesis H is true.”

But the rest of the reasoning is perfectly sound (mathematics works by any name you give it).
