In part 3, I discussed the problem of finding a way of drawing a posterior point estimate of a number based on a series of point estimates that’s more “theoretically valid” than taking the median, which is the standard of the domain that inspired the post in the first place. I arrived at a likelihood function like:
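Since the hyperparametres below are n₀ effective prior observations with precision τ₀, the likelihood in question should be a product of Student’s t factors; a reconstruction under that assumption (the exact constants and normalisation may differ from the original):

```latex
L(\mu) \propto \prod_{i=1}^{4} \left[ 1 + \frac{\tau_0 \,(x_i - \mu)^2}{n_0} \right]^{-\frac{n_0 + 1}{2}}
```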

So I decided to look at what that looks like with a vector of four estimates and various values of n₀ and τ₀ (for now, all the individual hyperparametres of the estimates will be the same).
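As a sketch of that experiment in code (the vector of estimates and the hyperparametre values here are made-up placeholders, not the ones behind the original plots), the likelihood can be evaluated on a grid:

```python
import numpy as np

def t_likelihood(mu, x, n0, tau0):
    """Unnormalised likelihood of the location mu given point estimates x,
    each contributing a Student's t factor corresponding to n0 effective
    prior observations of precision tau0."""
    mu = np.asarray(mu, dtype=float)[..., None]
    # Each estimate contributes [1 + tau0*(x_i - mu)^2 / n0]^(-(n0 + 1)/2)
    factors = (1.0 + tau0 * (x - mu) ** 2 / n0) ** (-(n0 + 1) / 2)
    return factors.prod(axis=-1)

# Hypothetical estimates (one deliberate outlier) and hyperparametres
x = np.array([1.0, 2.0, 3.0, 10.0])
grid = np.linspace(-5.0, 15.0, 2001)
L = t_likelihood(grid, x, n0=1.0, tau0=1.0)
print(grid[np.argmax(L)])  # location of the posterior mode under a flat prior on mu
```

With one effective prior observation (n0=1), each factor is a Cauchy density, which is what makes the combined estimate robust to a stray outlier like the 10.0 above.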

First, something like an “ignorance” prior, with almost zero (n₀ ≈ 0) “effective” prior observations of precision τ₀, or variance 1/τ₀:

Then, again with almost no prior observations, but with a higher precision τ₀:

Pretty much the same thing.

However, for n₀ = 1, one effective prior observation with sample precision τ₀ (whatever the hell *that* means with only one observation):

Which is pretty, well, pretty. It’s not even multimodal, and the prior confidence in all four estimates is exactly the same, with a fairly low precision. If I take the precision τ₀ higher:

So, yeah. Talk about robustness.

The Student’s t-distribution is what you get from a Gaussian when you marginalise over the precision, weighted according to a Gamma distribution.
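In symbols, with the shape–rate parametrisation of the Gamma (this is the standard identity; λ is the precision parametre of the resulting t):

```latex
\int_0^\infty \mathcal{N}\!\left(x \mid \mu, \tau^{-1}\right) \mathrm{Gam}(\tau \mid \alpha, \beta)\, \mathrm{d}\tau = \mathrm{St}\!\left(x \;\middle|\; \mu,\ \lambda = \tfrac{\alpha}{\beta},\ \nu = 2\alpha\right)
```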

That prior distribution is, like I said, equivalent to having observed n₀ effective prior points with sample precision τ₀. For n₀ → 0, however, that’s not normalisable; in fact, in the way it’s given, it’s infinite. But we can rewrite it:
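A sketch of that rewrite, assuming the shape and rate are α = n₀/2 and β = n₀/(2τ₀) (my labels for the hyperparametres, matching the “n₀ observations of precision τ₀” reading):

```latex
p(\tau) = \mathrm{Gam}\!\left(\tau \,\middle|\, \tfrac{n_0}{2},\, \tfrac{n_0}{2\tau_0}\right) \propto \tau^{\frac{n_0}{2} - 1} \exp\!\left(-\frac{n_0 \tau}{2\tau_0}\right)
```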

Taking the limit as the shape and rate both approach 0 gives us p(τ) ∝ 1/τ, which is the ignorance prior for a scale parametre of a distribution in the same way the constant distribution is the ignorance prior for a location parametre (like the mean). We’ve been using the ignorance prior for the mean μ; what if we used the ignorance prior for the precisions as well?

And the above *does* have an analytic form:
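Marginalising a single Gaussian factor over τ under p(τ) ∝ 1/τ, the integral is a Gamma integral and collapses neatly (a sketch of the computation, not necessarily the original route):

```latex
\int_0^\infty \sqrt{\frac{\tau}{2\pi}}\; e^{-\frac{\tau (x-\mu)^2}{2}}\; \frac{\mathrm{d}\tau}{\tau} = \frac{\Gamma\!\left(\tfrac{1}{2}\right)}{\sqrt{2\pi}} \left[\frac{(x-\mu)^2}{2}\right]^{-1/2} = \frac{1}{\lvert x - \mu \rvert}
```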

So the likelihood function for all estimates is:
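Under that improper prior each estimate contributes a factor of 1/|xᵢ − μ|, so:

```latex
L(\mu) \propto \prod_{i} \frac{1}{\lvert x_i - \mu \rvert}
```

Each factor has a non-integrable singularity at μ = xᵢ, which is exactly the divergence complained about next.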

Of course, there’s the problem that the above does not converge at *all*; it’s so improper it makes me cry. And well, Jaynes used to say that when the posterior is improper (and the prior was derived from a well-defined limiting process, which in this case it wasn’t, exactly) it’s because we don’t have enough information for inference. I’m not sure *what* information would be sufficient for inference in this case, but well, such is life, I guess.

I’ll probably talk about the Laplace distribution in part 4, when I get to it, but for now, I think the Student’s t is pretty good.
