When I discussed the usefulness (or lack thereof) of Aumann’s Agreement Theorem, I mentioned that the next best thing to sharing the actual knowledge you gathered (or mind melding) was sharing likelihood ratios.
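To make "sharing likelihood ratios" concrete, here is a minimal sketch (with made-up numbers) of what pooling evidence that way looks like in the odds form of Bayes' theorem, assuming the two observers' pieces of evidence are conditionally independent given the hypothesis:

```python
# Two observers share likelihood ratios rather than their raw data.
# Odds form of Bayes' theorem:
#   posterior odds = prior odds × LR_alice × LR_bob
# (valid when the two observations are conditionally independent
# given the hypothesis).

def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

prior = 0.5        # shared prior: P(H) = 50%, i.e. 1:1 odds
lr_alice = 4.0     # Alice's observation favours H at 4:1
lr_bob = 2.0       # Bob's observation favours H at 2:1

posterior_odds = prob_to_odds(prior) * lr_alice * lr_bob  # 1 × 4 × 2 = 8
posterior = odds_to_prob(posterior_odds)                  # 8/9 ≈ 0.889
print(round(posterior, 3))
```

Neither party had to transmit the observations themselves; the two likelihood ratios are a sufficient summary, which is exactly why they're the next best thing to sharing the actual knowledge.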
But sometimes… you can’t. Well, most of the time, really. Or all the time. Humans do not actually have little magical plausibility fluids in their heads that flow between hypotheses and are kept track of dutifully by some internal Probability Inspector, just like humans do not actually have utility functions. If a Bayesian tells you that they believe a thing with 40% probability… either they’re crazy, or they’re Omega, or they’re giving you a ballpark estimate of subjective feelings of uncertainty.
And then there are the times when your fellow rationalist… is not actually someone you know. They might be a friend of a friend, or a famous scientist, or just the abstract entity of Science.
In a recent Facebook post, Eliezer Yudkowsky said:
In a Traditional Rationalist upbringing (you grew up in a science-literate, Sagan-and-Feynman family) you learn that to learn well and be a good person, you shouldn’t take things on authority. Even if someone you respect says a thing, but you can’t see why it’s true or it doesn’t seem well-justified, you need to go on questioning them skeptically until the belief seems well-justified. “I can’t believe in General Relativity just because you tell me to. What sort of experimental evidence is there?”
We shouldn’t “respect authority,” we say. The reason why Science-the-ideal has success where campfire tales and religious edicts don’t is that it doesn’t trust Authorities that hand down Truth, and instead searches for the truth itself: looking at the actual world, testing its predictions, trying to falsify its theories, and so on.
…and in the process, Science became an Authority that hands down truth.
Don’t get me wrong, Science is in fact right, and the truths it “hands down” are in fact the correct ones, or rather they’re pointing in the right direction and always approaching the correct ones. But that’s the case because you can actually go and follow the methods and you’ll be able to reproduce the results yourself (or so you’d believe – maybe Science is a Big Conspiracy, eh?).
The version of this that I would espouse as probability-theory-OK (not leading to updates in a predictable direction), the grownup version of the technique, is that if you respect the other person, you’re aware that they probably know something you don’t—that in their own minds, their words mean a true thing. But you’re also simultaneously aware that it does no good to profess a statement that you don’t understand, or repeat back words whose meanings you may not know; and if you can’t see why the assertion is true or justified, it’s quite probable that you haven’t correctly understood what the other person is trying to say. So you say, “I’m not sure I understood what you mean. Did you mean X, because Y?” E.g., “I’m not sure what sort of state of affairs would correspond to gravity being caused by spacetime bending. Do you mean that… um, actually I’ll just ask what ‘bending’ is?” Or if you’re sure that you understand the assertion, and you think the other person probably is correct, but you don’t know why that thing would be true, you can say that too: “I’m willing to believe X is true, but I can’t see how we know it’s true.” This is what my conversations with, say, Anna Salamon sound like.
Respect For Authority, in Bayesian terms, does not mean that you hear what The Wise Teacher has to say and then repeat it. What it does mean is that you respect that your peer probably has information you don’t, and has thought about this for longer than you have (so they have information in the form of computation time), and that therefore, unless you have information they don’t, it’s very, very likely that you’ll end up agreeing. And even when you do have information they don’t, they’re probably right anyway, because revolutionary, table-turning information is hard to come by.
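One crude way to picture treating a better-informed peer's opinion as evidence is weighted pooling in log-odds space. The weights here are purely illustrative (probability theory doesn't hand them to you for free; in principle they'd come from how much evidence and computation each party actually has), but the sketch shows the qualitative behaviour: your pooled estimate lands much closer to the expert's than to yours.

```python
import math

def logit(p):
    """Probability → log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Log-odds → probability."""
    return 1 / (1 + math.exp(-x))

# My estimate vs an expert's. The expert gets more weight because
# they (probably) know more -- the 4:1 weighting is an assumption
# for illustration, not a derived quantity.
p_me, p_expert = 0.3, 0.8
w_me, w_expert = 1.0, 4.0

pooled = sigmoid(
    (w_me * logit(p_me) + w_expert * logit(p_expert)) / (w_me + w_expert)
)
print(round(pooled, 3))  # lands near the expert's 0.8, not my 0.3
```

The point of the exercise is the one in the paragraph above: deference isn't repeating the teacher's words, it's letting their stated belief move your probability in proportion to how much they plausibly know that you don't.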
This doesn’t solve the problem of the impersonality of Science, though. It’s all well and good to talk about how Quantum Mechanics is probably a good approximation to the underlying physics of the universe, and how its experimental predictions are amongst the most wildly successful in the history of science, but if you don’t actually know Quantum Mechanics, the right and proper action is not to say that “Electrons are waves and particles at the same time – Science says so!” That’s reciting the teacher’s password (and it’s the wrong password anyway – it’s been popularised over the years, but it doesn’t actually mean anything).
What I’d advise, rather, is a form of epistemic learned helplessness: Yes, scientists talk about something like “wave-particle duality”; no, I do not know what that actually means, and that “belief” does not actually correspond to anything in the territory. Yes, it’s a consensus that man-caused global warming is real; no, I do not know specifically what evidence they have for that. Yes, evolution is real; no, I do not know the specific details of how it works and what predictions it makes. Et cetera. It’s okay to confess ignorance. It’s okay to say, “If I had as much information as these people do, I’d probably agree with them,” while at the same time not knowing what that model even ought to correspond to. You don’t have to know everything everyone ever knows, and sometimes getting the information is… simply not a high priority. We’re constrained optimisers, and we’re not logically omniscient.
Respecting authority, probabilistically speaking, is knowing that people who know more than you do are probably right about the things they know more about (at least, that’s the ideal case with rational agents); it’s knowing that their words are evidence and should be taken into account. It’s knowing where you’ll get to, before getting there.