The use of Less Wrong

I’ve been planning to write a post along these lines, and the recent thing on tumblr about the LW community has given me just the right motivation and environment for it. Specifically, this nostalgebraist post gave me the inspiration I needed. He described the belief-content of LW as either obvious, false, or benign self-help advice one can find in many other places.

Now, nostalgebraist isn’t an LWer. I am. So let me say what the belief-content of LW looks like to me. Why do I think LW-type “rationality” is useful? What’s the use of it all? Is it just the norms of discourse?

And of course you have to take this with a grain of salt. I’m an LWer, so I’m severely biased in its favour compared to baseline. And even nostalgebraist is pretty warm towards the community, or at least the tumblr community, so even his opinion is somewhat more positive than baseline. To properly guard against confirmation bias, you’d want to seek out the opinions of people who have had bad experiences with LW. I’ve seen quite a few of those on tumblr, but hardly any outside of it, so there’s also the set of biases that comes with that venue. This paragraph is supposed to be your disclaimer: I’m not an objective outside observer. This is the view from the inside: why I personally think LW is useful, and why I (partially) disagree with nostalgebraist.

I think my first problem is: nostalgebraist is smart. And he’s got a certain kind of smarts, one I find with some frequency in LW, that makes him say stuff like “‘many philosophical debates are the results of disagreements over semantics’ — yeah, we know.” The first point is: we don’t. I don’t know if I’m too used to dealing with people outside of LW, or if he’s too used to dealing with people around as smart as he is, but this sort of thing is not, in fact, obvious. Points like “don’t argue over words” and “the map is not the territory” and “if you don’t consciously watch yourself, you will likely suffer from these biases” aren’t obvious! Most people don’t get them! I didn’t get them before I read LW, and the vast majority of people I meet (at one of the 100 best engineering schools in the world) don’t know this!

LW-type “insights” are not, in fact, obvious to most people. Most people – and yes, I’m including academics, scientists, mathematicians, whatever, people traditionally considered intelligent – do in fact spend most of their lives ignoring them completely. I’ll get back to what exactly those insights are later.

The second problem is… I also think he’s objectively wrong about what beliefs are actually common amongst LWers. Just take a look at the 2013 LW Survey Results. In fact, the website itself barely talks about FAI, so I don’t understand where the idea that Singularity-type beliefs are widespread comes from. Maybe it’s because everyone outside of LW talks not at all about FAI and the Singularity, while we talk a little about it? I dunno; in my personal experience with LW, much less than 0.5% of the time we spend talking is dedicated to this kind of discussion, and even belief in the Singularity/FAI is oftentimes permeated with qualifiers and ifs and buts. And the hardcore Bayesian thing isn’t all that settled, either.

At any rate, there’s much more to it than just that.

Object-level skills

So what are the not-at-all-obvious things LW purports to teach? What are its main points? First we can talk about the Sequences themselves.

The map is not the territory. This sort of dissociation between model and world, between belief and reality, this sort of what-you-see-is-not-in-fact-all-there-is, isn’t an insight I’ve really seen explicitly spelled out outside of contexts like LW and the Heuristics and Biases research (I’m going to lump these two together from here on because, really, a majority of LW content is just a presentation of that research). It’s a sort of mind-structure that is probably common amongst scientists and people who test hypotheses for a living, but the ability to compartmentalise that most people possess guarantees they don’t generalise the notion from, say, the existence of optical illusions.

(In fact, the very idea that context-specific knowledge can be generalised into broad cross-domain knowledge isn’t totally obvious to many people!)

The answer to a mysterious question should not be, itself, mysterious. The idea of replacing the symbol with the substance, of making sure your brain isn’t dealing with unmarked black boxes. The idea that names hide much more complexity than they let on. The idea that just because something looks like an answer doesn’t mean that it is. Curiosity-stoppers, belief in belief, having accurate and restrictive models of reality, this sort of epistemic self-honesty about what you really know and what you don’t. That thing with words, where people pretend words have intrinsic meanings and go into long-winded debates about what things “really” mean. This is not obvious.

The idea of not getting attached to ideas. The seemingly paradoxical notion of being so in love with your beliefs that you’re willing to throw them away as soon as the evidence points elsewhere. Resistance to confirmation bias and motivated cognition, realising what your model actually predicts and actually sticking your neck out to see which way the axe will swing, realising that beliefs are quantitative, not qualitative.
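
(To make that last point concrete, here’s a toy example with numbers I’m making up purely for illustration: say I give a claim H a 70% credence, and I run into evidence E that’s twice as likely if H is true as if it’s false. Bayes’ theorem says my new credence should be P(H|E) = (2 × 0.7) / (2 × 0.7 + 1 × 0.3) = 1.4/1.7 ≈ 0.82. The belief didn’t flip between “true” and “false”; it just moved up a bit. That’s what “quantitative, not qualitative” cashes out to.)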

Meta-level skills

Et cetera. We’ve got all these little specific object-level ideas and skills, each of which is maybe obvious to some people, but the ensemble of which is not. And behind it all we have a sort of overarching master idea, which is that humans do not reason optimally by nature, and need to be constantly on the lookout for these errors and biases.

And this is, I think, a very important meta-idea, one that’s professed by every self-respecting sceptic but actually practised by so, so few people: question literally everything. Start from scratch, don’t take anything for granted. Tradition is not enough, authority is not enough; you need to actually have the knowledge. One of the most common types of discussion amongst LWers is exactly this: bringing up some idea and questioning whether it makes sense, or whence it comes, or why people believe it, or whether we should act like we believe it even if we don’t, etc.

And most importantly, we develop the habit of second-guessing ourselves. Most good sceptics say they do this, of course, but LWers are the only group of people I’ve ever seen that actually does it. That does things like the thing I did at the start of this post, where I mentioned explicitly that I think this post is biased and that you should take it with a grain of salt. And that’s cake compared to the sort of second-guessing that goes on internally in my mind, and that I see daily amongst my friends. People taking criticisms seriously, and offering criticisms seriously. People who believe that being correct and accurate is more important than status games or feeling superior or feeling knowledgeable. The idea that having the knowledge is much more important than appearing to have it is claimed by many, but I’ve only seen it honestly practised by us.

And this sounds a lot like cheering for my team! And in a certain way it is, because I can’t claim this of all of LW; in fact, I’ve been exposed to plenty of LW that’s not at all like this either… I’m just talking about the little bubble inside of LW that includes tumblr LW and SSC and the Bay Area. That bubble includes me, which makes it worse, and I have absolutely no way of convincing you that this is true unless you actually know me and my friends, in which case I think you’d just get it. And having a community, a group, helps a lot with this, because we hack our own in-built social circuits (self-hacking is also a useful LW-type skill) by making this kind of thing into our own social norms, and thus we keep each other’s individual tendencies towards self-serving biases and skewed views in check.

Norms of discourse

Which brings me to the point on which I really, really agree with nostalgebraist: the norms of discourse adopted by LWers are pretty damn awesome. They’re not really explicitly enforced, in the sense that no one ever decided on them in advance. They just… sorta… emerged (snickers) from it all. They’re kinda the sort of norms of discourse you’d expect, given the sort of people who are attracted to LW and the ideas present there.

One general idea is being liberal in what you accept and conservative in what you emit. A specific implementation of this is trying to be very, very careful about how you phrase things, and to be as literal and precise as possible, in order to minimise the effort the other person has to spend to understand what you mean, while at the same time trying to meet the other person where they are.

Principle of Charity: the idea that other people’s beliefs make sense to them, and that if you start from the premise that the other person is a crazy mutant whose thoughts are an Escherian mess, you won’t actually come to believe true things. The principle that everyone’s human, everyone’s fallible, you’re not superior to other people, and your thoughts aren’t privileged special snowflake thoughts. Of course you believe you’re right; there’s not even any sense in believing you’re wrong (if you thought you were wrong, you wouldn’t believe it). But even then, the objective of discussions isn’t to convince the other person; it’s to reach truth and agreement. This shift in paradigm is subtle, but it’s distinct. And yeah, it’s exploitable by trolls, but… honestly, I have a personal philosophy of being kind even to trolls. I try to be kind to everyone, even people yelling at me or being hateful. It’s how I deal. Of course you can decide not to apply Charity to trolls, but having it as a general norm of discourse is very useful, and I’d say even a moral imperative of sorts.

A specific implementation of Charity is steelmanning, which is very hard to do and should still be done. The idea behind it is making your discussion partner’s argument the strongest form it can take – for you. The form that you would have the most trouble defeating. And, of course, always mention that you’re doing this. You should always prefer to interpret exactly what the other person meant, but what a person means, what they say, and what you understand are usually three completely different things. When you believe that what you understood was not what was meant, you tell the person that, and then you fight the steelman. You say, “What it looks like you mean is this, but I’m not sure I’ve understood it right. I’ll try to steelman your argument, but please tell me about anything I get wrong!” It’s a sort of epistemic humility: trying to always shoot yourself in the foot and miss, while always being willing to hear the other person’s words.

Also related is a modified Hanlon’s Razor: never assume malice when ignorance suffices. People aren’t villains in their internal narratives! They usually believe they’re doing the right thing! If they hurt you, it was very likely accidental. Most people aren’t malicious – and when they are, it’s not out of sheer evil; it’s usually either out of a belief that what they’re doing, though wrong, is justified by its consequences, or out of emotional stress.

Additionally, as a group we sort of take pride in taking all ideas seriously, at least at first. Yes, there are ideas that are simply objectively wrong, and others that are simply objectively right, but we don’t just assume we know which those are, and we don’t assume our discussion partner is aware of them. This is why neoreactionaries find a somewhat comfortable bed amongst us: we’re sort of the only ones willing to hear them out. This is why there are so many Objectivists here: we don’t start out with a prejudice against Ayn Rand that blinds us to what a person is actually saying and doing. LW discourse is an open field of ideas, one where we may entertain thoughts without actually endorsing them, one where the game of argument is sought out for its own sake.

And we try to be as precise and honest as possible, as a general rule. We want to avoid conflict and misunderstandings as much as possible, and we want to actually reach truth. To do so, we try to engage with everything the other person says, be exact in our words, and leave as little room for ambiguity as possible. And this still fails sometimes, which is why that epistemic humility is always necessary when dealing with other people.

Finally, what I think is maybe one of the most important norms, in that it sort of enables, encourages, and brings out the others, is detachment. Trying to be objective and not let anger or partisanship or politics influence you. And of course this is impossible, of course you will always be biased by all of that, but if you consciously try to avoid it you’ll certainly do better than if you just throw your hands up in the air and say, “Well, guess I can’t do anything, right?” If you at least move in the general direction of what you judge to be objectivity, and make the necessary concessions, it helps.

Incidentally, this is why I think people talking about “detachment privilege” have no idea what they’re talking about. That’s typical-minding, right there. Because they get heated in discussions, they don’t usually imagine that other people might not. And they don’t usually imagine people (like me) who see detachment as a moral imperative, a value, a goal. And in my specific case, detachment is costly, because dealing with a discussion partner who isn’t detached distresses me. I can’t help it! When another person feels distressed, even if I dislike them, that distresses me. So, yeah, being in a group where detachment is a noble goal helps that specific aspect of my weirdbrains.

What do we have, then?

What we have is a bunch of ideas, no single one of them all that hard to come by on its own, but the whole group pretty unlikely to be found together. They’re not obvious, even amongst smart people. They’re not straightforward.

We have some meta-level skills that emerge from and shape these object-level ideas. Some generalisations of patterns and methods of thinking.

And we have the norms of discourse, dedicated to making the argumentation field as fertile as possible, and trying to make sure no one is misrepresented or mistreated.

And I didn’t even mention everything. I didn’t talk about typical-minding specifically, or complexity penalties, or tabooing words. There’s much more than just the above.

And maybe I’m being idealistic. Or maybe I’m describing just my bubble. Or just myself. Or just what I wish I was. It doesn’t matter, in the end. If those aren’t the reality, they’re still the goal, and you can be damn sure I’m going to try to attain them.

So what do we have? How has this helped me?

It has made me kinder. I think that’s the best I can get out of it. It really, really has. It has made me more honest, more hard-working, more conscientious, more efficient, more focused. LW helped me. The only-obvious-in-hindsight ideas made me a better person, made me a smarter person, made me a more knowledgeable person. They made me deal better with other people, and with learning and teaching, and with relationships, and with myself.

So, I disagree with nostalgebraist that LW isn’t useful. I think he’s just smart enough that he was able to figure the LW stuff out on his own, or from other sources. I certainly was not, at least not at first. By now I may be catching on and getting the hang of the kind of thinking that leads to better rationality.

But I have been getting better at winning; I am much better than past!me, and I owe most of that to LW.


3 Responses to The use of Less Wrong

  1. Anon says:

    > It has made me kinder. I think that’s the best I can get out of it. It really, really has. It has made me more honest, more hard-working, more conscientious, more efficient, more focused. LW helped me. The only-obvious-in-hindsight ideas made me a better person, made me a smarter person, made me a more knowledgeable person. They made me deal better with other people, and with learning and teaching, and with relationships, and with myself.

    Endorsed. Especially the “made me kinder” part.

  2. Pingback: Alieving Rationality | An Aspiring Rationalist's Ramble

  3. 1Z says:

    Everyone in philosophy knows that many philosophical arguments are about words, and “don’t argue over words” is the wrong advice. The correct advice is “argue about words to the correct extent at the correct times”.
