A non-absurdity heuristic, and why I’m not a Singularitarian

So er…

Erm.

Yeah. This is no longer a thing.

I wrote a post exactly six months ago explaining why I was a Singularitarian. Or, well, so I thought. Except then I thought about it long and hard. And I finished reading Bostrom’s book. And, well…

My core argument there, that there are many, many ways of getting to an AGI, is sound, I think. The prediction “AGI will be a thing” is disjunctive, and it’s probably correct. However, of the many forms AGI can take, software AI seems to be the murkiest, least well understood one. And… it’s the only one that really promises a “singularity” in the strictest of senses.

The argument, basically, is that a smarter-than-human software AI with access to its own source code, and to our knowledge of how it was built, would get into a huge feedback loop where it'd constantly improve itself and soar. And that's a very intuitive argument. Humans are very likely not the "smartest" possible minds, and just eliminating all our cognitive biases and giving us faster processing power would probably be a huge step in the right direction.
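Just to make the intuition concrete, here's a toy model of that feedback loop. This is my own sketch: the exponent k, the rate, the step count, every number in it is made up for illustration, not a claim about real systems. Each step, "intelligence" I grows by an amount proportional to I**k; superlinear returns (k > 1) give you the explosion, diminishing returns (k < 1) give you a plateau.

```python
# Toy model of recursive self-improvement. Purely illustrative:
# the exponent k, the rate, and the step count are all assumptions.
def self_improvement(i0=1.0, k=1.0, rate=0.1, steps=25):
    """Iterate I <- I + rate * I**k and return the final value."""
    i = i0
    for _ in range(steps):
        i += rate * i ** k
    return i

for k in (0.5, 1.0, 1.5):
    # k < 1: plateau; k = 1: plain exponential; k > 1: runaway.
    print(f"k={k}: I after 25 steps = {self_improvement(k=k):,.1f}")
```

Whether real returns to self-improvement look more like k > 1 or k < 1 is exactly the kind of question nobody has a theory for yet.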

But the absurdity heuristic has a converse: just as we dismiss things that sound intuitively absurd before looking at their actual merits, we accept intuitively plausible ideas too readily, before criticising them. And I don't think this converse should have a name, because, well, it's probably not a single thing; it's the set of all biases and heuristics, it's just intuition itself. But my point here is that the argument… has a hard time surviving probing. It's intuitive, we accept it readily, and we don't question it enough. Or at least, I didn't.

We don’t have a well-defined notion of what agency and intelligence are, we have absolutely no idea how to even begin building a software agent, and even if we did, there has been very, very little exploration of actual theoretical hard limits on improvement. Complexity theory, information theory, computability theory: all of these are necessary for us to even begin to grasp what’s possible and what’s not.
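To give a flavour of what such a hard limit looks like, here's the textbook one from computability theory, the halting problem, as a runnable sketch. This is my toy rendition of the standard diagonal argument; halts and contrarian are hypothetical names, not anybody's real API.

```python
# The classic computability hard limit, sketched. Substitute any
# candidate `halts` oracle you like; `contrarian` defeats them all.
def halts(func, arg):
    """Hypothetical halting oracle; this naive one always says True.
    The diagonal argument shows EVERY candidate fails somewhere."""
    return True

def contrarian(func):
    """Built to do the opposite of whatever the oracle predicts."""
    if halts(func, func):
        while True:  # oracle said "halts", so loop forever
            pass
    # oracle said "loops forever", so halt immediately

# On itself, contrarian behaves opposite to the oracle's prediction:
# halts(contrarian, contrarian) returns True, yet contrarian(contrarian)
# would loop forever. No total halts() can escape this trap.
print(halts(contrarian, contrarian))  # True -- and provably wrong
```

We'd want analogues of results like that before talk of limits on self-improvement means anything precise.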

Which is not to say superintelligence won’t happen! In 300 years, maybe 200, maybe 100, it might be here. I don’t know; I can’t predict that. But right now, the Singularity is Pascal’s Mugging, or some other kind of mugging where the situation is so far outside any reference class we know that even assigning it a probability would be a farce.

And this is also not to say that research into AI safety isn’t necessary. What MIRI is doing, right now, is foundational research: it’s trying to make AI safety an actual field, with actual people doing research in it. And yes, that will probably include complexity, computability, information, logic, all of that. They’re starting with logic, because logic can prove things for us that are true everywhere; it’s a place to start. They’re working on decision theory; they’re working on value alignment. Those things are good and necessary, and I’m not going to discuss here what priority I personally believe should be given to each of those approaches, or how effective MIRI is.

But I no longer think this is an urgent problem. I no longer believe this is something that needs doing immediately. I’ve unconvinced myself that this is a high-impact, high-importance project right now. I’ve unconvinced myself that… I should work on it.

So what now? I spent the past five years of my life geared towards that goal; I built a fairly large repertoire of knowledge that would help me there; I specialised. And now my foundation is no longer there.

So I guess I’m going to try to use that, my skills and interests and capacity, to make an impact, somehow.


3 Responses to A non-absurdity heuristic, and why I’m not a Singularitarian

  1. You and I have similar skills (math, programming, probability theory), so I’m very curious what opportunities to do good you come across.

  2. When so much has been written on the subject, it feels awkward to boil it down to just a few sentences.

    I’m right there with you. Most of my experience with computability suggests that the restrictions on the possible are real and ubiquitous, and they only get worse and more relevant as you scale your problem domain up. It’s a pessimistic outlook. For me, the burden of proof has shifted to those who say “you can’t prove it’s not possible”, since the default assumption is that a thing is not possible. I’m glad that someone is researching AI safety, because no knowledge is ever truly wasted, but it’s not some holy quest to me.

    You’ve said it better than me, I’m sure you’ve thought about this more deeply than I have. Thanks for putting it in words.
