Yeah. This is no longer a thing.
I wrote a post exactly six months ago explaining why I was a Singularitarian. Or, well, so I thought. Except then I thought about it long and hard. And I finished reading Bostrom’s book. And, well…
My core argument there, that there are many, many ways of getting to an AGI, is sound, I think. The prediction “AGI will be a thing” is disjunctive, and it’s probably correct. However, of the many forms AGI can take, software AI seems to be the murkiest, least well understood one. And… it’s the only one that really promises a “singularity” in the strictest of senses.
The argument, basically, is that a smarter-than-human software AI with access to its own source code and to our knowledge on how it was built would get into a huge feedback loop where it’d constantly improve itself and soar. And that’s a very intuitive argument. Humans are very likely not the “smartest” possible minds, and just eliminating all the cognitive biases and giving us faster processing power would probably be a huge step in the right direction.
But the absurdity heuristic has a converse: just as we dismiss things that sound intuitively absurd before examining their merits, we accept intuitively plausible ideas too readily, before criticising them. I don’t think this converse should have a name, because, well, it’s probably not a single thing: it’s the set of all biases and heuristics, it’s just intuition itself. But my point here is that the argument… has a hard time surviving probing. It’s intuitive, we accept it readily, and we don’t question it enough. Or at least, I didn’t.
We don’t have a well-defined notion of what agency and intelligence are, we have absolutely no idea how to even begin building a software agent, and even if we did, there has been very, very little exploration of actual theoretical hard limits on self-improvement. Complexity theory, information theory, computability theory: all of those are necessary for us to even begin having a grasp on what’s possible and what’s not.
Which is not to say superintelligence won’t happen! In 300, maybe 200, maybe 100 years, it might be here. I don’t know. I can’t predict that. But right now, the Singularity is Pascal’s Mugging, or some other kind of mugging where the situation is so far outside any reference class we’ve known that even assigning it a probability would be a farce.
And this is also not to say that research into AI safety isn’t necessary. What MIRI is doing, right now, is foundational research: it’s trying to establish AI safety as an actual field, with actual people doing research in it. And yes, it will probably include complexity, computability, information, logic, all of that. They’re starting with logic, because logic can prove things that are true everywhere; it’s a place to start. They’re working on decision theory, they’re working on value alignment. Those things are good and necessary, and I’m not going to discuss here what priority I personally believe should be given to each of those approaches, or how effective MIRI is.
But I no longer think this is an urgent problem. I no longer believe this is something that needs doing immediately. I’ve unconvinced myself that this is a high-impact high-importance project, right now. I’ve unconvinced myself that… I should work on it.
So what, now? I spent the past five years of my life geared towards that goal, I have built a fairly large repertoire of knowledge that would help me there, I have specialised. My foundation is no longer there.
So I guess I’m going to try to use that, my skills and interests and capacity, to make an impact, somehow.