When anyone talks about the possibility or probability of the creation of a UFAI, there are many failure modes into which lots of people fall. One of them is the logical fallacy of generalisation from fictional evidence, where people think up instances of AI in fiction and use those as an argument. Another is the tendency to propose solutions to a problem faster the harder it is, without spending even five minutes actually thinking about it. The absurdity heuristic makes an appearance, too.
But someone who’s familiar with LW or the whole cognitive biases shizzaz might be a bit cleverer and argue that most futurists get it wrong and predicting the future is actually really hard (conjunction fallacy). Ozy wrote a post about donating to MIRI in which zie points this out, but in the end mentions talking to, well, yours truly about it, and I think overall there are three points where I disagree with zir.
First, I propose the existence of a fallacy related to the conjunction fallacy and the sophisticated arguer effect, something I’ll call the Anti-Conjunction Fallacy, or perhaps the Disjunction Fallacy, or something. Maybe this isn’t a direct reply to Ozy’s point, but it’s a more general reply to the counterargument that “predicting AIs typically invokes a highly complex narrative with a high Complexity Penalty.”
The Conjunction Fallacy is a fancy name for the fact that people sometimes judge P(A∧B) > P(A), which is to say that a more complex proposition with more details seems to us more probable than a simpler one, because it appeals to our sense of narrative. This is a fallacy because it’s a theorem of probability that the exact negation of that sentence is true, no matter what A and B are; that is, it is always the case that P(A∧B) ≤ P(A). But conversely, we have that P(A∨B) ≥ P(A); that is, a disjunctive story is more likely than any of its components.
My proposed fallacy is this: many people (particularly rationalists) who see a long tale have an instinct to cry “complexity penalty” without actually checking whether the logical connective between the elements of that tale is a conjunction or a disjunction, AND or OR, and thus fall into the trap of assigning a disjunctive story a low probability. And in my experience, most AGI predictions are heavily disjunctive: the people making them (such as Nick Bostrom in his book) suggest a myriad of possible ways a superintelligence could arise, each of which is relatively probable given current trends (e.g. whole brain emulation is an active research area that has seen actual results), so the probability of the enterprise as a whole is much higher than that of any individual path. This is true of many parts of the superintelligence narrative, from its formation to its takeoff to its potential powers. I don’t need five minutes to think of five different ways a superintelligence could reasonably take over the world, and I’m not superintelligent.
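To make the arithmetic concrete, here’s a quick sketch, with entirely made-up probabilities for five hypothetical independent paths, of why the same ingredients give a tiny conjunction but a respectable disjunction:

```python
import math

# Made-up numbers: five independent routes to superintelligence,
# each individually unlikely.
paths = [0.10, 0.08, 0.05, 0.12, 0.07]

# Conjunctive story: every element must hold, so probabilities multiply.
p_conjunction = math.prod(paths)

# Disjunctive story: any one path suffices, so take 1 - P(no path works).
p_disjunction = 1 - math.prod(1 - p for p in paths)

print(f"P(all paths):         {p_conjunction:.7f}")  # far below any single path
print(f"P(at least one path): {p_disjunction:.3f}")  # well above the best single path
```

The conjunction comes out around 0.0000034, while the disjunction is around 0.356: the same five modest numbers, and only in the conjunctive reading is a complexity penalty warranted.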
So the moral of this part here is that, when you see a long prediction about something, first see whether it’s disjunctive or conjunctive before looking for fallacies. Isaac Asimov may have been wrong about the exact picture the future would paint, but by golly a large number of his individual predictions did in fact come true!
My second point is not so much an objection as a sort of reminder about what MIRI is actually doing. I’m not sure what its original goals were, but it most certainly isn’t trying, by itself, to program a superintelligence, at least not right now. Ozy says:
So it seems possible the solution is not independent funding, but getting the entire AGI community on board with Friendliness as a project. At that point, I can assume that they will deal with it and I can return to thinking of technology funding as a black box from which iPhones and God-AIs come out.
The thing is, that is one of MIRI’s explicit goals: outreach about AI dangers. And they seem to be at least mildly successful, or at any rate something was, given that Google created an AI Ethics board when it bought DeepMind, and given the growing number of prominent intellectuals who have been talking about the dangers of AI lately, some of whom directly mention MIRI.
My third and final objection is that I think zie misunderstood me when I talked about the predictive skill of people who actually build technologies. I didn’t mean that they have some magical insider information or predictive superpowers that allow them to know these things; I meant that when you’re the one building a thing, what you’re doing isn’t predicting so much as setting goals. Predicting what Google is going to do is one thing; being inside Google actually doing those things is a whole ‘nother, and when AGI researchers talk about AGI there is frequently an undertone of “even if no one else is gonna do it, I am.” Someone who works at MIRI isn’t concerned so much with the prediction that a superintelligence is possible as they are with their own ability to bring it about, or raise the odds of a good outcome if/when it does.
One last point, beyond those objections, is something Ozy touched upon and on which I want to elaborate. Zie mentioned that AGI is fundamentally different from other “large-scale” projects of the past in that, unlike, say, nukes, the way it’s done will severely impact its outcome. As it is, I’d argue that almost no conclusions at all can be drawn from the past funding and development of technological advances because… the sample space is tiny. We can’t judge whether individuals funding research is an effective method of getting that research done because this idea, and the means to do so effectively, are brand new. During the 20th century, most technological advances happened due to the military, but that’s perfectly understandable given the climate: two full wars and a cold one spanning large powers, constant change in political and economic climates…
But large tech companies are a new invention, and it is my impression that, since at least the mid-nineties, most technological advancements have had at least a hand from the private sector, and this seems to increasingly be the case. I’m not sceptical at all of the ability of individually funded technologies, especially software technologies, to play a large part in the future, because that’s what they’re doing right now, in the present.
But at any rate, there are a number of ways AGI could come about, and MIRI is trying to do what it can. So far, other than MIRI, the FHI, and mmmmaaaaybe Google, it seems no one else is.