I am often excited at the opportunity to dunk on Nick Bostrom, because I think a lot of what he has to say is pretty silly. But I should try to maintain a veneer of academic respectability and talk a little bit about why I think concerns about "intelligence explosions" and the value of trillions of possible lives are not as important as the things already on our plate.

So what is the argument about existential risk and intelligence explosion? Bostrom's view looks, at its core, quite a lot like a brand of utilitarianism. The argument about existential risk is roughly "if future persons have the same moral standing as present persons, and there are likely to be a great many more future persons than there are present persons, then the highest moral imperative is securing the existence of those future persons." This task, securing the existence of future persons, is more important than (and I will quote Bostrom here) "Eliminating poverty or curing malaria." Find the Atlantic interview where he says this here.

I think the simple (and, to be fair to Bostrom, relatively easily answered) argument against this is that, if time is linear, future persons don't exist, and we ought to value people who exist over people who don't. Maybe a more sophisticated version of this has a sliding scale of moral value tied to proximity to existence, so that people who will soon exist are afforded more moral consideration than people who may exist a billion years from now, but less than someone who exists currently. Either way, the intuitive force of the premise that possible persons are exactly as important as actual persons is not strong.

A further concern is that you can very quickly begin to justify some pretty repugnant conclusions on the basis of securing a marginal increase in the probability of "trillions of future lives": dumping funding into research on sci-fi dystopias instead of feeding people, for example, or allocating funding to interplanetary colonization instead of healthcare.
I don't want to write an actual philosophy paper with real arguments about Bostrom's views, so here is a pretty accessible cultural analysis of the concern about AI explosion. An important point, I think, is that the arguments about intelligence explosion look very similar to Pascal's Wager: even if the probability is very low, the potential consequences are so bad that we ought to start doing something about it now. This, I think, distracts from actual problems like algorithmic bias, our personal data being bought and sold, and the potential for AI to spread huge quantities of disinformation quickly and easily. A utilitarian logic or a Pascal's Wager-style argument encourages us to ignore these problems or (and this is what motivates my complaint here) direct funding away from them. Should we be building general AI? I have no idea. Should we be worried about AI taking over? Probably not right now.