A few years ago, my co-blogger Eliezer Yudkowsky and I debated on this blog about his singularity concept. We agreed that machine intelligence is coming and will matter lots, but Yudkowsky preferred (local) “foom” scenarios, such as a single apparently-harmless machine in a basement unexpectedly growing so powerful over a weekend that it takes over the world, drifting radically in values in the process. While Yudkowsky never precisely defined his class of scenarios, he was at least clear about this direction.
Philosopher David Chalmers has an academic paper on the singularity, and while he seems inspired by Yudkowsky-style foom scenarios, Chalmers tries to appear more general, talking only about the implications of seeing “within centuries” “AI++”, i.e., artificial intelligence “at least as far beyond the most intelligent human as the most intelligent human is beyond a mouse.” Chalmers worries:
Care will be needed to avoid … [us] competing [with them] over objects of value. … [They might not] defer to us. … Our place within that world … [might] greatly diminish the significance of our lives. … If at any point there is a powerful AI+ or AI++ with the wrong value system, we can expect disaster (relative to our values) to ensue. (more)
Chalmers’ generality, however, seems illusory, because when pressed he relies on foom-like scenarios. For example, responding to my commentary, Chalmers says:
Hanson says that the human-machine conflict is similar in kind to ordinary intergenerational conflicts (the old generation wants to maintain power in face of the new generation) and is best handled by familiar social mechanisms, such as legal contracts whereby older generations pay younger generations to preserve certain loyalties. Two obvious problems arise in the application to AI+. Both arise from the enormous differences in power between AI+ systems and humans (a disanalogy with the old/young case). First, it is far from clear that humans will have enough to offer AI+ systems in payment to offset the benefits to AI+ systems in taking another path. Second, it is far from clear that AI+ systems will have much incentive to respect the existing human legal system. At the very least, it is clear that these two crucial matters depend greatly on the values and motives of the AI systems. (more)
This vast power difference makes sense in a (local) foom scenario, but makes much less sense if we are just talking about speeding up civilization’s clock. Imagine our descendants gradually getting more capable and living faster lives, with faster tech, social, and economic growth, and shorter gaps between successive generations, so that as much change happens in the next 300 years as has occurred in the last 100,000. In this case why couldn’t our descendants manage their intergenerational conflicts similarly to the way our ancestors managed them? Similar events would happen, but just compressed closer in time.
Our ancestors have long had “enormous power differences” between folks many generations apart, and weak incentives to respect the wishes of ancestors many generations past. Their intergenerational conflicts were manageable, however, mainly because immediately adjacent overlapping generations had roughly comparable power (shared values mattered much less). So if immediately adjacent overlapping future generations also have comparable power, why can’t they similarly manage conflict?
Yes, familiar mechanisms for managing intergenerational conflict seem insufficient if a single machine with unpredictable values unexpectedly pops out of a basement to take over the world. But Chalmers doesn’t say he is focusing on foom scenarios; he says he is talking in general about great growth happening within centuries.
You might respond that our descendants will differ in having more generations overlap at any given point in time. But imagine that the growth speedup of the industrial revolution had never happened, so that the economy doubled only every thousand years, but that plastination was feasible, allowing brains to be preserved in plastic at room temperature and revived millions of years later. If a tiny fraction of each generation were put into plastic and revived over the next thousand generations, would this fact suddenly make intergenerational conflict unmanageable, making it crucial that the current generation ensure that no future generation ever had the wrong values?
I’m not saying there are no scenarios where you should care about descendant values, or even that you should be fully satisfied with traditional approaches to intergenerational conflict. But I am saying that having lots of growth in the next few centuries does not by itself invalidate traditional approaches, and that folks like Chalmers should either admit they are focused on foom scenarios, or explain why foom-like concerns arise in very different scenarios.
A lesson to draw from this example, I think, is that it is often insufficient to say that some important development X will happen “soon” – it is better to say that X will happen on a timescale short compared to another important related timescale Y. For example, if you tell me that I will die of cancer “soon,” what matters is that this cancer-killing timescale is shorter than the timescale on which cancer cures are found, or on which I can accomplish important tasks like raising my children. I might not mind the cancer process going ten times faster than I’d expected, if the other processes go a hundred times faster. Since an awful lot of processes will speed up over the next few centuries, it is relative rates of speedup that will matter the most.