An inside view forecast is generated by focusing on the case at hand, by considering the plan and the obstacles to its completion, by constructing scenarios of future progress, … The outside view … focuses on the statistics of a class of cases chosen to be similar in relevant respects to the present one. [Kahneman and Lovallo ’93]
Most everything written about a possible future singularity takes an inside view, imagining details of how it might happen. Yet people are seriously biased toward inside views, forgetting how quickly errors accumulate when reasoning about details. So how far can we get with an outside view of the next singularity?
Taking a long historical view, we see steady total growth rates punctuated by rare transitions when new faster growth modes appeared with little warning. We know of perhaps four such "singularities": animal brains (~600MYA), humans (~2MYA), farming (~10KYA), and industry (~0.2KYA). The statistics of previous transitions suggest we are perhaps overdue for another one, and would be substantially overdue in a century. The next transition would change the growth rate rather than capabilities directly, would take a few years at most, and the new doubling time would be a week to a month.
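To make that arithmetic concrete, here is a minimal sketch using only the transition dates above. The assumption that each growth era lasts a roughly constant fraction of the era before it is one crude reading of the outside-view statistics, not a model stated here:

```python
# Crude outside-view check of the "overdue" claim, using only the
# transition dates listed above (years before present, approximate).
transitions = {
    "animal brains": 600e6,
    "humans": 2e6,
    "farming": 10e3,
    "industry": 0.2e3,
}

dates = list(transitions.values())
era_lengths = [a - b for a, b in zip(dates, dates[1:])]
shrink_factors = [a / b for a, b in zip(era_lengths, era_lengths[1:])]

print("era lengths (years):", [f"{e:.3g}" for e in era_lengths])
print("shrink factors:", [f"{r:.0f}" for r in shrink_factors])  # ~300, ~200

# If the industrial era shrinks by a similar factor relative to the
# farming era, it lasts only a few decades before the next transition,
# and it has already run about 200 years.
for r in shrink_factors:
    print(f"implied industrial era length: {era_lengths[-1] / r:.0f} years")
```

This crude era-length extrapolation overstates the case; statistics on growth rates within each mode temper it to "perhaps overdue", but the qualitative conclusion is the same.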
Many are worried that such a transition could give extra advantages to some over others. For example, some worry that just one of our mind children, an AI in some basement, might within the space of a few weeks suddenly grow so powerful that it could take over the world. Inequality this huge would make it very important to make sure the first such creature is "friendly."
Yesterday I said yes, advantages do accrue to early adopters of new growth modes, but these gains seem to have gotten smaller with each new singularity. Why might this be? I see three plausible contributions:
The number of generations per growth doubling time has decreased, leading to less inequality per doubling time. So if the time duration of the first-mover advantage, before others find similar innovations, is some fixed ratio of a doubling time, that duration contains fewer generations. (A rough calculation follows these three contributions.)
When lineages cannot share information, the main way the future can reflect a new insight is via insight-holders displacing others. As we get better at sharing info in other ways, the first insight-holders displace others less.
Independent competitors can more easily displace one another than interdependent ones can. For example, since the unit of the industrial revolution seems to have been Western Europe, Britain, which started it, did not gain much relative to the rest of Western Europe, but Western Europe gained more substantially relative to outsiders. So as the world becomes interdependent on larger scales, smaller groups find it harder to displace others.
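To put rough numbers on the first contribution, here is a minimal sketch. It assumes a ~25-year human generation and illustrative doubling times for each era; none of these figures appear above, and they are only meant to show the direction of the trend:

```python
# Generations per economic doubling, under assumed illustrative figures.
GENERATION_YEARS = 25  # assumed human generation length

doubling_times = {  # years per doubling of total capacity (assumed)
    "foraging era": 230_000,
    "farming era": 900,
    "industry era": 15,
}

for era, doubling in doubling_times.items():
    gens = doubling / GENERATION_YEARS
    print(f"{era}: {gens:,.1f} generations per doubling")

# foraging era: 9,200.0 generations per doubling
# farming era: 36.0 generations per doubling
# industry era: 0.6 generations per doubling
```

If a first-mover advantage lasts some fixed fraction of a doubling time, it once spanned thousands of generations and now spans less than one, leaving far less room for a lineage to out-reproduce its rivals before imitators catch up.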
The first contribution is sensitive to changes in generation times, but the other two come from relatively robust trends. An outside view thus suggests only a moderate amount of inequality in the next singularity – nothing like a basement AI taking over the world.
Excess inside viewing usually continues even after folks are warned that outside viewing works better; after all, inside viewing better shows off inside knowledge and abilities. People usually justify this via reasons why the current case is exceptional. (Remember how all the old rules didn’t apply to the new dotcom economy?) So expect to hear excuses why the next singularity is also an exception where outside view estimates are misleading. Let’s keep an open mind, but a wary open mind.
Surely what I am about to write is obvious, and probably old. During World War II, when physicists began to realize the destructive potential of nuclear weapons, Albert Einstein was chosen by his peers to approach President Roosevelt. Einstein was perhaps not the best informed of the group, but he was the best known, and was thought to be able to get Roosevelt's ear, as he did. In response, Roosevelt was able to convene all the greatest Western minds in physics, mathematics, and engineering to work together for a rapid solution to the problem. Clearly, the importance of the development of recursively self-improving super-human intelligence has got to be, almost by definition, greater than all other current problems, since it is the one project that would allow for the speedy solution of all other problems. Is there no famous person or persons in the field, able to organize his peers, and with access to the government such that an effort similar to the Manhattan Project could be accomplished? The AI Institute has one research fellow and is looking for one more. They have a couple of fund-raisers, but most of the world is unaware of AI altogether. This won't get it done in a reasonable time-frame. Your competitors may well be backed by their governments.
While the eventual use of the Manhattan Project's discoveries is about as far from Friendly AI as imaginable, the power of recursively self-improving super-human AI is such that, no matter by whom or where it is developed, it will end up in the hands of a government, much like the most powerful Cray computers. You might as well have their money and all the manpower right from the start, and the ability to influence its proper use.
Can/will this be done?
As I mentioned, one point of disanalogy between the farming/industrial developments and AI is that farming didn't put any humans out of work, while the humans put out of work by industry had other places in the economy to go. AI, by contrast, effectively takes most of the economy out of human hands, maybe leaving a few vacancies in the service industries.
Another disanalogy between the farming/industrial developments and AI is that it is hard to keep farming and industrial developments secret - they are typically too easy to reverse engineer. Whereas with AI, if you keep the code on your server, it is extremely difficult for anyone to reverse engineer it. It can even be deployed fairly securely in robots - if tamper-proof hardware is employed.
Both of these differences suggest that AI may be more effective at creating inequalities than either farming or industry was.
However, ultimately, whether groups of humans benefit differentially from AI or not probably makes little odds.
The bigger picture is that it represents the blossoming of the new replicators into physical minds and bodies - so there is a whole new population of non-human entities to consider, with computers for minds and databases for genomes.