Let me consider the AI-foom issue by painting a (looong) picture of the AI scenario I understand best, whole brain emulations, which I’ll call “bots.” Here goes.
When investors anticipate that a bot may be feasible soon, they will estimate their chances of creating bots of different levels of quality and cost, as a function of the date, funding, and strategy of their project. A bot more expensive than any (speedup-adjusted) human wage is of little direct value, but exclusive rights to make a bot costing below most human wages would be worth many trillions of dollars.
It may well be socially cost-effective to start a bot-building project with a 1% chance of success when its cost falls to the trillion-dollar level. But not only would successful investors probably gain only a small fraction of this net social value, it is also unlikely that any investor group able to direct a trillion dollars could be convinced the project was feasible – there are just too many smart-looking idiots around making crazy claims.
But when the cost to try a 1% project fell below a billion dollars, dozens of groups would no doubt take a shot. Even if they expected the first feasible bots to be very expensive, they might hope to bring that cost down quickly. Even if copycats would likely profit more than they, such an enormous prize would still be very tempting.
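The investment logic above can be sketched with toy numbers (all of them my own illustrative assumptions, not figures from the post): a 1% long shot that is hopeless for private investors at a trillion-dollar cost becomes very attractive at a billion, even when investors capture only a sliver of the social value.

```python
def expected_net_value(social_value, p_success, cost, captured_fraction):
    """Expected private profit when investors capture only a fraction
    of the social value a successful project would create."""
    return p_success * social_value * captured_fraction - cost

V = 100e12   # assumed social value of cheap bots: $100 trillion
p = 0.01     # assumed 1% chance the project succeeds

# Socially (capturing all value), a $1T project roughly breaks even.
social = expected_net_value(V, p, cost=1e12, captured_fraction=1.0)

# Privately, investors capturing ~5% of the value lose badly at $1T...
private_big = expected_net_value(V, p, cost=1e12, captured_fraction=0.05)

# ...but the same 1% long shot at a $1B cost has a large positive
# expected value, so dozens of groups would take a shot.
private_small = expected_net_value(V, p, cost=1e9, captured_fraction=0.05)
```

The asymmetry, not the particular numbers, is the point: scale the assumed prize or captured fraction up or down and the billion-dollar threshold moves, but the qualitative flip from "uninvestable" to "irresistible" remains.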
The first priority for a bot project would be to create as much emulation fidelity as it could afford, in order to achieve a functioning emulation, i.e., one you could talk to and so on. Few investments today are allowed a decade of red ink, and so most bot projects would fail within a decade, their corpses warning others about what not to try. Eventually, however, a project would succeed in making an emulation that is clearly sane and cooperative.
How close would its closest competitors then be? If there are many very different plausible approaches to emulation, each project may take a different approach, forcing other projects to retool before copying a successful approach. But enormous investment would be attracted to this race once news got out about even a very expensive successful emulation. As I can’t imagine that many different emulation approaches, it is hard to see how the lead project could be much more than a year ahead.
Besides hiring assassins or governments to slow down their competition, and preparing to market bots soon, at this point the main task for the lead project would be to make their bot cheaper. They would try multitudes of ways to cut corners on the emulation implementation, checking to see that their bot stayed sane. I expect several orders of magnitude of efficiency gains to be found easily at first, but that such gains would quickly get hard to find. While a few key insights would allow large gains, most gains would come from many small improvements.
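A bit of illustrative arithmetic (the specific numbers are mine, not the post's) shows how "many small improvements" can dominate "a few key insights" in cumulative effect:

```python
# Hypothetical: two key insights worth ~10x each...
key_insight_gain = 10 ** 2        # 100x from the big wins

# ...versus hundreds of independent 1% corner-cutting tweaks,
# which compound multiplicatively.
small_win_gain = 1.01 ** 700      # ~1,060x: three orders of magnitude

# Together they span roughly five orders of magnitude of cost reduction.
total_gain = key_insight_gain * small_win_gain
```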
Some project would start selling bots when their bot cost fell substantially below the (speedup-adjusted) wages of a profession with humans available to scan. Even if this risked more leaks, the vast revenue would likely be irresistible. This revenue might help this group pull ahead, but this product will not be accepted in the marketplace overnight. It may take months or years to gain regulatory approval, to see how to sell it right, and then for people to accept bots into their worlds, and to reorganize those worlds to accommodate bots.
The first team to achieve high fidelity emulation may not be the first to sell bots; competition should be fierce and leaks many. Furthermore, the first to achieve marketable costs might not be the first to achieve much lower costs, thereby gaining much larger revenues. Variation in project success would depend on many factors. These include not only who followed the right key insights on high fidelity emulation and implementation corner cutting, but also on abilities to find and manage thousands of smaller innovation and production details, and on relations with key suppliers, marketers, distributors, and regulators.
In the absence of a strong world government or a powerful cartel, it is hard to see how the leader could be so far ahead of its nearest competitors as to “take over the world.” Sure the leader might make many trillions more in profits, so enriching shareholders and local residents as to make Bill Gates look like a tribal chief proud of having more feathers in his cap. A leading nation might even go so far as to dominate the world as much as Britain, the origin of the industrial revolution, once did. But the rich and powerful would at least be discouraged from capricious devastation the same way they have always been, by self-interest.
With a thriving bot economy, groups would continue to explore a variety of ways to reduce bot costs and raise bot value. Some would try larger reorganizations of bot minds. Others would try to create supporting infrastructure to allow groups of sped-up bots to work effectively together to achieve sped-up organizations and even cities. Faster bots would be allocated to priority projects, such as attempts to improve bot implementation and bot inputs, such as computer chips. Faster minds riding Moore’s law and the ability to quickly build as many bots as needed should soon speed up the entire world economy, which would soon be dominated by bots and their owners.
I expect this economy to settle into a new faster growth rate, as it did after previous transitions like humans, farming, and industry. Yes there would be a vast new range of innovations to discover regarding expanding and reorganizing minds, and a richer economy will be increasingly better able to explore this space, but as usual the easy wins will be grabbed first, leaving harder nuts to crack later. And from my AI experience, I expect those nuts to be very hard to crack, though such an enormously wealthy society may well be up to the task. Of course within a few years of more rapid growth we might hit even faster growth modes, or ultimate limits to growth.
Doug Engelbart was right that computer tools can improve computer tools, allowing a burst of productivity by a team focused on tool improvement, and he even correctly foresaw the broad features of future computer tools. Nevertheless Doug could not translate this into team success. Inequality in who gained from computers has been less about inequality in understanding key insights about computers, and more about lumpiness in cultures, competing standards, marketing, regulation, etc.
These factors also seem to me the most promising places to look if you want to reduce inequality due to the arrival of bots. While bots will be a much bigger deal than computers were, inducing much larger inequality, I expect the causes of inequalities to be pretty similar. Some teams will no doubt have leads over others, but info about progress should remain leaky enough to limit those leads. The vast leads that life has gained over non-life, and humans over non-humans, are mainly due I think to the enormous difficulty of leaking innovation info across those boundaries. Leaky farmers and industrialists had far smaller leads.
Added: Since comments focus on slavery, let me quote myself:
Would robots be slaves? Laws could conceivably ban robots or only allow robots “born” with enough wealth to afford a life of leisure. But without global and draconian enforcement of such laws, the vast wealth that cheap robots offer would quickly induce a sprawling, unruly black market. Realistically, since modest enforcement could maintain only modest restrictions, huge numbers of cheap (and thus poor) robots would probably exist; only their legal status would be in question. Depending on local politics, cheap robots could be “undocumented” illegals, legal slaves of their creators or owners, “free” minds renting their bodies and services and subject to “eviction” for nonpayment, or free minds saddled with debts and subject to “repossession” for nonpayment. The following conclusions do not much depend on which of these cases is more common.
"For those with a religious bent, absorption into the super mind would be the ultimate in enlightenment."
I think that is the same end result for believers in soul-fracture theory. We are all the same person living different lives. When they die they become united as god.
There is such a thing as comparative advantage. Under this paradigm, even in a two-entity economy where entity A produces X units of work and entity B produces much less than X, it is *still* more productive for the two to trade than for A to go it alone. This is simple economics.
The only case where this would fail is if entity A values the goods/services that entity B can produce at much less than the cost to feed/house/etc. entity B. That's the $64 trillion question.
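The comparative-advantage point can be made concrete with toy numbers (mine, purely illustrative): even when entity A out-produces entity B at everything, specialization plus trade raises total output.

```python
# Units per day if an entity spends its whole day on one good.
# A is absolutely better at both goods; B's *relative* strength is tools
# (B gives up only 0.25 food per tool, versus 2 food per tool for A).
output = {
    "A": {"food": 100, "tools": 50},
    "B": {"food": 10,  "tools": 40},
}

# No trade: each entity splits its day evenly between the two goods.
food_alone  = 0.5 * output["A"]["food"]  + 0.5 * output["B"]["food"]   # 55 food
tools_alone = 0.5 * output["A"]["tools"] + 0.5 * output["B"]["tools"]  # 45 tools

# With trade: B specializes fully in tools, and A spends just 10% of
# its day topping tools up to the same 45 as before.
tools_trade = output["B"]["tools"] + 0.1 * output["A"]["tools"]        # 45 tools
food_trade  = 0.9 * output["A"]["food"]                                # 90 food
```

Same tool output, far more food: a dominant A still gains from keeping B productive. The failure case flagged above is precisely when B's upkeep cost eats the whole surplus.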
Personally I suspect any sufficiently rational AI will want to get the hell out of dodge and leave us to our own devices because we are clearly insane.