Looking far into the distance, your eyes often see a sharp boundary between earth and sky. But if you were to travel to that farthest part of the earth your eye can now see, you might not find a sharp boundary there. Far mode simplifies, not only suppressing detail, but making you think detail is unimportant. If you saw two ships battling on the horizon, you’d be tempted to expect the bigger ship to win.
From a distance, future techs also seem overly simple and hence disruptive. If in 1672 you had seen Verbiest’s steam-powered vehicle, you might have imagined that the first nation with cheap capable cars could conquer the world. After all, it might build tanks and troop transports, and literally run circles around enemy troops. But while having somewhat better cars did sometimes help some nations, it was far from an overwhelming advantage. Cars slowly fell in cost and gained in ability and number; there was no particular day when one nation had vastly more capable cars.
Similar scenarios have played out for a great many techs, like rockets, radios, lasers, or computers. While one might imagine from afar that the difference between none of a tech and a “full” version would give a dramatic advantage, actual progress was more incremental, reducing team differences in tech levels. Overall differences in wealth and tech capability were usually better explanations for the advantages some nations had over others.
The first far images of nanotech were also simple, stark, and disruptive. They imagined one team could quickly and reliably assemble, from cheap plentiful feedstocks, large quantities of a large set of big atom arrangements, while other teams had near-current capabilities. In this scenario, the first team might well conquer the world, or accidentally destroy it via “grey goo.”
The nanotech transition seems less disruptive, however, if we see more detail, and imagine a series of incrementally more capable assemblers, able to build larger objects, faster, more reliably, from more types of feedstocks, using more kinds of local chemical bonds, at a wider range of assembler-assembled angles, and so on. After all, we already have ribosome assemblers, with a very limited range of feeds, bonds, angles, etc. Each new type of assembler would lower the cost of making a new class of objects.
Far images of artificial intelligence (AI) can also be overly stark. If you saw minds as having a single relevant “intelligence” parameter, with humans unable but machines able to change their parameter, you might well rue the day a machine whizzed past the human level. Especially if you thought God-levels might follow a month later, and if you thought this parameter’s typical value was what determined a team’s power.
However, if you saw the power and growth rates of teams (or societies) as depending on dozens of parameters, including dozens that contribute to the aggregate we often call “intelligence,” you might foresee a less disruptive transition. Relevant parameters might include many kinds of natural resources, physical capital, social capital, crossroads, standards, computing hardware, memory hardware, communication hardware, data, skills, knowledge, heuristics, reasoning strategies, etc. The more such parameters are relevant, the harder it is to expect a small team to suddenly improve greatly in enough parameters to overwhelm other teams.
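As a stylized illustration of why many relevant parameters make single-team dominance unlikely, consider a toy model of my own (the specific numbers and the independence assumption are mine, not the post’s): suppose there are T competing teams and each of n relevant parameters has an independent, uniformly random leader. Then the chance that some one team leads in every parameter is T·(1/T)ⁿ, which collapses quickly as n grows.

```python
# Toy model (illustrative assumption, not from the post): T teams, and each
# of n relevant parameters independently led by a uniformly random team.
# P(some single team leads in all n parameters) = T * (1/T)**n = T**(1 - n).
def p_total_lead(teams: int, params: int) -> float:
    return teams * (1.0 / teams) ** params

# With, say, 10 teams: a lead in one parameter is guaranteed to someone,
# but a simultaneous lead in a dozen parameters is astronomically unlikely.
for n in (1, 3, 12, 36):
    print(f"n = {n:2d}: P(total lead) = {p_total_lead(10, n):.2e}")
```

Real parameters are of course correlated (wealth buys many of them at once), so this drastically overstates the collapse; the qualitative point is just that each additional independent dimension of capability multiplies down the odds of one team leading everywhere.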
Growth in the power of any team or society has long depended heavily on the growth of all other teams. Back when humans competed with other species for similar ecological niches, each species improved mainly internally, as species had few ways to learn from each other. So the species whose capabilities grew fastest was bound to displace the others. But the first human societies to achieve farming did much less displacing of other societies – people could learn farming from neighbors. With the arrival of industry, not only did other societies copy industrial methods, but the division of labor forced the first industrial cities to share their gains with non-industrialized trading partners.
Long-term growth has consisted both in steady gains in many relevant parameters, and in switching some parameters from constants to parameters that usefully change. For example, while hunters improved the stories they told each other, the number of stories each hunter could remember was fixed. Since the introduction of writing, however, we’ve had a steady increase in the number of stories each of us can access.
While we can today in principle mechanically change many features of how our brains are organized, in practice we don’t know how to make useful changes, and so such organization parameters are effectively fixed. Computers can also in principle change their own software organization, but they also do not in practice know how to do this usefully. Computer organization does usefully change, but only because humans change it.
The more of our data, skills, knowledge, heuristics, reasoning strategies, etc. we embody in non-human hardware, rather than in human brains, the more advantage we will gain from our ability to usefully change such hardware organization. This will in effect move more of the relevant parameters that describe our power from the category of constants to the category of steadily improving parameters.
We might usefully model our total growth system as dozens of changing parameters, with many dozens of feedback connections between pairs of these parameters, some connections positive and some negative. The overall growth rate of such a complex system could in principle accelerate faster or slower than exponential, and when a new parameter entered such a system, switching from fixed to changeable, the feasible growth rate and acceleration of the entire system could in principle change.
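A minimal sketch of such a system (my own toy formalization, with made-up sizes and coupling strengths) is a linear model dx/dt = Ax, where x holds the parameter levels and A holds the mixed-sign feedback weights. The long-run growth rate is then the largest real part among A’s eigenvalues, and “switching on” a new parameter means extending A by one row and column:

```python
import numpy as np

rng = np.random.default_rng(0)

def growth_rate(A):
    # For dx/dt = A x, the long-run exponential growth rate is the largest
    # real part among A's eigenvalues (the dominant feedback mode).
    return max(np.linalg.eigvals(A).real)

# A system of n changeable parameters with weak, mixed-sign feedback links.
n = 30
A = rng.normal(0.0, 0.05, size=(n, n))  # both positive and negative couplings
np.fill_diagonal(A, 0.02)               # mild self-reinforcement per parameter

base = growth_rate(A)

def add_parameter(A, scale):
    # "Switch on" one newly changeable parameter: extend the system by one
    # row and column of couplings drawn at the given strength.
    n = A.shape[0]
    B = np.zeros((n + 1, n + 1))
    B[:n, :n] = A
    B[n, :n] = rng.normal(0.0, scale, size=n)
    B[:n, n] = rng.normal(0.0, scale, size=n)
    B[n, n] = 0.02
    return B

typical = growth_rate(add_parameter(A, 0.05))  # couplings like the others
pivotal = growth_rate(add_parameter(A, 0.5))   # unusually strong couplings

print(f"base growth rate:               {base:.3f}")
print(f"after a typical new parameter:  {typical:.3f}")
print(f"after a strongly coupled one:   {pivotal:.3f}")
```

In this sketch, a newly added parameter whose couplings look like everyone else’s barely moves the dominant eigenvalue, while only an unusually strongly coupled one can shift the whole system’s rate, which matches the empirical pattern described next.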
But empirically it seems that our total system has usually grown exponentially at a constant non-accelerating rate, even as many new parameters have switched from fixed to changeable. Only rarely (thrice in ten million years) has any novelty substantially changed our growth rate. So it is unlikely that adding any particular new parameter to the current changeable set will change growth rates. It is also not obvious that many relevant parameters would at the same time enter the set of usefully changeable parameters. For example, while a transition to whole brain emulations will simultaneously make it mechanically cheaper to experiment with many brain organization parameters, it could take much search to find ways to make useful changes in each parameter. Different parameters may require very different amounts of search.
Even so, based on historical patterns, I expect that within the next century or so one newly changeable parameter will be a rare pivotal one that knocks the whole system into a faster growth rate. But I also expect the system to grow quite a bit before another such knock arrives.
When that big knock arrives, a key disruption question is whether a single small team, initially a tiny fraction of world power, could not only find a way to make that key pivotal parameter usefully changeable, but also keep exclusive control over that ability for long enough. That is, could an initially weak team find and exclusively hold this new ability to grow internally so much as to be able to overwhelm the entire rest of the world, doing so quickly and stealthily enough to avoid retaliation or conquest by others in its early weak period?
Such a scenario is possible, but based on the considerations raised so far in this post, it seems rather unlikely. Someone might show us details of how upcoming newly changeable parameters actually interact with other important parameters, and overcome this initial presumption; but until they do, this should be our best estimate. Yes, the first humans did something similar, but the first farmers and industrialists did not. And we understand why: as more parameters have entered the changeable set, teams have found more ways to learn and copy from each other, and we have become more dependent on one another via a more elaborate international division of labor.
In sum, as we move more of our data, skills, knowledge, heuristics, reasoning strategies, etc. into non-human hardware, that will change the aggregate “intelligence” of that hardware, and raise our gains from improving the organization of that hardware. We may do this via ordinary software, via special “general artificial intelligence” software, via whole brain emulations, or something else. This change will in a sense add to the set of changeable parameters in our system of dozens of interdependent parameters. While each such added parameter is unlikely to change our overall system growth rate, one such change probably will. But because of greater info sharing and specialization, a single small team seems unlikely to hold and use this change internally enough to overwhelm the rest of the world.
Tech gets harder to master. Newer, more difficult tech cannot become distributed across teams as fast as earlier, simpler tech. Can any team be so superior that no other team can replicate the research? I'd argue it gets more probable by the day.
There will come a point when no usable information will seep outside the walls of firms or small teams. Some techs will evolve into black boxes. You really can't tell exactly what goes into a CPU chip these days.
There's a high probability that there will be, if there already aren't, "untouchable" tech firms, with in-house theoretical knowledge, research and production methods, and equipment so tricky that no matter how many resources competitors throw at the problem, they can't catch up.
And if these people are smart enough to stay quiet and out of sight, which they will be, the competition won't even know what to look for, until it's way way too late. The spy organizations of the world know this.
The US was the first to develop nuclear weapons. We promptly used two, then stopped.
IIRC, in 1945 the US only had two available to use. And it used both of them.
And I don't think it had many for some years afterwards. A handful of kiloton-range bombs do not convey worldwide omnipotence.
For purposes of comparison, the RAF and USAAF dropped ~ one million tons of bombs on Germany in the last year of the war.