Eliezer Thursday:
Suppose … the first state to develop working researchers-on-a-chip, only has a one-day lead time. … If there’s already full-scale nanotechnology around when this happens … in an hour … the ems may be able to upgrade themselves to a hundred thousand times human speed, … and in another hour, … get the factor up to a million times human speed, and start working on intelligence enhancement. … One could, of course, voluntarily publish the improved-upload protocols to the world, and give everyone else a chance to join in. But you’d have to trust that not a single one of your partners were holding back a trick that lets them run uploads at ten times your own maximum speed.
Carl Shulman Saturday and Monday:
I very much doubt that any U.S. or Chinese President who understood the issues would fail to nationalize a for-profit firm under those circumstances. … It’s also how a bunch of social democrats, or libertarians, or utilitarians, might run a project, knowing that a very likely alternative is the crack of a future dawn and burning the cosmic commons, with a lot of inequality in access to the future, and perhaps worse. Any state with a lead on bot development that can ensure the bot population is made up of nationalists or ideologues (who could monitor each other) could disarm the world’s dictatorships, solve collective action problems … [For] biological humans [to] retain their wealth as capital-holders in his scenario, ems must be obedient and controllable enough … But if such control is feasible, then a controlled em population being used to aggressively create a global singleton is also feasible.
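To make the speedup arithmetic in Eliezer's quote concrete, here is a minimal sketch; the speedup figures are his, and the rest is just unit conversion. At a hundred thousand times human speed, each objective hour contains over a decade of subjective research time, which is why a lead of even one day could matter so much.

```python
# Unit arithmetic only: converts the quoted speedup figures into
# subjective research time per objective hour.
HOURS_PER_YEAR = 24 * 365  # 8,760 subjective hours in a subjective year

def subjective_years(speedup, objective_hours=1.0):
    """Subjective years experienced at `speedup` times human speed."""
    return speedup * objective_hours / HOURS_PER_YEAR

print(subjective_years(100_000))    # ~11.4 subjective years in the first hour
print(subjective_years(1_000_000))  # ~114 subjective years in the next hour
```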
Every new technology brings social disruption. While new techs (broadly conceived) tend to increase the total pie, some folks gain more than others, and some even lose overall. The tech’s inventors may gain intellectual property, it may fit better with some forms of capital than others, and those who first foresee its implications may profit from compatible investments. So any new tech can be framed as a conflict between opponents in a race or war.
Every conflict can be framed as a total war. If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury. All resources must be devoted to growing more resources and to fighting them in every possible way.
A total war is a self-fulfilling prophecy; a total war exists exactly when any substantial group believes it exists. And total wars need not be “hot.” Sometimes your best war strategy is to grow internally, or wait for other forces to wear opponents down, and only at the end convert your resources into military power for a final blow.
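One way to see this self-fulfilling structure is as an assurance game. The sketch below uses illustrative payoffs of my own choosing (none appear in the post): if the other side cooperates, cooperating pays best; if the other side wages total war, total war is your best reply.

```python
# Illustrative 2x2 assurance game (all payoff numbers are hypothetical).
# "C" = cooperate within peace and property; "W" = wage total war.
PAYOFF = {  # my payoff given (my_move, their_move)
    ("C", "C"): 3,  # mutual peace: the biggest total pie
    ("C", "W"): 0,  # cooperating against a total-war opponent: worst outcome
    ("W", "C"): 2,  # arming against a cooperator: wasteful but safe
    ("W", "W"): 1,  # mutual total war: small pie, but better than losing
}

def best_response(their_move):
    """My payoff-maximizing move given what I believe they will do."""
    return max(("C", "W"), key=lambda my_move: PAYOFF[(my_move, their_move)])

print(best_response("C"))  # 'C' -- peace is stable if I expect peace
print(best_response("W"))  # 'W' -- total war is stable if I expect total war
```

Both mutual peace and mutual total war are equilibria here; which one obtains depends purely on beliefs, which is the sense in which a total war exists exactly when some substantial group believes it exists.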
These two views can be combined in total tech wars. The pursuit of some particular tech can be framed as a crucial battle in our war with them; we must not share any of this tech with them, nor tolerate much internal conflict about how to proceed. We must race to get the tech first and retain dominance.
Tech transitions produce variance in who wins more. If you are ahead in a conflict, added variance reduces your chance of winning, but if you are behind, variance increases your chances. So the prospect of a tech transition gives hope to underdogs, and fear to overdogs. The bigger the tech, the bigger the hopes and fears.
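A minimal Monte Carlo sketch of this variance effect (all numbers illustrative): give the leader a fixed head start and both sides an independent random shock, and the leader's win probability falls toward a coin flip as the shock grows.

```python
import random

def leader_win_prob(lead=1.0, sigma=1.0, trials=200_000):
    """Chance the side with a head start of `lead` still finishes ahead
    when both sides receive independent Gaussian shocks of scale `sigma`."""
    wins = sum(
        lead + random.gauss(0, sigma) > random.gauss(0, sigma)
        for _ in range(trials)
    )
    return wins / trials

for sigma in (0.1, 1.0, 10.0):
    print(f"shock scale {sigma:4.1f}: leader wins ~{leader_win_prob(sigma=sigma):.2f}")
# Small shocks: the leader nearly always wins.  Large shocks: nearly a coin flip.
```

Bigger techs mean bigger shocks, hence the underdog's hope and the overdog's fear.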
In 1994 I said that while our future vision usually fades into a vast fog of possibilities, brain emulation “excites me because it seems an exception to this general rule — more like a crack of dawn than a fog, like a sharp transition with sharp implications regardless of the night that went before.” In fact, brain emulation is the largest tech disruption I can foresee (as more likely than not to occur). So yes, one might frame brain emulation as a total tech war, bringing hope to some and fear to others.
And yes, the size of that disruption is uncertain. For example, an em transition could go relatively smoothly if scanning and cell modeling techs were good enough well before computers were cheap enough. In this case em workers would gradually displace human workers as computer costs fell. If, however, one group suddenly had the last key modeling breakthrough when em computer costs were far below human wages, that group could gain enormous wealth, to use as they saw fit.
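A toy cost comparison of the two scenarios (every number here is hypothetical): ems displace human workers once running one costs less than a human wage, so what matters is whether hardware costs cross that line gradually before the last modeling breakthrough, or are already far below it when the breakthrough arrives.

```python
HUMAN_WAGE = 50_000  # hypothetical annual wage for the displaced job

def em_is_cheaper(annual_em_cost):
    """Em workers undercut humans once an em-year costs less than a wage."""
    return annual_em_cost < HUMAN_WAGE

# Smooth scenario: modeling solved early; hardware cost falls past the wage.
for em_cost in (200_000, 100_000, 50_000, 25_000):
    print(f"em-year at ${em_cost:,}: displaces humans? {em_is_cheaper(em_cost)}")

# Sudden scenario: the breakthrough lands when hardware is already dirt cheap,
# so the winning group pockets nearly the whole wage as margin, per em, per year.
em_cost = 500
print(f"margin per em-year: ${HUMAN_WAGE - em_cost:,}")
```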
Yes, if such a winning group saw itself in a total war, it might refuse to cooperate with others, and devote itself to translating its breakthrough into an overwhelming military advantage. And yes, if you had enough reason to think powerful others saw this as a total tech war, you might be forced to treat it that way yourself.
Tech transitions that create whole new populations of beings can also be framed as total wars between the new beings and everyone else. If you framed a new-being tech this way, you might want to prevent or delay its arrival, or try to make the new beings “friendly” slaves with no inclination or ability to war.
But note: this em tech has no intrinsic connection to a total war other than that it is a big transition whereby some could win big! Unless you claim that all big techs produce total wars, you need to say why this one is different.
Yes, you can frame big techs as total tech wars, but surely it is far better that tech transitions not be framed as total wars. The vast majority of conflicts in our society take place within systems of peace and property, where local winners only rarely hurt others much by spending their gains. It would be far better if new em tech firms sought profits for their shareholders, and allowed themselves to become interdependent because they expected other firms to act similarly.
Yes, we must be open to evidence that other powerful groups will treat new techs as total wars. But we must avoid creating a total war by sloppy discussion of it as a possibility. We should not take others’ discussions of this possibility as strong evidence that they will treat a tech as total war, nor should we discuss a tech in ways that others could reasonably take as strong evidence we will treat it as total war. Please, “give peace a chance.”
Finally, note our many biases to over-treat techs as wars. There is a vast graveyard of wasteful government projects created on the rationale that a certain region must win a certain tech race/war. Not only do governments do a lousy job of guessing which races they could win, they also overestimate both first-mover advantages and the disadvantages of others dominating a tech. Furthermore, as I posted Wednesday:
We seem primed to confidently see history as an inevitable march toward a theory-predicted global conflict with an alien united them determined to oppose our core symbolic values, making infeasible overly-risky overconfident plans to oppose them.
I think in the discussion above there is a lot of conflation between causes for what is termed a "total tech war." You can find yourself in a total tech war merely by believing that the other agents see it as such. Or you can independently analyze the situation and determine that the best way to maximize your own payoff is to treat it as a total tech war, regardless of what the other agents think about it. If the space of advantages has upward cliffs, as Eliezer suggests, then it is not unreasonable to believe that an agent with a time-sensitive but utterly dominating advantage will rationally decide that the most payoff comes from acting in accordance with a total tech war plan of action. This is especially true if part of the cliff advantage is the ability to analyze a situation more deeply and rapidly than competitors. I don't see any reason why extra, special arguments are needed to justify this as a realistic scenario within AI FOOM.
If you want to say that a particular tech is more winner-take-all than usual, you need an argument based on more than just this effect. And if you want to argue it is far more so than any other tech humans have ever seen, you need a damn good additional argument.
IT is the bleeding edge of technology - and is more effective than most tech at creating inequalities - e.g., look at the list of top billionaires.
Machine intelligence is at the bleeding edge of IT; it is IT's "killer application". Whether its inventors will exploit its potential to provide wealth will be a matter of historical contingency - but the potential certainly looks as though it will be there. In particular, it looks likely to be mostly a server-side technology - and those are the easiest for owners to hang on to, since they can prevent others from reverse-engineering the technology.