David Chalmers has a new paper on future artificial minds:
If humans survive, the rapid replacement of existing human traditions and practices would be regarded as subjectively bad by some but not by others. … The very fact of an ongoing intelligence explosion all around one could be subjectively bad, perhaps due to constant competition and instability, or because certain intellectual endeavours would come to seem pointless. On the other hand, if superintelligent systems share our values, they will presumably have the capacity to ensure that the resulting situation accords with those values. …
If at any point there is a powerful AI+ or AI++ with the wrong value system, we can expect disaster (relative to our values) to ensue. The wrong value system need not be anything as obviously bad as, say, valuing the destruction of humans. If the AI+ value system is merely neutral with respect to some of our values, then in the long run we cannot expect the world to conform to those values. For example, if the system values scientific progress but is neutral on human existence, we cannot expect humans to survive in the long run. And even if the AI+ system values human existence, but only insofar as it values all conscious or intelligent life, then the chances of human survival are at best unclear.
Chalmers is an excellent philosopher, but to me the above reflects an unhealthy obsession with foreigners' values, one common among the economically illiterate. So let me try to educate him (and you).
Why fear future robots with differing values? Here is one possible cause:
Fear Of Strangers: Our distant ancestors evolved a deep fear of strangers. They knew that their complex ways to keep peace only worked for folks they knew, who looked, talked, and acted like them. Unexpected strangers were probably best killed on sight.
This is a good explanation for fearing robots, but a much weaker reason to do so. Over recent millennia humans have developed many ways, e.g., trade, contract, law, and treaties, to keep peace with folks who look, talk, and act differently. We only need others to be similar enough to us to use these methods; they need to know what equilibrium behavior to expect, and to speak in languages we can translate. They don’t otherwise need to share our values.
But even if peace is preserved, other reasons for fear remain:
Outbid By Rich: In some situations you can reasonably expect declining relative future wealth for yourself and those you care about. For example, a century ago folks who foresaw cars replacing horses, and who had a very strong heritable preference for working with horses, could reasonably expect falling demand, and lower relative wages, for their preferred job skills. (The horses themselves did far worse; most could not afford subsistence wages.) Now for many things you want, it is absolute, not relative, wages that matter. But some things, like prime sea-view property, can be commonly valued and in limited supply. So you might fear others’ richer descendants outbidding yours for sea views.
Note that this fear requires an expectation that, relative to others, your nature or preferences conflict more with your productivity. Note also that in some ways this problem gets worse as others get more similar to you. For example, if others prefer mountain views while you prefer sea views, their wealth would do less to reduce your access to sea views. If this is the problem, you should prefer others to have values that differ from yours.
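To make the absolute-versus-relative point concrete, here is a minimal sketch (my own illustration, with made-up numbers not from the post): a reproducible good is priced near its production cost, so your absolute wealth decides whether you get it, while a fixed-supply good goes to the highest bidders, so only your wealth relative to other bidders matters.

```python
# Hypothetical illustration, not from the post: contrast a reproducible
# good (absolute wealth matters) with a fixed-supply good (relative
# wealth matters). All numbers are made up.

my_wealth = 100
richer_bidders = [10_000] * 50        # 50 much richer descendants of others

# Reproducible good (e.g. food): priced near production cost, so my
# absolute wealth alone decides whether I can buy it.
food_price = 5
print("Can buy food:", my_wealth >= food_price)           # True

# Fixed-supply good (e.g. 10 sea-view lots): goes to the 10 highest
# bidders, so only wealth relative to other bidders matters.
lots = 10
winning_bids = sorted(richer_bidders + [my_wealth], reverse=True)[:lots]
print("Win a sea-view lot:", my_wealth in winning_bids)   # False
```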
What if you worry that rich others threaten your descendants’ existence, and not just their sea-view access? Well, since interest rates have long been high, and since typical wages are now far above subsistence, modest savings today, plus secure property rights tomorrow, could ensure many surviving descendants tomorrow.
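As a rough illustration of that savings point (the rate, amount, and subsistence cost below are my assumptions, not figures from the post), modest savings compounding at a sustained positive real return can cover many descendant-years of subsistence, provided property rights hold:

```python
# Hypothetical compounding illustration; the rate, amount, and
# subsistence cost are assumptions, not figures from the post.

savings = 10_000                 # modest savings today
real_return = 0.04               # assumed long-run real interest rate
years = 100

future_value = savings * (1 + real_return) ** years
subsistence_per_year = 1_000     # assumed cost of one descendant-year

print(f"Savings after {years} years: {future_value:,.0f}")
print("Descendant-years of subsistence covered:",
      int(future_value / subsistence_per_year))
```

But you might still fear: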
War & Theft: Over the last few centuries we have vastly improved our ability to coordinate on larger scales, greatly reducing the rate of war, theft, and other property violations. Nevertheless, war and theft still happen, and we cannot guarantee recent trends will continue. So many fear foreign nations, e.g., China or India, getting rich and militarily powerful, then seeking world conquest. One may also fear theft of one’s innovations if intellectual property rights remain weak.
Note that these new ways to coordinate on large scales to prevent war and theft rely little on our empathy for, or similarity with, distant others. They depend far more on our ways to make commitments and to monitor key acts. And the mere possibility of future theft would hardly be a good reason for genocide today; we now seem to benefit greatly on net when distant foreigners get rich. This doesn’t mean we should ignore the risks of future war and theft, but it does suggest that our efforts should focus more on improving our ways to coordinate on large scales, and less on preparing to exterminate them before they exterminate us.
Chalmers does not say exactly why we should expect robots with the “wrong” values to produce “disaster,” yet he is so worried that he is sympathetic to preventing robot autonomy, if only that were possible:
We might try to constrain their cognitive capacities in certain respects, so that they are good at certain tasks with which we need help, but so that they lack certain key features such as autonomy. … On the face of it, such an AI might pose fewer risks than an autonomous AI, at least if it is in the hands of a responsible controller. Now, it is far from clear that AI or AI+ systems of this sort will be feasible. … Such an approach is likely to be unstable in the long run.
Chalmers offers no reasons to fear robots beyond the three standard reasons to fear foreigners I’ve listed above: fear of strangers, being outbid by the rich, and war & theft. Nor does he explain why it is robots’ differing values that are the problem, even though differing values matter mainly for the fear-of-strangers motive, which has little relevance in the modern world. Until we have particular credible reasons to fear robots more than other foreigners, we should treat robots like generic foreigners: with caution, but also with an expectation of mutual gains from trade.
Finally, let me note that Chalmers’ discussion could benefit from economists’ habit of noting that our ability to make most anything depends on the price of inputs, and therefore on the entire world economy, and not just on internal features of particular systems. Chalmers:
All we need for the purpose of the argument is (i) a self-amplifying cognitive capacity G: a capacity such that increases in that capacity go along with proportionate (or greater) increases in the ability to create systems with that capacity, (ii) the thesis that we can create systems whose capacity G is greater than our own, and (iii) a correlated cognitive capacity H that we care about, such that certain small increases in H can always be produced by large enough increases in G.
Unless the “system” here is our total economy, this description falsely suggests that a smaller system’s capacity to create other systems depends only on its internal features.
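A toy model may make the contrast clearer (my own construction, with made-up functional forms, not Chalmers’ or mine from the post): in the first version the next system’s capacity depends only on the current system’s internal capacity G, as premise (i) read in isolation suggests; in the second, it also depends on input prices set by the wider economy, which can damp the runaway growth.

```python
# Toy contrast; functional forms and parameters are made up for illustration.

def next_capacity_internal(g, gain=1.1):
    # Premise (i) read literally: increases in G yield proportionate
    # (or greater) increases in the ability to build the next system.
    return gain * g

def next_capacity_with_economy(g, input_price, gain=1.1):
    # The economist's point: what a system can build also depends on
    # input prices set by the whole economy, not just internal features.
    return gain * g / input_price

g_internal = g_economy = 1.0
for step in range(10):
    g_internal = next_capacity_internal(g_internal)
    # Suppose key inputs get scarcer (pricier) as demand for them grows.
    input_price = 1.0 + 0.05 * step
    g_economy = next_capacity_with_economy(g_economy, input_price)

print(f"Internal-only model: G = {g_internal:.2f}")   # grows steadily
print(f"Economy-aware model: G = {g_economy:.2f}")    # damped by input prices
```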
Added 6Apr: From the comments it seems my main point isn’t getting through, so let me rephrase: I’m not saying we have nothing to fear from robots, nor that their values make no difference. I’m saying the natural and common human obsession with how much their values differ overall from ours distracts us from worrying effectively. Here are better priorities for living in peace with strange potentially-powerful creatures, be they robots, aliens, time-travelers, or just diverse human races:
Reduce the salience of the them-us distinction relative to other distinctions. Try to have them and us live intermingled, and not segregated, so that many natural alliances of shared interests include both us and them.
Have them and us use the same (or at least similar) institutions to keep peace within each group as we use to keep peace between the groups. Minimize any ways those institutions formally treat us and them differently.