On Tuesday I asked my law & econ undergrads what sort of future robots (AIs, computers, etc.) they would want, if they could have any sort they wanted. Most seemed to want weak, vulnerable robots that would stay lower in status: short, stupid, short-lived, easily killed, and without independent values. When I asked “what if I chose to become a robot?”, they said I should lose all human privileges and be treated like the other robots. I winced; it seems anti-robot feelings are even stronger than anti-immigrant feelings, which portends a stormy robot transition.
At a workshop following last weekend’s Singularity Summit, two dozen thoughtful experts mostly agreed that it is very important that future robots have the right values. It was heartening that most were willing to accept high-status robots with vast, impressive capabilities, but even so I thought they missed the big picture. Let me explain.
Imagine that you were forced to leave your current nation and had to choose another place to live. Would you seek a nation where the people were short, stupid, sickly, etc.? Would you select a nation based on what the World Values Survey says about typical survey responses there?
I doubt it. Besides wanting a place with people you already know and like, you’d want a place where you could “prosper”, i.e., where people valued the skills you had to offer, where many products and services you valued were available cheaply, and where predation was kept in check, so that you didn’t much have to fear for your life, limb, or livelihood. If you similarly had to choose a place to retire, you might pay less attention to whether they valued your skills, but you would still look for people you knew and liked, low prices on things you liked, and predation kept in check.
Similar criteria should apply when choosing the people you want to let into your nation. You should want smart, capable, law-abiding folks with whom you and other natives can form mutually advantageous relationships. Preferring short, dumb, and sickly immigrants so you can be above them in status would be misguided; that would just lower your nation’s overall status. If you lived in a democracy and lots of immigration were at issue, you might worry that immigrants could vote to overturn the laws under which you prosper. And if they might be very unhappy, you might worry that they could revolt.
But you shouldn’t otherwise care that much about their values. Oh, there would be some weak effects. You might have meddling preferences and care directly about some of their values. You’d dislike folks who consume the same congestible goods you like, and you’d like folks who share your taste for goods subject to scale economies. For example, you might dislike folks who crowd your hiking trails, and like folks who share your tastes in food, thereby inducing more of it to be available locally. But such effects would usually be dominated by peace and productivity issues; you’d mainly want immigrants able to be productive partners, and law-abiding enough to keep the peace.
Similar reasoning applies to the sorts of animals or children you’d want. We try to coordinate to make sure kids are raised to be law-abiding, but wild animals aren’t law-abiding, don’t keep the peace, and are hard to form productive relations with. So while we give them lip service, we actually don’t like wild animals much.
Similar reasoning should apply to what future robots you want. In the early to intermediate era, when robots are not vastly more capable than humans, you’d want peaceful, law-abiding robots that are as capable as possible, so as to make productive partners. You might prefer that they dislike your congestible goods, like your scale-economy goods, and vote like most voters, if they can vote. But most important would be that you and they share a mutually acceptable law as a good-enough way to settle disputes, so that they do not resort to predation or revolution. If their main way to get what they want is to trade for it via mutually agreeable exchanges, then you shouldn’t much care what exactly they want.
The later era, when robots are vastly more capable than people, should be much like the case of choosing a nation in which to retire. In that case we don’t expect to have much in the way of skills to offer, so we mostly care that they are law-abiding enough to respect our property rights. If they use the same law to keep the peace among themselves as they use to keep the peace with us, we could have a long and prosperous future in whatever weird world they conjure. In such a vast, rich universe our “retirement income” should buy a comfortable, if not central, place for humans to watch it all in wonder.
In the long run, what matters most is that we all share a mutually acceptable law to keep the peace among us and allow mutually advantageous relations, not that we agree on the “right” values. We can tolerate a wide range of values from capable law-abiding robots. It is a good law we should most strive to create and preserve. Law really matters.
A fairly conventional position is that we will be able to build robots to do whatever we like, more or less. After all, we built them; we ought to be in control of their actions, unless we make a *severe* mess of our engineering.
So: if we want to have them obey the law, then obeying the law is what they will do.
If we can build them to value obedience to the law, then I don't see why we would avoid giving them other values as well. Non-violence and obedience are among Asimov's classic proposals, for example.
There's so much packed into Robin's posting, some of it cogent and informative, some of it obvious. I was struck by: "Similar criteria should apply when choosing the people you want to let into your nation. You should want smart capable law-abiding folks, with whom you and other natives can form mutually advantageous relationships. Preferring short, dumb, and sickly immigrants so you can be above them in status would be misguided..."
There is so much here to unpack. Like the fact that it was the Confederate side of American nature, propelling eight phases of an ongoing, 250-year civil war, that always pushed the notion of inherited inferior status. Mark Twain blamed the unjustifiable oath-breaking of secession on three factors: the economic interests of elites; a southern propensity for romanticism, typified by the wildly popular novels of Sir Walter Scott; and a desperate need by lower-class whites for someone lower to kick.
In contrast, while immigrants faced racism in the north and west, their children generally did just fine when given a chance, adding tall, healthy, vigorous Americans to the mix.
But Robin's metaphor is about AI and robots, and his point is clear. Hoping to keep a new, servile caste of automatons down is likely a short-sighted and ultimately futile goal.
I was honestly puzzled by the "wild animals" riff. Each generation of Americans has supported ever greater protections for wild creatures, ever since Teddy Roosevelt caused a national sensation by NOT shooting a bear cub. And hence, that Christmas, out came "Teddy Bears." Yes, we compete less with wild animals than before, and they are now rare compared to other living commodities. But that doesn't support Robin's strange point.
" In the early to intermediate era when robots are not vastly more capable than humans, you’d want peaceful law-abiding robots as capable as possible, so as to make productive partners."
Well... I assume that the robotic era will be productive of most human wants and needs. What we fear is robots creating a singleton of centralized power, as other entities with swords did across 6,000 years, oppressing 99% of our ancestors because small superiorities gave them advantages to exploit. Look at modern sci-fi worry tales about robotics. None are about missing your favorite ice cream flavor because all the bots bought cones before you. Almost all are about AIs and bots seizing power in a "singleton" of oppression.
"If their main way to get what they want is to trade for it via mutually agreeable exchanges, then you shouldn’t much care what exactly they want."
And if they want to turn everything into paperclips? Anyway, alas, that Smithian flat-fair market Robin refers to only ever happened when top elites were finally prevented from cheating, from using their power to put their thumbs on the scales of markets and justice.
That won't happen under a singleton. It MIGHT happen if AIs and robots are plentiful and reciprocally competitive, which is the method we used to tame 6,000 years of feudal cheating by human elites. And restoring that power to cheat is exactly the top agenda of today's international cabal of oligarchs.
Robin finishes well: "In the long run, what matters most is that we all share a mutually acceptable law to keep the peace among us, and allow mutually advantageous relations, not that we agree on the 'right' values."
The problem is that such laws must include effective ways to prevent centralized power.