In April 2010 I commented on David Chalmers’ singularity paper:
The natural and common human obsession with how much [robot] values differ overall from ours distracts us from worrying effectively. … [Instead:]
1. Reduce the salience of the them-us distinction relative to other distinctions. …
2. Have them and us use the same (or at least similar) institutions to keep peace among themselves and ourselves as we use to keep peace between them and us.
I just wrote a new 3000-word comment on this paper for a journal. Mostly I complain that Chalmers didn't say much beyond what we should have already known. But my conclusion is less meta:
The most robust and promising route to low cost and mutually beneficial mitigation of these [us vs. superintelligence] conflicts is strong legal enforcement of retirement and bequest contracts. Such contracts could let older generations directly save for their later years, and cheaply pay younger generations to preserve old loyalties. Simple, consistent, and broad-based enforcement of these and related contracts seems our best chance to entrench the enforcement of such contracts deep in legal practice. Our descendants should be reluctant to violate deeply entrenched practices of contract law for fear that violations would lead to further unraveling of contract practice, which threatens larger social orders built on contract enforcement.
As Chalmers notes in footnote 19, this approach is not guaranteed to work in all possible scenarios. Nevertheless, compare it to the ideal Chalmers favors:
AI systems such that we can prove they will always have certain benign values, and such that we can prove that any systems they will create will also have those values, and so on … represents a sort of ideal that we might aim for (p. 35).
Compared to the strong and strict controls and regimentation required to even attempt to prove that values disliked by older generations could never arise in any later generations, enforcing contracts where older generations pay younger generations to preserve specific loyalties seems to me a far easier, safer and more workable approach, with many successful historical analogies on which to build.
A relevant prior Hanson post is Let's Not Kill All The Lawyers.
Kernal, you make an excellent point. The entire idea of property ownership is a social/political convention. It is hard for me to see how property ownership can survive the Singularity, or how it can work once entities can exist in electronic substrates.
The current convention is that the entity that inhabits a physical body “owns” it, and that ownership right is not transferable. But that convention developed because bodies grow, and because the substrates used to form a body are either consumed as food or, like air, are “free” and universally available. Even then, there are people trying to usurp the idea of ownership of one's body.
If the ideas of the “right to life” groups get extended to electronic life forms, then when hardware is inhabited by an entity, that entity owns the hardware and cannot be expelled from it, even if that entity is damaging the hardware and degrading performance for its other users (the way the “right to life” of a fetus trumps the “right to control one's body”).
Right now, there are property rights to things like electrical hardware, intellectual property, and electricity. Would extending property rights to things like air be a benefit? You know that if someone could get enforceable property rights to air, those rights would be worth a lot, because air is a necessity and anyone with monopoly power over a necessity can charge whatever the market will bear.
One of the reasons wages drop to subsistence levels is that monopoly power by rent seekers over necessities extracts all wealth above what is needed for subsistence. Those who can't pay the rents stop existing. Are AIs going to tolerate property rights and monopoly control over the substrates they need to survive, while letting humans have free access to air?
Once humans are a small minority, the AIs might propose to remove the dangerous pollutant O2 from the atmosphere. It is O2 that causes corrosion of metals, combustion of polymers, and degradation of lubricants. Making the atmosphere O2-free would completely prevent fires, extend the lifespan of AIs, and greatly reduce their maintenance costs. If entities want O2, they can pay for it, keep it away from those who don't want to be exposed to it, and bear the costs of damage from any O2 that escapes.
If 100 trillion AI entities vote to remove all O2 from the atmosphere, to subsidize for the rest of their lives all existing entities that need O2, and to require all new entities that want O2 to pay market rates, what basis would 10 billion humans have for disagreeing?
Lowering the temperature by removing greenhouse gases and by shielding the Earth from sunlight might be a good idea too. Lower temperatures mean lower cooling costs and more efficient electricity generation via heat engines; lower humidity and lower corrosion rates would follow too. Increasing the growth of ice sheets would free up more valuable land by lowering sea levels. If entities want to waste energy by maintaining a 25 °C environment, they can pay market rates for it, plus a heat-pollution surcharge for the heat that leaks into the environment and raises cooling costs for everyone else.
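To see why colder surroundings favor heat engines, consider the Carnot limit, which caps efficiency at 1 − T_cold/T_hot. Here is a minimal Python sketch; the hot-side temperature and the sample ambient temperatures are assumptions chosen purely for illustration, not figures from the comment above.

```python
# Sketch: the ideal (Carnot) efficiency of a heat engine rises as the
# cold-reservoir (ambient) temperature falls. All temperatures below are
# illustrative assumptions.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum possible efficiency of a heat engine between two reservoirs."""
    return 1.0 - t_cold_k / t_hot_k

T_HOT_K = 800.0  # assumed hot-side temperature in kelvin (e.g. a boiler)

for t_cold_c in (25.0, 0.0, -50.0):  # progressively colder ambient air, in °C
    t_cold_k = t_cold_c + 273.15     # convert to kelvin
    eff = carnot_efficiency(T_HOT_K, t_cold_k)
    print(f"ambient {t_cold_c:6.1f} °C -> Carnot limit {eff:.1%}")
```

Under these assumed numbers, dropping the ambient temperature from 25 °C to −50 °C raises the theoretical ceiling from about 63% to about 72%, which is the sense in which a colder Earth makes electricity generation more efficient.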
A little bit of hyperinflation could make all the legacy wealth disappear. Then humans would be left to survive only on what they can earn with their ongoing labor.
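To make the hyperinflation point concrete, here is a minimal sketch of how compounding inflation erodes a fixed nominal stock of wealth; the 50%-per-month rate and the starting balance are assumptions chosen purely for illustration.

```python
# Sketch: real value of a fixed nominal balance under sustained inflation.
# real_value = nominal / (1 + inflation_rate) ** periods
# The rate and starting balance are illustrative assumptions.

MONTHLY_INFLATION = 0.50     # assumed: prices grow 50% per month
WEALTH_NOMINAL = 1_000_000   # assumed fixed nominal savings

for month in (0, 6, 12, 24):
    real_value = WEALTH_NOMINAL / (1 + MONTHLY_INFLATION) ** month
    print(f"month {month:2d}: real value ~ {real_value:12,.0f} in starting prices")
```

On these assumed numbers, the savings keep less than 1% of their real value after a year and are effectively worthless after two, which is how a legacy fortune can vanish while ongoing labor income, which gets repriced continuously, does not.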