It has come to my attention that some think that by now I should have commented on Carl Shulman’s em paper Whole Brain Emulation and the Evolution of Superorganisms. I’ll comment now in this (long) post.
The undated paper is posted at the Singularity Institute, my ex-co-blogger Eliezer Yudkowsky’s organization dedicated to the proposition that the world will soon be ruled by a single powerful mind (with well integrated beliefs, values, and actions), so we need to quickly figure out how to design values for a mind we’d like. The main argument is that someone will soon design an architecture to let an artificial mind quickly grow from seriously stupid to super wicked smart. (Yudkowsky and I debated that recently.) Shulman’s paper offers an auxiliary argument: that whole brain emulations would also quickly lead to one or a few such powerful integrated “superorganisms.”
It seems to me that Shulman actually offers two somewhat different arguments, 1) an abstract argument that future evolution generically leads to superorganisms, because their costs are generally less than their benefits, and 2) a more concrete argument, that emulations in particular have especially low costs and high benefits.
The abstract argument seems to be that coordination can offer huge gains, sharing values eases coordination, and the costs of internally implementing shared values are small. On generic coordination gains, Shulman points to war:
Consider a contest … such that a preemptive strike would completely destroy the other power, although retaliatory action would destroy 90% of the inhabitants of the aggressor. For the self-concerned individuals, this would be a disaster … But for the superorganisms … [this] would be no worse than the normal deletion and replacement of everyday affairs.
On the generic costs of value sharing, I think Shulman’s intuition is that a mind’s values can be expressed in a relatively small static file. While it might be expensive to figure out what actions achieve any particular set of values, the cost to simply store a values file can be tiny for a large mind. And Shulman can’t see why using the same small file in different parts of a large system would cost more to implement than using different small files.
Shulman’s concrete argument outlines ways for ems to share values:
Superorganisms [are] groups of related emulations ready to individually sacrifice themselves in pursuit of the shared aims of the superorganism. … To produce emulations with trusted motivations, … copies could be subjected to exhaustive psychological testing, staged situations, and direct observation of their emulation software to form clear pictures of their loyalties. … Members of a superorganism could consent to deletion after a limited time to preempt any such value divergence. … After a short period of work, each copy would be replaced by a fresh copy of the same saved state, preventing ideological drift.
Shulman also suggests concrete em coordination gains:
Many of the productivity advantages stem from the ability to copy and delete emulations freely, without objections from the individual emulations being deleted. … Emulations could have their state saved to storage regularly, so that the state of peak productivity could be identified. … whenever a short task arises, a copy of the peak state emulation could be made to perform the task and immediately be deleted. … Subject thousands or millions of copies of an emulation to varying educational techniques, … [and] use emulations that have performed best to build the template for the next “generation” of emulations, deleting the rest. … Like software companies, those improving emulation capabilities would need methods to prevent unlimited unlicensed copying of their creations. Patents and copyrights could be helpful, … but the ethical and practical difficulties would be great. … A superorganism, with shared stable values, could refrain from excess reproduction … without drawing on the legal system for enforcement.
On the general abstract argument, we see a common pattern in both the evolution of species and human organizations — while winning systems often enforce substantial value sharing and loyalty on small scales, they achieve much less on larger scales. Values tend to be more integrated in a single organism’s brain, relative to larger families or species, and in a team or firm, relative to a nation or world. Value coordination seems hard, especially on larger scales.
This is not especially puzzling theoretically. While there can be huge gains to coordination, especially in war, it is far less obvious just how much one needs value sharing to gain action coordination. There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination. It is also far from obvious that values in generic large minds can easily be separated from other large mind parts. When the parts of large systems evolve independently, to adapt to differing local circumstances, their values may also evolve independently. Detecting and eliminating value divergences might in general be quite expensive.
In general, it is not at all obvious that the benefits of more value sharing are worth these costs. And even if more value sharing is worth the costs, that would only imply that value-sharing entities should be a bit larger than they are now, not that they should shift to a world-encompassing extreme.
On Shulman’s more concrete argument, his suggested single-version approach to em value sharing, wherein a single central em only allows (perhaps vast numbers of) brief copies, can suffer from greatly reduced innovation. When em copies are assigned to and adapt to different tasks, there may be no easy way to merge their minds into a single common mind containing all their adaptations. The single em copy that is best on average across tasks may be much worse at any given task than the best em for that task.
Shulman’s other concrete suggestion for sharing em values is “psychological testing, staged situations, and direct observation of their emulation software to form clear pictures of their loyalties.” But genetic and cultural evolution has long tried to make human minds fit well within strongly loyal teams, a task to which we seem well adapted. This suggests that moving our minds closer to a “borg” team ideal would cost us somewhere else, such as in our mental agility.
On the concrete coordination gains that Shulman sees from superorganism ems, most of these gains seem cheaply achievable via simple long-standard human coordination mechanisms: property rights, contracts, and trade. Individual farmers have long faced starvation if they could not extract enough food from their property, and farmers were often out-competed by others who used resources more efficiently.
With ems there is the added advantage that em copies can agree to the “terms” of their life deals before they are created. An em would agree that it starts life with certain resources, and that life will end when it can no longer pay to live. Yes there would be some selection for humans and ems who peacefully accept such deals, but probably much less than needed to get loyal devotion to and shared values with a superorganism.
Yes, with high value sharing ems might be less tempted to steal from other copies of themselves to survive. But this hardly implies that such ems no longer need property rights enforced. They’d need property rights to prevent theft by copies of other ems, including being enslaved by them. Once a property rights system exists, the additional cost of applying it within a set of em copies seems small relative to the likely costs of strong value sharing.
Shulman seems to argue both that superorganisms are a natural endpoint of evolution, and that ems are especially supportive of superorganisms. But at most he has shown that em organizations may operate at a somewhat larger scale, not that they would reach civilization-encompassing scales. In general, creatures who share values can indeed coordinate better, but perhaps not by much, and it can be costly to achieve and maintain shared values. I see no coordinate-by-values free lunch.
I am again glad to see Carl Shulman engage the important issue of social change given whole brain emulation, but I fear the Singularity Institute’s obsession with making a god to rule us all (well) has distracted him from thinking about real em life, as opposed to how ems might realize make-a-god hopes.
Added 8a: To be clear, in a software-like em labor market, there will be some natural efficient scale of training, where all the workers doing some range of tasks are copies of the same intensely trained em. All those ems will naturally find it easier to coordinate on values, and can coordinate a bit better because of that fact. There’s just no particular reason to expect that to lead to much more coordination on larger scales.
Michael, I completely agree. That intellectual property and patents would have some utility within a superorganism is hard to imagine when all parts would necessarily value the welfare of the superorganism over their own.
That is supposed to be the idea in tribes, where the members of the tribe value the tribe over themselves. This heuristic can break down. It apparently has broken down in the US, with certain political factions valuing party over country. It has certainly broken down with AGW denialism, where certain people value their own short-term profit over the adverse effects of long-term global warming.
That anyone who values their own personal welfare over any and every larger group can imagine they could trick a superorganism into thinking they were a loyal subject by sufficient signaling is quite strange. It is obviously false signaling.
True signalers of large group loyalty would value the welfare of all humans the most. Any subset of humans that values the subset more than the whole is obviously composed of members who cannot be loyal to a larger group, because they are not loyal to the largest group. If individual superorganisms are not loyal to the group of superorganisms, then the group is not stable and cannot last long term.
I think what this means is that anyone selfish enough to prioritize their own speculative cryonic preservation over the welfare of large numbers of humans can't be someone who could be loyal to a superorganism. Why would any superorganism revive an organism that will almost certainly be disloyal?
I think it's remarkable that in an article on "superorganisms," essentially something a lot more controlling of individuals than your average totalitarian dictatorship, words rooted in "slave" are used only twice, while a previous article on major league baseball seemed to argue that highly paid players are slaves because of some of the contractual limitations they have agreed to in order to be paid to play.
On the superorganism idea, I don't think it is shared values at all that are needed to make this thing work, but rather a particular set of values. If all ems share the value that "the individual is paramount," then you aren't going to have much of a superorganism advantage; the same goes if the ems are all psychopaths. Whereas if most of the ems have as a value "I value the superorganism's needs, as expressed by this particular command hierarchy, more highly than I value my own individual life or desires," then it doesn't matter whether those ems share other values or are diverse in their other values.