Imagine that there is a certain class of “core” mental tasks, where a single “IQ” factor explains most variance in ability across such tasks, and no other factor explains much variance. If one main factor explains most variation, and no other factors do, then variation in this area is basically one dimensional plus local noise. So to estimate performance on any one focus task, you’d usually want to average over abilities on many core tasks to estimate that one dimension of IQ, and then use IQ to estimate ability on that focus task.
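To make this concrete, here is a minimal simulation sketch of that logic, in Python with numpy. All the numbers (twenty tasks, the factor loadings, the noise levels) are illustrative assumptions of mine, not estimates from any real data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tasks = 1000, 20

# One latent "IQ" factor drives every core task, plus task-specific noise.
iq = rng.normal(0, 1, n_people)
loadings = rng.uniform(0.7, 0.9, n_tasks)        # how strongly each task reflects IQ
noise = rng.normal(0, 0.5, (n_people, n_tasks))
scores = iq[:, None] * loadings + noise          # observed core-task scores

# Estimate the single factor by averaging over many tasks, then use that
# estimate to predict a held-out focus task.
focus = scores[:, -1]                            # the focus task
iq_hat = scores[:, :-1].mean(axis=1)             # average over all other tasks

print("corr(focus, one other task):      ",
      round(float(np.corrcoef(focus, scores[:, 0])[0, 1]), 2))
print("corr(focus, averaged IQ estimate):",
      round(float(np.corrcoef(focus, iq_hat)[0, 1]), 2))
```

The averaged estimate predicts the focus task better than any single other task does, which is the point: in a one-dimensional world, pooling many noisy indicators beats leaning on any one pairwise relation.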
Now imagine that you are trying to evaluate someone on a core task A, and you are told that ability on core task B is very diagnostic. That is, even if a person is bad at many other random tasks, if they are good at B you can be pretty sure that they will be good at A. And even if they are good at many other tasks, if they are bad at B, they will be bad at A. In this case, you would know that this claim about B being very diagnostic of A makes the pair A and B unusual among core task pairs. If there were a big clump of tasks strongly diagnostic about each other, that would show up as another factor explaining a noticeable fraction of the total variance, making this area higher dimensional. So this claim about A and B might be true, but your prior is against it.
Now consider the question of how “human-like” something is. Many indicators may be relevant to judging this, and one may draw many implications from such a judgment. In principle this concept of “human-like” could be high dimensional, so that there are many separate packages of indicators relevant for judging matching packages of implications. But anecdotally, humans seem to have a tendency to “anthropomorphize,” that is, to treat non-humans as if they were somewhat human in a simple low-dimensional way that doesn’t recognize many dimensions of difference. That is, things just seem more or less human. So the more ways in which something is human-like, the more you can reasonably guess that it will be human-like in other ways. This tendency appears in a wide range of ordinary environments, and its targets include plants, animals, weather, planets, luck, sculptures, machines, and software.
We feel more morally responsible for how we treat more human-like things. We are more inclined to anthropomorphize things that seem more similar to humans in their actions or appearance, when we more desire to make sense of our environment, and when we more desire social connection. When these conditions hold less strongly, we are instead inclined to “dehumanize,” that is, to treat humans or human-like things as less than fully human. We also dehumanize to feel less morally responsible for our treatment of out-groups.
One study published in Science in 2007 asked 2400 people to make 78 pair-wise comparisons between 13 characters (a baby, chimp, dead woman, dog, fetus, frog, girl, God, man, vegetative man, robot, woman, you) on 18 mental capacities and 6 evaluation judgements. An “experience” factor explained 88% of capacity variation, being correlated with capacities for hunger, fear, pain, pleasure, rage, desire, personality, consciousness, pride, embarrassment, and joy. This factor had a strong 0.85 correlation with a desire to avoid harm to the character. A second “agency” factor explained 8% of the variance, being correlated with capacities for self-control, morality, memory, emotion recognition, planning, communication, and thought. This factor had a strong 0.82 correlation with a desire to punish for wrongdoing. Both factors correlated with liking a character, wanting it to be happy, and seeing it as having a soul (Gray et al. 2007).
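For readers who want the mechanics, here is a rough sketch of the kind of dimensionality check behind such results, in Python with numpy and scikit-learn. The data below are synthetic stand-ins, not the Gray et al. ratings, and I use plain principal components rather than whatever factor-analysis variant the authors used, so the variance shares printed will differ from the paper’s 88% and 8%.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_characters, n_capacities = 13, 18

# Synthetic stand-in for mean ratings of each character on each capacity:
# a strong "experience" factor and a weaker "agency" factor, plus noise.
experience = rng.normal(0, 1.0, n_characters)
agency = rng.normal(0, 0.3, n_characters)
load_exp = rng.uniform(0.5, 1.0, n_capacities)
load_agn = rng.uniform(0.0, 0.5, n_capacities)
ratings = (experience[:, None] * load_exp
           + agency[:, None] * load_agn
           + rng.normal(0, 0.1, (n_characters, n_capacities)))

# Share of total variance captured by each of the first few components.
pca = PCA(n_components=3).fit(ratings)
print(pca.explained_variance_ratio_.round(2))   # first two dominate; third ~ noise
```

If anthropomorphizing were high dimensional, more than two components would carry substantial variance in such a check; the survey’s finding is that two suffice.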
Though it would be great to get more data, especially on more than 13 characters, this study does confirm the usual anecdotal description that anthropomorphizing is essentially a low-dimensional phenomenon. And if so, this fact has implications for how biological humans would treat ems.
My colleague Bryan Caplan insists that because ems would not be made out of familiar squishy carbon-based biochemicals, humans would feel confident that ems have no conscious feelings, and thus would eagerly enslave and harshly treat ems; Bryan says that our moral reluctance is the main reason most humans today are not harshly treated slaves. However, this in essence claims the existence of a big added factor explaining judgements related to “human-like”, a factor beyond those seen in the above survey.
After all, “consciousness” is already one of the items included in the above survey. But it was just one among many contributors to the main experience factor; it wasn’t overwhelming compared to the rest. And I’m pretty sure that if one tried to add being made of biochemicals as a predictor of this main factor, it would help, but remain only one weak predictor among many. You might think that these survey participants are wrong, of course, but we are trying to estimate what typical people will think in the future, not what is philosophically correct.
I’m also pretty sure that while the “robot” in the study was rated low on experience, that was because it was rated low on capacities such as pain, pleasure, rage, desire, and personality. Ems, being more articulate and expressive than most humans, could quickly convince most biological humans that they act very much like creatures with such capacities. You might claim that humans will all insist on rating anything not made of biochemicals as very low on all such capacities, but that is not what we see in the above survey, nor what we see in how people react to fictional robot characters, such as those from Westworld or Battlestar Galactica. When such characters act very much like creatures with these key capacities, they are seen as creatures that we should avoid hurting. I offer to bet $10,000 at even odds that this is what we will see in an extended survey like the one above that includes such characters.
Bryan also says that an ability to select most ems from scans of the few best-suited humans implies that ems are extremely docile. While today when we select workers we often value docility, we value many other features more, and tradeoffs between available features result in the most desired workers being far from the most docile. Bryan claims that such tradeoffs will disappear once you can select from among a billion or more humans. But today when we select the world’s best-paid actors, musicians, athletes, and writers, a few workers can in fact supply the entire world in related product categories, and we can in fact select from everyone in the world to fill those roles. Yet those roles are not filled with extremely docile people. I don’t see why this tradeoff shouldn’t continue in an age of em.
Added July 17: Bryan rejects my bet because:
I don’t put much stock in any one academic paper, especially on a weird topic. .. Robin’s interpretation of the paper .. is unconvincing to me. .. How so? Unfortunately, we have so little common ground here I’d have to go through the post line-by-line just to get started. .. a survey .. is probably a “far” answer that wouldn’t predict much about concrete behavior.
That is, nothing anyone says can be trusted on this topic, except Bryan’s intuition. He instead proposes a bet where I pay him up front, and he might pay me at the end of our lives.
Seems to me Bryan disagrees not just with me, but also with the authors of this Science paper, and with its editors and referees at Science, about what the survey means. Yet he seems to accept that a similar survey would show what I claim. And since he’s on record saying there isn’t that much difference between a survey and a vote, it seems he must accept this for predicting vote outcomes.
Added July 19: I offer to bet anyone $10K at even odds that in the next published survey with a similar size and care to the one above, but with at least twice as many characters, over 80% of the variance will be explained by two factors, neither of which is focused on the substance (e.g., carbon, silicon) out of which a character is made.
Most of this seems irrelevant to your debate with Bryan Caplan. The critical issue is the importance of consciousness for being human. If you concede (as you seem to, if only for the sake of argument) that ems will be viewed by future biological humans as lacking consciousness, it's hard not to take seriously the likelihood that we will think they aren't human. (Of course, we wouldn't automatically deny them consciousness because of their constitution: unless future folks are all dyed-in-the-wool dualists like Bryan.) But relevant to the study, I'd have to agree that the low rating on consciousness at least requires more attention. It's pretty weird, unless I'm misled by the intensity of qualia proponents about how much many seem to value their illusions of subjective experience.
Maybe I can suggest a different approach to whether ems will be dehumanized. Following Durkheimian sociology, one would expect the solidarity between ems and humans to be a function of their involvement or lack of involvement in common interaction rituals. If ems are constituted as outsiders in biological human interaction rituals, they will be dehumanized.
Wouldn't the gross disparity in mental speed be a severe obstacle to there being common interaction rituals?
I do think Bryan's argument here re: docility in particular is very odd. Specifically I'm referring to his claim that an advantage of docile workers, which would cause them to be selected for by the em economy, is that they don't ask for high pay.
If I'm understanding this right, it suggests quite a bizarre view of our world. Firstly, it says that wages today are largely determined by how much workers demand. Highly paid individuals aren't actually any more productive; they just refuse to work without large salaries, so that's what they're given.
Meanwhile the most productive individuals aren't actually the highest paid: they're randomly dispersed through the pay scale. Some of the best workers in the world are on incredibly low wages, because they're so docile that they'll happily accept that; it doesn't occur to them to ask for more money or find work elsewhere. They aren't ever headhunted, or if they are they turn it down because they're so loyal to their current company.
How do firms deal with the existence of such individuals? Do they just accept them when they appear as an unexpected boon? Do they seek them out before they've become attached to a firm? How do they try to make the docile individuals pick them as the firm they latch onto and never ask for a pay rise from?
The alternative model, which I think Robin is working from, is one in which almost everyone will try to get the highest wages they can, while also being willing to work for very low wages if that's what's needed to survive. Therefore wages reflect the value of the worker's labour; it might be the case that an incredibly productive individual is sitting near the bottom of the pay scale, but it's not the way to bet.
On this model, it doesn't matter what em world workers demand to be paid: they accept the market rate or they don't get to exist. So docility doesn't even come into it.
With the other traits mentioned that ems might or might not have, I think it's more debatable which way it would go. And docility is even included in that -- but this particular argument for docility is, I think, pretty clearly wrong.