In “Enhancing Our Truth Orientation,” Robin argues that Aumann’s theorem applies to moral claims. I’m very skeptical of this position, primarily because there does not seem to be a plausible way to translate moral positions into the kinds of probability judgments suitable for Bayesian reasoning.
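For reference, the theorem being invoked says roughly this (stated informally; the event A and the two information partitions are the standard textbook setup, not anything specific to Robin's post): if two agents share a common prior P, and their posteriors for an event A given their respective private information,

q_1 = P(A | I_1) and q_2 = P(A | I_2),

are common knowledge between them, then q_1 = q_2; they cannot agree to disagree. The whole question is whether a moral claim can play the role of the event A in that statement.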
What reason do we have to believe that moral positions can be understood as subjective probabilities? Is there anyone who genuinely believes that, say, deontology is true with a probability of .7, virtue ethics with a probability of .299, and utilitarianism with a probability of .001? Or that it’s 35% likely to be true that you can’t lie to the murderer at the door? (Kant’s infamous case.) Does it even make sense to say that? Is it at all coherent? What might it mean to utter the statement “there is a .35 probability of it being wrong to lie to the murderer at the door?”
Here’s what it can’t mean: “If you lie to the murderer at the door 100 otherwise identical times, you can expect to have violated the moral law 35 of those times.” Nobody in their right mind would make that sort of claim. If you utter that statement, you’ve stopped talking about morals and started talking about facts: If it’s wrong to lie to the murderer one time, it’s wrong to lie to the murderer all other times, unless the facts — rather than the values — changed. (I’m ruling out some sort of extreme moral skepticism here, since if you’re that much of a moral skeptic, you shouldn’t be making statements about probabilities of moral conduct at all.)
Here’s what it also can’t mean: “I’m pretty sure it’s ok to lie to the murderer at the door.” That’s not a probability statement. Not even to a Bayesian. (Eliezer’s “technical explanation of technical explanation” nicely explains why — in the context of Star Trek, no less.) Even if that were a statement about probability, it’s implausible to think that the confidence one has in one’s moral claims could be expressed in numbers. How would that work? “I think Rawls’s theory of justice makes sense, except I’m not really sure about his claim that it should be limited to the basic structure of society. That’s about 28.47% of his argument, so I guess I’m a Rawlsian with .7153 probability.” What does that mean? Why isn’t the basic structure limitation 98.4% of his argument, or .00018% of his argument? How do you get an objective measure of that amount? Do you count sentences?
Moreover, even if you accept the notion that Bayesian reasoning can be extended to non-numeric estimates of uncertainty, it’s still really problematic to apply it to normative claims. For one thing, there’s still no objective rule describing how we might reconcile weights. If I think Bernard Williams’s character/integrity argument casts a lot of doubt on utilitarianism, while you think it only casts a little doubt on utilitarianism, on what basis are we supposed to discuss the difference between “a lot” and “a little”? I think what it ultimately comes down to is that the “a lot” versus “a little” distinction is a judgment, in the Kantian sense, and not one that can be described by rules. We can’t ever get to common priors on that, because the “prior” is the exercise of a sui generis intellectual faculty.
Furthermore, moral claims are supposed to lead to action, and in most cases it makes little sense for that action to be discounted by probability — even if there happened to be some kind of probability distribution over moral arguments. Suppose a pro-choicer and a pro-lifer got together and realized their differences came down to the question of whether the woman’s right to bodily integrity trumps the fetus’s right to the potentiality of life. Now suppose they’re both Bayesians with common priors and so forth, and so they mutually adjust their probability of bodily integrity trumping potentiality of life to .5. What does this mean in terms of action? Suppose they’re legislators (and no, they can’t default to the status quo, since that reflects a prior moral judgment that’s now problematic) — do they both have to vote for a bill that says that 50% of abortions are now legal? If a woman wants an abortion, must she flip a coin to see whether she gets one? That’s a position everyone would find unacceptable.
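The only standard way I can see to cash a .5 credence out into action is an expected-value calculation over the two moral hypotheses (a sketch in made-up notation, where v(yes | H) is the value a legislator assigns to voting yes if hypothesis H is the true one):

EV(vote yes) = .5 × v(yes | integrity trumps) + .5 × v(yes | potentiality trumps)

But that formula presupposes a cardinal value scale comparable across the two moral positions, which is exactly the kind of number the earlier paragraphs argue we don’t have; and even granting it, it picks one vote outright (whichever side has the higher expected value) rather than licensing a “50% of abortions” compromise or a coin flip.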
I submit that the only possible subjective probability evaluations for moral claims are 0, 1, and “undetermined.” I further submit that “undetermined” is quite useless when one has to make a decision on a moral question. Consequently, Bayesian reasoners don’t have the capacity to adjust their probability judgments toward each other, and the modesty argument cannot apply.
Robin, if we are to apply probability theory to moral claims in a nontrivial way, there have to be correlations between moral possibilities and our sensory perceptions; otherwise Bayesian updating becomes a null operation. But such correlations seem untenable, since our sensory perceptions are determined by physics, and physics is independent of morality. The atoms in my brain and the universe in general will do the same things whether "killing is good" or "killing is bad," so nothing I can perceive can possibly provide any evidence as to which is the case.
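To spell out the "null operation" point in Bayes' terms (my notation, not anything from the comment above): write E for any piece of sensory evidence and M for the hypothesis that killing is wrong. Bayes' rule gives

P(M | E) = P(E | M) P(M) / [ P(E | M) P(M) + P(E | not-M) P(not-M) ]

and if physics fixes P(E | M) = P(E | not-M), the right-hand side collapses to P(M). The posterior is just the prior, no matter what is observed.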
"Impossible possible worlds" doesn't suffer from this problem.
My post "Why Not Impossible Worlds" appears today.