(Inspired by my conversation with Will Wilkinson.)
In a typical moral philosophy paper, an author proposes a principle to summarize his specific intuitions about some relatively narrow range of situations. For example, he might propose a principle to account for his intuitions about variations on a scenario wherein passersby learn that one or more people are drowning in a lake. This practice makes sense if such intuitions are very reliable clues about moral truth, but much less sense if they are very unreliable.
In the ordinary practice of fitting a curve to a set of data points, the more noise one expects in the data, the simpler a curve one fits to that data. Similarly, when fitting moral principles to the data of our moral intuitions, the more noise we expect in those intuitions, the simpler a set of principles we should use to fit those intuitions. (This paper elaborates.)
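To make the curve-fitting analogy concrete, here is a rough numerical sketch (my own illustration, not from the post): polynomials of increasing degree are fit to noisy data and scored with an AIC-style criterion, in which the noise level we assume sets how heavily extra model complexity is penalized. The data, functions, and parameters below are purely illustrative assumptions.

```python
# Illustrative sketch: the more noise we assume in the data,
# the simpler the curve (lower-degree polynomial) we end up selecting.
# With a Gaussian noise model of assumed std sigma, an AIC-style score is
# RSS / sigma^2 + 2k (up to a constant), so larger assumed sigma makes the
# complexity term 2k dominate and pushes us toward simpler fits.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(scale=0.3, size=x.size)  # noisy observations

def best_degree(assumed_sigma, max_degree=9):
    scores = []
    for k in range(max_degree + 1):
        coeffs = np.polyfit(x, y, deg=k)            # fit degree-k polynomial
        rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
        scores.append(rss / assumed_sigma**2 + 2 * (k + 1))  # AIC-style score
    return int(np.argmin(scores))

for sigma in (0.1, 0.3, 1.0, 3.0):
    print(f"assumed noise {sigma}: best polynomial degree = {best_degree(sigma)}")
```

The point of the sketch is only the direction of the effect: as the assumed noise grows, the selected polynomial degree shrinks, just as noisier moral intuitions should push us toward simpler moral principles.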
The fact that our moral intuitions depend greatly on how situations are framed, differ greatly across individuals within a culture, and vary greatly across cultures, suggests lots of noise in our moral intuitions. The fact that moral philosophers don’t much trust the intuitions of non-moral-philosophers shows they agree error rates are high. So I wonder: what moral beliefs should we hold in the limit of expecting very large errors in our moral intuitions?
It seems to me that in this situation we should rely most on the simplest, most consistent pattern we can find in our case-specific moral intuitions. And it seems to me that this simplest pattern is just our default rule, i.e., what we think folks should do in the usual case where no other special considerations apply. Which is simply: usually it is fine to do what you want, to get what you want, [added: if no one else cares.]
If you dropped your pencil and want to get it back, well then do reach down and pick it up. If you have been eating your meal steadily and are feeling a little full, then do slow your bites down a bit, if that seems more agreeable. If you are reading a magazine and the current article starts to bore you, why, skip to the next article if you guess that would bore you less. If you have an itch and no one will know or care if you scratch, well then scratch.
Such examples could be multiplied by the millions, all fitting well with the simple pattern: [added: all else equal,] it is usually good for people to do things to get what they want. So this seems to me the natural limit of minimal morality: trust this basic pattern only, and not any subtler corrections. This basically picks a goodness measure close to preference utilitarianism, which is pretty close to the economist’s usual efficiency criterion.
As we back off from this minimal morality, and start to consider trusting more details of our moral intuitions because we estimate a lower error rate for them, what more would we add? We might consider incorporating basic rules like “don’t kill” or “don’t lie,” but a funny pattern emerges with these. There are many situations in which we do not think we should apply these rules, usually situations where following them would prevent many people from getting other things they want.
And if asked why these are good rules, people usually explain how following them will tend to get people the other sorts of things they want. For example, they’ll note that since the gains of liars are usually less than the costs to those who believe their lies, on average we are better off without most lies.
Yes, people do clearly often look disapprovingly on other people doing things to get what they want, and they often attribute this disapproval to non-default moral intuitions, i.e., intuitions that go beyond just wanting to get folks what they want. But we can just look at this as a situation of onlookers wanting different behavior from the disapproved folks. And so we can want to discourage such disapproved behavior just in order to get these onlookers what they want.
This all suggests that the minimal morality pattern, of just getting people what they want, plausibly fits a large fraction of the recommended actions of our non-default moral intuitions. Which isn’t to say that it accounts for each exact moral intuition in every particular situation; clearly it does not. But this does suggest that as we turn down the parameter of estimated error rate in our moral intuitions, we have to go quite a ways before our best-fit moral beliefs will be forced to deviate much from the simple minimal morality of just getting people what they want.
Since it seems to me that moral intuition error rates are pretty high, this is good enough for me; I’ll just take the efficiency criterion and run with it. I’m not saying I’m sure that true morality exactly agrees with this; I’m just saying I don’t trust the available data enough to estimate anything much different from this simplest, most consistent pattern in our moral intuitions.
Added: To be more precise, for most situations where someone makes a choice that no one else cares about, the usual moral intuition is that the better outcomes are the ones that person wants more. The simple pattern I see in this is that outcome goodness is increasing in how much each person wants that outcome. Economic efficiency then follows by the usual arguments of Pareto improvements.
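Sketched a bit more formally (my own notation, not the post’s): write each person i’s preference for outcome x as a utility u_i(x), and suppose goodness is any function increasing in how much each person wants the outcome.

```latex
G(x) \;=\; W\big(u_1(x), \dots, u_n(x)\big),
\qquad \frac{\partial W}{\partial u_i} > 0 \ \text{ for all } i.
```

Then if outcome y is a Pareto improvement over x, i.e., u_i(y) \ge u_i(x) for every i with strict inequality for at least one person, it follows that G(y) > G(x). So any goodness measure fitting this minimal pattern endorses Pareto improvements, which is the usual route to the economist’s efficiency criterion.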
See also my clarifying post from the next day.
The idea of an error rate requires the existence of an objective measure of accuracy. If there's no objectively right answer, there can be no real error.
Which would mean that error is simply how far your beliefs/actions deviate from your personal moral values. In which case, the least error-prone morality is that everyone should do whatever they happen to do, regardless of anything, since your error will be zero. Or that you're always right no matter what you do, though that doesn't make other people moral.
TLDR: thinking of morality in terms of minimizing error just doesn't work.
Robin, I wasn't trying to say that you didn't mean your clarification. It just doesn't square well with what you say in other places in the post.
But that wasn't my main worry anyway. What do you think about the non-moral character of the pencil case and the other cases you base the minimal principle on? Shouldn't the simple cases we're basing the moral principle on be moral cases that we'd have moral intuitions about?