Many people analyze and discuss the policies that might be chosen by organizations such as governments, charities, clubs, and firms. We economists have a standard set of tools to help with such analysis, and in many contexts a good economist can use such tools to recommend particular policy options. However, many have criticized these economic tools as representing overly naive and simplistic theories of morality. In response I’ve said: policy conversations don’t have to be about morality. Let me explain.
A great many people presume that policy conversations are of course mainly about what actions and outcomes are morally better; which actions do we most admire and approve of ethically? If you accept this framing, and if you see human morality as complex, then it is reasonable to be wary of mathematical frameworks for policy analysis; any analysis of morality simple enough to be put into math could lead to quite misleading conclusions. One can point to many factors that get little attention from economists but are often considered relevant to moral analysis.
However, we don’t have to see policy conversations as being mainly about morality. We can instead look at them as being more about people trying to get what they want, and using shared advisors to help. We economists make great use of the concept of “revealed preference”; we infer what people want from what they do, and we expect people to continue to act to get what they want. Part of what people want is to be moral, and to be seen as moral. But people also want other things, and sometimes they make tradeoffs, choosing to get less morality and more of these other things.
When organizations must make choices, and people talk together about those choices, they may well try to persuade each other by referring to outside advice. To be effective in influencing the policies that individuals will privately push for, such shared advice must persuade its audience that it will help them to get more of what they want. And one good way to do this is for shared advisors to help identify policy choices that tend to be closer to what we economists call the “Pareto frontier” of wants. This is the set of outcomes where no one can get more of what they want without someone else getting less.
Of course even when one can identify this frontier of wants, negotiations over organization policies will still contain a “zero-sum” element of choosing a point on that frontier. Each change from one frontier point to another gives some people more of what they want, at the cost of giving other people less. Even so, it can be quite useful for negotiators to know more about the location of this frontier, as moving the space of policies being considered toward the frontier offers the potential to give everyone more of what they want. And economic tools of analysis are quite directly useful for achieving this goal.
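To make the frontier idea concrete, here is a minimal sketch in Python. All the policy names, people, and payoff numbers are invented for illustration; the point is just to show what it means for a policy to be on (or off) the Pareto frontier of wants.

```python
# Toy illustration: find the Pareto-efficient policies among a small set
# of candidates, given how much of what they want each person would get.
# All names and numbers are made up.

policies = {
    "status quo":  {"alice": 3, "bob": 5, "carol": 4},
    "tax reform":  {"alice": 6, "bob": 4, "carol": 4},
    "new park":    {"alice": 4, "bob": 6, "carol": 5},
    "do nothing":  {"alice": 2, "bob": 3, "carol": 3},
}

def dominates(a, b):
    """Policy a dominates b if everyone gets at least as much under a,
    and at least one person gets strictly more."""
    return (all(a[p] >= b[p] for p in a) and
            any(a[p] > b[p] for p in a))

# The Pareto frontier: policies that no other candidate dominates.
frontier = [
    name for name, payoffs in policies.items()
    if not any(dominates(other, payoffs)
               for other_name, other in policies.items()
               if other_name != name)
]

print(frontier)  # ['tax reform', 'new park']
```

In this toy example, “do nothing” and “status quo” are dominated, so shared advice that steers the conversation toward the remaining two options gives everyone a chance at more of what they want; choosing between those two is the zero-sum part.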
Organization policy choices resulting from negotiations and pressures from different people who want different things can be seen as “deals.” I’ve given the name “dealism” to this framing of policy discussions as being more about what people want, and of shared advice as being about locating the frontier of wants. Preference Utilitarianism claims that it is morally right and good to give everyone more of what they want. But dealism does not make this claim. Dealism instead says that, because everyone wants to induce deals where they get more of what they want, they also want policy conversations to be influenced by shared advisors who can point everyone toward the Pareto frontier of wants. Dealism sees a big place for policy conversations that are about such wants and deals. There can be conversations that are primarily about the morality of policy choices, but there can also be other sorts of conversations.
My proposal to use decision markets as the basis of a form of governance, futarchy, can be seen as a dealist approach to governance. In it, a polity must choose an explicit measure of the outcomes to be preferred by their collective choices, and then betting markets pick the policies expected to most consistently and effectively give them more of this measure. This would push political conversations to be more explicitly about what everyone wants, relative to how they can get it.
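As a rough sketch of the futarchy decision rule, under my own simplifying assumptions: conditional betting markets estimate the chosen welfare measure under each policy option, and the option with the higher market estimate is adopted. The estimates below are invented numbers standing in for market prices.

```python
# Toy sketch of a futarchy-style decision rule. The estimates are invented;
# in a real decision market they would come from conditional betting markets
# ("welfare measure if policy X is adopted", with trades reversed otherwise).

market_estimates = {
    "adopt carbon tax": 104.2,  # market-expected welfare measure if adopted
    "keep status quo":  101.7,  # market-expected welfare measure if not
}

# Decision rule: enact whichever option the markets expect to yield
# more of the explicitly chosen outcome measure.
chosen = max(market_estimates, key=market_estimates.get)
print(chosen)  # 'adopt carbon tax'
```

The values side of the conversation is then concentrated in the choice of the outcome measure, while the markets handle the factual question of which policy best achieves it.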
In my opinion, one of the strongest criticisms of futarchy is that people prefer more hypocritical forms of governance. Humans like to pretend to want some things, while actually wanting other things, and human minds and culture are highly adapted to such hypocritical conversations. Having policy conversations mix up value and fact considerations makes such hypocrisy easier. Futarchy would instead force people to be clearer about what they want.
A similar criticism can be made against dealism more generally. We like to pretend that morality gets higher weights in our wants than it actually does. This pretense is aided by the pretense that policy conversations are mainly about morality. We must surely care a lot about morality if that is the main topic of our policy conversations! We over-emphasize morality relative to our other wants, and also values of all sorts relative to facts. But for this to work, our morality needs to have a lot of context dependent flexibility, so we can cloak other wants as moral considerations. And we want our shared advisors, at least the ones we pretend to listen to, to also seem to talk mostly about morality.
This all suggests that dealism may be more true than most of us want to admit. We want to actually listen to advisors who point us toward the Pareto frontier of wants, while pretending to listen to advice that is presented as mainly being about morality. These can be two different groups of advisors, or one group whose actual basis for advice differs from what it pretends. For example, we can pretend to listen to pundits while actually listening to hard-headed economists. Or we can listen to apparently soft-headed economists who are actually hard-headed.
If we individually don’t have much influence over policy, yet our associates still judge us strongly on our policy opinions, then the tradeoff is more at the group level, between our groups seeming to focus on morality, versus our groups getting what they collectively want. Individuals will then want to push for policies that their allies will see as making their groups seem to be focused on morality, while actually giving those groups more of what they want. Individual wants won’t matter so much.
"contractarianism claims that moral norms derive their normative force from the idea of contract or mutual agreement" Dealism does not make claims about morality; it is instead about wants. https://plato.stanford.edu/...
Vocabulary question: Is there a difference between dealism and contractarianism?