It’s been mentioned a few times already, but I want to draw attention to what is IMO probably the most interesting, surprising and challenging result in the field of human bias: that mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s opinions. They cannot "agree to disagree"; they can only agree to agree.
This result goes back to Nobel Prize winner Robert Aumann in the 1970s: Agreeing to Disagree. Unfortunately Aumann’s proof is quite static and formal, building on a possible-world semantics formalism so powerful that Aumann apologizes: "We publish this note with some diffidence, since once one has the appropriate framework, it is mathematically trivial." It’s ironic that a result so counter-intuitive and controversial can be described in such terms. This combination of an elegant, parsimonious proof with a totally unexpected result is part of what makes this area so fascinating to me.
Aumann’s proof, although elegant, is opaque unless you are familiar with the formalism. Tyler Cowen and Robin Hanson translate Aumann’s proof into English on pages 7-9 of their paper, Are Disagreements Honest? Some other papers that touch on the same result include Geanakoplos & Polemarchakis’ We Can’t Disagree Forever, which traces the sequence of announcements as two rational debaters come to agreement; and various "no bet" theorems such as the classic by Milgrom & Stokey, showing that rational people will not participate in betting markets, since the mere fact that someone is willing to take your bet is evidence that you are wrong. Robin has several other papers in this area available from his web site.
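The Geanakoplos & Polemarchakis process can be made concrete. Below is a toy model in Python (the four-state space, uniform prior, event and partitions are my own illustration, not taken from their paper): two Bayesian agents with a common prior alternately announce their posterior for an event, each refining their information by the level sets of the other's announcement, until the posteriors coincide.

```python
from fractions import Fraction

def make_partition(cells):
    # map each state to its information cell
    return {w: frozenset(c) for c in cells for w in c}

def posterior(prior, event, info):
    # P(event | info) under the common prior
    return sum(prior[w] for w in info if w in event) / sum(prior[w] for w in info)

def refine(listener, speaker, prior, event):
    # the listener learns the speaker's posterior, so each listener cell
    # is intersected with the level set of the announced value
    announce = {w: posterior(prior, event, speaker[w]) for w in speaker}
    return {w: frozenset(v for v in listener[w] if announce[v] == announce[w])
            for w in listener}

states = [1, 2, 3, 4]
prior = {w: Fraction(1, 4) for w in states}   # common prior
A = {1, 4}                                    # the event under dispute
p1 = make_partition([{1, 2}, {3, 4}])         # agent 1's information
p2 = make_partition([{1, 2, 3}, {4}])         # agent 2's information
true_state = 1

while True:
    q1 = posterior(prior, A, p1[true_state])
    p2 = refine(p2, p1, prior, A)             # agent 2 hears agent 1
    q2 = posterior(prior, A, p2[true_state])
    p1 = refine(p1, p2, prior, A)             # agent 1 hears agent 2
    print(f"agent 1 says {q1}, agent 2 says {q2}")
    if q1 == q2:
        break
# agent 1 says 1/2, agent 2 says 1/3
# agent 1 says 1/2, agent 2 says 1/2   -> they can't disagree forever
```

Note that agent 1's first announcement of 1/2 is uninformative (both of his cells give the same posterior), but agent 2's reply of 1/3 lets agent 1 split his {3, 4} cell, which in turn makes agent 1's second 1/2 informative enough to move agent 2 to agreement.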
There is much that can be said on this topic but I’ll focus on two aspects here. The result can be seen in either normative terms, telling us what we should do as rational thinkers, or positive terms, describing how people actually behave. In the positive sense, it is obvious that the theorem is not a good description of human behavior. People do disagree persistently, and when they "agree to disagree" it is taken as a sign of respect rather than mutual contempt. It’s possible that this is mere politeness, though, and that we recognize at some level that such failures to reach agreement indicate a certain lack of good faith among the participants. I’d be curious to hear how others perceive such situations.
Normatively, what I find most striking is the variation in how people respond upon learning of this result. Many people have a strong intuitive opposition to it, and seek out loopholes and exceptions which will allow them to justify their persistent disagreements. Indeed, such loopholes do exist, the most notable being the assumption that the debaters are acting as Bayesian reasoners with common priors. However, as Tyler and Robin note in their paper, a number of extensions and relaxations of Aumann’s original result over the years have increased its scope and made it harder to appeal to these exceptions as a justification for ignoring the result.
It’s odd, because many other kinds of bias in the literature seem to provoke less opposition. For example, overconfidence bias is often freely admitted, with a rueful acknowledgement that it is a human failing to rank oneself too highly. Overconfidence is probably a large part of the reason for persistent disagreement, each party ranking his own knowledge above that of the other. Only a rather complex chain of reasoning exposes the logical contradiction in this conclusion. But even once that flaw is exposed, people seem much more reluctant to admit that their conclusions are likely to be no better than average than to admit that their abilities are generally about average.
This bias is one I’ve found to be prevalent and influential in day-to-day life, more so than many others. Small disagreements are extremely common. For me, understanding the nature of Aumann’s result has been generally helpful in terms of allowing me to be less committed to my positions and more willing to seriously consider that the other person may have good reasons for his beliefs. There are still times when I am unpersuaded, but I recognize now that I have to see the other person as irrational and biased in order for me to hold my position in the face of his disbelief. As I alluded to above, I suspect that many of us adopt such an attitude unconsciously when we disagree, and it is helpful to be more aware of what is going on in such a common situation.
Imo, it's because beliefs can be rational and/or aesthetic at the same time. A good example of this is my inclination towards socialism. I have heard many compelling criticisms of socialism over the years, some of which have raised serious doubts in my mind as to the feasibility of socialism as a system, but because I have a greater investment in socialism as an aesthetic, I continue to hold this allegiance despite my awareness of these valid criticisms on a rational level. I often wonder if a similar phenomenon is occurring at a cognitive level for those who identify as religious in spite of the overwhelming lack of evidence they are presented with today.
The theorem is not normative. Agreeing to agree may be a logical result of interaction between perfect Bayesian agents; but my simulation indicates that doing this decreases expected correctness.
I also dispute that the theorem says what Aumann claimed it says, for two reasons.
First, it requires agents to know each other's partition functions. This is laughably impossible in the real world.
Second, I believe that Aumann's justification for treating "The meet at w of the partitions of X and Y is a subset of event E" as equivalent to the English phrase "X knows that Y knows event E" is incorrect.
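For readers unfamiliar with the term, the "meet" is the finest partition that is coarser than both agents' partitions, and an event E is common knowledge at state w exactly when the meet's cell containing w lies inside E. Here is a minimal sketch of the standard definition (the four-state example is my own illustration, not Aumann's), computed by merging states that share a cell in either partition:

```python
def meet(p1, p2):
    # finest common coarsening of two partitions (Aumann's "meet"):
    # merge any two states that share a cell in either partition,
    # using a small union-find structure
    states = list(p1)
    parent = {w: w for w in states}

    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]  # path compression
            w = parent[w]
        return w

    for part in (p1, p2):
        for w in states:
            for v in part[w]:
                parent[find(v)] = find(w)

    cells = {}
    for w in states:
        cells.setdefault(find(w), set()).add(w)
    return {w: frozenset(cells[find(w)]) for w in states}

# Toy setup: four states, X's and Y's information partitions
pX = {1: {1, 2}, 2: {1, 2}, 3: {3, 4}, 4: {3, 4}}
pY = {1: {1, 2, 3}, 2: {1, 2, 3}, 3: {1, 2, 3}, 4: {4}}

m = meet(pX, pY)
print(m[1])  # the meet's cell at state 1: all four states merge
# E is common knowledge at w = 1 only if m[1] is a subset of E, so here
# only the whole state space is common knowledge at w = 1.
```

In this example the cells chain together (1 with 2, 3 with 4, then 1-2-3 via Y), so the meet is the trivial partition and almost nothing is common knowledge, which is the situation the commenter's objection turns on.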