We often use Bayesian analysis to identify human biases, by looking for systematic deviations between what humans and Bayesians would believe. Many, however, are reluctant to accept this Bayesian standard; they prefer to collect more specific criteria about what beliefs are reasonable or justified. For example, Nicholas Shackel recently commented:
It is no less reasonable, and perhaps more reasonable, to start from the premiss that people do reasonably disagree … and if Bayesianism conflicts with that, so much the worse for Bayesianism.
This choice between Bayesian and more specific epistemic judgments is an example of a common tradeoff we face. We often must choose between a strong “simple” framework with relatively few degrees of freedom, and a weak “complex” framework with many more degrees of freedom. We see similar choices in law, between a few simple general laws and many complex context-dependent legal judgments.
We also see similar choices in morality, such as between a simple utilitarianism and more complex context-dependent moral rules, like the rule that we should distribute basic medicine, but not movies, equitably within a nation. In a paper on this moral choice, I used the following figure to make an analogy with Bayesian curve-fitting.
Imagine that one has a collection of data points, such as a sequence of temperatures driven in part by global warming. In general one thinks of these points as determined both by some underlying trend one wants to understand, and by some other distracting “noise” process that obscures this underlying trend.
In choosing a curve to describe this underlying trend, one can pick either a complex curve which gets close to most points, or a simple curve which deviates further from the data. The Bayesian analysis of curve-fitting says that whether the complex or simple curve is better depends in part on how strong the noise process is. When there is little noise, a complex curve will extract more useful details about the underlying trend. But when noise is large, a complex curve will mostly just fit the noise, and so will predict new data points badly.
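To make this tradeoff concrete, here is a minimal sketch in Python (the particular trend, noise levels, and polynomial degrees are illustrative assumptions, not anything taken from the figure): it fits a simple straight line and a complex high-degree polynomial to noisy samples of an underlying trend, then scores both on fresh samples from the same process.

```python
# Minimal sketch of the curve-fitting analogy (illustrative assumptions only).
# Fit a "simple" straight line and a "complex" 9th-degree polynomial to noisy
# samples of an underlying trend, then score each on fresh noisy samples.
import numpy as np

rng = np.random.default_rng(0)

def trend(x):
    # The underlying trend we would like to recover (chosen for illustration).
    return 0.5 * x + np.sin(3.0 * x)

def avg_prediction_error(noise_sd, degree, n_points=20, n_trials=200):
    x = np.linspace(0.0, 3.0, n_points)
    errs = []
    for _ in range(n_trials):
        y_train = trend(x) + rng.normal(0.0, noise_sd, n_points)  # observed data
        y_test = trend(x) + rng.normal(0.0, noise_sd, n_points)   # fresh data
        coeffs = np.polyfit(x, y_train, deg=degree)
        # Mean squared error when predicting the fresh data.
        errs.append(np.mean((np.polyval(coeffs, x) - y_test) ** 2))
    return np.mean(errs)

for noise_sd in (0.1, 2.0):
    simple = avg_prediction_error(noise_sd, degree=1)    # simple line
    complex_ = avg_prediction_error(noise_sd, degree=9)  # complex curve
    print(f"noise sd {noise_sd}: simple-line error {simple:.2f}, "
          f"complex-curve error {complex_:.2f}")
```

Under these assumptions the complex curve should predict fresh data better when the noise is small, and worse than the simple line when the noise is large, since it then mostly fits the noise.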
Returning to the subject of human biases, we have many context-specific intuitions about what beliefs seem reasonable in various contexts. But we expect those intuitions to be clouded and polluted by error. If we expect just a little error, our best judgment about epistemic criteria should stay close to those intuitions. But if we expect a lot of error, we are better off choosing a simple general approach like Bayesian analysis, since the context-dependent details of our intuitions are most likely to reflect error.
In curve-fitting, if one has enough data one can estimate the error rate by looking at how well some parts of the data can predict other parts. We might do well to consider a similar exercise to calibrate the error rates in our intuitions about reasonable beliefs.
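In curve-fitting terms, that data-based check is essentially cross-validation. Here is a minimal sketch along the same lines (the data, fold count, and candidate degrees are again illustrative assumptions): each candidate curve is fit on most of the data and scored on the part held out, and the curve with the lowest held-out error is the one the data itself recommends.

```python
# Minimal cross-validation sketch (illustrative assumptions only).
# Use some parts of the data to predict other parts, and estimate each
# candidate curve's error from its performance on the held-out parts.
import numpy as np

rng = np.random.default_rng(1)

def cross_validated_error(x, y, degree, n_folds=5):
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, n_folds)
    errs = []
    for held_out in folds:
        train = np.setdiff1d(idx, held_out)
        coeffs = np.polyfit(x[train], y[train], deg=degree)
        errs.append(np.mean((np.polyval(coeffs, x[held_out]) - y[held_out]) ** 2))
    return np.mean(errs)

# Noisy samples of an assumed underlying trend.
x = np.linspace(0.0, 3.0, 40)
y = 0.5 * x + np.sin(3.0 * x) + rng.normal(0.0, 1.0, len(x))

for degree in (1, 3, 9):
    print(f"degree {degree}: cross-validated error "
          f"{cross_validated_error(x, y, degree):.2f}")
```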
Today philosophy, literature, and parts of sociology tend to favor many context-dependent epistemic criteria, while statistics, economics, physics, and computer science tend to prefer simple standard closer-to-Bayesian criteria. My knee also tends to jerk in this second direction.
JMG3Y, as Hal notes, simple attempts to "debias" usually fail. But anytime someone uses statistical techniques to draw a conclusion, they are implicitly acknowledging that just eyeballing the data would be biased. I'd call that a typically successful attempt to overcome bias.
JMG3Y, there has been a great deal of research on "debiasing", attempts to reduce various perceptual and judgmental biases in different ways. I've looked at a few of these papers, and the consensus seems to be that debiasing is extremely difficult and usually doesn't work. However, it is not usually done simply by explaining the reality of Bayesian inference or probability theory, then turning people loose on problems. Rather, various tricks are used, such as getting them to consider alternatives, or imagine themselves in certain scenarios, or rewording the problems to try to reduce biasing effects. And as I said, usually these don't help much.
In the book I reviewed earlier, Tetlock told an amusing story of a debiasing experiment that backfired. He attempted to get participants to explicitly consider a wide range of alternative scenarios in making a forecast, to try to overcome a common bias of focusing too soon in one's analysis. But his single-minded "hedgehogs" refused to take the scenarios seriously, since they thought they already knew exactly what was going to happen; their scores didn't change. And his open-minded "foxes" wasted so much time delightedly exploring the intricacies of the new scenarios that they lost track of the bigger picture and ended up doing worse in the exercises.
In general there seems to be an unstated assumption that just teaching people Bayesian decision theory would be uselessly abstract; I don't know whether this reflects earlier failed experiments, or experimenters' judgment that the theory is too complex for average subjects to grasp.