I think it might be worthwhile to speculate on ways that bias might have beneficial effects, in the course of asking ourselves how committed we ought to be to its elimination. I can think of four effects that seem to be particularly interesting, and I’ve outlined them beneath the fold.
In summary, the possible benefits I’d like to kick around are as follows: (a) random error (“noise”) might permit truth to develop by an evolutionary process; (b) bias-originated views might break the hegemony of other bias-originated views; (c) some biases might generate beneficial self-fulfilling prophecies; and (d) bias-originated errors might help us exercise and develop our argumentative and educative capacities. (Warning: this is a fairly long post.)
1. Exogenous Shock to Evolutionarily Stable Strategies
It seems fair to say that the truth-finding technology of any given society is always going to be imperfect. In Socrates’ time, there was no probability theory, no game theory, no evolutionary theory, and so forth. In our time, statisticians, mathematicians, and computer scientists continue to extend our ability to engage in inductive and deductive reasoning, as well as our computational power. Thus, there are, in principle, some truths that are not reachable by current truth-finding methods. At the same time, our truth-finding methods are stable: they work extremely well as far as they go, and there’s no major, disruptive dispute (although there are plenty of non-disruptive ones) about the central ideas of rationality.
It seems like we can accordingly model our current best beliefs about the world (the hypothetical set of beliefs x1…xn such that, for each xi, no alternative belief would better match our truth-finding processes) as an evolutionarily stable strategy, in the loose sense that every item in the set should be selected over every item (mutation) outside the set, such that over time, absent failure to follow the truth-finding procedures, they should become universally accepted.
Now consider a being who has evolved to include a paradigm case of biasing — say that .001% of the time, it chooses its beliefs at random rather than following a rational process. In a society with a stable but incomplete truth-finding procedure, it has some non-zero probability of hitting on a belief that happens to be true but is not in the “best beliefs” set because it is not reachable by current procedures. If that belief happens to be conducive to survival (either because it leads to superior physical or social success), evolution might select for it wholly apart from society’s incomplete truth-finding process.
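(To make this concrete, here is a toy simulation I’m sketching purely as an illustration; the mutation rate is the .001% from above, but the fitness values, the population size, and the fraction of random beliefs that happen to land on an unreachable truth are numbers I’ve invented, not estimates of anything.)

```python
import random

# Toy model: almost everyone holds the "best reachable" belief; with a tiny
# probability an agent adopts a belief at random instead. A small fraction of
# those random beliefs happen to be true-but-unreachable ("lucky") and confer
# a reproductive advantage; most are just gibberish. All numbers are invented.
MUTATION_RATE = 0.00001      # the .001% chance of choosing a belief at random
LUCKY_FRACTION = 0.01        # share of random beliefs that hit an unreachable truth
FITNESS = {"reachable": 1.0, "lucky": 1.2, "gibberish": 0.8}

def inherit(belief):
    """Offspring copy the parent's belief, except for rare random adoption."""
    if random.random() < MUTATION_RATE:
        return "lucky" if random.random() < LUCKY_FRACTION else "gibberish"
    return belief

def next_generation(population):
    """Weighted resampling: beliefs with higher fitness leave more descendants."""
    weights = [FITNESS[b] for b in population]
    parents = random.choices(population, weights=weights, k=len(population))
    return [inherit(b) for b in parents]

population = ["reachable"] * 20_000
for _ in range(2_000):          # takes a little while; it's only a sketch
    population = next_generation(population)

print({b: population.count(b) for b in set(population)})
# Because the mutation is so rare, a "lucky" belief arises only occasionally,
# but once it appears, selection tends to spread it, even though no current
# truth-finding procedure could have reached it directly.
```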
2. Space-Clearing against Previous Biases
Closely related to the previous idea is that one false, biased idea might sufficiently unsettle another false, biased idea, such that the social dominance of the original idea is mitigated and truth can be accepted. For example, we might tell the following story about the Protestant Reformation: the Catholic Church was dominant and suppressed all dissent. When Luther nailed his theses to the door, he was motivated by bias (assume arguendo that religion is inherently motivated by bias). The ensuing conflict between the Catholics and the Protestants created enough instability in the system of social control to permit things like the Enlightenment, which greatly enhanced our truth-finding processes and could never have happened in the face of complete single-church hegemony. Because the Protestant story was so compelling (partially as a result of biases which demanded a religion but wanted to avoid some of the abuses of the main church), it could defeat the hegemony of the church in a way that mere rational argument might not have achieved.
(I have no idea if this story, which I just invented, is true, but the example suggests that the effect is in principle possible. I’d also appreciate any pointers to good historical scholarship on this kind of effect.)
3. Self-Fulfilling Prophecies
I suggested this point in the comments to the earlier post about teaching altruism. There might be some non-empty set of beliefs such that each belief in the set, xi, meets the following conditions: (a) xi is currently false; (b) xi would become true if enough people believed it; and (c) we would all be better off if xi were true, including the people who were initially tricked into believing it. It seems that the belief that people are generally altruistic might fall into this category, and we can imagine others too. To the extent this is true, perhaps we ought to encourage those beliefs? I think there’s basically a collective action problem argument to be made here: no individual has an incentive to adopt the (currently false) expectation that others will behave altruistically, but society would be better off if we all did.
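(Here is a minimal sketch of the payoff structure I have in mind; the cost and benefit numbers are made up solely to show the shape of the collective action problem, not to estimate anything.)

```python
# Acting on the belief "people are generally altruistic" carries a personal
# cost, but everyone shares a benefit that grows with the share of believers.
# Both numbers are invented placeholders for illustration only.
COST_OF_TRUSTING = 1.0        # personal cost of acting as if others are altruistic
SOCIAL_BENEFIT_SCALE = 3.0    # shared benefit, scaled by the share of believers

def payoff(believes: bool, share_believing: float) -> float:
    """Payoff to one person, given whether they believe and how many others do."""
    shared = SOCIAL_BENEFIT_SCALE * share_believing
    return shared - (COST_OF_TRUSTING if believes else 0.0)

for share in (0.0, 0.5, 1.0):
    print(f"share believing = {share:.0%}: "
          f"believer gets {payoff(True, share):.1f}, "
          f"non-believer gets {payoff(False, share):.1f}")

# At every adoption level the non-believer does slightly better than the
# believer (so no individual has an incentive to switch), yet the payoff when
# everyone believes (2.0) exceeds the payoff when nobody does (0.0).
```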
4. Mill’s Argument
Toward the end of chapter 2 of On Liberty, John Stuart Mill argues that a state (that buys utilitarian theory) must not censor a view opposed to the prevailing dogma, even if it can be absolutely certain that the view is false. His argument, roughly, is that even false views provide overwhelming social benefits: they encourage the rest of society to develop arguments for the truth, thus developing everyone’s critical faculties and deepening their understanding of the true position.
It seems like a similar effect could counsel against society encouraging complete bias-elimination. Might we need someone who, for example, systematically fails to apply conditional probability appropriately, in order that we can learn from refuting their errors?
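(By way of a hedged illustration of what such an error looks like, here is the standard textbook base-rate calculation, with placeholder numbers of my own choosing: someone who neglects conditional probability reports the test’s sensitivity as the answer, while Bayes’ theorem gives the much smaller correct posterior.)

```python
# Standard base-rate example; the disease/test numbers are generic textbook
# placeholders, not anything from this post.
base_rate = 0.001        # P(disease)
sensitivity = 0.99       # P(positive | disease)
false_positive = 0.05    # P(positive | no disease)

# Total probability of a positive test, then Bayes' theorem for the posterior.
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive   # P(disease | positive)

print(f"Naive answer (ignoring the base rate): {sensitivity:.0%}")
print(f"Correct answer via Bayes' theorem:     {posterior:.1%}")   # about 1.9%
```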
"Now consider a being who has evolved to include a paradigm case of biasing -- say that .001% of the time, it chooses its beliefs at random rather than following a rational process."
That's a paradigm case of variance, not a paradigm case of bias. (Of course, as a follower of E. T. Jaynes, I probably shouldn't believe in the bias-variance decomposition because it isn't Bayesian enough.)
"In a society with a stable but incomplete truth-finding procedure, it has some non-zero probability of hitting on a belief that happens to be true but is not in the "best beliefs" set because it is not reachable by current procedures."
Not all non-zero probabilities are worth pursuing. Lottery tickets, and monkeys typing Shakespeare, both come to mind. Truth is a much smaller target to hit than error - of all possible ways to obtain it, a random number generator has got to rank among the least effective.
"If that belief happens to be conducive to survival (either because it leads to physical or superior social success), evolution might select for it wholly apart from society's incomplete truth-finding process."
All you've done is describe an additional truth-finding process, and not, it seems to me, a very good one: "Adopt beliefs produced by random number generators, and let natural selection take its course." The problems being: (1) a random number generator is pretty unlikely to hit anything but gibberish; (2) not everything that correlates to the number of surviving offspring is interpretable as a belief, and those interpretable as beliefs aren't necessarily true; (3) random beliefs aren't necessarily heritable with digital fidelity; (4) even if the underlying trick worked, you could do much better by tracking census statistics on what people believe and how many children they have, and examining the statistical conclusions, rather than waiting thousands of generations for natural selection to take its course.
I agree it's worth noticing when an odd-seeming belief seems to correlate to, say, the ability to manipulate physical reality. But it seems to me that it's much better to try to bring this criterion into the deliberate judgment process, than to embrace noise (much less bias) in our cognitive systems.
You think? Maybe so... I'd intended this as nothing more than a thin sketch of some preliminary and loosely related thoughts, but if this isn't productive, I'll break it up over the next week or so and elaborate each point a little further.