To judge if beliefs are biased, Eliezer, I, and others here rely heavily on our standard ("Bayesian") formal theories of information and probability. These are by far the main formal approaches to such issues in physics, economics, computer science, statistics, and philosophy. They fit well with many, but not all, of our specific intuitions about which beliefs are reasonable.
There are, however, a number of claimed exceptions, cases where many people think certain beliefs are justified even though they seem contrary to this standard framework. This interferes with our efforts to overcome bias, as it allows people with beliefs contrary to this standard framework to claim their beliefs are yet more exceptions. I am thus tempted to reject all claimed exceptions, but that wouldn’t be fair. So I’m instead raising the issue and offering a quick survey of claimed exceptions. Perhaps future posts can consider each one of these in more detail.
To review: in our standard framework, systems out there have many possible states, our minds can have many possible belief states, and interactions between minds and systems allow their states to become correlated. This correlation lets minds hold beliefs about systems that correlate with the states of those systems. The exact degree of belief appropriate depends on our beliefs about that correlation, and can be expressed with exact but complex mathematical expressions.
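To make the simplest case concrete (a minimal sketch, not the full machinery): if a system has state S and a mind observes evidence E correlated with that state, the appropriate degree of belief is given by Bayes' rule:

```latex
P(S = s \mid E = e) \;=\; \frac{P(E = e \mid S = s)\, P(S = s)}{\sum_{s'} P(E = e \mid S = s')\, P(S = s')}
```

Richer cases, with many systems and partial, noisy observations, yield the more complex expressions alluded to above.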
OK, the following do not seem to be exceptions:
Indexicals – States of physical systems are usually defined from the view of a neutral third party, e.g., what objects are where in space-time. But people in such a system can also be uncertain about their "index" which says where they are in that system, e.g., where they are in space-time. While this introduces interesting new issues, once one introduces a larger set of indexical states, it seems the standard framework works just fine.
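One way to sketch the extension (assuming, for illustration, a discrete set I of possible indexes): instead of assigning beliefs to world states s alone, assign them to "centered" states, pairs (s, i) of a world state and an index within it; conditioning then proceeds exactly as before:

```latex
P(s, i \mid E = e) \;\propto\; P(E = e \mid s, i)\, P(s, i), \qquad (s, i) \in S \times I
```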
Logical Implications – These are the consequences of math axioms, or of concept definitions. As Eliezer tried recently to make clear, logical implications are not exceptions; they in fact fit just fine in the standard framework. When we arrange for error-prone devices (including our minds) to compute implications, the outputs of such devices are info we can use to draw conclusions about those implications. While the implications themselves are the same in all states, our error-prone beliefs cannot be completely certain of them.
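As a minimal sketch of this point (the error rates below are illustrative assumptions, not anyone's measurements), treat an error-prone proof checker as a noisy channel and update on its report like any other evidence:

```python
# Sketch: belief in a logical implication, updated on the report of an
# error-prone checker. All probabilities here are illustrative assumptions.

prior_true = 0.5           # prior that the implication holds
p_report_if_true = 0.99    # checker says "true" when the implication is true
p_report_if_false = 0.02   # checker errs, saying "true" when it is false

def posterior_given_report(prior, hit_rate, false_alarm_rate):
    """Bayes' rule: P(implication true | checker reports true)."""
    numerator = hit_rate * prior
    return numerator / (numerator + false_alarm_rate * (1 - prior))

print(posterior_given_report(prior_true, p_report_if_true, p_report_if_false))
# ~0.980: high confidence in the implication, but never complete certainty
```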
Here are possible exceptions:
Math and Concept Axioms – Some people think that we know more about math than theorems saying which axioms imply which consequences; they think we know which math axioms are true. This is more than saying how useful various mathematical abstractions are in our actual universe. Similarly, many say we know which of the many possible concept definitions are the true ones. But it is not clear how our mental states could have become correlated with such math or concept truths.
Basic Moral Claims – Whether it is right to kill someone can depend on whether they were in fact a murderer, so a moral belief can depend on ordinary beliefs. But we can extract "basic" moral claims which do not so depend, such as whether it can ever be right to kill. Some say basic moral claims are really claims about preferences, while others say they are about what social norms were most adaptive for our ancestors. But most people insist both that moral claims are not just about physical or mental states, and that we have reliable beliefs about such claims. But how could such reliable beliefs arise?
Consciousness – Zombies are imagined creatures with physical bodies identical to ours, but with no inner life or subjective experience; there is nothing it is like to be a zombie. Since zombies would claim to experience consciousness just as we do, our brains have no info whatsoever suggesting that we are not zombies. But, according to David Chalmers and others, we are in fact conscious and we in fact know this. If so, how do we know?
The Real World – A possible world is a completely self-consistent description of how things could be. Each person in such a possible world has all the same sorts of relations to systems and info in that world that we do to systems and info in our world. So they have just as much info suggesting they exist as we have suggesting we exist. David Lewis famously claimed all possible worlds are just as real as ours. But most people believe that only one of the many possible worlds is the real world, and that we correctly believe we are in the one real world. If so, how do we know?
Real Stuff – Physics models that many think "end" at some point often allow "analytic continuations" where the math is naturally extended to larger models. For example, space-time can be thought of as ending in the middle of a black hole, or as continuing on out into new regions. Similarly, some say the projection postulate in quantum mechanics destroys all but one branch, while others say all branches continue on independently. Those who say analytic continuations are unreal are saying people described by those continuations are unreal, even though they have the same local info relations as real people. How do real people know they are real?
The "Chinks In The Bayesian Armor" I would list are:
- Inability to deal with undecidable propositions;
- The problem of the priors;
- Hume's problem of induction.
Some more possible problems:
http://plato.stanford.edu/entries/epistemology-bayesian/#PotPro
Robin, I don't think it's more rational to say "If I matter, then X" rather than just "X." Here's my argument. Suppose you hold the following beliefs:
- If I matter, then X.
- If I don't matter, then not-X.
But if you don't matter, then your beliefs don't matter, so you might as well believe "If I don't matter, then X" instead. Then you can simplify both of these beliefs into just "X."
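In propositional terms (a sketch, writing M for "I matter"), the simplification is just case analysis: once both conditionals point at X, X follows from the tautology M or not-M:

```latex
(M \to X) \;\land\; (\lnot M \to X) \;\vdash\; X \qquad \text{since } M \lor \lnot M
```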