I’ve been vacationing with family this week, and (re-)noticed a few things. When we played miniature golf, the winners were treated as if they had been shown to be more skilled than previously thought, even though the score differences were statistically insignificant. Same for arcade shooter contests. We also picked which Mexican place to eat at based on one person saying they had eaten there once and it was ok, even though, given how much tastes vary from person to person and occasion to occasion, that was unlikely to be statistically significant evidence about what the rest of us would like.
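To see why a typical mini golf margin is statistically insignificant, here is a rough sketch with made-up scores: a two-stroke win over nine holes yields a t statistic far below the conventional cutoff of about 2.

```python
import math

# Hypothetical hole-by-hole mini golf scores (made-up numbers).
winner = [3, 2, 4, 3, 5, 2, 3, 4, 3]  # total 29
loser = [3, 3, 4, 4, 4, 3, 3, 4, 3]   # total 31

def welch_t(x, y):
    """Welch's t statistic for the difference in mean hole scores."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# |t| comes out well under 2, nowhere near the usual p < 0.05
# threshold for declaring one player more skilled.
print(round(welch_t(winner, loser), 2))
```

The two-stroke gap is swamped by hole-to-hole noise, so treating the winner as revealed to be more skilled is reading signal into what is mostly variance.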
The general point is that we quite often collect and act on rather weak info clues. This can make good info sense. We might be slowly collecting small clues that eventually add up to big clues. Or, if we know well which parameters matter most, it can make sense to act on weak clues; over a lifetime this can add up to net good decisions. When this is what is going on, people will tend to know many things they cannot explicitly justify. They may have seen a long history of related situations, and slowly accumulated enough relevant clues to form useful judgments, but be unable to explicitly point to most of the weak clues on which those judgments were based.
Another thing I noticed on vacation is that a large fraction of my relatives age 50 or older think they know that their lives were personally saved by medicine. They can tell of one or more specific episodes where a good doctor did the right thing, and they’d otherwise be dead. But people just can’t, on average, have this much evidence, since we usually find it hard to see effects of medicine on health even in datasets with thousands of people. (I didn’t point this out to them; these beliefs seemed among the ones they held most deeply and passionately.) So clearly this intuitive collection of weak clues sometimes goes very wrong, even on topics where people suffer large personal consequences. It is not just that random errors can show up; there are topics on which our minds are systematically structured, probably on purpose, to greatly mislead us in big ways.
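A rough power calculation shows why medicine’s effects are so hard to see. The numbers below are illustrative, not from any particular study: detecting a one-percentage-point drop in mortality with standard statistical conventions takes over ten thousand people per arm.

```python
import math

def n_per_arm(p1, p2, alpha_z=1.96, power_z=0.84):
    """Approximate sample size per arm to distinguish event rates p1 and p2
    (two-sided alpha = 0.05, 80% power, normal approximation)."""
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((alpha_z + power_z) ** 2 * var / (p1 - p2) ** 2)

# Illustrative: detecting a drop in mortality from 10% to 9%.
print(n_per_arm(0.10, 0.09))  # over 13,000 people per arm
```

If effects that size are invisible without tens of thousands of subjects, a single person’s lifetime of medical episodes cannot supply enough evidence to know their life was saved.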
One of the biggest questions we face is thus: when are judgments trustworthy? When can we guess that the intuitive slow accumulation of weak clues, by others or ourselves, embodies enough evidence to be a useful guide to action? At one extreme, one could try to never act on anything less than explicitly well-founded reasoning, but this would usually go very badly; we mostly have no choice but to rely heavily on such intuition. At the other extreme, many people go their whole lives relying almost entirely on their intuitions, and they seem to mostly do okay.
In between, people often act as if they rely on intuition except when good solid evidence is presented to the contrary. But they usually rely on their intuition to judge when to look for explicit evidence, and when that evidence is solid enough. So when those intuitions fail, the whole process fails.
Prediction markets seem a robust way to cut through this fog: just believe the market prices when they are available. But people usually resist creating such market prices, probably exactly because doing so might force them to drop treasured intuitive judgments.
On this blog I often present weak clues, relevant to important topics, but by themselves not sufficient to draw strong conclusions. Usually commenters are eager to indignantly point out this fact. Each and every time. But on many topics we have little other choice; until many weak clues are systematically collected into strong clues, weak clues are what we have. And the topics where our intuitive conclusions are most likely to be systematically biased tend to be exactly those sorts of topics. So I’ll continue to struggle to collect whatever clues I can find there.