Both private- and public-sector prognosticators must master the same tightrope-walking act. They know they need to sound as though they are offering bold, fresh insights into the future not readily available off the street. And they know they cannot afford to be linked to flat-out mistakes. Accordingly, they have to appear to be going out on a limb without actually going out on one. That is why … they so uniformly appear to dislike affixing “artificially precise” subjective probability estimates to possible outcomes—the only reliable method we have of systematically tracking accuracy across pundits, methods, time and contexts. It is much safer to retreat into the vague language of possibilities and plausibilities—things that might or could happen if various difficult-to-determine preconditions were satisfied. The trick is to attach so many qualifiers to your vague predictions that you will be well positioned to explain pretty much whatever happens. China will fissure into regional fiefdoms, but only if the Chinese leadership fails to manage certain trade-offs deftly, and only if global economic growth stalls for a protracted period, and only if . . .
More here. Hat tip to Henry at Crooked Timber.
Philip Tetlock seems to suggest that prognosticators are fooling us via this strategy, as if we would not tolerate such gaming if only we understood what they were up to. I fear that instead we don’t much mind these games. I suspect that we mostly want to affiliate with impressive folks, and reading their provocative forecasts gives us yet another excuse to do so. As long as no one else notices their failed forecasts enough to make them seem less impressive, we don’t really care whether they were proved right or wrong.
If you haven't already come across this, then I think you'll find it both interesting and pertinent to the topic of the above post. I have difficulty figuring out what methods the predictions are really based on, but it does seem that D. Sornette is in fact going out on a limb. This is definitely the right forum to evaluate this.
"Dragon-Kings, Black Swans and the Prediction of Crises": http://arxiv.org/abs/0907.4...
"Common group dynamic drives modern epidemics across social, financial and biological domains": http://arxiv.org/abs/0907.3...
One problem is that if much of your value lies in your perceived expertise, you don't want to stake it on a small number of bets, because your estimated skill will have a large standard error relative to your true skill. Consider the famous wager Julian Simon and Paul Ehrlich entered in 1980, betting on a mutually agreed measure of resource scarcity over the decade ending in 1990. Ehrlich ultimately lost the bet, and all five commodities selected as the basis for the wager have continued to trend downward since 1980. Did Ehrlich change his mind? No. He thinks the timescale was simply too short. He may be right (it's not obviously wrong).
There is an intrinsically small number of 'factors' related to any given area of expertise. Say you believe in global warming: most relevant bets over the next 5 years are highly correlated, each carrying a large error component. Alternatively, you may think immigration is great in general, yet dislike all the current immigration plans in practice. Any big policy, in practice, is commingled with other policies, and often you are comparing against the hypothetical 'what would have happened if we had not done X'.
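The correlation point can be sketched with a small Monte Carlo (all numbers here are illustrative assumptions, not from the post: a forecaster with a hypothetical true hit rate of 0.6, and bets that with some probability simply copy a single shared shock):

```python
import random
import statistics

def estimated_skill_stderr(n_bets, correlation, true_skill=0.6,
                           trials=2000, seed=0):
    """Standard error of a forecaster's measured hit rate.

    Each trial scores n_bets yes/no forecasts. With probability
    `correlation`, a bet just copies one common outcome drawn once
    per trial, so correlated bets add little new information about
    the forecaster's true skill.
    """
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        common_hit = rng.random() < true_skill  # the shared shock
        hits = 0
        for _ in range(n_bets):
            if rng.random() < correlation:
                hits += common_hit           # correlated bet
            else:
                hits += rng.random() < true_skill  # independent bet
        estimates.append(hits / n_bets)
    return statistics.pstdev(estimates)

# Ten independent bets pin down skill better than fifty
# highly correlated ones.
print(estimated_skill_stderr(10, correlation=0.0))
print(estimated_skill_stderr(50, correlation=0.0))
print(estimated_skill_stderr(50, correlation=0.9))
```

The simulation shows the commenter's worry in miniature: adding more bets shrinks the standard error only to the extent that the bets are independent, so a reputation staked on a handful of correlated forecasts is measured very noisily.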
One solution most people apply is to ignore these issues because they resist empirical measurement, but clearly this can be disastrous.