Imagine someone made an unlikely claim to you, i.e., a claim to which you would have assigned a low probability. For many kinds of unlikely claims, such as that your mother was just in a car accident, you would usually just believe them. But if the claim were suspiciously dramatic, you might well suspect they had fallen prey to common human biases toward dramatic claims. Here are four reasons for such distrust:
Incentives: If a stockbroker said you should buy a certain stock because it would double in a month, you might suspect he was just lying, because he gets paid a commission on each trade. You have similar incentive-based reasons to suspect emails from Nigerian diplomats, and I'll-love-you-forever promises from would-be seducers. This doubt might be overcome by those who showed clear enough evidence, or who showed they actually had little to gain.
Craziness: If someone told you they were abducted by aliens, talked with God, or saw a chair levitate, you might suspect a serious mental malfunction, whereby he just could not reliably see or remember what he saw. This doubt might be overcome if he showed you he had reliable sight and memory, and that he was not then in some altered, more susceptible state of mind (e.g., a trance). Adding more similarly qualified observers would help too.
Sloppiness: If someone told you the available evidence strongly suggests a 9/11 conspiracy, or that aliens regularly visit Earth, you might suspect him of sloppy analysis. Analyzing such evidence requires detailed attention to how consistent each theory is with each observation, so one needs either very thorough attention to all the possibilities or, more realistically, good calibration regarding how often one makes analysis mistakes. You might suspect he did not correct sufficiently for unconscious human attractions to dramatic claims. This doubt might be overcome if he showed a track record of detailed analysis of similar dramatic claims; he might be a professional accident investigator, for example. A group of such professionals would be even more believable, if there were not similar, larger groups on the other side.
Fuzziness: If someone told you they had invented an architecture allowing a human-level AI to be built soon, or a money system immune to financial crises, or had found a grand pattern of history predicting a world war soon, and if these claims were based not on careful analysis using standard theories but instead on new informal abstractions they were pioneering, you might suspect them of being too fuzzy, i.e., of too eagerly embracing their own new abstractions.
There are lots of ways to slice up reality, and only a few really "carve reality at the joints." But if you give a loose abstraction the "benefit of the doubt" for long enough, you can find yourself thinking in its terms, using it to draw what seem like reasonable, if tentative, conclusions. These conclusions might even be reasonable as weak tendencies, all else equal, but you may forget how much else is not equal, and how many other abstractions were available. Here we can be biased not only toward dramatic claims, but also toward our own we-hope-someday-celebrated creations.
Some critics suspect whole professions, such as literary critics or sociologists of norms, of fooling themselves into over-reliance on certain abstractions, even after thousands of experts have used those abstractions full-time for decades. Such critics want a clearer track record of such a profession dealing well with concrete problems, and even then critics may suspect the abstractions contributed little. For example, Freudian therapy skeptics suspect patients just feel better after someone listens to their troubles. How much more then should we suspect new personal abstractions that give dramatic implications, if their authors have not yet convinced relevant experts they offer new insight into less dramatic cases?
I don’t fully accept Eliezer’s AI foom estimates; I’ve explained my reasoning most recently here, here, and here. But since we both post on disagreement meta-issues, I should discuss some of my meta-reasoning.
I try to be reluctant to disagree, but this reluctance applies most directly to an average opinion, weighted by smarts and expertise; if I agreed with Eliezer more, I'd have to agree with other experts less. His social proximity to me shouldn't matter, except as it lets me better estimate auxiliary parameters.
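As a minimal sketch of what such weighting might mean, assume each expert \(i\) reports a probability \(p_i\) and gets a hypothetical weight \(w_i\) for smarts and expertise; the relevant average is then

\[ \hat{p} = \frac{\sum_i w_i \, p_i}{\sum_i w_i}. \]

Because the weights are normalized, giving Eliezer's opinion more weight mechanically gives every other expert's opinion less, which is the trade-off above.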
But even so, if I disagree with Eliezer, I must distrust something about Eliezer's rationality; disagreement is disrespect, after all. So what do I distrust? I guess I suspect that Eliezer has succumbed to the all-too-human tendency to lower our standards for fuzzy abstractions that lead to dramatic claims. Yes, he has successfully resisted this temptation at other times, but then so have most of those who eventually succumb to it.
That you don't believe in God, Nigerian diplomats, or UFOs, or that you have felt the draw of dramatic beliefs and resisted them, doesn't mean you don't believe something else equally unsupported, where you never even noticed the draw of drama. Our intuitions in such cases are simply not trustworthy.
How are his claims “dramatic”? I could list many of their biasing features, but this seems impolite unless requested. After all, I do overall greatly respect and like Eliezer, and am honored to co-blog with him.
Sure, but don't confuse the causality. If EY realized the FAI problem was the most pressing of all problems, and only then inserted himself into the midst of it, then this is an example of rationality, not rationalization.
Eliezer: "Thou shalt give me examples."
Put these in a Google search box:
site:overcomingbias.com "and lo"
site:overcomingbias.com "unto"
I'll retract the qualifier "when speaking of himself", which I used because the examples I remembered were from the recent string of autobiographical posts. It seems to be a general inclination to use occasional archaic words.