Most people base most of their judgements on intuition, rather than on explicit calculations. Some people do base judgements on explicit calculations, and take such calculations at face value. But many others, especially on social questions, use calculations that include case-specific fudge factors, which can be adjusted to ensure that the calculations agree with case-specific intuitions. While this might produce good estimates when intuitions are far more informative than explicit calculations, it often seems to be done to give a hypocritical appearance of calculation-based decisions, while actually letting intuitions dominate.
As I shall explain below, Holden Karnofsky illustrates this preference for fudge factors:
While some people feel that GiveWell puts too much emphasis on the measurable and quantifiable, there are others who go further than we do in quantification, and justify their giving (or other) decisions based on fully explicit expected-value formulas. The latter group tends to critique us … based on our preference for strong evidence over high apparent “expected value,” and based on the heavy role of non-formalized intuition in our decision-making. …
People in this [latter] group are often making a fundamental mistake, … estimating the “expected value” of a donation (or other action) based solely on a fully explicit, quantified formula, many of whose inputs are guesses or very rough estimates. We believe that any estimate along these lines needs to be adjusted using a “Bayesian prior”; that this adjustment can rarely be made (reasonably) using an explicit, formal calculation; and that most attempts to do the latter … are not making nearly large enough downward adjustments.
Karnofsky makes the valid statistical point that if you produce an error-prone estimate of the utilitarian effectiveness of some policy, you should not take that estimate at face value, but should instead adjust it based on your estimate of how noisy that estimation process was, and on your prior expectation of how effective policies could plausibly be. Not doing so, he says, leads to mistakes like:
The Back of the Envelope Guide to Philanthropy lists rough calculations … [that] imply that donating for political advocacy for higher foreign aid is between 8x and 22x as good an investment as donating to VillageReach. …
Numerous people … argue that charities working on reducing the risk of sudden human extinction must be the best ones to support, since the value of saving the human race is so high that “any imaginable probability of success” would lead to a higher expected value for these charities than for others. …
[If people naively accepted explicit calculations,] it seems that nearly all altruists would put nearly all of their resources toward helping people they knew little about. … There would (too often) be no justification for costly skeptical inquiry of [a chosen] endeavor/action. …
Karnofsky’s preferred approach:
We generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good …
The more action is asked of me, the more evidence I require. Anytime I’m asked to take a significant action (giving a significant amount of money, time, effort, etc.), this action has to have higher expected value than the action I would otherwise take. …
I pay attention to how much of the variation I see between estimates is likely to be driven by true variation vs. estimate error. …
I put much more weight on conclusions that seem to be supported by multiple different lines of analysis. …
I am hesitant to embrace arguments that seem to have anti-common-sense implications … A too-weak prior can lead to many seemingly absurd beliefs and consequences. … When a particular kind of reasoning seems to me to have anti-common-sense implications, this may indicate that its implications are well outside my prior.
My prior for charity is generally skeptical.
Now I fully agree that one should discount utilitarian policy effectiveness estimates based on estimates of the noisiness of the estimation process, and of how effective policies could plausibly be. These considerations can justify Karnofsky’s use of a generally skeptical prior, his attending to variation between estimates, and his preference for stronger evidence and multiple lines of reasoning.
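To make that adjustment concrete, here is a minimal sketch of the shrinkage it amounts to, assuming, purely for illustration, that everything is normally distributed; the symbols (a prior mean and variance for true effectiveness, and a variance for estimation error) are my own illustration, not anything from Karnofsky’s post. If true effectiveness is \(\theta \sim N(\mu_0, \sigma_0^2)\) and an explicit calculation yields a noisy estimate \(\hat{x} \sim N(\theta, \sigma_e^2)\), then the adjusted (posterior mean) estimate is

\[ E[\theta \mid \hat{x}] = \mu_0 + \frac{\sigma_0^2}{\sigma_0^2 + \sigma_e^2}\,(\hat{x} - \mu_0), \]

so the raw estimate is pulled toward the prior mean in proportion to how noisy the estimation process is. On these assumptions, a back-of-the-envelope figure claiming an enormous advantage, but produced by a very rough process (large \(\sigma_e^2\)), gets shrunk most of the way back toward ordinary effectiveness.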
These considerations do not, however, obviously suggest that people are insufficiently skeptical of explicitly calculated estimates, nor do they obviously support avoiding existential risk charities, Back of the Envelope Guide to Philanthropy calculations, calculations that recommend large or anti-common-sense actions, or actions that help strangers.
First, to reject another’s calculation on the grounds that it insufficiently discounts due to errors and priors, one needs some evidence of such actual neglect. Unless we know that this consideration is only rarely included, or that if included it would typically be remarked upon, the mere fact that one does not see people explicitly discuss this consideration seems insufficient evidence for its being neglected.
More important, to reject a calculation of utilitarian charity effectiveness merely because it implies “anti-common-sense” actions, including large actions or those that help strangers, seems to give far too much weight to intuition, including intuitions that we shouldn’t do much or help strangers. Since few humans actually try to maximize utilitarian effectiveness in their charity choices, common human intuitions about good charity choices seem unlikely to be very informative about utilitarian charity effectiveness. So once one has estimated the likely distribution of policy effectiveness, and the degree of error in some analysis process, the additional fact that a calculation recommends weird-seeming actions should say little more about its utilitarian policy effectiveness.
It seems quite plausible that actual utilitarian-maximizing policies would be weird, i.e., would differ in many distinctive ways from common-sense charitable actions. And it seems quite plausible that two such differences would be that maximizing policies would involve large actions, while common sense prefers small ones, and that maximizing policies might help strangers, while common sense prefers to help neighbors. In this context, your urge to put a lot of weight on common sense probably mainly reveals that you don’t actually want to maximize utilitarian policy effectiveness. That is, you are human, which shouldn’t be much of a surprise.
Holden Karnofsky prefers to rely on his intuitions about which charities are effective in utilitarian terms, and has identified some adjustable fudge factors, i.e., estimates of analysis error and of plausible effectiveness, that he uses to justify not endorsing counter-intuitive charities. There is a mismatch, however, between the ways he wants his recommendations to vary with context and the kinds of variation that these fudge factors can reasonably justify. These fudge factors are not up to this task.
Even if Karnofsky accepts my critique, however, he’ll probably quickly identify some other fudge factors to let him continue to avoid endorsing counter-intuitive charities. After all, he says:
I present what I believe is the right formal framework for my objections to EEV [= explicit expected-value]. However, I have more confidence in my intuitions … than in the framework itself. … If the remainder of this post turned out to be flawed, I would likely remain in objection to EEV.
With new fudge factors, he’d continue to claim that he wants to maximize the utilitarian effectiveness of charities. But really, what are the chances of that?
Hi Robin,
I do believe that Bayesian adjustments are not included in most expected-value estimates of the kind I discuss. More at my comment on Less Wrong.
My understanding from our Google+ exchange is that we agree that the Bayesian adjustment described would have the property of requiring stronger evidence for more counterintuitive claims (all else equal), and that no other "anti-weird-claims" adjustment is needed or warranted.
I sympathize with your uneasiness regarding fudge factors. In my post, I state:
Of course there is a problem here: going with one’s gut can be an excuse for going with what one wants to believe, and a lot of what enters into my gut belief could be irrelevant to proper Bayesian analysis. There is an appeal to formulas, which is that they seem to be susceptible to outsiders’ checking them for fairness and consistency. But when the formulas are too rough, I think the loss of accuracy outweighs the gains to transparency. Rather than using a formula that is checkable but omits a huge amount of information, I’d prefer to state my intuition - without pretense that it is anything but an intuition - and hope that the ensuing discussion provides the needed check on my intuitions.
OK. I see you do like a lot of people do. You tacitly decide whether cats or birds are more important. Then you attempt to quantify the question. This is because normative or simply whimsical solutions are not supposed to be adequate.
Here is how I solved the problem. I fed the cat so that she didn't have to hunt birds, but now there are more rats. My back yard is like the welfare state.