Knowing your argumentative limitations, OR “one [rationalist’s] modus ponens is another’s modus tollens.”
Followup to: Who Told You Moral Questions Would Be Easy? Response to: Circular Altruism
At the most basic level (which is all we need for present purposes), an argument is nothing but a chain of dependence between two or more propositions. We say something about the truth value of the set of propositions {P1…Pn}, and we assert that there’s something about {P1…Pn} such that if we’re right about the truth values of that set, we ought to believe something about the truth value of the set {Q1…Qn}.
If we have that understanding of what it means to make an argument, then we can see that an argument doesn’t necessarily have any connection to the universe outside itself. The utterance "1. all bleems are quathes, 2. the youiine is a bleem, 3. therefore, the youiine is a quathe" is a perfectly logically valid utterance, but it doesn’t refer to anything in the world — it doesn’t require us to change any beliefs. The meaning of any argument is conditional on our extra-argument beliefs about the world.
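To see the same point mechanically, here is a minimal Lean sketch (the predicate names are just the nonsense words from the example above, and the theorem name is mine): the proof checks without the symbols referring to anything at all, because validity is purely a matter of form.

```lean
-- A valid syllogism over uninterpreted predicates: the proof never
-- consults the world, only the shape of the premises.
theorem bleem_syllogism {α : Type} (bleem quathe : α → Prop) (youiine : α)
    (h1 : ∀ x, bleem x → quathe x)   -- 1. all bleems are quathes
    (h2 : bleem youiine) :           -- 2. the youiine is a bleem
    quathe youiine :=                -- 3. therefore, the youiine is a quathe
  h1 youiine h2
```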
One important use of this principle is reflected in the oft-quoted line "one man’s modus ponens is another man’s modus tollens." Modus ponens is a classical form of argument: 1. A→B. 2. A. 3. ∴ B. Modus tollens is this: 1. A→B. 2. ¬B. 3. ∴ ¬A. Both are perfectly valid forms of argument! (For those who aren’t familiar with the standard notation, "¬" indicates negation, "→" indicates the conditional "if…then," and "∴" means "therefore.") Unless you have some particular reason outside the argument to believe A or to disbelieve B, the claim A→B by itself doesn’t tell you whether to conclude that B is true or that A isn’t!
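Written out as checkable theorems (again just a Lean sketch restating the two forms, nothing specific to what follows), the symmetry is plain: both rules are valid, and which one you get to run depends entirely on which extra premise, A or ¬B, you actually have.

```lean
-- Modus ponens: from A → B and A, conclude B.
theorem modus_ponens {A B : Prop} (hab : A → B) (ha : A) : B :=
  hab ha

-- Modus tollens: from A → B and ¬B, conclude ¬A.
theorem modus_tollens {A B : Prop} (hab : A → B) (hnb : ¬B) : ¬A :=
  fun ha => hnb (hab ha)
```

There is no theorem that gets you from A→B alone to either B or ¬A; the conditional by itself leaves both open.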
Why am I elucidating all this basic logic, which almost everyone reading this blog doubtless knows? It’s a rhetorical tactic: I’m trying to make it salient, to bring it to the top of the cognitive stack, so that my next claim is more compelling.
And that claim is as follows:
Eliezer’s posts about the specks and the torture [1] [2], and the googolplex of people being tortured for a nanosecond, and so on, and so forth, tell you nothing about the truth of your intuitions.
Argument behind the fold…
At most, at most!, Eliezer’s arguments establish an inconsistency between two propositions. Proposition 1: "utilitarianism is true." Proposition 2: "your intuitions about putting dust specks in people’s eyes, sacred values, etc., to the extent they recommend inflicting a small harm on lots of people rather than a lot of harm on one person, even when the aggregate pain from the first is higher than the aggregate pain from the second, are true." As I’ve noted before, I don’t think Eliezer has even established that. (The short version: utilitarianism is a lot more complicated than that, it ain’t easy to figure out how to aggregate harms, it ain’t easy to map those harms onto hedonic states like pleasure and pain, etc.)
But let’s give Eliezer that one, arguendo. Suppose his argument has established the inconsistency. In symbols, where P = utilitarianism, and Q = your intuitions about dust specks etc., Eliezer has established ¬P∨¬Q. (Not P, or not Q.) It doesn’t establish ¬Q! Unless there’s more exogenous reason to believe P than there is to believe Q, Eliezer’s argument shouldn’t be any more likely to cause us to disbelieve P than to disbelieve Q. This is the step that should make your heart sing, now that I’ve primed you with the review of basic logic above.
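To make that step fully explicit, here is a hedged Lean sketch (P and Q are just stand-ins for "utilitarianism" and "the dust-speck intuitions"; the theorem names are mine): from ¬P∨¬Q you can discharge Q only by bringing P in from outside the argument, and you can discharge P only by bringing in Q.

```lean
-- Given the inconsistency ¬P ∨ ¬Q, the extra premise decides which
-- proposition gets rejected.
theorem reject_Q {P Q : Prop} (h : ¬P ∨ ¬Q) (hp : P) : ¬Q :=
  h.elim (fun hnp => absurd hp hnp) id

theorem reject_P {P Q : Prop} (h : ¬P ∨ ¬Q) (hq : Q) : ¬P :=
  h.elim id (fun hnq => absurd hq hnq)
```

Neither theorem has a counterpart that produces ¬Q (or ¬P) from the disjunction alone, which is exactly the point.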
Now let’s take the next step. Why should there be more exogenous reason to believe P than to believe Q? Why might one want to believe that utilitarianism is true?
This post is already far too long to go over the abstract reasons why one might accept utilitarianism. But let me make the claim, which you might find plausible, that many of those reasons come down to intuitions. Those intuitions might be about specific cases which lead to inductive generalizations about rules ("I think it’s better to kill one person than to kill five, and better to torture for a week than torture for a year, therefore, it must be best to maximize pleasure over pain!"), or intuitions directly about the rules ("well, obviously, it’s best to maximize pleasure over pain!"). Regardless, intuitions they be.
And now let’s subjectivize things a little further. I’ll bet that the vast majority of the people reading this post, people who hold utilitarian beliefs, came to those utilitarian beliefs largely as a result of articulating their moral intuitions, or reading an argument about normative ethics that spoke to their moral intuitions. Eliezer’s own case is a perfect example: he has expressed his utilitarian beliefs as being a direct consequence of his seemingly intuitive choices.
And now the final step. You get your moral intuitions about the dust specks case from wherever it is that your intuitions come from. You get your utilitarianism from wherever it is that your intuitions come from. They’re on equal footing — you have no more reason to believe your utilitarian intuitions than you have to believe your dust speck intuitions! Therefore, by the claims above, Eliezer’s argument shouldn’t cause you to reject your dust specks intuition.
A summary:
1. An argument establishing that two propositions are inconsistent doesn’t tell you which of those propositions you should reject, unless you have more reason outside the argument to accept one or the other.
2. For any two propositions P and Q, if you accept P for only the same reasons you accept Q, you don’t have more reason to accept P than Q.
3. Your reasons to believe dust specks are better than torture are identical to your reasons to believe utilitarianism is true.
4. Therefore, an argument (Eliezer’s) establishing that dust specks>torture is inconsistent with utilitarianism doesn’t give you any reason to reject dust specks>torture.
Q.E.D.
(A couple objections to this argument: 1) "But what if my intuitions about utilitarianism come from many, many cases, and I only have renegade non-utilitarian intuitions about a few cases — doesn’t that mean I should believe my utilitarian intuitions more strongly?" Answer: Sure, if and only if you think that the strength of intuitions can be summed that way, and it’s not obvious that’s true. Also, I can come up with many more cases than just the dust specks where your intuitions likely deliver non-utilitarian verdicts. 2) I was recently handed a paper where an undergrad argued that the intuitions of utilitarians tend to [always, even] match the results of utilitarian calculations [should she read this post, I invite her to defend that claim in the comments]. If true, that would cause problems… but does anyone actually believe it?)
This all connects back quite strongly to the point of this blog. Taking an argument whose conclusion has the form ¬P∨¬Q and inferring, on that basis alone, ¬Q is an error in reasoning, and it’s one that strongly resembles a form of overconfidence, or perhaps expecting short inferential distances.
That’s where the real fierceness lies. There’s the naked sword. There’s the solar plasma: in recognizing the limitations of your arguments, the point where the road — or "The Way" — stops.
Unknown: Quite independently of your point, it seems to me you have a very peculiar notion of "large".
regards, frank
N does not need to be particularly large, because the number of possible brain states a human being can have is not particularly large.
In any case, if 3^^^3 is too small, we can always choose Busy Beaver (3^^^3) instead, compared with which 3^^^3 is very, very, very close to zero.
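(For readers unfamiliar with the notation: 3^^^3 is Knuth up-arrow notation, so even the two-arrow version is already enormous. A quick Lean check of just the two-arrow case, with the rest noted in comments:)

```lean
-- 3^^3 = 3^(3^3) = 3^27
#eval 3 ^ (3 ^ 3)   -- 7625597484987
-- 3^^^3 = 3^^(3^^3): a power tower of threes roughly 7.6 trillion levels
-- high. The Busy Beaver function eventually outgrows every computable
-- function, which is the sense in which BusyBeaver(3^^^3) dwarfs 3^^^3.
```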