A frequent topic on this blog is the likely trade-off between a higher population and a higher quality of life at some point in the future. Some people – often total utilitarians – are willing to accept a lower quality of life for our descendants if that means there can be more of them. Others – often average utilitarians – will accept a smaller population if it is required to improve quality of life for those who are left.
Both of these positions lead to unintuitive conclusions if taken to the extreme. On the one hand, total utilitarians must accept the ‘repugnant conclusion’: that a very large number of individuals experiencing lives barely worth living could be much better than a small number of people experiencing joyous lives. On the other hand, average utilitarians confront the ‘mere addition paradox’: adding another joyous person to the world would be undesirable so long as their life was a little less joyous than the average of those who already existed.
Derek Parfit, who pioneered these ethical dilemmas and authored the classic Reasons and Persons, strove to,
“develop a theory of beneficence – theory X he calls it – which is able to solve the Non-identity problem [1], which does not lead to the Repugnant Conclusion and which thus manages to block the Mere Addition Paradox, without facing other morally unacceptable conclusions. However, Parfit’s own conclusion was that he had not succeeded in developing such a theory.”
Such a ‘theory X’ would certainly be desirable. I am not keen to bite the bullet of either the ‘repugnant conclusion’ or the ‘mere addition paradox’ if neither is required. Unfortunately, if, like me, you were hoping that such a theory might be forthcoming, you can now give up waiting. I was recently surprised to learn that ‘What should we do about future generations? Impossibility of Parfit’s Theory X’ by Yew-Kwang Ng (1989) demonstrated many years ago that theory X cannot exist.
To complete the proof, Yew-Kwang has to add a very reasonable principle of his own:
Non-Antiegalitarianism: If alternative B has the same set of individuals as in alternative A, with all individuals in B enjoying the same level of utility as each other, and with a higher total utility than A, then, other things being equal, alternative B must be regarded as better than alternative A.
Given that both average and total utility increase and inequality is reduced or unchanged, this principle can hardly be disputed. If we avoid the Mere Addition Paradox (or, in Ng’s phrasing, accept the Mere Addition Principle) and then apply Non-Antiegalitarianism, the Repugnant Conclusion follows inevitably:
Consider the following alternatives:
A: 1 billion individuals with an average utility of 1 billion utils.
A + : The same 1 billion individuals with exactly the same utility levels plus 1 billion trillion individuals each with 1 util (i.e., barely worth living).
E: The same individuals as in A+ with a somewhat higher total utility but equally shared by all (i.e., each with, say, 1.01 utils).

Clearly, the Mere Addition Principle implies that A+ is better than or at least no worse than A, and Non-Antiegalitarianism implies that E is better than A+. So E is better than or at least not worse than A. (Cf. Parfit, 1984, pp. 431-32.) Since a life with 1.01 utils (this positive figure can be made as small as we like by a suitable change in numbers in the earlier example) is still barely worth living, the necessity to say that E is better than or at least not worse than A must still be regarded as an instance of the Repugnant Conclusion.
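Ng’s arithmetic here can be verified directly. A minimal sketch using his util figures (the variable names are my own):

```python
# Ng's three alternatives, totals in utils (figures from the text).
A_total      = 1e9 * 1e9              # 1 billion people at 1 billion utils each
A_plus_total = A_total + 1e21 * 1     # plus 1 billion trillion people at 1 util each
E_total      = (1e9 + 1e21) * 1.01    # same people as A+, all at 1.01 utils

# Mere Addition Principle: A+ is at least no worse than A.
# Non-Antiegalitarianism: E beats A+ (same people, equal shares, higher total).
assert E_total > A_plus_total > A_total
```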
We must therefore either reject Non-antiegalitarianism, or bite the bullet of the Mere Addition Paradox or Repugnant Conclusion. Non-antiegalitarianism seems impregnable. Biting the bullet on the Mere Addition Paradox would imply that 1 person with a utility of 1 could be more desirable than 1 million people with an average utility of 0.99, even if all of them were living highly worthwhile lives. That is also simply ridiculous in my view. The Repugnant Conclusion suggests that a large number of people with lives just worth living can be better than a smaller number with very good lives. But the values and quantities are hard to grasp. While it is unpleasant to imagine myself living in a world full of people living only barely worthwhile lives, is that in itself a good reason to reject it? Ng argues not:
“…why do most people find [the conclusion] repugnant? This, I believe, could be due either to an inability to understand the implication of large numbers or to misplaced partiality. Consider the following alternative worlds:
A: 1 single utility monster with 100 billion utils. [for a total of 100 billion utils]
B: 1 billion individuals each with 200 utils. [for a total of 200 billion utils]
C: 1 billion billion individuals each with 0.001 utils. [for a total of 10^15 utils]

Intuitively, most people prefer B to C and also prefer B to A. This is so because B looks similar to our present world and we are not prepared to sacrifice a decrease in average utility from 200 to 0.001 even if the increase in population size more than compensates (in terms of total utility) for this reduction. Also, we are not prepared to sacrifice numbers from 1 billion to 1, even if the gain in average utility overbalances this. But this is taking a partial view from our standpoint. From an impartial viewpoint or from the viewpoint of comparing two hypothetical, mutually exclusive alternatives, if B is better than A, then C is much better than B.”
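The bracketed totals are easy to confirm. A quick sketch with Ng’s numbers (the variable names are mine):

```python
# Total utility of Ng's three worlds, in utils.
A = 1 * 100e9        # one utility monster at 100 billion utils  -> 1e11
B = 1e9 * 200        # a billion people at 200 utils each        -> 2e11
C = 1e18 * 0.001     # a billion billion people at 0.001 utils   -> 1e15
assert C > B > A     # by total utility, C dominates B, which dominates A
```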
A is threatening, as I am not a part of it, and C is threatening because so long as I can only be one person I will get a lot less utility from my existence. I am perfectly able to see why B is better than A, because except for the 100 billion utils, the comparison involves figures I am comfortable with. The fact that there are more people living good lives, rather than one person living a great life, doesn’t raise any alarms. But if I accept that, why not also accept the move from B to C? I don’t fully comprehend what a billion billion people or 0.001 utils are really like, but by extension it seems desirable. I imagine if I were already a part of C, moving from B to C would seem just fine.
Aliens visiting Earth might well see our lives as barely worth living, at least relative to theirs. In light of that, should we necessarily prefer to replace all of humanity with a single individual living a better life than anyone has so far? I think not.
Of course I would not want to personally move from B to C because I would be worse off. But that selfish desire is not a reason against acting in a way that improves the total welfare of others I don’t personally know.
If I have to accept something, I accept the repugnant conclusion and aim to maximise total welfare. A lot of little bits of good can indeed add up to a lot of good, even if it’s hard to picture!
[1] The non-identity problem is that most important choices affecting the future don’t just affect the quality of life of people in the future, but also ‘who’ exists, by changing the precise circumstances of people’s conception. If, to avoid having to worry about impacts on who exists, you decided to concern yourself only with how your choices affected the welfare of people who would live in all the future scenarios you were contemplating, then in many cases you would not care about the future at all, because there would be no identical people featuring in all of those scenarios.
Update: Yew-Kwang emails to add, “You understate the case against average utilitarianism … one could go further than this well-known mere addition paradox. In an original population of 100 million with AU = 100, the addition of another 100m with AU = 80 and with the pre-existing people AU increases to 110, this change that makes all existing and new individuals happier is still opposed by average utilitarianism, since the AU decreases from 100 to 95. Thus, average utilitarianism is much more unacceptable, repugnant than you thought, and than the repugnant conclusion.”
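The arithmetic in Ng’s example checks out. A small sketch with his figures (variable names are mine):

```python
# Ng's email example: everyone is made happier, yet average utility falls.
existing, added = 100e6, 100e6        # two populations of 100 million each
au_before = 100                       # existing people's average utility
au_after = (existing * 110 + added * 80) / (existing + added)
assert au_after == 95.0               # average drops from 100 to 95
```

So average utilitarianism condemns a change that leaves every existing person better off and every new person with a life worth living.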