To a first approximation, the future will either be a singleton, a single integrated power choosing the future of everything, or it will be competitive, with conflicting powers each choosing how to perpetuate themselves. Selection effects apply robustly to competition scenarios: some perpetuation strategies will tend to dominate the future. To help us choose between a singleton and competition, and among competitive variations, we can use selection effects to analyze competitive scenarios. In particular, selection effects can tell us the key feature without which forecasting is very hard: what creatures want.
This seems to me a promising place for mathy folks to contribute to our understanding of the future. Current formal modeling techniques are actually up to this task, and theorists have already learned lots about evolved preferences:
Discount Rates: Sexually reproducing creatures discount reproduction-useful resources given to their half-relations (e.g., kids, siblings) at a rate of one half relative to themselves. Since within a generation they get too old to reproduce, after which only half-relations are available to help, they discount time at a rate of one half per generation. Asexual creatures do not discount this way, though both types also discount for overall population growth rates. This suggests a substantial advantage for asexual creatures when discounting is important.
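One simple way to write both discounts together (my own framing of the claim above; $g$ is the number of generations ahead and $n$ the per-generation population growth factor, both assumptions of this sketch):

\[
D_{\text{sexual}}(g) \;=\; \Big(\tfrac{1}{2}\Big)^{g} n^{-g},
\qquad
D_{\text{asexual}}(g) \;=\; n^{-g},
\]

so the sexual lineage's discount factor lags the asexual one by a factor of $2^{g}$ after $g$ generations; that compounding gap is the suggested asexual advantage.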
Local Risk: Creatures should care about their lineage success, i.e., the total number of their gene’s descendants, weighted perhaps by their quality and relatedness, but shouldn’t otherwise care which creatures sharing their genes now produce those descendants. So they are quite tolerant of risks that are uncorrelated, or negatively correlated, within their lineage. But they can care a lot more about risks that are correlated across such siblings. So they can be terrified of global catastrophe, mildly concerned about car accidents, and completely indifferent to within-lineage tournaments.
Global Risk: The total number of descendants within a lineage, and the resources it controls to promote future reproduction, vary across time. How risk averse should creatures be about short-term fluctuations in such totals? If long-term future success is directly linear in current success, so that having twice as much now gives twice as much in the distant future, all else equal, you might think creatures would be completely risk-neutral about their success now. Not so. It turns out that selection effects robustly prefer creatures with logarithmic preferences over success now. On global risks, they are quite risk-averse.
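To see the flavor of this result, here is a small simulation sketch (my own illustration, with made-up growth numbers, not from the literature): a "safe" lineage grows x1.5 each period, while a "risky" lineage grows x0.5 or x4.0 with equal odds, fully correlated within the lineage. The risky lineage has the higher expected size per period (2.25 vs. 1.5), but the safe lineage has the higher expected log growth (log 1.5 vs. 0.5 log 2), and it ends up larger in the large majority of runs:

```python
import math
import random

random.seed(0)
PERIODS, TRIALS = 1000, 2000

def one_run():
    """Return log population sizes after PERIODS of fully correlated risk."""
    log_safe = PERIODS * math.log(1.5)      # deterministic x1.5 each period
    log_risky = sum(                        # one shared draw per period
        math.log(4.0 if random.random() < 0.5 else 0.5)
        for _ in range(PERIODS)
    )
    return log_safe, log_risky

wins = sum(s > r for s, r in (one_run() for _ in range(TRIALS)))
print(f"safe (log-maximizing) lineage larger in {wins / TRIALS:.0%} of runs")
```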
Carl Shulman disagrees, claiming risk-neutrality:
For such entities utility will be close to linear with the fraction of the accessible resources in our region that are dedicated to their lineages. A lineage … destroying all other life in the Solar System before colonization probes could escape … would gain nearly the maximum physically realistic utility … A 1% chance of such victory would be 1% as desirable, but equal in desirability to an even, transaction-cost free division of the accessible resources with 99 other lineages.
When I pointed Carl to the literature, he replied:
The main proof about maximizing log growth factor in individual periods … involves noting that, if a lineage takes gambles involving a particular finite risk of extinction in exchange for an increased growth factor in that generation, the probability of extinction will go to 1 over infinitely many trials. … But I have been discussing a finite case, and with a finite maximum of possible reproductive success attainable within our Hubble Bubble, expected value will generally not climb to astronomical heights as the probability of extinction approaches 1. So I stand by the claim that a utility function with utility linear in reproductive success over a world history will tend to win out from evolutionary competition.
Imagine creatures that cared only about their lineage’s fraction of the Hubble volume in a trillion years. If total success over this time is the product of success factors for many short time intervals, then induced preferences over each factor quickly approach log as the number of factors gets large. This happens for a wide range of risk attitudes toward final success, as long as the factors are not perfectly correlated. [Technically, if $U(\prod_{t=1}^{N} r_t) = \sum_{t=1}^{N} u(r_t)$, most $U(x)$ give $u(x)$ near $\log(x)$ for $N$ large.]
A battle for the solar system is only one of many events in which a lineage could go extinct over the next trillion years; why should evolved creatures treat it differently? Even if you somehow knew it was in fact the last extinction possibility forevermore, how could evolutionary selection have favored a different attitude toward such an event? There cannot have been a history of previous last-extinction-events to select against creatures with preferences poorly adapted to them. Selection prefers log preferences over a wide range of timescales, up to some point where selection goes quiet. For an intelligence (artificial or otherwise) inferring very long-term preferences by abstracting from its shorter-term preferences, the obvious option is log preferences over all possible timescales.
Added: To explain my formula $U(\prod_{t=1}^{N} r_t) = \sum_{t=1}^{N} u(r_t)$:
$U(x)$ is your final preferences over resources/copies $x$ at the “end,”
$r_t$ is the ratio by which your resources/copies increase in each timestep,
$u(r_t)$ is your preferences over the next timestep.
The right-hand side is expressed in a linear form so that, if probabilities and choices are independent across timesteps, then to maximize $U$ you’d just pick each $r_t$ to maximize the expected value of $u(r_t)$. For a wide range of $U(x)$, $u(x)$ goes to $\log(x)$ for $N$ large.
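One heuristic way to see where the log comes from (a sketch under the added assumption that the $r_t$ are i.i.d. draws from the chosen gamble; a typical-case argument, not a proof):

\[
\prod_{t=1}^{N} r_t \;=\; \exp\Big(\sum_{t=1}^{N} \log r_t\Big) \;\approx\; \exp\big(N\,\mathbb{E}[\log r_t]\big) \quad \text{for large } N,
\]

by the law of large numbers. So for any increasing $U$, maximizing $U$ of the product reduces, as $N$ grows, to maximizing $\mathbb{E}[\log r_t]$ each period, i.e., $u(x) = \log(x)$. The catch is that an expectation of $U$ can be dominated by rare tail histories that this typical-case approximation ignores, which is just the issue the exchange below turns on.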
Thanks, Robin. I think I now understand your point. But putting it aside for a moment (I'll come back to it), it looks to me like the mathematical reasoning in your post just isn't right, and that your conclusions don't follow from your assumptions. Let's consider a specific numerical example, with an asexual species and U(x)=x. Say the number of periods is 3000, and in each period the choices are R (risky) and C (conservative). If a creature chooses C, he is survived by 100 offspring. If a creature chooses R, he has a 50/50 chance of either 1 offspring or 1000 offspring. The risks are fully correlated within a period, so everyone who chooses R has the same number of offspring. Probabilities are independent across periods. This example satisfies your assumptions, right?
If u(x)=log(x), then creatures should choose C every period, since (taking logs base 10) log(100)=2 > log(1)/2+log(1000)/2=1.5. But choosing R every period maximizes the expected population at the end of 3000 periods. To see this, note that the expected population from always choosing R is at least .5^3000 * 1000^3000 = 500^3000: the probability that r_t=1000 in all 3000 periods, times the total population in that case. Choosing C leads to a population of 100^3000 with probability 1, which is less than the expected population from choosing R. It seems clear that u(x) does not go to log(x) if U(x)=x.
Robin, can you check if my analysis is correct?
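For concreteness, here is a quick sketch checking these magnitudes in log space, to avoid overflow (it uses only the numbers from the example above):

```python
import math

PERIODS = 3000

# C: 100 offspring per creature per period, deterministic.
log_pop_C = PERIODS * math.log(100)

# R: 1 or 1000 offspring, 50/50, fully correlated within a period.
# Expected growth per period is 0.5*1 + 0.5*1000 = 500.5, and
# independence across periods gives E[population] = 500.5^PERIODS.
log_E_pop_R = PERIODS * math.log(500.5)

# Typical outcome of R: about half the periods come up 1000, half come up 1.
log_median_pop_R = (PERIODS // 2) * math.log(1000)

print(log_E_pop_R > log_pop_C)        # True: R wins in expectation
print(log_median_pop_R < log_pop_C)   # True: R usually loses outright
```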
But either way, Sinn's math still stands, so let's go back to the question of whether modeling only fully correlated risks makes sense. First, we can check that Sinn's conclusions do apply in the example above: choosing R leads to a greater expected population, but with high probability the actual population will be less than from choosing C. So it seems that evolution selects for u(x)=log(x) if we define "select for" as Sinn's "evolutionary dominance" (ignoring MWI considerations for the moment). But what if the environment also has uncorrelated risks? Suppose the odd periods stay the same, but in even periods the risks of choosing R are completely uncorrelated. Then evolution should select for creatures with the time-dependent utility u(x,t) = { x if t is even; log(x) if t is odd }.
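A sketch of that selection pressure (my own construction; it assumes that in an uncorrelated period a large lineage realizes almost exactly the expected growth factor E[r] = 500.5, by the law of large numbers):

```python
import math
import random

random.seed(1)
PERIODS, TRIALS = 1000, 1000   # even periods uncorrelated, odd correlated

def log_growth(strategy):
    """Log of final lineage size; growth numbers from the example above."""
    total = 0.0
    for t in range(PERIODS):
        if strategy(t) == "C":
            total += math.log(100)          # C: x100, deterministic
        elif t % 2 == 0:
            # Uncorrelated period: a large lineage realizes ~E[r] = 500.5.
            total += math.log(500.5)
        else:
            # Correlated period: one shared 50/50 draw for the whole lineage.
            total += math.log(1000 if random.random() < 0.5 else 1)
    return total

strategies = {
    "always C":      lambda t: "C",
    "always R":      lambda t: "R",
    "R even, C odd": lambda t: "R" if t % 2 == 0 else "C",
}
for name, strat in strategies.items():
    runs = sorted(log_growth(strat) for _ in range(TRIALS))
    print(f"{name}: median log-population {runs[TRIALS // 2]:,.0f}")
# Typical result: the mixed strategy, risk-neutral where risks are
# uncorrelated and log-averse where they are correlated, has the
# largest median lineage, matching the time-dependent u(x,t) above.
```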
In real life, correlations of risks do not change so predictably with time, so under Sinn's formalism, evolution should select for creatures with dynamic utility functions that change depending on the creature's estimate of the degree of correlation of the risk in the decision he faces. But that abuses the concept of utility function beyond recognition. Consider the analogy with the theory of investments, where there aren't utility functions over the outcomes of individual investments (changing depending on their risk characteristics). Instead one has a utility function over one's income stream, and risk aversion or neutrality on individual investments emerges from selecting strategies to maximize expected utility under that fixed utility function.
So, I think it makes more sense to say that evolution selects for behavioral strategies rather than utility functions. These strategies tended to maximize expected descendants when risks were uncorrelated, and expected log descendants when risks were correlated. That fits better anyway with the idea that we are adaptation executors, not utility maximizers, and it perhaps explains why we don't seem to have direct preferences over the number of our descendants.
Wei, one can decompose arbitrary risks into correlated and uncorrelated components, and preferences can treat those components differently. Since it seems clear how preferences treat the uncorrelated part, the issue is how they treat the correlated part. For the purpose of studying that question, it is, as I said, natural and appropriate to study a model of fully correlated risks.