Future Influence Is Hard
Imagine that one thousand years ago you had a rough idea of the most likely overall future trajectory of civilization. For example, that an industrial revolution was likely in the next few millennia. Even with that unusual knowledge, you would find it quite hard to take concrete actions back then to substantially change the course of future civilization. You might be able to mildly improve the chances for your family, or perhaps your nation. And even then most of your levers of influence would focus on improving events in the next few years or decades, not millennia in the future.
One thousand years ago wasn’t unusual in this regard. At most any place-time in history it would have been quite hard to substantially influence the future of civilization, and most of your influence levers would focus on events in the next few decades.
Today, political activists often try to motivate voters by claiming that the current election is the most important one in a generation. They say this far more often than once per generation. But they’ve got nothing on futurists, who often say individuals today can have substantial influence over the entire future of the universe. From a recent Singularity Weblog podcast where Socrates interviews Max Tegmark:
Tegmark: I don’t think there’s anything inevitable about the human future. We are in a very unstable situation where it’s quite clear that it could go in several different directions. The greatest risk of all we face with AI and the future of technology is complacency, which comes from people saying things are inevitable. What’s the one greatest technique of psychological warfare? It’s to convince people “it’s inevitable; you’re screwed.” … I want to do exactly the opposite with my book, I want to make people feel empowered, and realize that this is a unique moment after 13.8 billion years of history, when we, people who are alive on this planet now, can actually make a spectacular difference for the future of life, not just on this planet, but throughout much of the cosmos. And not just for the next election cycle, but for billions of years. And the greatest risk is that people start believing that something is inevitable, and just don’t put in their best effort. There’s no better way to fail than to convince yourself that it doesn’t matter what you do.
Socrates: I actually also had a debate with Robin Hanson on my show, because in his book The Age of Em he started by saying basically this is how it’s going to be, more or less. And I told him, I told him I totally disagree with you, because it could be a lot worse or it could be a lot better. And it all depends on what we are going to do right now. But you are kind of saying this is how things are going to be. And he’s like, yeah, because you extrapolate. …
Tegmark: That’s another great example. I mean, Robin Hanson is a very creative guy and it’s a very thought-provoking book; I even wrote a blurb for it. But we can’t just say that’s how it’s going to be, because he even says himself that the Age of Em will only last for two years from the outside perspective. And our universe is going to be around for billions of years more. So surely we should put effort into making sure the rest becomes as great as possible too, shouldn’t we?
Socrates: Yes, agreed. (44:25-47:10)
Either individuals have always been able to have a big influence on the future universe, contrary to my claims above, or today is quite unusual, in which case we need concrete arguments for why today is so different.
Yes, it is possible to underestimate our influence, but surely it is also possible to overestimate it. I see no nefarious psychological warfare agency working to induce underestimation; instead I see great overestimation, driven by value signaling.
Most people don’t think much about the long term future, but when they do, far more of them see the future as hard to foresee than hard to influence. Most groups who discuss the long term future focus on which kinds of overall outcomes would best achieve their personal values; they pay far less attention to how concretely one might induce such outcomes. This lets people use future talk as a way to affirm their values, but it encourages overestimates of influence.
My predictions in Age of Em are conditional on the key assumption that ems will be the first machines able to replace most all human labor. I don’t say influence is impossible, but instead say individual influence is most likely quite minor, and so we should focus on choosing small variations on the most likely scenarios we can identify.
We are also quite unlikely to have long term influence that isn’t mediated by intervening events. If you can’t think of a way to influence an Age of Em, should it happen, you are even less likely to influence the ages that would follow it.