Many people argue that we should beware of foreigners and of people from other ethnicities. Beware of visiting them, trading with them, talking to them, or allowing them to move here. The fact that so many people are willing to argue for such conclusions is some evidence in their favor. But the fact that the arguments offered are so diverse, and so often contradict one another, takes away somewhat from the strength of this evidence. The pattern suggests that people tend to start with a preconceived conclusion and then opportunistically embrace whatever arguments they can find for it.
Similarly, many argue that we should be wary of future competition, especially if it might lead to concentrations of power. I recently posted on my undergrad law & econ students’ largely incoherent fears of one group taking over the entire solar system, and on how Frederick Engels expressed related fears back in 1844. And I’ve argued on this blog with my ex-co-blogger regarding his concerns that if future AI results from competing teams, one team might explode to suddenly take over the world. In this post I’ll describe Ted “Unabomber” Kaczynski’s rather different theory on why we should fear competition leading to concentration, from his recent book Anti-Tech Revolution.
Kaczynski claims that the Fermi paradox, i.e., the fact that the universe looks dead everywhere, is explained by the fact that technological civilizations very reliably destroy themselves. When this destruction happens on its own, he says, it is so thorough that no humans could survive it. Which is why his top priority is to find a way to collapse civilization sooner, so that at least some humans survive. Even a huge nuclear war is preferable, as at least some people would survive that.
Why must everything collapse? Because, he says, natural-selection-like competition only works when competing entities move and talk on scales much smaller than the scale of the entire system within which they compete. That is, things can work fine when bacteria that each move and talk across only meters compete across an entire planet; the failure of one bacterium doesn’t then threaten the planet. But when competing systems become complex and coupled on global scales, there are always only a few such systems that matter, and breakdowns often have global scope.
Kaczynski dismisses the possibility that world-spanning competitors might anticipate large correlated disasters, and work to reduce their frequency and mitigate their harms. He says that competitors can’t afford to pay any cost to prepare for infrequent problems, as such costs hurt them in the short run. This seems crazy to me, as most of the large competing systems we know of do in fact pay a lot to prepare for rare disasters. Very few correlated disasters are big enough to threaten to completely destroy the whole world. The world has had global-scale correlation for centuries, and the world economy has grown enormously over that time. Yet we’ve never seen even a factor-of-two decline, while at least thirty factors of two would be required for a total collapse. And while it should be easy to test Kaczynski’s claim in small complex systems of competitors, I know of no supporting tests.
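For a sense of scale, here is the arithmetic behind that “thirty factors of two” figure, spelled out as a rough illustration:

\[
2^{30} = 1{,}073{,}741{,}824 \approx 10^{9}
\]

That is, a total collapse in this sense would mean the world economy shrinking by roughly a factor of a billion, while history records not even a single halving.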
Yet all of the dozen reviews I read of Kaczynski’s book found his conclusion here to be obviously correct. That seems to me evidence that a great many people find the worry about future competitors so compelling that they will endorse almost any vaguely plausible supporting argument. Which I see as weak evidence against that worry.
Yes, of course correlated disasters are a concern, even when efforts are made to prepare against them. But it’s just not remotely obvious that competition makes them worse, or that all civilizations are reliably and completely destroyed by big disasters, so much so that we should prefer to start a big nuclear war now that destroys civilization but leaves a few people alive. Surely if we believed his theory, a better solution would be to break the world into a dozen mostly isolated regions.
Kaczynski does deserve credit for avoiding common wishful thinking in some of his other discussion. For example, he says that we can’t much control the trajectory of history, both because it is very hard to coordinate on the largest scales, and because it is hard to estimate the long-term consequences of many choices. He sees how hard it is for social movements to actually achieve anything substantial. He notes that futurists who expect to achieve immortality and then live for a thousand years too easily presume that a fast-changing competitive world will still have need for them. And while I didn’t see him actually say it, I expect he’s the sort of person who’d make the reasonable argument that individual humans are just happier in a more forager-like world.
Kaczynski isn’t stupid, and he’s more clear-headed than most futurists I read. Too bad his low mood inclines him so strongly toward a poorly argued story of inevitable collapse.
Some book quotes on his key claim:
In any environment that is sufficiently rich, self-propagating systems will arise, and natural selection will lead to the evolution of self-propagating systems having increasingly complex, subtle, and sophisticated means of surviving and propagating themselves. … In the short term, natural selection favors self-propagating systems that pursue their own short-term advantage with little or no regard for long-term consequences. …
Self-propagating subsystems of a given supersystem tend to become dependent on the supersystem and on the specific conditions that prevail within the supersystem. … In the event of the destruction of the supersystem or of any drastic acceleration of changes in the conditions prevailing within the supersystem, the subsystems can neither survive nor propagate themselves. … But as long as the supersystem exists and remains more or less stable, natural selection … disfavors those subsystems that “waste” some of their resources in preparing themselves to survive the eventual destabilization of the supersystem. … Natural selection tends to produce some self-propagating human groups that operate over regions approaching the maximum size allowed by the available means of transportation and communication. … [Today,] natural selection tends to create a world in which power is mostly concentrated in the possession of a relatively small number of global self-propagating systems. … If small-scale self-prop systems organize themselves into a coalition having worldwide influence, then the coalition will itself be a global self-prop system. … Intuition tells us that desperate competition among the global self-prop systems will tear the world-system apart. …
Earth’s self-prop systems will have become dependent for their survival on the fact that conditions have remained within these limits. Large-scale self-prop human groups, as well as any purely machine-based self-prop systems, will be dependent also on conditions of more recent origin relating to the way the world-system is organized; for example, conditions relating to economic relationships. The rapidity with which these conditions change must remain within certain limits, else the self-prop systems will not survive. … If conditions ever vary wildly enough outside the limits, then, with near certainty, all of the world’s more complex self-prop systems will die without progeny. … With several self-prop systems of global reach, armed with the colossal might of modern technology and competing for immediate power while exercising no restraint from concern for long-term consequences, it is extremely difficult to imagine that conditions on this planet will not be pushed far outside all earlier limits and batted around so erratically that for any of the Earth’s more complex self-prop systems, including complex biological organisms, the chances of survival will approach zero. …
There is another way of seeing that this situation will lead to radical disruption of the world-system. Students of industrial accidents know that a system is most likely to suffer a catastrophic breakdown when (i) the system is highly complex (meaning that small disruptions can produce unpredictable consequences), and (ii) tightly coupled (meaning that a breakdown in one part of the system spreads quickly to other parts). The world-system has been highly complex for a long time. What is new is that the world-system is now tightly coupled. This is a result of the availability of rapid, worldwide transportation and communication, which makes it possible for a breakdown in any one part of the world-system to spread to all other parts. As technology progresses and globalization grows more pervasive, the world-system becomes ever more complex and more tightly coupled, so that a catastrophic breakdown has to be expected sooner or later. …
There is nothing implausible about the foregoing explanation of the Fermi Paradox if there is a process common to all technologically advanced civilizations that consistently leads them to self-destruction. Here we’ve been arguing that there is such a process.
tl;dr: If you're discussing human extinction, TK's written opinions aren't relevant; he's only concerned with civilization collapse for the purpose of preserving humanity.

I'm re-reading what you've written and I believe there may be a straw-man element to your argument.
I don't think TK has ever argued that humanity will face an extinction event - or, to be precise, that we are capable of avoiding massive-scale events such as an unanticipated large-object impact with the Earth. What TK discussed were the dangers inherent in civilization that could mold humanity into something we do not recognize as human. To that end, I would venture that his goal is a reduction of the population on the order of 4 to 6 orders of magnitude, which he seems to think would collapse civilization as we know it while preserving humanity as we know it.
Without the context "TK wants to wreck civilization to save humanity" included in any interpretation of his writing, discussions can blindly lead to strange conclusions and extrapolations regarding his opinions. For instance, in quoting his contemplation of the Fermi Paradox, I don't believe TK concludes that all life in the galaxy has died due to any system-wide collapse we can yet contemplate, but rather due to some as-yet unforeseen consequence of the evolution of life in concert with the kinds of civilization systems he opposed.
His thesis is not that tech kills man, per se, but that tech makes man something different, which is then more fragile and subject to extinction - but in the definitional sense of humanity-as-we-know-it, man is long gone by the time the final extinction event occurs.
I am talking about human extinction. An economic collapse would be bad for those who suffered it, but humanity would continue and revive soon on a cosmological timescale.