In general, adaptive systems vary along an axis from general to specific. A more general system works better (either directly or after further adaptation) in a wider range of environments, and also with a wider range of other adapting systems. It does this in part via having more useful modularity and abstraction. In contrast, a more specific system adapts to a narrower range of specific environments and other subsystems.
Systems that we humans consciously design tend to be more general, i.e., less context dependent, relative to the “organic” systems that they often replace. For example, compare grid-like city street plans to locally evolved city streets, national retail outlets to locally arising stores and restaurants, traditional to permaculture farms, hotel rooms to private homes, big formal firms to small informal teams, uniforms to individually-chosen clothes, and refactored to un-refactored software. The first entity in each pair tends to more easily scale and to match more environments, while the second in each pair tends to be adapted in more detail to particular local conditions.
The book Seeing Like a State describes how states often impose more general systems in order to help them tax and monitor locals, replacing a previous variety of systems of law, language, names, etc. Human minds start out general and flexible when young, and become more specific and inflexible as they age. Large software systems tend to evolve over time from general to specific. At first, the developers of large software systems better understand their architectures, and can more easily change them, even if users are less satisfied with specific system features. Later on, such systems contain more user-requested features, but have architectures that are less well understood or changeable.
More specific systems are more at risk from big changes to their environment, but with only modest environmental variation they tend to be better adapted to local conditions. That is, most successful biological and cultural systems in our world are not very general. Specific systems have even stronger advantages when a set of systems adapts together, each to the others. When environmental changes remain modest, such sets of mutually adapted systems can entrench themselves indefinitely; to win, competitors must replace the entire set of systems with new variations.
Consider the example of biological cells. For eons, cells faced the world individually, and evolved complex interdependent sets of subsystems to deal with this difficult task. The sharing of cell part designs created pressures for designs to be somewhat general; designs that could work in more situations could be more widely shared. Even so, cell subsystems tended to become well adapted to each other, and the whole set of standard cell designs has become rather entrenched.
The cells in the human body vary by a factor of at least one hundred thousand in volume. This shows that standard cell designs embody substantial generality with respect to cell size. Yet even this generality has its limits; it was apparently very hard to stretch standard cell designs to create single-cell organisms good enough to compete with the familiar large organisms we see in our world. Evolution instead opted to create multicellular organisms — many small cells grouped together to create a single large unit.
Pause to notice the enormous waste involved in this choice. Each cell in a multicellular organism redundantly retains most of the features needed to exist as a single-cell creature in a hostile world, even though it no longer lives in such a world. It has its own barrier against the world, and carefully controls what crosses this barrier. It has its own sensors to detect dangers and opportunities outside, and a full range of local manufacturing abilities. Instead of taking advantage of the sort of production scale economies that are central to our industrial economy, each cell makes almost everything for itself!
But if you think that a strongly competitive environment couldn’t possibly tolerate such inefficiency, then you just don’t appreciate the titanic power of entrenched systems. Over eons, standard cell designs became a well-honed, well-oiled machine, with thousands of parts all carefully designed to fit well with each other. To create a similarly effective large organism that isn’t built out of many small cells, evolution would have to mostly start over and search a very long time in the space of designs for much larger systems. Yes, eventually it might find much better designs, but before then it might have to search nearly as long and hard as it had previously searched to find small-cell designs. So far, that has been a bridge too far for biological evolution. Far too far. For half a billion years, evolution has much preferred the small-cell bird in the hand to the new-big-organism bird that might be found after searching an astronomically large bush of possible designs.
Now consider the future prospects for human minds if they compete as workers with other kinds of software. Assume that we will eventually find a way (as with ems) to extract the software in human minds from the hardware in which it is now embedded, so that human mind software faces no hardware advantages relative to other kinds of software. Given this assumption, the question becomes: how effective is human mind software relative to other kinds of software in accomplishing future mental/computational tasks?
Some think it obvious that because human minds evolved to win in a distant past environment, they couldn’t possibly win in a different future environment. But the same logic would conclude that small single cells couldn’t possibly win once biological evolution began selecting for larger organisms. It ignores the possibility that human minds are valuable, carefully honed packages of interdependent systems resulting from a vast evolutionary heritage. The future might not be willing to fund the enormous search required to find something very different and better, at least during a future era that lasts long enough to have an importance comparable to the last half billion years of multicellular animals.
Human brains are “general” in the sense of being able to do a rather wide range of tasks moderately well. However, they don’t seem to achieve this via the consistent design “generality” discussed above. Compared to the software that we humans write, the software in our brains is in many ways less general, abstract, and modular. In our brains, events are poorly synchronized, hardware is mixed up with software, memory is mixed up with processing, addresses are mixed up with contents, and doing is mixed up with learning; none of this happens in the more modular, better-abstracted systems we design. While our brains have distinguishable subsystems, these subsystems are far more interconnected and less modular than in typical software systems.
When evolution honed the human brain, and its animal-brain ancestors, it faced strong space limits. Most software was tied to dedicated hardware, and brains could only hold a limited amount of hardware. When we humans write software, in contrast, we quickly achieve modest competence via abstraction and modularity, helped by having plenty of space to store software separately from hardware when it is not in use. When we want software to do a new thing, we mostly just write a new tool to do it. Brains instead had to make do with changing and reintegrating their existing tools over a very long time.
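To make this contrast concrete, here is a minimal toy sketch in Python. Everything in it (the names, the numbers, the update rule) is a hypothetical illustration of the modular-versus-entangled distinction, not anything drawn from the argument above.

```python
# Toy sketch: modular human-written software vs. the entangled style
# the text attributes to brains. All names here are hypothetical.

# Modular style: memory (weights), processing (score), and learning (update)
# are separate pieces, so each can be inspected, swapped, or reused alone.
def score(weights: dict, features: dict) -> float:
    """Pure processing step: reads stored memory, changes nothing."""
    return sum(weights.get(k, 0.0) * v for k, v in features.items())

def update(weights: dict, features: dict, error: float, lr: float = 0.1) -> dict:
    """Separate learning step: returns new memory rather than mutating it."""
    new = dict(weights)
    for k, v in features.items():
        new[k] = new.get(k, 0.0) + lr * error * v
    return new

# Entangled style: memory, processing, and learning share one mutable blob,
# so "doing is mixed up with learning" and no part can change independently.
class EntangledUnit:
    def __init__(self):
        self.state = {}  # memory, "wiring", and settings all in one place

    def act(self, features: dict) -> float:
        out = sum(self.state.get(k, 0.0) * v for k, v in features.items())
        for k, v in features.items():
            # every act also rewires the unit; it cannot run "read-only"
            self.state[k] = self.state.get(k, 0.0) + 0.1 * v
        return out
```

The point of the toy is only that the first style supports the “write a new tool” move just described, while the second must be changed and reintegrated as a whole.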
The net result is that, compared to familiar software, the human brain is a marvel of highly integrated tools, each useful in many task contexts. But this integration came at great cost in evolutionary search, and these subsystems are now highly entrenched and entangled, both with each other and with supporting social systems. So like the carefully honed cells in multicellular animals, future competitive minds may often prefer to reuse human brains, modified to the modest degrees possible in such a huge, tangled legacy software system that no one understands well. Not everything in a multicellular animal is a small cell; there are bones and blood fluids, for example. But most of it is cells.
One big disadvantage of integrated, non-modular brains is that you must devote an entire brain to almost any task, even a very simple one. For the last century, we have found humans doing many tasks that could also be done by rather simple and cheap combinations of hardware and software. For obvious reasons, we have automated these tasks first. But eventually we will run out of tasks that can easily be done by computers much smaller than human brains. At that point we will face a less obvious choice: give the task to some variation on a well-integrated human mind, or write a big pile of software to do it. Or a variation on these, such as software written by software.
Humans won’t always win that contest, but it seems plausible that for a long time they may often win. Human-like minds will probably win more often at tasks that are highly tangled with other tasks that such minds do. This includes tasks like law, marketing, regulation, and planning, and meta-tasks such as management and governance. When human-like minds are modified, their most highly tangled networks for conscious thought and mind-wandering may change the least, at least for a long time. In this sort of world, minds very different from humans may not often be given tasks with a wide enough scope of action to be very dangerous. While this future could be very strange to actually see, it might still be less strange than many of you feared.
I’m not sure “the next 100 years” is a useful unit of analysis. I prefer doubling times of the world economy as a unit, and expect the future to be very hard to see after enough doublings. So I’m trying to see into the early AI era where feasible, but not expecting to be able to see the whole thing.
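To make the doublings unit concrete, here is a small back-of-the-envelope calculation; the growth rates below are assumptions chosen for illustration, not figures from this discussion.

```python
import math

def doubling_time_years(annual_growth: float) -> float:
    """Years for a quantity growing at `annual_growth` (e.g. 0.02) to double."""
    return math.log(2) / math.log(1 + annual_growth)

# Illustrative rates only; the discussion itself names no numbers.
for label, g in [("~2%/yr (recent world growth)", 0.02),
                 ("~30%/yr (a hypothetical fast AI-era rate)", 0.30)]:
    t = doubling_time_years(g)
    print(f"{label}: doubles every {t:.1f} yrs, ~{100 / t:.0f} doublings per century")
```

On the first rate, “the next 100 years” holds only about three doublings; on the second, nearly forty. That is why a fixed span of calendar time can be a misleading unit of analysis.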
As with cells in large biological organisms, surely during the early AI era there will be a strong temptation to use existing legacy systems that work, even if they are hard to greatly refactor. Sure, in the very long run such legacies may have declining influence, but what reason do we have to think we can talk sensibly about such distant futures? We certainly know now of many legacy designs that have lasted a very long time so far.
I mostly meant that total economic output over the next 100 years greatly exceeds total output over all of history. I agree that coordination is hard, but even spending a small fraction of current effort on exploring novel redesigns would be enough to quickly catch up with stuff designed in the past. This is a disanalogy between the situation of human designers and evolution, suggesting that we may have less need to reuse parts.
I agree that early in the AI era we will want to steal as much from biology as possible, and I don’t have strong views about when that period ends (though I don’t think the analogy to cells says much about that question).