Tyler posted:
Do I think Robin Hanson’s “Age of Em” actually will happen? A reader has been asking me this question, and my answer is…no! Don’t get me wrong, I still think it is a stimulating and wonderful book. … But it is best not read as a predictive text, much as Robin might disagree with that assessment. Why not? I have three main reasons, all of which are a sort of punting; nonetheless, on topics outside one’s areas of expertise, deference is very often the correct response. Here goes: 1. I know a few people who have expertise in neuroscience, and they have never mentioned to me that things might turn out this way.
I titled my response Tyler Says Never Ems, but on twitter he objected:
“no reason to think it will happen” is best summary of my view, not “never will happen.”
…that was one polite way of saying I do not think the scientific consensus is with you on this issue…
I responded:
How does that translate into a probability?
You have to clarify the exact claim you have in mind before we can discuss what the scientific consensus says about it.
But all he would answer is:
Now at GMU econ we often have academics who visit for lunch and take the common academic stance of reluctance to state opinions which they can’t back up with academic evidence. Tyler is usually impatient with that, and pushes such visitors to make best estimates. Yet here it is Tyler who shows reluctance. I hypothesize that he is following this common principle:
One does not express serious opinions on topics not yet authorized by the proper prestigious people.
Once a topic has been authorized, then unless it has a moral coloring it is usually okay to express a wide range of opinions on it; it is often even expected that clever people will take contrarian or complex positions, sometimes outside their areas of expertise. But unless the right serious people have authorized a topic, that topic remains “silly”, and can only be discussed in a silly mode.
Now sometimes a topic remains unauthorized because serious people think everything about it has a low probability. But there are many other causes for topics to be seen as silly. For example, sex was long seen as a topic serious people didn’t discuss, even though we were quite sure sex exists. And even though most everyone is pretty sure aliens must exist out there somewhere, aliens remain a relatively silly subject.
In the case of ems, I interpret Tyler above as noting that the people who seem to him the proper authorities have not yet authorized serious discussion of ems. That is what he means by pointing to experts, saying “no reason” and “scientific consensus,” and yet being unwilling to state a probability, or even clarify which claim he rejects, even though I argued a 1% chance is enough. It explains his initial emphasis on treating my book metaphorically. This is less about probabilities, and more about topic authorization.
Compare the topic of ems to the topic of super-intelligence, wherein a single hand-coded AI quickly improves itself so fast that it can take over the world. As this topic seems recently endorsed by Elon Musk, Bill Gates, and Stephen Hawking, it is now seen more as an authorized topic. Even though, if you are inclined to be skeptical, there are far more reasons to doubt that we will eventually know how to hand-code software as broadly smart as humans, or vastly better than the entire rest of the world put together at improving itself. Our reasons for thinking ems eventually feasible are far more solid.
Yet I predict Tyler would more easily accept an invitation to write or speak on super-intelligence than on ems. And I conclude many readers see my book primarily as a bid to put ems on the list of serious topics, and they doubt enough proper prestigious people will endorse that bid. And yes, while I think I have a pretty good case if we could talk probabilities, even my list of prestigious book blurbers probably isn’t enough. Until someone of the rank of Musk, Gates, or Hawking endorses it, my topic remains silly.
Parts of current machine learning systems are still coded by humans, but my point is that it's no longer the "content" of intelligence that is coded, but just a general learning framework.
For instance, consider the DeepMind system that can play ~50 Atari games. In traditional machine learning, humans would have to define a bunch of features, then the learning algorithm would take those feature values as input. Figuring out the best features was difficult work that involved a lot of human labor and insight. In the DeepMind case, an example feature might be "is any moving object on a course that will collide with my character in the next 2 seconds?" You can train an Atari playing system by defining and manually coding up hundreds or thousands of such features, hoping that the combination is enough that your model can learn how to play the game well.
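To make the contrast concrete, here is a minimal sketch of what one such hand-coded feature might look like; the function name, geometry, and thresholds are all illustrative assumptions, not anything from DeepMind's actual code:

```python
# Hypothetical hand-coded feature for an Atari-like game, in the
# "traditional" feature-engineering style described above.

def will_collide_soon(player_x, player_y, obj_x, obj_y, obj_vx, obj_vy,
                      horizon=2.0, radius=1.0):
    """Hand-coded feature: does the object's straight-line path pass
    within `radius` of the player within `horizon` seconds?"""
    steps = 20  # sample the trajectory at small time increments
    for i in range(steps + 1):
        t = horizon * i / steps
        dx = (obj_x + obj_vx * t) - player_x
        dy = (obj_y + obj_vy * t) - player_y
        if (dx * dx + dy * dy) ** 0.5 <= radius:
            return True
    return False
```

Multiply this by hundreds or thousands of features, each requiring its own slice of human insight, and the appeal of learning directly from pixels becomes clear.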
How DeepMind's Atari system actually works is that the only inputs to its learning algorithm are the pixel values on the screen. It is trivial to write the code to give the learning system the pixel values, and the input "features" are identical for every Atari game. So none of the intelligence about how to play the game is hand-coded. (I think the only other hand-coded part is some function that extracts the score from the screen.) The amount of work saved by not having to manually define features is huge.
This is a continuation of a shift in how AI systems are built. Before machine learning, humans would specify both the "features", and also how the features should interact to produce intelligence. With traditional ML, you let the system learn the interactions and only hand-code the features. Now, we can let the system learn both (instead of defining features, you let the system 'perceive' raw input). This is the distinction that I see you not acknowledging when you talk about non-em AI involving "hand coding" intelligence.
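The three-stage shift described above can be caricatured in a few lines of code; everything here (the state keys, the rules, the network stub) is a toy assumption for illustration, not any real system:

```python
# 1. Pre-ML: humans write both the features and the decision rule.
def handcoded_policy(state):
    danger = state["enemy_dist"] < 2           # hand-coded feature
    return "dodge" if danger else "advance"    # hand-coded rule

# 2. Traditional ML: humans write the features; the rule (the weights)
# is learned from data.
def featurize(state):
    return [state["enemy_dist"], state["score"]]  # hand-coded features

def learned_rule(features, weights):
    s = sum(f * w for f, w in zip(features, weights))  # learned weights
    return "dodge" if s < 0 else "advance"

# 3. End-to-end deep RL (DQN-style): the input is just raw pixels; both
# the features and the rule live inside the learned network.
def deep_policy(pixels, network):
    return network(pixels)  # network trained from reward alone
```

The hand-coding that remains in stage 3 is the generic learning framework itself, not the "content" of intelligence, which is the commenter's point.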
The future can differ from anything we can know. For example, our quasi-intelligent agents will probably remove any need for ems, though I think the challenge of immortality would still offer a great incentive, one that outweighs ems being uneconomic. And knowing enough to do it may still leave a biological solution preferred.