While I’m a contrarian in many ways, I think it fair to call my ex-co-blogger Eliezer Yudkowsky even more contrarian than I. And he has just published a book, Inadequate Equilibria, defending his contrarian stance against what he calls “modesty”, illustrated in these three quotes:
I should expect a priori to be below average at half of things, and be 50% likely to be of below average talent overall; … to be mistaken about issues on which there is expert disagreement about half of the time. …
On most issues, the average opinion of humanity will be a better and less biased guide to the truth than my own judgment. …
We all ought to [avoid disagreeing with] each other as a matter of course. … You can’t trust the reasoning you use to think you’re more meta-rational than average.
In contrast, Yudkowsky claims that readers of his book can realistically hope to become successfully contrarian in these three ways:
0-2 lifetime instances of answering “Yes” to “Can I substantially improve on my civilization’s current knowledge if I put years into the attempt?” …
Once per year or thereabouts, an answer of “Yes” to “Can I generate a synthesis of existing correct contrarianism which will beat my current civilization’s next-best alternative, for just myself …”
Many cases of trying to pick a previously existing side in a running dispute between experts, if you think that you can follow the object-level arguments reasonably well and there are strong meta-level cues that you can identify. … [This] is where you get the fuel for many small day-to-day decisions, and much of your ability to do larger things.
Few would disagree with his claim #1 as stated, and it is claim #3 that applies most often to readers’ lives. Yet most of the book focuses on claim #2, that “for just myself” one might annually improve on the recommendations of our best official experts.
The main reason to accept #2 is that there exist what we economists call “agency costs” and other “market failures” that result in “inefficient equilibria” (which can also be called “inadequate”). Our best experts don’t try with their full efforts to solve your personal problems, but instead try to win the world’s somewhat arbitrary games, games that individuals just cannot change. Yudkowsky may not be saying anything especially original here about how broken the world can be, but his discussion is excellent, and I hope it will be widely read.
Yudkowsky gives some dramatic personal examples, but simpler examples can also make the point. For example, one can often use maps or GPS to improve on official road signs saying which highway exits to use for particular destinations, as the officials who place those signs often placate local residents seeking less through-traffic. Similarly, official medical advisors tend to advise medical treatment too often relative to doing nothing, official education experts tend to advise education too often as a career strategy, official investment advisors suggest active investment too often relative to index funds, and official religion experts advise religion too often relative to non-religion. In many cases, one can see plausible system-level problems that could lower the quality of official advice, inducing these experts to try harder to impress and help each other than to help clients.
To explain how inadequate many of our equilibria are, Yudkowsky contrasts them with our most adequate institution: competitive speculative financial markets, where it is kind of crazy to expect your beliefs to be much more accurate than market prices. He also highlights the crucial importance of competitive meta-institutions, for example lamenting that there is no place on Earth where one can pay to try out arbitrary new social institutions. (Alas, he doesn’t endorse my call to fix much of the general problem of disagreement via speculative markets, especially on meta topics. Like many others, he seems more interested in bets as methods of personal virtue than as institutional solutions.)
However, while understanding that systems are often broken can lead us to accept Yudkowsky’s claim #2 above, that doesn’t obviously support his claim #3, nor undercut the modesty that he disputes. After all, reasonable people could just agree that, by acting directly and avoiding broken institutions, individuals can often beat the best institutionally-embedded experts. For example, individuals can gain by investing more in index funds, and by choosing less medicine, school, and religion than experts advise. So the existence of broken institutions can’t by itself explain why disagreement exists, nor why readers of Yudkowsky’s book should reasonably expect to consistently pick who is right among disagreeing experts.
Thus Yudkowsky needs more to argue against modesty, and for his claim #3. Even if it is crazy to disagree with very adequate financial institutions, and not quite so crazy to disagree with less adequate institutions, that doesn’t imply that it is actually reasonable to disagree with anyone about anything.
His book says less on this topic, but it does say something. First, Yudkowsky accepts my summary of the rationality of disagreement, which says that agents who are mutually aware of being meta-rational (i.e., trying to be accurate and getting how disagreement works) should not be able to foresee their disagreements, even when they have very different concepts, info, analysis, and reasoning errors.
If you and a trusted peer don’t converge on identical beliefs once you have a full understanding of one another’s positions, at least one of you must be making some kind of mistake.
Yudkowsky says he has applied this result, in the sense that he’s learned to avoid disagreeing with two particular associates that he greatly respects. But he isn’t much inclined to apply this toward the other seven billion humans on Earth; his opinion of their meta-rationality seems low. After all, if they were as meta-rational as he and his two great associates, then “the world would look extremely different from how it actually does.” (It would disagree a lot less, for example.)
Furthermore, Yudkowsky thinks that he can infer his own high meta-rationality from his details:
I learned about processes for producing good judgments, like Bayes’s Rule, and this let me observe when other people violated Bayes’s Rule, and try to keep to it myself. Or I read about sunk cost effects, and developed techniques for avoiding sunk costs so I can abandon bad beliefs faster. After having made observations about people’s real-world performance and invested a lot of time and effort into getting better, I expect some degree of outperformance relative to people who haven’t made similar investments. … [Clues to individual meta-rationality include] using Bayesian epistemology or debiasing techniques or experimental protocol or mathematical reasoning.
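For readers who want the formula itself: Bayes’s Rule (with $H$ a hypothesis and $E$ the evidence, symbols introduced here only for illustration) is just the standard identity relating a prior belief to a posterior belief:

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}
\]

The rule itself is uncontroversial; the live question in what follows is whether knowing and applying it is actually a reliable clue to one’s meta-rationality.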
The possibility that some agents have low meta-rationality is illustrated by these examples:
Those who dream do not know they dream, but when you are awake, you know you are awake. … If a rock wouldn’t be able to use Bayesian inference to learn that it is a rock, still I can use Bayesian inference to learn that I’m not.
Now yes, the meta-rationality of some might be low, that of others might be high, and the high might see real clues allowing them to correctly infer their different condition, clues that the low also have available to them but for some reason neglect to apply, even though the fact of disagreement should call the issue to their attention. And yes, those clues might in principle include knowing about Bayes’ rule, sunk costs, debiasing, experiments, or math. (They might also include many other clues that Yudkowsky lacks, such as relevant experience.)
Alas, Yudkowsky doesn’t offer empirical evidence that these possible clues of meta-rationality are in fact clues in practice, that some correctly apply these clues much more reliably than others, or that the magnitude of these effects is large enough to justify the size of disagreements that Yudkowsky suggests as reasonable. Remember, to justifiably disagree on which experts are right in some dispute, you’ll have to be more meta-rational than those disputing experts, not just than the general population. So to me, these all remain open questions on disagreement.
In an accompanying essay, Yudkowsky notes that while he might seem to be overconfident, in many lab tests of cognitive bias,
around 10% of undergraduates fail to exhibit this or that bias … So the question is whether I can, with some practice, make myself as non-overconfident as the top 10% of college undergrads. This… does not strike me as a particularly harrowing challenge. It does require effort.
Though perhaps Yudkowsky isn’t claiming as much as he seems to. He admits that allowing yourself to disagree because you think you see clues of your own superior meta-rationality goes badly for many, perhaps most, people:
For many people, yes, an attempt to identify contrarian experts ends with them trusting faith healers over traditional medicine. But it’s still in the range of things that amateurs can do with a reasonable effort, if they’ve picked up on unusually good epistemology from one source or another.
Even so, Yudkowsky endorses anti-modesty for his book’s readers, whom he sees as better than average, and also as too underconfident on average (even though most people are overconfident). His advice is especially targeted at those who aspire to his claim #1:
If you’re trying to do something unusually well (a common enough goal for ambitious scientists, entrepreneurs, and effective altruists), then this will often mean that you need to seek out the most neglected problems. You’ll have to make use of information that isn’t widely known or accepted, and pass into relatively uncharted waters. And modesty is especially detrimental for that kind of work, because it discourages acting on private information, making less-than-certain bets, and breaking new ground.
This seems to me a good reason to take a big anti-modest stance. If you are serious about trying hard to make a big advance somewhere, then you must get into the habit of questioning the usual accounts, and of thinking through arguments for yourself in detail. Since your chance of making a big advance is much higher if you are in fact more meta-rational than average, you have a better chance of achieving a big advance if you assume your own high meta-rationality within your advance-attempt thinking. Perhaps you could do even better if you limited this habit to the topic areas near where you have a chance of making a big advance. But maybe that sort of mental separation is just too hard.
So far this discussion of disagreement and meta-rationality has drawn nothing from the previous discussion of inefficient institutions in a broken world. And without such a connection, this book is really two separate books, tied perhaps by a mood affiliation.
Yudkowsky doesn’t directly make a connection, but I can make some guesses. One possible connection applies if official experts tend to deny that they sit in inadequate equilibria, or that their claims and advice are compromised by such inadequacy. When these experts are high status, others might avoid contradicting their claims. In this situation, those who are more willing to make cynical claims about a broken world, or more willing to disagree with high status people, can be on average more correct, relative to those who insist on taking more idealistic stances toward the world and the high in status.
In particular, such cynical contrarians can be correct about when individuals can do better via acting directly than indirectly via institution-embedded experts, and they can be correct when siding with low against high status experts. This doesn’t seem sufficient to me to justify Yudkowsky’s more general anti-modesty, which for example seems to support often picking high status experts against low status ones. But it can at least go part of the way.
We have a few other clues to Yudkowsky’s position. First, while he explains the impulse toward modesty via status effects, he claims to personally care little about status:
Many people seem to be the equivalent of asexual with respect to the emotion of status regulation—myself among them. If you’re blind to status regulation (or even status itself) then you might still see that people with status get respect, and hunger for that respect.
Second, note that if the reason you can beat our best experts is that you can act directly, while they must win via social institutions, then this shouldn’t help much when you must also act via social institutions. So it is telling that, in two examples, Yudkowsky thinks he can do substantially better than the rest of the world even when he must act via social institutions.
First, he claims that the MIRI research institute he helped found “can do better than academia” because “We were a small research institute that sustains itself on individual donors. … we had deliberately organized ourselves to steer clear of [bad] incentives.” Second, he finds it “conceivable” that the world’s rate of innovation might increase noticeably if, at another small organization that he helped to found, the “annual budget grew 20x, and then they spent four years iterating experimentally on techniques, and then a group of promising biotechnology grad students went through a year of CFAR training.”
Putting this all together, my best guess is that Yudkowsky sees himself, his associates, and his best readers as only moderately smarter and more knowledgeable than others; what really distinguishes them is that they care much more about the world and truth. So much so that they are willing to make cynical claims, disagree with the high status, and sacrifice their careers. This is the key element of meta-rationality they see as lacking in the others with whom they feel free to disagree. Those others are mainly trying to win the usual status games, while he and his associates are after truth.
Alas, this is a familiar story from a great many sides in a great many disputes. Each says they are right because the others are less sincere and more selfish. While most such sides must be wrong in these claims, no doubt some people do care more about the world and truth than others. Furthermore, those special people may see detailed signs telling them this fact, while others lack those signs yet fail to sufficiently attend to that lack.
And we again come back to the core hard question in the rationality of disagreement: how can you tell if you are neglecting key signs about your (lack of) meta-rationality? But alas, other than just claiming that such clues exist, Yudkowsky doesn’t offer much analysis to help us advance on this hard problem.
Eliezer Yudkowsky’s new book Inadequate Equilibria is really two disconnected books, one (larger) book that does an excellent job of explaining how individuals acting directly can often improve on the best advice of experts embedded in broken institutions, and another (smaller) book that largely fails to explain why one can realistically hope to consistently pick the correct side among disputing experts. I highly recommend the first book, even if one has to sometimes skim through the second book to get to it.
Of course, if you are trying hard to make a big advance somewhere, then it can make sense to just assume you are better, at least within the scope of the topic areas where you might make your big advance. But for other topic areas, and for everyone else, you should still wonder how sure you can reasonably be that you have in fact not neglected clues showing that you are less meta-rational than those with whom you feel free to disagree. This remains the big open question in the rationality of disagreement. It is a question to which I hope to return someday.
Yeah... 3 felt like a bit of a side note that Eliezer inserted because it fit with 1 and 2 in his initial thoughts, but he explained it less comprehensively.
His social institutions argument seems to rest at least somewhat on problems inherited from older versions, as well as on deliberately planning things around those problems.
Other than that, you make excellent points regarding a few areas where Eliezer seems to lack sufficient justification to make assertions about meta-rationality, though it occasionally feels like ‘that is the topic of another series’ rather than something that he should necessarily have addressed there.
EY does keep quoting “a tautology that for every loaf of bread bought there must be a loaf of bread sold, and therefore supply is always and everywhere equal to demand”, even though it doesn’t demonstrate that all markets are efficient, and he doesn’t believe that all markets are efficient. Why? Is this some kind of standing joke amongst economists that the rest of us are not in on?