Our new book, The Elephant in the Brain, can be seen as taking one side in a disagreement between disciplines. On one side are psychologists (among others) who say that of course people try to spin their motives as higher than they are, especially in public forums. People on this side find our book's basic thesis, and our many specific examples, so plausible that they fear our book may be too derivative and unoriginal.
On the other side, however, are most experts in concrete policy analysis. They spend their time studying ways that schools could help people to learn more material, hospitals could help people get healthier, charities could better assist people in need, and so on. They thus implicitly accept the usual claims people make about what they are trying to achieve via schools, hospitals, charities, etc. And so the practice of policy experts disagrees a lot with our claims that people actually care more about other ends, and that this is why most people show so little interest in reforms proposed by policy experts. (The world shows great interest in new kinds of physical devices and software, but far less interest in most proposed social reforms.)
My first book, The Age of Em, can also be seen as expressing a disagreement between disciplines. In that book I try to straightforwardly apply standard economics to a scenario where brain emulations are the first kind of AI to displace almost all human workers. While the assumption of brain-emulation-based AI seems completely standard and reasonable among large communities of futurists and technologists, it is seen as radical and doubtful in many other intellectual communities (including economics). And many in disciplines outside of economics are quite skeptical that economists know much of anything that can generalize outside of our particular social world.
Now if you are going to make claims with which whole disciplines of experts disagree, you should probably feel most comfortable doing so when you have at least a whole discipline supporting you. Then it isn’t just you, the crazy outlier, against a world of experts. Even so, this sort of situation is problematic, in part because disagreements usually don’t become debates. A book on one side of a disagreement between disciplines is usually ignored by the disciplines that disagree. And the disciplines that agree may also ignore it, if the result seems too obvious to them to be worth discussing within their discipline.
This sort of situation seems to me one of the worst failings of our intellectual world. We fail to generate a consistent consensus across the widest scope of topics. Smaller communities of experts often generate a temporary consistent consensus within each community, but these communities often disagree a lot at larger scopes. And then they mostly just ignore each other. Apparently experts and their patrons have little incentive to debate those from other disciplines who disagree.
When two disciplines disagree, you might think they would both turn especially to the people who have become experts in both disciplines. But in fact those people are usually ignored relative to the people who have the highest status within each discipline. If we generated our consensus via prediction markets, it would automatically be consistent across the widest scope of topics. But of course we don’t, and there’s little interest in moving in that direction.
The key point here is that "not any kind of objective fact about the world" isn't a coherent category for deciding whether probability estimates will or will not converge. Your objection applies just as easily to "What was the total number of people who visited Canterbury, defined as entering the boundary between midnight and midnight, on July 7, 1832?"
That's clearly an objective fact, it's just uncertain. And obviously actual humans will not converge on an answer, but Aumann agreement shows very clearly that rational Bayesians must do so.
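To see the mechanics, here is a minimal sketch (my own toy construction, not anything from this exchange) of the Geanakoplos-Polemarchakis dialogue that underlies Aumann agreement: two Bayesians share a common prior over a small state space but hold different private information (partitions), take turns announcing their posterior for an event, and each announcement refines the other's information until the announcements match. The state space, event, and partitions below are arbitrary illustrative choices.

```python
# Toy Aumann-agreement dialogue: two Bayesians with a common prior but
# different private partitions announce posteriors in turn until they agree.
from fractions import Fraction

STATES = range(9)                        # states 0..8, equally likely
PRIOR = {s: Fraction(1, 9) for s in STATES}
E = {0, 1, 4}                            # the event being estimated

def posterior(cell):
    """P(E | cell) under the common prior."""
    total = sum(PRIOR[s] for s in cell)
    return sum(PRIOR[s] for s in cell & E) / total

def refine(partition, announcer_partition):
    """Split each cell by the posterior the announcer would state there."""
    new = []
    for cell in partition:
        groups = {}
        for s in cell:
            q = posterior(next(c for c in announcer_partition if s in c))
            groups.setdefault(q, set()).add(s)
        new.extend(groups.values())
    return new

p1 = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]   # agent 1 learns the state's "row"
p2 = [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}]   # agent 2 learns its "column"
true_state = 4

for step in range(5):
    cell1 = next(c for c in p1 if true_state in c)
    cell2 = next(c for c in p2 if true_state in c)
    q1, q2 = posterior(cell1), posterior(cell2)
    print(f"step {step}: agent 1 says {q1}, agent 2 says {q2}")
    if q1 == q2:
        break
    p2 = refine(p2, p1)   # agent 2 hears agent 1's announcement
    p1 = refine(p1, p2)   # agent 1 then hears agent 2's updated announcement
```

In this toy case the agents start at 1/3 versus 2/3, agent 2 can infer the exact state from agent 1's first announcement, and they agree after one exchange. In general the dialogue may take more rounds, but with a common prior it always ends in agreement.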
Whether they "should" converge is not exactly relevant - if they don't in fact. (Why aren't British bettors bothered by the different odds on American prediction markets?)
But, yes, what I'm saying is a sort of challenge to Bayesianism. Probability estimates should only converge if there is in fact a unique probability attached to the event in question. This is a condition that (I claim) is only sometimes approximated: in cases we call risk, rather than in cases we call uncertainty. I am rejecting Bayesian probability when there is no coherent objective probability involved. (See "Epistemological implications of a reduction of theoretical implausibility to cognitive dissonance" - http://juridicalcoherence.b... )
An example. Let's say we have a prediction market on the result of the roll of a die. Will "1" come up? However, no one is told, and no one can find out, how many sides the die has. To avoid practical problems, let's say the die roll is a simulation of randomness, and the die might have any number of sides, from 1 to a trillion.
We set up two distinct and separate prediction markets for this event. Would the two markets converge? No reason they should. (I'd guess that random events in the betting history would determine the end result.) With complete uncertainty there is no convergence. With large uncertainty there is little convergence.
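A rough sketch of why not, under my own hypothetical choices of prior (nothing here is from the example above beyond the setup): with nothing to observe, a Bayesian's fair price for the bet is just E[1/n] under whatever prior over the number of sides n she happens to hold, and two priors that each look "uninformative" can give wildly different prices.

```python
# Monte Carlo estimate of the fair price E[1/n] for "a roll shows 1",
# where the die's side count n is unknown, under two different priors.
# Both priors are my own illustrative choices.
import random

N_MAX = 10**12  # up to a trillion sides, as in the example

def fair_price(sample_n, trials=200_000):
    """Rough Monte Carlo estimate of E[1/n] under the prior sample_n()."""
    return sum(1.0 / sample_n() for _ in range(trials)) / trials

# Prior A: every side count from 1 to a trillion is equally likely.
def uniform_prior():
    return random.randint(1, N_MAX)

# Prior B: the *order of magnitude* of the side count is uniform
# (log-uniform), another prior someone could call "uninformative".
def log_uniform_prior():
    return max(1, int(10 ** random.uniform(0, 12)))

print(fair_price(uniform_prior))      # typically on the order of 1e-11
print(fair_price(log_uniform_prior))  # roughly 0.036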