On the last two Sundays I gave examples of disagreements that are justified and disagreements that are not. Today let me try a harder example where disagreement is not justified:
Again we have a Star Trek style transporter, modified to preserve the original while creating an atom-by-atom copy with identical personality, memories, and cognitive styles. But this time the information needed to create this copy was stored for twenty years, and you find yourself standing outside the transporter as your duplicate emerges.
You explain to your younger self that over the last twenty years you have reconsidered and rejected some of your most precious beliefs. Following common aging trends, you are now more politically conservative, and you believe less in true love, your future fame, and the nobility of your profession. Your younger duplicate suspects you are biased by self-interest and by a need to explain your career failure, while you suspect he suffers from youthful overconfidence. How hard should you try to agree with him?
I say you should try hard to satisfy the rationality constraint "we can’t foresee to disagree." You started out with exactly his biases, and you both know you may have overcome some of them, while others may have grown worse. On average he should defer to you more than you to him, as he accepts that your belief changes come partly from more information and analysis. But when he hesitates to fully adopt your views, you should accept that he may have good reasons for his hesitation. You should random walk your way to common estimates of your relative biases.
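To make that last sentence concrete, here is a toy sketch of my own (not a model from the post): two parties repeatedly announce estimates, each round both move to a weighted midpoint, and residual uncertainty about their relative biases jitters every step, so the shared estimate wanders like a random walk before settling. The deference weight, noise level, and tolerance are all illustrative assumptions.

```python
import random

def random_walk_to_agreement(older, younger, weight_old=0.6,
                             noise=0.05, tol=0.01, max_rounds=100):
    """Toy model: each round both parties move to a weighted midpoint
    of their current estimates, jittered by residual uncertainty about
    their relative biases.  All parameters are illustrative assumptions."""
    history = [(older, younger)]
    for _ in range(max_rounds):
        # weight_old > 0.5 encodes "he defers to you more than you to him"
        midpoint = weight_old * older + (1 - weight_old) * younger
        older = midpoint + random.gauss(0, noise)
        younger = midpoint + random.gauss(0, noise)
        history.append((older, younger))
        if abs(older - younger) < tol:  # close enough to count as agreement
            break
    return history

path = random_walk_to_agreement(older=0.3, younger=0.8)
print(f"settled near {sum(path[-1]) / 2:.2f} after {len(path) - 1} rounds")
```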
Rcriii, we are just talking about inference; each person listens and then concludes that it is rational to change his mind.
So what is the mechanism for the random walk? Say, for example, I tell my younger self that he should do more of activity X, and he replies that even after evaluating my twenty years of history he is unwilling to put more time or effort into X. How do we resolve this?
Do we roll dice? Flip coins? Bid with bias ("I say you have a 20% stubbornness bias." "I'll see your 20% and raise you 10% for your obvious cynicism." ...)?
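Taking the "bid with bias" idea half-seriously, here is one hypothetical way such a haggle could terminate without dice or coins: each side opens with an estimated bias for the other and both concede partway toward the other's bid each round. The function name, concession rate, and round count are my assumptions, not anything proposed above.

```python
def settle_bias_bids(my_bid, your_bid, concession=0.3, rounds=5):
    """Hypothetical 'bid with bias' haggle: each side opens with an
    estimated bias for the other (0.20 for a '20% stubbornness bias')
    and both concede partway toward the other's bid each round.  The
    concession rate and round count are assumptions for illustration."""
    for _ in range(rounds):
        # tuple assignment: both updates use the pre-round values
        my_bid, your_bid = (
            my_bid + concession * (your_bid - my_bid),
            your_bid + concession * (my_bid - your_bid),
        )
    return (my_bid + your_bid) / 2

# "You have a 20% stubbornness bias" vs. "20% plus 10% for your cynicism"
print(round(settle_bias_bids(0.20, 0.30), 3))  # meets near 0.25
```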
In a negotiation, if the process itself is biased (I'm more manipulative, or you have better info, etc.), won't Bayesians try to account for that?
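For what this question gestures at, here is a minimal sketch, assuming the size of the skew can itself be estimated: a Bayesian who believes the other side's report is pushed up by manipulation simply subtracts the expected skew before weighing the report against the prior. The numbers and names are illustrative, not from the post.

```python
def debias_then_update(prior, reported, suspected_skew=0.10, weight=0.5):
    """Sketch of accounting for a biased process: if you believe the
    other side's report is inflated by roughly `suspected_skew`, undo
    that shift before giving the report any weight.  All values here
    are illustrative assumptions."""
    corrected = reported - suspected_skew   # undo the suspected manipulation
    return (1 - weight) * prior + weight * corrected

# a report of 0.65 from a party you think overstates by ~0.10
print(debias_then_update(prior=0.40, reported=0.65))  # 0.475
```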