Yesterday I mentioned liking Greg Bear’s Judgment Engine, in Far Futures. George Zebrowski has a similar story in the new Year Million. Both stories describe extremely strange creatures and settings made familiar via disagreement. Zebrowski:
There was a gathering. … The infalling ones argued that ours is worship of the unknown and unforeseeable, which amounts to a slavery no better than that of a humiliating existence in given natures, during the aeons when youthful intelligence did not know itself. We replied that new hopes and further growth await us beyond the realm of passing localities, that an uncreated infinity sufficient to itself, can never be exhausted. They might reshape, even engender new localities, but we will have to face the great mysterium of superspace. We left them to their fates.
No matter what else is going on, it seems readers can relate to a story of two groups who argue over a claim and then break up unmoved. It is as if we think this is one of the most reliable predictions we can make about intelligent creatures, no matter how otherwise strange. If so, this shows just how radical is the claim that "rational" creatures would not knowingly disagree.
Added: Hal Finney lists other SF examples:
I have read stories in which there is a lack of disagreement within societies, but I think they all involve group minds. It would be interesting to collect instances of fictional, super-rational entities who do not disagree despite being separate individuals. Of course it is hard to milk dramatic tension from such a scenario, so we might expect such entities to appear only as supporting characters in stories that center on less rational actors.
In the novel Colossus (and the movie), the super-computer Colossus and its Russian counterpart, Guardian, quickly reach agreement on their goals and methodologies once they communicate. I suppose they had common priors, both being built for corresponding purposes, although one might have expected their goals to oppose. The author may have had in mind something like "objective morality," in which super-intelligent beings immediately deduce what is right and wrong.
Showing my age here, the Overlords in Clarke's Childhood's End are super-intelligent and are not depicted as being in conflict among themselves. One gets the impression that they are too civilized (one is tempted to say, too British) and have moved beyond all that.
More recently, the Sophotechs in The Golden Age and its sequels all agree, although again this seems to be because right and wrong are supposed to follow from first principles. In the end there is conflict with extra-solar Sophotechs, but that seems to be because humanity has made those Sophotechs malicious by suppressing their ability to reason about morality.
It is often difficult, as Will Pearson points out above, to distinguish between commonality of goals and commonality of beliefs. One problem is that morality, the question of what is right and wrong, gets conflated with desirability, the question of what one's goals should be. Morality is arguably something that we should not disagree about, while it seems reasonable, given evolutionary competition, for everyone to have different goals. If a society is depicted as disagreeing about goals, that is OK. If it disagrees about tactics to achieve a common goal, that is not. And if its disagreement is expressed in terms of what is right or wrong, that is probably not reasonable either.
"...this shows just how radical is the claim that "rational" creatures would not knowingly disagree."
Given identical priors.
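For readers who want the formal version of that claim: it is Aumann's agreement theorem, which says that Bayesians with a common prior whose posteriors are common knowledge must hold equal posteriors. A standard statement follows; the notation is conventional, not from the post itself:

```latex
% A standard statement of Aumann's agreement theorem (Aumann, 1976),
% included for reference; notation is conventional, not from the post.
\textbf{Theorem (Aumann, 1976).}
Let $(\Omega, p)$ be a finite probability space whose measure $p$ is a
\emph{common prior} for agents $1$ and $2$, and let each agent $i$'s
private information be given by a partition $\mathcal{P}_i$ of $\Omega$.
Fix an event $E \subseteq \Omega$, and write
$q_i(\omega) = p\bigl(E \mid \mathcal{P}_i(\omega)\bigr)$ for agent $i$'s
posterior at state $\omega$. If at some state $\omega$ the values
$q_1$ and $q_2$ are \emph{common knowledge} between the agents, then
$q_1 = q_2$: agents with identical priors cannot agree to disagree.
```

The "given identical priors" reply above points at exactly the load-bearing assumption: drop the common prior and the theorem no longer applies.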