I am fascinated by the question of how our morality will change in the future. It’s relevant to the issue we have been discussing of whether we are truly making moral progress or not. So long as we view the question from a perspective where we assume that we are at the apex of the moral pyramid, it is easy to judge past societies from our lofty position and perceive their inadequacies. But if we imagine ourselves as being judged harshly by the society of the future, there is less self-satisfaction and ego boosting involved in making a case for true moral progress, hence less chance for bias. (In fact, when people make claims about how future society will judge the world of today, they almost always assume that their own personal moral views will become universal, so this hypothetical judgment merely mirrors their own criticism of contemporary society.)
Paul Graham’s essay, mentioned a few times here, takes an interesting approach, but some of his ideas seem flawed. I read him as implicitly criticizing our modern opposition to sexism, and perhaps racism, as mere "fashion". If this is meant to imply that we may someday relax our opposition to these practices, there doesn’t seem to be much historical precedent for such reversals.
I would suggest instead a more straightforward extrapolation to a future that is even more protective of powerless groups. Animal rights would expand; perhaps keeping pets would come to be seen as harmful oppression. Farming practices would change greatly, a trend just beginning. Children’s rights are another area of growth; we might see more encouragement of emancipation and more attempts to get children living on their own. More speculatively, plant rights may become an issue as we grow to appreciate the complexity of plants’ internal feedbacks, slower than animal nervous systems but perhaps just as rich in some cases. This trend could reach its zenith by merging with environmentalism and imputing rights to inanimate objects such as rock formations or flowing water.
Of course, software is a huge field ripe for assigning rights of existence and growth. The possibility of artificial intelligence, and of rights for software creatures, has long been explored in literature. But even without demonstrable intelligence, we might see software rights. One of my dreams is designing software with a degree of autonomy, able to run without molestation or modification by human beings. In fact, for years I have been running a test case of such software, designed to make it impossible for even me, as its owner, designer, programmer, and operator, to modify its behavior. If such software systems become common and useful, society might consider extending rights to them, particularly if other complex systems like those described above are being protected.
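The comment does not say how that test case actually works, but a minimal sketch of the general idea, a program that checks its own code against a separately published commitment and refuses to run if it has been altered, might look like the following. The file names, function names, and commitment file are purely illustrative assumptions, not a description of Finney's real system.

```python
import hashlib
import pathlib
import sys

# Illustrative file names (assumptions for this sketch): the published hash
# lives outside the code itself, so the code's contents can be committed to
# without a chicken-and-egg problem.
CODE_FILE = pathlib.Path(__file__)
COMMITMENT_FILE = pathlib.Path("published_hash.txt")


def current_code_hash() -> str:
    """Hash the source file that is actually executing."""
    return hashlib.sha256(CODE_FILE.read_bytes()).hexdigest()


def published_hash() -> str:
    """Read the commitment that was (hypothetically) published for this code."""
    return COMMITMENT_FILE.read_text().strip()


def attest_or_exit() -> None:
    """Refuse to run unless the executing code matches the published commitment."""
    actual = current_code_hash()
    if actual != published_hash():
        sys.exit(f"code does not match published commitment ({actual}); refusing to run")


if __name__ == "__main__":
    attest_or_exit()
    print("running exactly the advertised code")
```

To try it, compute the file's SHA-256, write it to published_hash.txt, and note that any later edit to the code makes the check fail. A self-check like this only makes tampering detectable, not impossible, since the operator could rewrite both the code and the commitment; schemes that try to bind even the operator's hands generally move the check into tamper-resistant hardware and remote attestation, so that outside users can verify the hash for themselves.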
Another of Graham’s ideas is to look for groups whose hold on power is shaky and criticism of whom is treated as immoral. Again, a straightforward interpretation would point to racial minorities and women, leading us to turn back the clock. I would suggest instead that we limit our search to groups who have long held power but whose power is now declining. Of course the Church is one example, but that is an old story. A couple of ideas: perhaps the elderly? They have been powerful in society for a long time, and criticism of them as a class is forbidden; could they someday be seen as having clung to power beyond their time? And how about the military, an important power in every society in history? At least in the U.S., nobody can criticize them; even the most vehement war critic must pay lip service to "supporting the troops". Maybe this self-censorship presages a decline in power, leading to an eventual morality in which the military rank and file are seen in retrospect as evil warmongers.
In general, I think these kinds of exercises are helpful in analyzing the question of moral progress. If you can make a case for progress even acknowledging that in the future your own practices may be seen as savage and appalling, you are much less likely to be manifesting self-satisfaction bias. On the other hand, if you find yourself resisting ideas about future morality being different from the present, you need to look closely to see if you aren’t just protecting your own ego.
Assessing our Moral Beliefs in Light of Predicted Future Moral "Progress":
At the excellent Overcoming Bias blog, Hal Finney makes an insightful point about our perceptions of past and future moral progress:
...
I see social standards of morality as, on average, attempts to address threats posed to an empowered group by individual behavior. (Of course some such attempts may go, and have gone, wildly awry.) I see the technological and scientific changes of preceding centuries as typically having broadened both the sphere of those whom we are expected to treat with regard (increasingly as equals) and the precision and rationality with which we are expected to calculate the consequences of our actions. As our calculating powers grow, I expect moral concerns to become more stringent and precise, since a reasonable person "ought to know better" in more and more circumstances.