Suppose you are a great moral philosopher and you’ve figured out perfectly how to tell right from wrong. You have some time on your hands, and you want to use it to do good in the world. One good thing you might do would be to try to make people more moral by teaching them to be moral philosophers like you. Another good thing would be to combat one of the specific moral evils you’ve identified in your philosophizing, say drunk driving. You could achieve this by embarking on a campaign of persuasion in which you portray drunk driving as something that stupid losers do, as groups like SADD and MADD have done with what seems to be great success (it’s remarkable how fast drunk driving has gone from being cool to being powerfully uncool).
The socially optimal division of your time between moral education and manipulative persuasion will depend on a lot of things: how good you are at each activity, how many other people are doing each of them, how effective each of them is, and so on. But you may have private incentives to engage in too little moral education. The persuasion campaign is likely to have observable results, whereas you won't easily be able to see the good effects of having more moral philosophers running around. Also, the benefits of persuasion are likely to be more immediate, whereas a lot of the benefit of moral education may not be realized until you are gone from the scene.
What brought all this on is the observation that there seems to be almost none of what could be called moral education. No one buys airtime on TV and uses it to encourage people to universalize their maxims; even philosophically sophisticated advocates of good causes almost invariably go with some version of the SADD/MADD persuasion approach. It may be that the socially optimal amount of moral education is just very low, but I have a hard time believing that. I am inclined to believe that under-investment is a serious problem. If I’m right about this, then it may be a big source of bias: people have too little skill at purging bias from their moral judgments because they’ve gotten too little moral education in the first place; there aren’t that many philosophers out there, and even the ones there are don’t spend their time teaching philosophy.
I never thought of moral philosophy as "hard" before, but on Jared Diamond's continuum of "difficult/soft science" vs. "easy/hard science," it belongs at the difficult/soft end. I would place it much farther along than sociology, for example, and nearer to palm-reading or dowsing (though those at least entail falsifiable claims, even if falsifiability has had little effect on either field). It is very hard to do palm-reading or dowsing successfully, so many people concentrate their efforts elsewhere. A better example might be theology, which has often been intertwined with moral philosophy. If I told someone I had created a machine to assist people with theological calculations, I would be laughed at. I don't know what it would mean to "operationalize" a theological concept. There is never going to be a theology machine, and I am similarly confident that there will never be one for moral philosophy. That would be a great loss for those who are less adept at moral philosophy, if there were some way to demonstrate that some people were better at it than others, which I also do not believe will ever happen. Just as they currently have nothing to rely on but their own subjective impressions when deciding on the best name for their cutest-newborn-in-the-world, they will have to decide for themselves how to "do the right thing" rather than relying on the latest findings in the science of moral philosophy. If I am wrong and such a device is created, I declare myself in advance to be eating crow. I'd like to hear a time by which you think one will have been created.
Matthew, There is nothing wrong with being curious about people; it can be both fun and useful. The ax-murderer point wasn't meant as an insult. I just meant that at a certain level of misbehavior, curiosity is not likely to be your or anyone else's primary reaction. Nor, in my view, would it be a virtue if it were.
TGGP, The main point of your comment, as I see it, is that philosophy is hard. Even if you bought into the results of the dimly recalled philosopher I mentioned above, it certainly wouldn't equip you to answer every moral question. The whole project may eventually run out of rope: there may be more than one thing that counts as moral. But that doesn't mean that everything does.
As far as your machine example is concerned, here's my best shot. Whenever you sincerely ask yourself "what should I do?" you are a morality machine. The very fact that you've asked yourself the question means that you think that thinking about it will lead to an answer that's more right than the alternatives. What else is it if not that? So I guess my best answer is that the machine would do what you at least aspire to do, but hopefully better: it would try to reach a conclusion that really does follow from the axioms and the evidence. The machine may not identify a single answer, either because there is residual uncertainty (which, if resolved, would point to a single answer) or because there really is more than one choice that follows from the axioms. But that's still a whole lot better than nothing. I think I would be happy to live in a world where everyone had bought into the axioms, exhausted what moral philosophy could teach them (eliminating the objectively immoral options), and then chosen among the remaining (moral) options according to taste or custom or whatever.