Often in chess, at least among novices, one player doesn’t know that they’ve been checkmated. When the other player declares “checkmate”, this first player is surprised; that claim contradicts their intuitive impression of the board. So they have to check each of their possible moves, one by one, to see that none allow an escape.
The same thing sometimes happens in the analysis of social policy. Many people intuitively want to support policy X, and they usually want to believe that this is due to the good practical consequences of X. But if the policy is simple enough, one may be able to iterate through all the possible consequential arguments for X and find that they all fail. Or, perhaps more realistically, one may iterate through hundreds of the most promising consequential arguments that have actually been offered in public so far, find them all wanting, and find that almost all of them are repetitions, suggesting that few new arguments remain to be found.
That is, it is sometimes possible with substantial effort to say that policy X has been checkmated, at least in terms of known consequentialist supporting arguments. Yes, many social policy chess boards are big, and so it can take a lot of time and expertise to check all the moves. But sometimes a person has done that checking on policy X, and then frequently encounters others who have not so checked. Many of these others will defend X, basically randomly sampling from the many failed arguments that have been offered so far.
In chess, when someone says “checkmate”, you tend to believe them, even if you have enough doubt that you still check. But in public debates on social policy, few people accept a claim of “checkmate”, as few such debates ever go into enough depth to go through all the possibilities. Typically many people are willing to argue for X, even if they haven’t studied in great detail the many arguments for and against X, and even when they know they are arguing with someone who has. Because X just feels right. When such a supporter makes a particular argument, and is then shown how that doesn’t work, they usually just switch to another argument, and then repeat that process until the debate clock runs out. Which feels pretty frustrating to the person who has taken the time to see that X is in fact checkmated.
We need a better social process for together identifying such checkmated policies X. Perhaps a way that a person can claim such a checkmate status, be tested sufficiently thoroughly on that claim, and then win a reward if they are right, and lose a stake if they are wrong. I’d be willing to help to create such a process. Of course we could still keep policies X on our books; we’d just have to admit we don’t have good consequential arguments for them.
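The stake-and-reward process suggested above could be sketched as, say, a simple stake-and-judge contract. To be clear, everything here is my illustrative assumption, not a worked-out design: the payoff rule (stake returned plus an equal reward), the single pass/fail judgment, and all the names are made up for the sketch.

```python
# A minimal sketch of the proposed "checkmate claim" process: a claimant
# posts a stake, a panel tests the claim by searching for a surviving
# consequentialist argument, and the stake is either forfeited or
# returned with a reward. Payoff rule and names are illustrative only.

class CheckmateClaim:
    def __init__(self, policy, claimant, stake):
        self.policy = policy      # e.g. "ban on blackmail"
        self.claimant = claimant
        self.stake = stake
        self.status = "open"      # open -> upheld | refuted

    def judge(self, panel_finds_surviving_argument):
        # If the panel finds a consequentialist argument for the policy
        # that survives scrutiny, the checkmate claim is refuted and the
        # claimant forfeits the stake.
        if panel_finds_surviving_argument:
            self.status = "refuted"
            return 0
        # Otherwise the claim is upheld: stake returned plus equal reward.
        self.status = "upheld"
        return self.stake * 2

claim = CheckmateClaim("ban on blackmail", "analyst", stake=100)
payout = claim.judge(panel_finds_surviving_argument=False)
```

In this toy version the panel's verdict is a single boolean; a real process would need to define who sits on the panel, how thoroughly arguments are searched, and how "surviving scrutiny" is adjudicated.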
As an example, let me offer blackmail. I’ve posted seven times on this blog on the topic, and in one of my posts I review twenty related papers that I’d read. I’ve argued many times with people on the topic, and I consistently hear them repeat the same arguments, which all fail. So I’ll defend the claim that not only do we lack good strong consequential arguments against blackmail, but that this fact can be clearly demonstrated to smart reasonable people willing to walk through all the previously offered arguments.
To review and clarify, blackmail is a threat that you might gossip about someone on a particular topic, if they don’t do something else you want. The usual context is that you are allowed to gossip or not on this topic, and if you just mention that you know something, they are allowed to offer to compensate you to keep quiet, and you are allowed to accept that offer. You just can’t be the person who makes the first offer. In almost all other cases where you are allowed to do or not do something, at your discretion, you are allowed to make and accept offers that compensate you for one of these choices. And if a deal is legal, it rarely matters who proposes the deal. Blackmail is a puzzling exception to these general rules.
Most ancient societies simply banned salacious gossip against elites, but modern societies have deviated and allowed gossip. People today already have substantial incentives to learn embarrassing secrets about associates, in order to gain social rewards from gossiping about them to others. Most people suffer substantial harm from such gossip; it makes them wary about who they let get close to them, and induces them to conform more to social pressures regarding acceptable behaviors.
For most people, the main effect of allowing blackmail is to mildly increase the incentives to learn embarrassing secrets, and to not behave in ways that result in such secrets. This small effect makes it pretty hard to argue that for gossip incentives the social gains outweigh the losses, but that for the slightly stronger blackmail incentives the losses outweigh the gains. However, for elites these incentive increases are far stronger, making elite dislike plausibly the main consequentialist force pushing to keep blackmail illegal.
In a few recent Twitter surveys, I found that respondents declared themselves against blackmail at a 3-1 rate, evenly split between consequential and other reasons for this position. However, they said blackmail should be legal in many particular cases I asked about, depending on what exactly you sought in exchange for your keeping someone’s secret. For example, they supported 12-1 getting your own secret kept, 3-2 getting someone to treat you fairly, and 1-1 getting help with child care in a medical crisis.
These survey results are pretty hard to square with consequential justifications, as the consequential harm from blackmail should mainly depend on the secrets being kept, not on the kind of compensation gained by the blackmailer. Which suggests that non-elite opposition to blackmail is mainly because blackmailers look like they have bad motives, not because of social consequences to others. This seems supported by the observation that women who trash each other’s reputations via gossip tend to consciously believe that they are acting helpfully, out of concern for their target.
As examples of weak arguments, Tyler Cowen just offered four. First, he says even if blackmail has good consequences, given current world opinion it would look bad to legalize it. (We should typically not do the right thing if that looks bad?) Second, he says negotiating big important deals can be stressful. (Should most big deals be banned?) Third, it is bad to have social mechanisms (like gossip?) that help enforce common social norms on sex, gender and drugs, as those are mistaken. Fourth, making blackmail illegal somehow makes it easier for your immediate family to blackmail you, and that’s somehow better (both somehows are unexplained).
I’d say the fact that Tyler is pushed to such weak tortured arguments supports my checkmate claim: we don’t have good strong consequential arguments for making gossiper-initiated blackmail offers illegal, relative to making gossip illegal or allowing all offers.
Added 18Feb: Some say a law against negative gossip is unworkable. But note, not only did the Romans manage it, we now have slander/libel laws that do the same thing, except that we add the extra requirement that the gossip be false, which makes those laws harder to enforce. We can and do make laws against posting nude pictures of a person who disapproves, or stealing info, such as via hidden bugs or hacking into someone’s computer.