We choose “shoulds” over “wants” more often in far mode:
[Of] various programs, some were public policies (e.g., gas price) and some were personal plans (e.g., exercising). These programs presented a conflict between serving the “should” self and the “want” self. Participants were first asked to evaluate how much they thought they should support the program and how much they wanted to support the program. Then, they were asked to indicate how strongly they would oppose or support the program. Half of the participants were told that the program would be implemented in the distant future (e.g., in two years) and the other half were told the program would be implemented in the near future (as soon as possible). The results indicate that support for these “should” programs was greater among participants in the distant future implementation condition than among participants in the near future implementation condition. Further examination of the “gas price” policy revealed that the construal level of the policy mediated the relationship between the implementation time and the support for the policy. Participants were more likely to choose what they should do in the distant future as opposed to the near future. … [This] has an important implication: … policy-makers could increase support for “should” policies by emphasizing that the policies would go into effect in the distant future. (more)
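For readers unfamiliar with mediation analysis, here is a minimal sketch of the kind of test the quoted result implies, in the classic Baron–Kenny style. The data are simulated, and the variable names (timing, construal, support) and effect sizes are my illustrative assumptions, not the study's actual data or code:

```python
# Minimal Baron-Kenny style mediation check on simulated data.
# All variable names and effect sizes here are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
timing = rng.integers(0, 2, size=n).astype(float)  # 0 = near future, 1 = distant future
construal = 0.8 * timing + rng.normal(size=n)      # construal-level rating (mediator)
support = 0.6 * construal + 0.1 * timing + rng.normal(size=n)  # policy support

# Path c: total effect of implementation timing on support.
c_path = sm.OLS(support, sm.add_constant(timing)).fit()

# Path a: effect of timing on construal level.
a_path = sm.OLS(construal, sm.add_constant(timing)).fit()

# Paths b and c': effect of construal on support, controlling for timing.
X = sm.add_constant(np.column_stack([timing, construal]))
bc_paths = sm.OLS(support, X).fit()

print(c_path.params, a_path.params, bc_paths.params)
```

The mediation signature is that the timing coefficient shrinks once construal level enters the regression, while the construal coefficient stays substantial.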
All animals need different ways to reason about things up close vs. far away. And because humans are especially social, our forager ancestors evolved especially divergent near and far minds. Far minds could emphasize presenting an idealized image to others, while near minds could focus on managing our less visible actions. Homo hypocritus could see himself from afar, and sincerely tell himself and others that when it mattered he would do the honorable thing, even if in fact he'd probably act less honorably.
One reason this was possible was that foragers had pretty weak commitment mechanisms. Yes, they could promise future actions, but they rarely coordinated to track others’ promises and violations, or to organize consistent responses. So forager far minds could usually wax idealistic without much concern for expensive consequences.
In contrast, farmer norms and social institutions could better enforce commitments. But rather than generically enforcing all contracts, which would have given far minds more control over farmer lives, farmers were careful to enforce only a limited range of commitments. Cultural selection evolved a set of approved standard commitments that better supported a farmer way of life.
Even today, our legal systems forbid many sorts of contracts, and we generally distrust handling social relations via explicit flexible contracts, rather than via more intuitive social interactions and standard traditional commitments. We are even reluctant to use contracts to give ourselves incentives to lose weight, etc.
The usual near-far question is: what decisions do we make when in near vs. far mode? But there is also a key meta decision: which mode do we prefer to be in when making particular decisions?
Speechifiers through the ages, including policy makers today, usually talk as if they want decisions to be made in far mode. We should try to live up to our ideals, they preach, at least regarding far-away decisions. But our reluctance to use contracts to enable more far mode control over our actions suggests that while we tend to talk as if we want more far mode control, we usually act to achieve more near mode control. (Ordinary positive interest rates, where we trade more tomorrow for less today, also suggest we prefer to move resources from far into near.)
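(For concreteness, that interest-rate point is just standard discounting; the 5% rate below is purely an illustrative assumption:)

```latex
% Present value of a future amount FV at a positive interest rate r.
% E.g., at r = 0.05, $105 a year from now is worth $105/1.05 = $100 today:
% we accept less today in exchange for more tomorrow, moving resources
% from far into near.
\[
  PV = \frac{FV}{1+r}, \qquad \frac{\$105}{1.05} = \$100 \ \text{at}\ r = 0.05.
\]
```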
We thus seem to be roughly meta-consistent on our near and far minds. Not only are we designed to talk a good idealistic talk from afar while taking selfish practical actions up close, we also seem to be designed to direct our less visible actions into contexts where our near minds rule, and direct grand idealistic talk to contexts where our far minds do the talking. We talk an idealistic talk, but walk a practical walk, and try to avoid walking our talk or talking our walk.
So yes, encouraging folks to commit more to decisions ahead of time should result in actions being driven more by our more idealistic far minds. In your far mind, you might think you'd like this consequence. But when you take concrete actions, your near mind will be in more control, making you more wary of this grand idealistic plan to get more grand idealism. Our hypocritical minds are a delicate balance, an intricate compromise between conflicting near and far tendencies. Beware upsetting that balance via crude attempts to get one side to win big over the other.
Longtime readers may recall that my ex-co-blogger Eliezer Yudkowsky focuses on a scenario where a single future machine intelligence suddenly becomes super powerful and takes over the world. Considering this scenario near inevitable, he seeks ways to first endow such a machine with an immutable summary of our best ideals, so it will forevermore make what we consider good decisions. This seems to me an extreme example of hoping for a strong way to commit to gain a far-mind-ideal world. And I am wary.
Added 8a: Michael Vassar objects to my saying Eliezer Yudkowsky wants to “endow such a machine with an immutable summary of our best ideals”, since Yudkowsky is well aware of the danger of using “Ten Commandments or Three Laws.” Actually, one could argue that Yudkowsky has an air-tight argument that his proposal won’t overemphasize far over near mode, because his CEV proposal is by definition to not make any mistakes:
Coherent extrapolated volition is our choices and the actions we would collectively take if “we knew more, thought faster, were more the people we wished we were, and had grown up closer together.”
Now I hear a far mode mood in the second “wished we were” clause, but the first clause taken alone suggests a “no mistakes” definition. However, it seems to me one must add lots of quite consequential qualifying detail to a “no mistakes” vision statement to get an actual implementation. It is only in a quite far mode that one could even imagine there wouldn’t be lots of such detail. And it is such detail that I fear would be infused with excessively far mode attitudes.
This is the most brilliant post in the archives that I'd somehow missed. But it lends weight to an idea I had one day: that Robin Hanson sees Eliezer Yudkowsky as a sort of embarrassing younger brother who should be kept at a distance so as to avoid unfortunate association. If I'm right, I think that is unfair.
I really think a sufficient answer is that construal level theory is still very new and just not well known enough for that to happen, and where it is known, it's confounded by how difficult it is to get people to follow up on commitments. Cf. the loan market and its many problems.
But in any case, doesn't the credit card industry do exactly this? "Buy now, pay later!"?