Humans communicate through many channels, including words, tone of voice, body position, facial expressions, etc. An “innocent” view is that these channels say similar, compatible things; added channels mainly help us to say more things faster. A “hypocrisy” view, however, is that we say more socially-acceptable things via words, which can be more easily quoted, while via other channels we say more of the things we’d rather were not quoted, things often in conflict with our words.
These contrasting views suggest differing predictions about how we will react to new rapidly-improving techs for reading face/body/voice tones. Such techs watch and listen to the people around us, and tell us explicitly what their face/body/voice tones are saying. (Quotes from an article on such tech below.)
The innocent view suggests that we will welcome such techs as ways to help us read each other more clearly, helping especially those handicapped in reading such signals. The hypocrisy view, in contrast, suggests that we will resist and regulate such tech, to preserve familiar capacities for and habits of hypocrisy.
Many familiar regulations can be seen as attempts to preserve our habits of hypocrisy. For example, audio recording techs threatened to make our words, and our tone of voice as well, reliably quotable, making it harder to say different things in private than we say in public. So we prohibited recording people’s voices without their permission. Similarly, new techs allowing cheap video recording of police activities threaten to expose deviations between how police often behave and how we say they are supposed to behave. So we are starting to ban them. (We may have police internal affairs groups report to police chiefs for similar reasons.)
Older examples are laws against blackmail and gambling, and our reluctance to enforce most long-term promises. Blackmail threatens to punish, and thus discourage, activities we like even though we denounce them, and challenges to bet show that we like to say things we don’t believe enough to support with a bet. Most long-term promises are based on ideals we espouse but don’t actually want to act on.
I lean toward the hypocrisy view of human communication. Thus I suspect expression readers will be widely banned, especially recording or publishing their outputs, as an “invasion of privacy.” Though we may make sure the wording and/or enforcement of such laws is weak enough to allow their common use on ordinary people by firms and governments.
Anyone disagree? What odds will you give?
That article on expression reader tech:
The glasses … [have] a built-in camera linked to software that analyses … facial expressions. … But are we ready to broadcast feelings we might rather keep private? … During a face-to-face conversation, thousands of tiny indicators on a person’s face – arching the brow, puckering or parting the lips – add up to a series of non-verbal hints that augment our verbal communication. Blink, and you’ll miss them. … The average person only managed to interpret, correctly, 54 per cent of Baron-Cohen’s expressions on real, non-acted faces. … The software, by contrast, correctly identifies 64 per cent of the expressions. … They have been tuning their algorithms to pick up ever more subtle differences between expressions, such as smiles of delight and frustration. … Their algorithm does a better job of detecting the faint differences between those two smiles than people do. … It’s hard to fool the machine for long. As soon as I became engaged in the conversation, my concentration broke and my true feelings revealed themselves again. …
In addition to facial expressions, we radiate a panoply of involuntary “honest signals.” … They include body language such as gesture mirroring, and cues such as variations in the tone and pitch of the voice. … [Researchers] develop[ed] a small electronic badge that hangs around the neck. Its audio sensors record how aggressive the wearer is being, the pitch, volume and clip of their voice, and other factors. … In a 10-day experiment in 2008, Japanese and American college students were given the task of building a complex contraption while wearing the … “sociometric badge”. … being able to see their role in a group made people behave differently, and caused the group dynamics to become more even. …
[Researchers] showed that it was possible to measure heart rate without any surface contact with the body. They used software linked to an ordinary webcam to read information about heart rate, blood pressure and skin temperature based on, among other things, colour changes in the subject’s face … It’s not too much of a stretch to imagine that these sensors could combine to populate the ultimate emotion-reading device. How would the world change if we could all read each other’s social signals accurately … Picard is keen to stress that her technologies should not be used covertly, and that people should always be asked whether they wish to use them, rather than being forced to do so. Use of her gloves is by their very nature voluntary – you have to choose to wear them – but remote heart-rate monitoring does not require consent. Pentland takes a similar view on the need for privacy. Data generated by the sociometric badge should only be visible to an employee, he says, and not be shared with an employer without the employee’s consent.
I got a taste of how it can feel to have my most private thoughts exposed when I slipped on one of Picard’s Q Sensor gloves to measure my skin conductance. A purple neoprene band pressed two electrodes into the palm of my hand, measuring subtle moisture changes on my skin when my stress levels changed. I watched a trace on Picard’s screen, reminiscent of a seismogram. “OK, now just think about anything that will make your heart beat faster,” she told me. I immediately suppressed my first intrusive thought because I found it just too embarrassing – and stared in horror as the scribble nevertheless exploded into a vertical spike. “Wow,” Picard said, her eyes widening. “What was that?” I felt my face go beetroot red.
Picard considered my reaction for a second. She didn’t need a headset to know that if I aired this particular thought it might make our conversation rather awkward. “Never mind,” she said, “I don’t want to know.”
It's not an argument; it's a statement about how morals actually work.
In philosophy, it's often important to make a distinction between descriptive statements, which are about how things actually are, and prescriptive statements, which are about how things should be. When we discuss society's reasons for adopting a certain moral, this distinction becomes vital. For example, there's a huge difference between saying, "People believe stealing is immoral because they're taught so by their elders", and saying "Stealing is immoral because we're taught so by our elders".
I think there are a number of reasons police stings are not considered immoral, including: they’re considered necessary, they’re mandated by authority, and their primary purpose is to stop crime, not make a quick buck.
I don't think society's morals are determined by simple utility considerations (although they're affected by them, especially long-term).
If you think devices like tone readers will be banned, then you need to work to ensure that they are not. Likewise, any legislation that would ban video recording of the police needs to be opposed.
All it takes for bad people to take over is for good people to do nothing.