Having read a huge number of studies on "happiness research" over the past year or so, I have concluded that the data is not very good and tells us little about happiness as most of us intuitively understand it. In fact, some of the problems with the data seem so damning, and so daunting, that it has become a matter of some surprise to me that most researchers don’t see the alleged problems as damning or daunting at all, and just proceed pretty much as usual.
Now, maybe my analysis of the difficulties in measuring happiness with surveys (which I would be happy to share at some other time) is wrong. But even if I and other critics of the data are wrong, it appears that many of the best criticisms aren’t taken very seriously, even when they are duly noted. Indeed, I’ve noticed a tendency to bristle defensively at mention of problems with the data, or even at requests simply to be more precise in what it is that is being measured. "Don’t tell us we’re only really measuring dispositions to say certain things about happiness under various conditions! We don’t call it the Journal of Saying Things About Happiness Studies, now do we!" seems to be a fairly widespread attitude. And there also seems to be a willingness to cite just about anything that superficially seems to support the validity of the measurement instrument — a sign of a kind of confirmation bias.
Now, this is just my cumulative impression from reading a boatload of papers, and I’m not prepared to press the point any further, or any more specifically, with respect to happiness research, which isn’t the point of this post anyway. The general question I want to raise concerns the possible biases of social scientists when it comes to the quality of the data sets they have come to depend upon.
Here’s a plausible fictional narrative on a topic other than happiness. Let’s do it in the second person:
You take a grad course on some aspect of income inequality in which you are introduced to a certain data set with information about household income. You write a paper using this data, get a good grade, and are invited by your professor to co-author something in the same vein. You agree, your paper is published in a good journal, you develop a reputation as an expert on some corner of the inequality literature, and you are offered a decent job. You publish a few more decent journal articles and have high hopes for tenure. Now, suppose someone comes along and argues that this particular survey of household income upon which you have been relying is shot through with problems, implying that everything you have developed a reputation for having demonstrated may simply be gibberish.
What do you do?
(a) Sigh, open-mindedly dig into the claims about the data, and if they are right, reassess everything you have done?
(b) Latch on to any bit of reasoning that confirms the reliability of the data and dismiss the criticism?
(c) Fight dirty and attack the motives, credentials, etc. of the critic with anything you can lay your hands on?
My bet is that most human beings — even scientists! — will go for some combination of (b) and (c). It is probably an inevitability for humans who have written a moralizing book using their potentially debunked data source. Now, this may in fact be a necessary part of "normal science," since most researchers would go crazy if they didn’t mostly ignore and/or dismiss manifestations of the fact of the underdetermination of theory by data — especially when it comes to the auxiliary hypotheses upon which their day-to-day work depends implicitly.
My worry is that whole fields of inquiry can get stuck in bad path-dependent channels due simply to a practically sensible but epistemically irrational disposition to affirm the reliability of one’s data sources. It seems that a poor-quality, but widely accepted body of data could impede the progress of human knowledge by decades!
I’m a newcomer around here, so maybe this has been discussed at length. If so, sorry! But I wanted to raise the issue, and ask if others have thoughts about it, or if there are any good studies that address it.
Unhappy about happiness research
The always readable Cato Institute gadfly Will Wilkinson has not one, but two, long posts about the supposed evils of trying to measure happiness. I only provide brief excerpts - so read the whole thing. In the first, Effective Policy and the Measuremen...
On these happiness studies, it looks like both sides can be accused of this sort of bias. I've seen many economists in particular dismiss the entire line of inquiry outright. For example, Tyler Cowen from the popular econ blog "Marginal Revolution" wrote "It's a good thing I don't believe in that nasty happiness research". A quick glimpse at the data by country shows that the numbers are at least plausible: USA's a 7.4; China's a 6.3; India's a 5.4; Zimbabwe's a 3.3. Of course the data's not perfect, but neither is GDP as a measure of national well-being, and that's used much more frequently. But economics as we know it seems to have a methodological bias towards observing behavior, and these surveys threaten that, whereas GDP, for all its flaws, is derived from such behavior.
This could easily just be a proxy for an ideological/cultural debate between those preferring continued general economic growth and those who are nervous about that, whether for environmental reasons, social justice reasons, or lifestyle "take back your time" reasons. I know I find myself tempted to endorse the survey research whenever I find myself on the nervous side.
My background is as an engineering researcher (electromagnetics). We often dodge this problem because we can back our work up with fairly well-defined experiments, etc.: either it works or it doesn't. The social sciences don't have this luxury so much. But we also have a culture where it's more OK to be wrong, and where being unbiased is paramount. I think we should all be in the habit of stating our own biases (as I did above), and in turn should commend others for doing so and scold them when they don't. This won't cure all ills, but I think it would help.
As an example of this from politics, any time a politician is accused of "flip-flopping", she should attack back and say "you're darn right I changed my mind when I got a better understanding of the situation, and it's a big problem that you don't do the same!"
...Off-topic: Felicifia is an online utilitarianism community, which anyone can join/write for. We enjoy Overcoming Bias (and overcoming bias).