[Researchers] obtained data from the National Science Foundation on the number of researchers per capita in each state, and then randomly selected research papers that contained the phrase “test* the hypothes*”. Those papers were characterized as either confirming (positive result) or rejecting (negative result) the hypothesis. …
“Those based in US states where researchers publish more papers per capita were significantly more likely to report positive results, independently of their discipline.” In other words, as local competition increases, the fraction of papers that confirmed a hypothesis went up. The authors looked at a number of factors that could confound the effect—the total number of PhDs per capita, total publication output per state, and R&D expenditure per state—and found no correlation. …
It’s possible that the most competitive research environments produce more perceptive scientists, who are better at choosing the correct hypothesis to test. … An alternate hypothesis: researchers in competitive environments are better at presenting their results in a [positive] way that’s likely to get them published.
More here (study here). You might interpret “more papers per person” as indicating either higher personal ability or higher investment per paper. The post above gives this example:
Knocking out a gene and finding a severely altered mouse (and thereby confirming the gene’s importance) can net you a paper in a high-profile journal; knocking it out and seeing nothing can make it really difficult to publish anything.
If this kind of search through a big space of mostly false hypotheses is the typical case, then a natural interpretation is that more “able” researchers either “see truth” better (less likely) or know better how to twist their data to look positive (more likely).
On the other hand, if the typical hypothesis is a standard expected result, like “smoking causes cancer,” then a natural interpretation is that it takes more work to overturn a standard result than to confirm it. Perhaps “mainstream” researchers tend to find expected standard results, while “backwater” researchers tend more often to overturn them. This would be like how meeting talk is biased toward repeating shared info that many have, instead of exposing unique info that only one person has.
In either case this seems an endorsement of the social value of those supposedly “non-competitive” researchers.
In my own experience, "proofs" that something can't be done or is wrong are much less reliable than "proofs" that something can be done or is right. As such, I have little respect for negative papers. Given the bar a paper must clear before others attend to it at all, I would expect a shift toward positive results. We have limited time and attention, and we may believe that learning more things that are true and that work is a more valuable use of our limited neuronal calories.
So while we can tell ourselves and our colleagues that there is no such thing as a failed experiment, we rightly strive, for ourselves and for them, to find areas where our results will be positive rather than negative.
Vladimir, I think you've raised some good points. In my own work, which is not for an academic institution, I have virtually no incentive to publish my failures. I am not evaluated on the basis of how many publications I produce, but rather on whether the science/technology that I produce or promote appears to solve some customer's (or potential customer's) need and (especially) whether it makes money for my employer. Along those lines, I have an incentive to publish my successes, since I can use them to strengthen my claims to expertise, as well as to raise the status of my company. Maybe, if peer review (as you've pointed out) doesn't always work so well, it is because the peer reviewers are not being asked to put their money where their mouths are!

This suggests to me a (Robin Hanson-like?) concept: what if peer reviewers had to invest in some tangible manner in the authors and/or papers that they recommended for publication? Papers with results that stood the test of time (yes, I don't know how this would be judged, but perhaps by positive-citation frequency?) would yield positive returns to the peer reviewers who recommended them, but negative returns if a published paper was later found to be mostly useless drivel.
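To make the reviewer-stake idea above a bit more concrete, here is a minimal sketch of one way such a payoff could be scored. Everything in it is my own illustration, not anything proposed in the post or the study: the `reviewer_return` function, the stake amounts, and the rule of comparing a paper's citations to a field-median baseline are all hypothetical assumptions.

```python
# Hypothetical sketch: a reviewer stakes some amount on each paper they
# recommend, and years later the stake pays off (or not) depending on how
# the paper's citation count compares to a field baseline.

def reviewer_return(stake: float, citations: int, field_median: int) -> float:
    """Payoff to a reviewer on one recommended paper.

    Beating the field's median citation count returns a bonus proportional
    to how far the paper beat it (capped at the stake); falling short costs
    the reviewer a proportional share of the stake.
    """
    if field_median <= 0:
        return 0.0  # no baseline to compare against
    ratio = citations / field_median
    if ratio >= 1.0:
        return stake * min(ratio - 1.0, 1.0)   # capped upside
    return -stake * (1.0 - ratio)              # downside for weak papers


if __name__ == "__main__":
    # A reviewer stakes 100 units on two papers; five years on, one has
    # 40 citations against a field median of 20, the other has 5.
    print(reviewer_return(100, citations=40, field_median=20))  # +100.0
    print(reviewer_return(100, citations=5, field_median=20))   # -75.0
```

Whether citation counts are the right settlement criterion is, as the comment notes, an open question; the sketch only shows that the payoff rule itself is easy to state.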