The May 17 issue of Nature suggests that papers in less prestigious journals are less likely to be checked by readers for errors:
Murat Cokol and his colleagues at … Columbia … identified 596 retracted articles – flagged as such by PubMed – and found … Journals with high impact factors retract more papers, and low-impact journals are more likely not to retract them, the study finds. It also suggests that high- and low-impact journals differ little in detecting flawed articles before they are published. … Cokol argues that the larger number of retractions in high-impact journals reflects the fact that they receive more scrutiny.
Not surprising, but pregnant with implications.
It would cost me $30 plus time to scrutinize Cokol's article. It would be interesting to know whether open access has any correlation with retraction rates independent of impact factor.
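For what it's worth, that "independent of impact factor" question amounts to a partial correlation: correlate an open-access indicator with retraction rate after removing the linear effect of impact factor. Here is a minimal sketch in Python; the journal figures are entirely made up for illustration, not drawn from Cokol's data:

```python
import math

# Hypothetical journal-level data (invented for illustration only):
# open-access flag (1/0), impact factor, retractions per 10,000 papers.
open_access = [1, 0, 1, 0, 1, 0, 1, 0]
impact      = [2.1, 9.8, 1.4, 30.0, 3.5, 12.2, 0.9, 25.1]
retract     = [0.2, 1.1, 0.1, 3.0, 0.4, 1.5, 0.1, 2.4]

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """Correlation of x and y with the linear effect of z removed."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

r = partial_corr(open_access, retract, impact)
```

With real data one would also want a significance test and probably a regression with more covariates, but the partial correlation is the core of the question.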
Alex Tsakiris, a friend of mine, just kicked off a project called Open Source Science. I've taken on a subsection entitled Open Source Peer Review.
The idea of open source science and open source peer review is to give anyone with relevant information and something useful to say the ability to contribute ideas, suggestions, or criticism with essentially no cost and no delay. We are using wiki software to enable this.
There is a saying from the open source software community that "given enough eyeballs, all bugs are shallow". This same principle should apply just as well to open source peer review.
Alex's website is focusing on parapsychology research for now, but the same principles would apply to any area of scientific investigation. By lowering the cost of contributing and collaborating, and by increasing exposure, I suspect this will markedly improve the quality of research, at least of research that anybody actually gives a damn about.