This morning I saw a new blog post by Peter Coles dealing with a topic I have written about before: the problems with the current system of scientific peer review. He too has strong opinions on the subject, which he has expressed several times on his blog. For the most part, I think our shared disgruntlement with the system stems from the same source — academic publishers trumpeting Peer Review (capitals intentional) as some sort of absolute certificate of quality that only they are able to provide, and provide at an exorbitant cost at that, whereas in fact not only can it probably be organised far more cheaply, but the current system isn't all that hot at sorting the wheat from the chaff anyway.
Coles takes a particular example of a paper recently published in the Astrophysical Journal to illustrate his point. The authors make a strong claim — the sort of strong claim that requires watertight evidence to back it up — and yet a key figure in the paper relies on interpreting a curve, supposedly a cubic spline interpolation of the data, that passes through only two displayed data points. Where are the other data points? Presumably they exist, but they aren't shown in the figure. At the very least this is sloppy and unhelpful to anyone trying to reproduce the results, and the referee should have asked for a modification. But referees are often sloppy and distracted themselves; when there is only one referee to pass judgement on the merits of a paper, the outcome of Peer Review should really be taken with a giant pinch of salt.
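To see why a curve through only two visible points is so uninformative, here is a quick numerical sketch (entirely my own toy numbers, nothing to do with the actual paper): a cubic has four free coefficients, so pinning it down at just two points leaves a two-parameter family of curves that all "fit" those points equally well.

```python
import numpy as np

# Two fixed points, standing in for the two visible in the figure
p0, p1 = (0.0, 1.0), (10.0, 2.0)

# A cubic a*x^3 + b*x^2 + c*x + d has four coefficients; two point
# constraints leave a two-parameter family. Pick a and b freely, then
# solve the remaining 2x2 linear system for c and d.
def cubic_through(a, b):
    A = np.array([[p0[0], 1.0], [p1[0], 1.0]])
    rhs = np.array([p0[1] - a * p0[0]**3 - b * p0[0]**2,
                    p1[1] - a * p1[0]**3 - b * p1[0]**2])
    c, d = np.linalg.solve(A, rhs)
    return np.poly1d([a, b, c, d])

f = cubic_through(0.0, 0.0)   # degenerates to the straight line
g = cubic_through(0.1, -1.0)  # a wildly different cubic

# Both pass through (0, 1) and (10, 2), yet f(5) = 1.5 while g(5) = -11.0
print(f(5.0), g(5.0))
```

Without the intermediate data points there is simply no way for a reader (or a referee) to tell which member of this family the plotted curve represents.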
(I should point out that I have no idea whether this paper was reviewed by only one referee; most of the journals I am familiar with tend to run a default single-referee policy, but the Astrophysical Journal may be different. However, I have noticed several papers accepted to the Astrophysical Journal recently that in my opinion suffer from similar sloppy faults, one of which I blogged about here.)
Anyway, if the system of peer review is faulty in this manner, what should be done about it? I argued that at a minimum, journals should publish the correspondence between the referee and the editor, and the authors' responses to criticisms. This would help to highlight the cases of sloppy negative reviews, such as the instances, pointed out in a comment by Sarah, of referees simply requesting changes that are already included in the manuscript (sadly this happens quite a lot, in my experience). It would put some pressure on journal editors to improve the standard of refereeing in order to protect reputations, and make it obvious when a paper has only been reviewed by one individual. It wouldn't do much, however, to address sloppy positive reviews, which lead to the acceptance of papers of the sort that Coles is complaining about today.
Coles' own solution is to do away with Peer Review by one or two individuals altogether, and instead have a sort of community rating system on the arXiv itself, along the lines of reddit:
I'm not saying the arXiv is perfect but, unlike traditional journals, it is, in my field anyway, indispensable. A little more investment, adding a comment facility or a rating system along the lines of, e.g., reddit, and it would be better than anything we get from academic publishers at a fraction of the cost. Reddit, in case you don't know the site, allows readers to vote articles up or down according to their reaction to them. Restrict voting to registered users only and you have the core of a peer review system that involves an entire community rather than relying on the whim of one or two referees. Citations provide another measure in the longer term. Nowadays astronomical papers attract citations on the arXiv even before they appear in journals, but it still takes time for new research to incorporate older ideas.

I agree with the sentiment, but in the interest of fairness I should point out a couple of objections. Firstly, clearly only some papers will get reviewed this way. Many perfectly correct papers will be ignored simply because they are not on currently fashionable topics. Although writing papers that are of current interest to your fellow scientists is of some importance, it shouldn't be the sole criterion for judging merit!
Secondly, simple votes up or down are not particularly useful to a reader: if I am to trust the judgement of the people voting on the quality of a paper, I'd really need to read a justification from them of why they voted in a particular way. This would amount to a mini-review of the paper and its strong (or weak) points. Such reviews would need to be provided anonymously — by the registered users of the site — in order to ensure that people felt secure enough to express honest opinions. Non-anonymous review systems would, I think, inevitably dwindle into relative irrelevance, as Cosmo Coffee has done, but completely anonymous systems would probably develop into flame wars in the classic tradition of the internet (at least with the current system the identity of the referee is known to the journal editor, if no one else!).
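For what it's worth, the compromise I have in mind (votes requiring a written justification, identities logged but hidden from readers) is easy enough to sketch in code. This is a toy illustration of my own, not a description of any existing system:

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer: str       # registered username, hidden from readers
    score: int          # +1 or -1
    justification: str  # the required mini-review

@dataclass
class Paper:
    arxiv_id: str
    reviews: list = field(default_factory=list)

    def add_review(self, reviewer, score, justification):
        # A bare vote is not enough: demand a substantive justification
        if score not in (+1, -1):
            raise ValueError("score must be +1 or -1")
        if len(justification.split()) < 10:
            raise ValueError("a vote needs a substantive justification")
        self.reviews.append(Review(reviewer, score, justification))

    def public_view(self):
        # Readers see scores and justifications, never usernames;
        # identities remain on record for the site's moderators
        return [(r.score, r.justification) for r in self.reviews]

    def rating(self):
        return sum(r.score for r in self.reviews)
```

The point of the sketch is simply that "anonymous to readers, known to the editor" is a property you can build in deliberately, rather than an accident of the current journal system.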
I'd like to hear further comments from interested and informed readers. Perhaps a perfect system isn't possible, but I'm sure there are improvements which can be made to the current model. And even if that isn't the case, I think the moral you should draw from the story is that not all that is published is gold, not all those who aren't are lost. Or something more poetically phrased than that!