September 15, 2017
David Steinberg
The principle of peer review is central to the evaluation of research, as it is meant to ensure that only high-quality work is funded or published. But peer review has also drawn criticism, since the selection of reviewers may introduce biases into the system. In 2014, the organizers of the Neural Information Processing Systems (NIPS) conference conducted an experiment in which 10% of submitted manuscripts (166 items) went through the review process twice. Arbitrariness was measured as the conditional probability that an accepted submission would be rejected if examined by the second committee. This probability was 60%, for a total acceptance rate of 22.5%. The paper presents a Bayesian analysis of these two numbers, introducing a hidden parameter that measures the probability that a submission meets basic quality criteria. These criteria typically include novelty, clarity, reproducibility, correctness and the absence of misconduct, and are met by a large proportion of submitted items. The Bayesian estimate of the hidden parameter was 56% (95% CI: 0.34–0.83) and has a clear interpretation. The result suggests that the total acceptance rate should be increased in order to lower arbitrariness in future review processes.
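To make the idea concrete, here is a minimal sketch of how such a hidden-parameter analysis could be carried out, not the model used in the paper itself. It assumes that each submission meets the basic quality criteria with probability theta (the hidden parameter), that sub-standard submissions are always rejected, and that each committee independently accepts a qualifying submission with probability p. The counts (37 accepted out of 166, 22 of those overturned) are illustrative, reconstructed from the reported 22.5% acceptance rate and 60% disagreement rate.

```python
# Illustrative Bayesian analysis of the NIPS experiment (grid approximation).
# Model and counts are assumptions made for this sketch, not the paper's model.
import numpy as np

n_papers = 166      # papers reviewed by both committees (reported)
n_accepted = 37     # ~22.5% of 166 accepted by the first committee (illustrative)
n_overturned = 22   # ~60% of those rejected by the second committee (illustrative)

# Uniform priors on theta (probability of meeting quality criteria)
# and p (probability a committee accepts a qualifying submission).
theta = np.linspace(0.001, 0.999, 500)
p = np.linspace(0.001, 0.999, 500)
T, P = np.meshgrid(theta, p, indexing="ij")

# Log-likelihood under the sketch model:
#   acceptances by committee 1        ~ Binomial(n_papers, theta * p)
#   rejections by committee 2, among
#   papers accepted by committee 1    ~ Binomial(n_accepted, 1 - p)
log_lik = (n_accepted * np.log(T * P)
           + (n_papers - n_accepted) * np.log(1 - T * P)
           + n_overturned * np.log(1 - P)
           + (n_accepted - n_overturned) * np.log(P))

posterior = np.exp(log_lik - log_lik.max())
posterior /= posterior.sum()

# Marginal posterior for the hidden quality parameter theta.
theta_marginal = posterior.sum(axis=1)
theta_mean = np.sum(theta * theta_marginal)
cdf = np.cumsum(theta_marginal)
ci_low, ci_high = theta[np.searchsorted(cdf, [0.025, 0.975])]
print(f"posterior mean of theta: {theta_mean:.2f}, 95% CI: ({ci_low:.2f}, {ci_high:.2f})")
```

With these assumptions, the posterior mean of theta should land in the neighbourhood of the 56% reported in the paper, though the exact figure depends on the prior and on the reconstructed counts.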
Read the paper:
Arbitrariness of peer review: A Bayesian analysis of the NIPS experiment. Olivier François.