Development of the Review Quality Instrument (RQI) for Assessing Peer Reviews of Manuscripts

https://doi.org/10.1016/S0895-4356(99)00047-5

Abstract

Research on the value of peer review is limited by the lack of a validated instrument to measure the quality of reviews. The aim of this study was to develop a simple, reliable, and valid scale that could be used in studies of peer review. A Review Quality Instrument (RQI) that assesses the extent to which a reviewer has commented on five aspects of a manuscript (importance of the research question, originality of the paper, strengths and weaknesses of the method, presentation, interpretation of results) and on two aspects of the review (constructiveness and substantiation of comments) was devised and tested. Its internal consistency was high (Cronbach’s alpha 0.84). The mean total score (based on the seven items, each scored on a 5-point Likert scale from 1 to 5) had good test-retest (weighted kappa, Kw = 1.00) and inter-rater (Kw = 0.83) reliability. There was no evidence of floor or ceiling effects, construct validity was evident, and the respondent burden was acceptable (2–10 minutes). Although improvements to the RQI should be pursued, the instrument can be recommended for use in the study of peer review.
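To make the scoring described above concrete, the following is a minimal Python sketch, not code from the paper: it computes the mean total score for one review from seven hypothetical item ratings, and a weighted-kappa agreement check between two raters over ten hypothetical reviews. The item values are invented, and the quadratic weighting scheme is an assumption; this excerpt does not state which weights were used.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores for one review on the seven RQI items (each 1-5).
items = np.array([4, 3, 5, 4, 4, 3, 4])
mean_total_score = items.mean()  # the summary score described in the abstract

# Agreement between two raters over ten hypothetical reviews, using
# Cohen's weighted kappa (Cohen, 1968). Quadratic weights are an
# assumption; the excerpt does not state the weighting scheme.
rater_a = [3, 4, 2, 5, 4, 3, 4, 2, 5, 3]
rater_b = [3, 4, 3, 5, 4, 3, 4, 2, 4, 3]
kw = cohen_kappa_score(rater_a, rater_b, weights="quadratic")

print(f"mean total score = {mean_total_score:.2f}, weighted kappa = {kw:.2f}")
```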

Introduction

The use of peers to review manuscripts submitted to biomedical journals for publication is well established. Indeed, peer review is seen as an essential requirement for any journal that wants to be regarded as scientifically sound and rigorous. Peer reviewers are seen as contributing in two ways: assisting editorial decisions on whether to accept a paper, and helping to improve the quality of papers.

Research on the benefits of peer review, the relative merits of different methods of peer review, and the effectiveness of methods for improving peer review has been limited by the lack of a validated instrument for assessing the quality of a review. One such instrument has been developed and used, but little information on its acceptability, reliability, validity, and responsiveness has been published [1]. Our aim was to develop and assess the psychometric properties of a new instrument [2, 3].

Section snippets

Version 1

The first version of the instrument was based on the one used in an earlier study [1] but modified by drawing on the experiences and opinions of members of a consensus development group of four researchers and three editors. The instrument consisted of eight items, each scored on a 5-point Likert scale (1 = poor, 5 = excellent). Each of the first seven items reflected a different aspect of the review (importance of the research question, originality of the paper, strengths and weaknesses of the …
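As a concrete picture of the instrument, here is a minimal Python sketch of Version 1 as data. The item wording is abbreviated from the abstract, not quoted from the full paper, and the eighth item, whose wording is cut off in this excerpt, is deliberately left as a placeholder rather than reconstructed.

```python
# Likert anchors stated in the text: 1 = poor, 5 = excellent.
LIKERT_ANCHORS = {1: "poor", 5: "excellent"}

# The first seven items, abbreviated from the abstract; the eighth item's
# wording is truncated in this excerpt, so it is left unnamed here.
VERSION_1_ITEMS = [
    "importance of the research question",
    "originality of the paper",
    "strengths and weaknesses of the method",
    "presentation",
    "interpretation of results",
    "constructiveness of comments",
    "substantiation of comments",
    None,  # eighth item: wording truncated in this excerpt
]
```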

Internal Consistency

The internal consistency was high (Cronbach’s alpha 0.84); across the 11 raters it ranged from 0.65 to 0.91. Removing the item on originality or the item on importance yielded small improvements (alpha 0.87 in each case; Table 1). However, given the importance placed on these attributes by the editors carrying out the ratings (content validity), they were retained. In addition, given the diverse aspects of the quality of a review that are being considered, it is plausible that differences between items …
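For readers unfamiliar with the statistics used here, the following is a minimal Python sketch of the standard Cronbach’s alpha formula and the alpha-if-item-deleted analysis behind Table 1. The demo data are randomly generated and purely illustrative; this is not the study’s data or code.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_reviews, n_items) array of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # per-item variance
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total score
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

def alpha_if_item_deleted(scores: np.ndarray) -> list[float]:
    """Alpha recomputed with each item dropped in turn (cf. Table 1)."""
    k = np.asarray(scores).shape[1]
    return [cronbach_alpha(np.delete(scores, i, axis=1)) for i in range(k)]

# Illustrative data only: 6 reviews x 7 items, each item scored 1-5.
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(6, 7))
print(cronbach_alpha(demo))
print(alpha_if_item_deleted(demo))
```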

Discussion

For the first time, a psychometrically sound instrument for rating the quality of peer reviews of research manuscripts is available. Experts (editors) judged its face and content validity to be acceptable. Field testing demonstrated that its internal consistency and construct validity were satisfactory, as were the test–retest and inter-rater reliability of the mean total score. The distribution of mean total scores showed no evidence of floor or ceiling effects, and the respondent burden was …

Acknowledgements

We thank Joanne Griffiths, Tom Jefferson, Mike Launer, David McNamee, Kathy Rowan, and Alison Tonks for help with devising and testing Version 1; Donna Lamping and Chris McManus for advice on the validation process; BMJ editors Tony Delamothe, Luisa Dillner, Sandra Goldbeck-Wood, Trish Groves, John Rees, Tessa Richards, Roger Robinson, Jane Smith, Richard Smith, Tony Smith and Alison Tonks for rating reviews; Stephen Evans for statistical advice; and the NHSE North Thames Regional Office, …

References (14)

  • D. Moher et al., Assessing the quality of randomized controlled trials: an annotated bibliography of scales and checklists, Control Clin Trials (1995)
  • R.A. McNutt et al., The effects of blinding on the quality of peer review, JAMA (1990)
  • D.L. Streiner et al., Health Measurement Scales: A Practical Guide to Their Development and Use (1989)
  • I. McDowell et al., Development standards for health measures, J Health Serv Res Policy (1996)
  • L.J. Cronbach, Essentials of Psychological Testing (1960)
  • J. Cohen, Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit, Psychol Bull (1968)
  • M. Friedman, A comparison of alternative tests of significance for the problem of m rankings, Ann Math Stat (1940)
There are more references available in the full text version of this article.
