Research Methods & Reporting

Ten steps towards improving prognosis research

BMJ 2009; 339 doi: https://doi.org/10.1136/bmj.b4184 (Published 31 December 2009) Cite this as: BMJ 2009;339:b4184
  1. Harry Hemingway, professor of clinical epidemiology1,
  2. Richard D Riley, senior lecturer in medical statistics2,
  3. Douglas G Altman, professor3
  1. University College London, London WC1 6BT
  2. Department of Public Health, Epidemiology and Biostatistics, University of Birmingham, Birmingham B15 2TT
  3. Centre for Statistics in Medicine, Oxford
  Correspondence to: h.hemingway{at}ucl.ac.uk
  • Accepted 21 August 2009

Prognosis research should be a basic science in translational medicine, but methodological problems mean systematic reviews are unable to reach firm conclusions. Harry Hemingway and colleagues recommend action to improve the quality

Stemming the tide of low quality, low impact prognosis research is an urgent priority for the medical and research community. Diverting currently wasted research resources into high quality prognosis research will require major changes, one of which is ending the implicit collusion between researchers, medical journal editors, and conference organisers: “If you agree to inflate the importance of your research, we will agree to showcase it.” We outline the challenges facing prognosis research, and possible next steps, drawing on recent evidence from different clinical specialties and study designs.

Problems with prognosis research

Prognosis research has been defined as the study of relations between occurrences of outcomes and predictors in defined populations of people with disease.1 It encompasses (ideally) prospective, observational research evaluating three broad questions—causes of disease progression, prediction of risk in individuals, and individual response to treatment. High quality prognosis research results in better understanding of disease progression, offers improved opportunities for mitigating that progression, and allows more reliable communication of outcome risk to patients.1 2 Prognosis research should be a basic science in translational medicine.

Analysing 168 reports, Malats and colleagues concluded that “after 10 years of research [including over 10 000 patients], evidence is not sufficient to conclude whether changes in P53 act as markers of outcome in patients with bladder cancer.”3 This is not an isolated example. Such concerns have been identified in systematic reviews of different types of prognostic biomarkers4 5 and across different clinical specialties and major global burdens of disease including cancer,6 coronary disease,7 stroke,8 trauma,9 and musculoskeletal disorders.10 11 Although some systematic reviews and meta-analyses of prognostic studies do reach clear conclusions, not all pay attention to the quality of the primary studies.12 13

It is inconceivable that 168 randomised controlled trials could fail to reach an answer on the effectiveness of an intervention. Why does the scientific community generate, and apparently tolerate, prognosis research with such limitations? Here we identify 10 areas where specific actions (table) might make investments in prognosis research more effective (in terms of generating reliable new knowledge with benefits for patient outcomes) and more efficient (less redundant or misleading research).

Table 1 Ten challenges facing prognostic research

Purpose

In the absence of an accepted classification used across clinical specialties, we need to clarify the goals of prognosis research and thereby provide a framework with which to assess progress. Standard nomenclature is urgently needed. Broadly, three aims can be recognised:

  • Identification of single biomarkers that have independent associations with outcome (relating to the causal pathway)

  • Development of multivariable risk prediction models (risk score or prognostic index) that predict an individual’s outcome, and

  • Investigation of factors that influence an individual’s response to treatment (relating to individualised or stratified medicine)

Such a taxonomy could identify different goals at different stages in the translation of emerging putative prognostic biomarkers from the laboratory to the bedside. For example, early prognostic studies may aim at discovering possible prognostic biomarkers and will tolerate false positive results; later studies may evaluate the probability that such biomarkers are useful (in risk prediction models) and seek to minimise false positive results.14 Existing systematic reviews of prognostic biomarkers suggest that current prognosis research concentrates on the first goal. For example, in a systematic review of the prognostic value of C reactive protein in stable coronary disease,7 only three of the 77 studies reported a measure of its ability to discriminate risk in individuals.
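
To make this distinction concrete, the sketch below (in Python, using entirely hypothetical data and making no claim to reflect any of the studies cited) computes one common measure of discrimination, Harrell’s concordance (C) statistic: the probability that, of two comparable patients, the one who experienced the outcome earlier was also assigned the higher predicted risk.

```python
import numpy as np

def harrell_c(time, event, risk):
    """Harrell's C statistic for right-censored survival data.

    A pair of patients is usable when the one with the shorter observed
    time actually had the event; the pair is concordant when that patient
    was also given the higher predicted risk. Ties in risk count as 0.5.
    """
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant = usable = 0.0
    for i in range(len(time)):
        if not event[i]:
            continue                      # censored patients cannot anchor a pair
        for j in range(len(time)):
            if time[i] < time[j]:         # patient i failed first: pair is usable
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

# Hypothetical follow-up times (years), event indicators, and predicted risks
time  = [1.2, 3.4, 0.8, 5.0, 2.1, 4.2]
event = [1,   0,   1,   0,   1,   1]
risk  = [0.70, 0.20, 0.85, 0.10, 0.40, 0.55]
print(f"C statistic = {harrell_c(time, event, risk):.2f}")
```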

A greater appreciation of the distinction between the three goals is required. For instance, it is wrong to assume that a biomarker that is (causally) related to incidence of disease (aetiology) is necessarily (causally) related to progression (prognosis). For example, body mass index is associated in aetiological studies with onset of coronary disease but not with subsequent fatal and non-fatal events among people with coronary disease.15 Risk prediction models are easy to produce, hard to validate,16 and harder still to implement in clinical practice. And, thus far, evidence of impact on decision making or prognosis is nearly always lacking.17 The next generation of such models needs to tackle these problems.
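
As an illustration of what external validation involves, the following sketch uses simulated development and validation cohorts (the data, predictors, and model are hypothetical, not drawn from any study discussed here): a simple logistic prediction model is fitted in one cohort and its discrimination and calibration-in-the-large are then assessed in a separate cohort never used in model development.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated development and external validation cohorts (three predictors)
rng = np.random.default_rng(0)
X_dev, X_val = rng.normal(size=(500, 3)), rng.normal(size=(300, 3))
true_beta = np.array([0.8, 0.5, 0.0])
y_dev = rng.binomial(1, 1 / (1 + np.exp(-X_dev @ true_beta)))
y_val = rng.binomial(1, 1 / (1 + np.exp(-X_val @ true_beta)))

# Develop the model, then evaluate it on data never used in its development
model = LogisticRegression(C=1e6).fit(X_dev, y_dev)   # very weak penalty, close to an unpenalised fit
p_val = model.predict_proba(X_val)[:, 1]

print(f"External discrimination (AUC): {roc_auc_score(y_val, p_val):.3f}")
print(f"Calibration-in-the-large (observed minus expected risk): "
      f"{y_val.mean() - p_val.mean():+.3f}")
```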

Funding

Prognosis research has attracted much less funding than diagnostic and therapeutic research. As a crude marker of this, a search of the website of the US National Institutes of Health, globally one of the largest funding bodies, returns about 132 000 hits for the term “diagnostic,” 76 000 for “trial,” and only 4000 for “prognostic.” Indeed, one reason for the large number of small, poor quality prognostic studies may be that many are conducted without peer reviewed external sources of funding. A “what’s in the freezer?” approach has been too common,18 in which the investigator apparently asks: given the data we already have, what abstract can be produced to allow a junior colleague to present at a conference? For example, in cancer biomarker studies, Kyzas and colleagues suggest that “investigators may tend to conduct opportunistic studies on the basis of specimen availability rather than on thoughtful design.”19 Such an approach perpetuates poor quality research.

Funders need a strategic framework to guide investment across complementary study designs. This will enable them to judge when it is best to set up bespoke investigator-led prognostic cohorts, to add biomarker or other measurements to existing clinical cohort collections (including registry data), to exploit linkages between different electronic health records (such as in primary care and disease registries), or to stimulate meta-analysis of data on individual participants.20

Protocols

All research on humans should have a protocol,21 22 yet many current prognosis research studies seem not to be protocol driven. Most prognostic studies are retrospective in the sense that the investigator decides which analyses to do after the data have been collected. Just four of the 77 studies in the C reactive protein systematic review referred to a previously written study protocol.7 Thus the reader does not know whether the analyses were part of the rationale for entering patients into the study or were prespecified in a statistical analysis plan, and there is a large potential for selective and biased reporting. It should become mandatory for prognosis research to have a registered study protocol outlining the aims and detailing the methods of data collection and statistical analysis that will be used.23 Study registration and publication of analytical and study protocols should also help improve the quality of studies.

Predictors

Given the wide range of factors that may influence prognosis—the social and healthcare environment, psychosocial factors, health behaviours, and biological factors—why is the focus of prognosis research so uneven? The “mile wide, inch deep” focus on circulating biomarkers is illustrated in a systematic review of 130 different factors in neuroblastoma in which the median number of publications per biomarker was 1 (fig 1).24 By contrast, the prognostic importance of history, examination, and simple investigations has been relatively neglected.25 For example, whereas meta-analyses have examined the relation between alcohol consumption in initially healthy populations and subsequent death from coronary disease,26 there has been little research into the relation between alcohol and prognosis among people with cardiovascular disease,27 and no meta-analyses. This is a clinically important question because doctors need evidence on which to base advice to patients and a framework to evaluate new prognostic biomarkers in the context of existing knowledge. There is a need for clarity over the strength of evidence required for prognostic biomarkers to be considered “established” or “useful.”

Fig 1 Mile wide inch deep focus of research shown by systematic review of studies of genetic and other circulating biomarkers for recurrence or death from neuroblastoma.24 130 different biomarkers were studied with a median of one study per marker

Outcomes

Most prognosis research in cancer and cardiovascular disease fails to report outcomes such as symptom burden, functional status, and quality of life. Mortality may not be the most important outcome to the patient, nor is it necessarily a good proxy for other outcomes. Patient values are a constituent, not a contingent, property of a full understanding of prognosis. Assessments of the impact of a particular disease on a patient’s life vary widely among patients, and are commonly discordant with the severity assessed by doctors.28

As most prognostic studies examine multiple outcomes, selective reporting, where only those outcomes found to be statistically or clinically significant are reported, is a concern. Selective reporting is a problem in cancer prognostic studies,29 but is likely to be prevalent in other fields too. This problem underscores the need for study and protocol registration, with pre-specification of the primary outcomes of interest.

Methods

Prognosis research must catch up with the standards of high quality randomised trials or observational aetiological research, in terms of design, conduct, analysis, and reporting.20 Many studies are simply too small to provide reliable evidence—for example, a meta-analysis of 47 studies among patients with Barrett’s oesophagus reported a total of just 209 incident cases of oesophageal cancer (fig 2).30 Prognosis research needs to be seen as a distinct field in order to foster scientifically justified, rather than idiosyncratic, methods. For example, in cancer research continuous biomarkers are almost always dichotomised, whereas in cardiovascular research this is much less common.
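
The cost of dichotomisation can be shown in a few lines of simulated data (a sketch only, not a reanalysis of any published study): splitting a continuous biomarker at its median discards the information carried by its full range and lowers discrimination.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Simulate a continuous biomarker that is moderately related to a binary outcome
rng = np.random.default_rng(1)
marker = rng.normal(size=2000)
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * marker))))

# Compare discrimination of the continuous marker with a median split
auc_continuous   = roc_auc_score(outcome, marker)
auc_dichotomised = roc_auc_score(outcome, (marker > np.median(marker)).astype(int))

print(f"AUC, continuous marker:   {auc_continuous:.3f}")
print(f"AUC, median-split marker: {auc_dichotomised:.3f}")
```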

Fig 2 Systematic review of 47 studies investigating incidence of oesophageal cancer in patients with Barrett’s oesophagus found that most were too small to provide reliable evidence.30 Larger studies tended to show lower incidence of cancer

Hayden and colleagues list six stages in the design, conduct, and analysis of prognosis studies where bias may operate,31 but most primary studies inadequately protect against these threats to validity.10

Publication

A prudent default position would be to assume that prognosis research is seriously afflicted by publication bias, until there is evidence to the contrary. Evaluating 1575 articles on different prognostic biomarkers for cancer, Kyzas and colleagues found that almost all reported statistically significant results,6 signalling a major problem of publication bias. The C reactive protein systematic review found that publication bias was so large that different methods to adjust for its effect either substantially attenuated, or abolished, the apparent association between C reactive protein and outcome.7 Study and protocol registration would help with this problem because it would make it easier to identify unpublished studies.
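
One commonly used check for such small-study effects is Egger’s regression test, sketched below with made-up study estimates (the numbers are illustrative only, and this is not the specific adjustment method used in the review cited above): the standardised effect is regressed on precision, and an intercept well away from zero indicates funnel-plot asymmetry consistent with publication bias.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study log hazard ratios and their standard errors
log_hr = np.array([0.45, 0.60, 0.30, 0.75, 0.50, 0.20, 0.65])
se     = np.array([0.30, 0.25, 0.15, 0.35, 0.20, 0.10, 0.28])

# Egger's test: regress standardised effect (effect / SE) on precision (1 / SE);
# the intercept estimates the degree of funnel-plot asymmetry
fit = stats.linregress(1 / se, log_hr / se)
t_stat = fit.intercept / fit.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=len(se) - 2)
print(f"Egger intercept = {fit.intercept:.2f} (p = {p_value:.3f})")
```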

Reporting

Authors of prognosis research articles often omit key details, outcomes, and analyses and inflate the importance of their findings.32 Currently there are no generic reporting guidelines for prognosis research, which means that journal editors, peer reviewers, authors, and readers do not have a framework for distinguishing reliable observations from the merely new. An important start has been made by the REMARK guidelines for biomarkers in cancer,33 though lack of adherence to these guidelines has recently been noted.34 Prognostic studies share many methodological features with healthy population studies, but require reporting of additional items, such as the initial medical condition, its stage, and the duration since onset; the translational clinical question examined; absolute risks; and clinical outcomes, which are more varied than the singular end points used in aetiological studies.

Importantly, there are currently no reporting standards for risk prediction scores,9 nor any central register where clinicians and researchers can access and compare these rapidly expanding technologies.35 We propose that reporting guidelines be developed that span the scope of prognosis research (perhaps using REMARK as a starting point). As a related but distinct exercise, a checklist of quality criteria should be developed.

Synthesis

Given the concerns about the quality of primary prognosis research, efforts at evidence synthesis should be viewed with caution. Evaluating 17 systematic reviews in the prognosis of low back pain, Hayden and colleagues concluded that “because of the methodological shortcomings . . . there remains uncertainty about the reliability of conclusions regarding prognostic factors.”10 Such a conclusion is common for systematic reviews of prognostic studies. The Cochrane Collaboration Prognosis Methods Group, established in 2008, aims to facilitate and improve the quality of systematic reviews of prognosis research.36

Developments are required at multiple stages, starting with improvements in primary studies and working towards improved methods of synthesis. Remarkably, electronic searches of publications on PubMed cannot distinguish studies among people with disease from studies among healthy people who go on to develop disease. We therefore need a standardised nomenclature for describing the results of studies. The term prognosis is used variably, with at least three meanings: any outcome study, including those in initially healthy populations; a synonym for mortality; and risk prediction (or prognostication).

When high quality primary research studies exist, meta-analysis of individual participant data is the most reliable method of synthesis and is achievable.20 37 38 An emerging challenge is the synthesis of different types of evidence relating to a single prognosis research question. For example, studies assessing whether a new prognostic biomarker is causally related to disease progression use different, and potentially complementary, methods for dealing with confounding (observational study designs use statistical adjustment and genetic study designs use mendelian randomisation).39
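
For orientation, a minimal two-stage sketch is shown below (all numbers are hypothetical and do not correspond to any study discussed here): in the first stage each study’s individual participant data are analysed to give a log hazard ratio and standard error, and in the second stage these estimates are pooled with inverse-variance weights; a fixed-effect model is used purely for brevity.

```python
import numpy as np

# Stage 1 (assumed already done): a log hazard ratio and standard error per study
log_hr = np.array([0.42, 0.55, 0.31, 0.60])   # hypothetical per-study estimates
se     = np.array([0.20, 0.18, 0.12, 0.25])

# Stage 2: inverse-variance weighted (fixed-effect) pooling
w = 1 / se**2
pooled = np.sum(w * log_hr) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
ci = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled HR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(ci[0]):.2f} to {np.exp(ci[1]):.2f})")
```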

Impact of research

However well prognosis research comes to protect against this range of biases, the “so what?” question demands an answer. Prognosis research is not having the effect it should, either at early stages in the translational spectrum (for example, in informing the design and development of drug or other targets for patient management) or at later stages in supporting clinical decision making (for example, in facilitating individualised or stratified medicine). Since 1991 there have been 102 risk prediction models reported for traumatic brain injury in 53 articles; in only five articles were models externally validated, and none has been widely implemented in clinical practice (fig 3).9 From the perspective of a clinician and a patient, effectiveness means altered clinical decisions and consequent patient outcomes.17 The psychosocial effect of prognostic information on patients and their families also warrants consideration in such effectiveness criteria.

Fig 3 Illustration of lack of clinical impact of some prognostic research. Systematic review identified 53 papers on 102 risk prediction models for death and disability in patients with traumatic brain injury published during 1991-2005, but only five were externally validated and none of the models is used in clinical practice9

From the perspective of a policy maker the impact of prognosis research should be made explicit in a cost effectiveness framework. For example, a recent study showed the value of cost effectiveness decision models for evaluating different prognostic risk scores to prioritise the waiting list for coronary surgery.40 From the perspective of a research funder, the cost of investing in new prognosis research and the impact of the resulting reduction in scientific uncertainty can be formally modelled.41 42
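
The last point can be made concrete with an expected value of perfect information (EVPI) calculation, sketched here on wholly hypothetical net-benefit draws (not based on the cited studies): EVPI is the expected net benefit when each simulated state of the world is met with the best strategy for that state, minus the expected net benefit of the single strategy that is best on average.

```python
import numpy as np

# Simulated per-patient net benefit (in £) for two strategies under current
# uncertainty, e.g. prioritising a surgical waiting list with or without a
# prognostic risk score (values are entirely hypothetical)
rng = np.random.default_rng(2)
nb_usual = rng.normal(loc=10_000, scale=2_000, size=5_000)
nb_score = rng.normal(loc=10_500, scale=3_000, size=5_000)
nb = np.column_stack([nb_usual, nb_score])

# EVPI = mean of the per-draw best net benefit minus the best mean net benefit
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"EVPI per patient: approximately £{evpi:,.0f}")
```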

Conclusion

Prognosis research across multiple disease areas faces challenges at each stage of the research process. We acknowledge that our backgrounds (in cardiovascular epidemiology and cancer biostatistics) both inform, and limit, our views. A systematic comparison of the state of the art of prognosis research across several clinical conditions would clarify the need for action and help prioritise our proposed 10 steps. Progress in prognosis research should be empirically demonstrated. One marker of progress is the emphasis that prognosis research commands in evidence based medicine; one influential textbook currently devotes only 14 of its 809 pages to prognosis.43 This needs to change.

Summary points

  • The quality of much prognosis research is poor

  • Systematic reviews can often reach only limited conclusions because of variation in methods, poor reporting, and publication bias

  • Ten steps towards improving prognosis research are outlined

  • Study and protocol registration and guidelines for reporting are urgently required

Notes

Cite this as: BMJ 2009;339:b4184

Footnotes

  • Contributors: Discussions among the authors were facilitated by John Scadding and David Misselbrook (Royal Society of Medicine) and Trish Groves (BMJ). Each author contributed examples and critically commented on the text. HH wrote the first draft and is the guarantor.

  • Competing interests: None declared.

  • Provenance and peer review: Not commissioned; externally peer reviewed.

References