Papers

Fate of biomedical research protocols and publication bias in France: retrospective cohort study

BMJ 2005; 331 doi: https://doi.org/10.1136/bmj.38488.385995.8F (Published 30 June 2005) Cite this as: BMJ 2005;331:19
  1. Evelyne Decullier, research fellow1,
  2. Véronique Lhéritier, research assistant2,
  3. François Chapuis (francois.chapuis{at}chu-lyon.fr), senior researcher3

  1. Clinical Research Unit, DIM des Hospices Civils de Lyon, 162 avenue Lacassagne, 69424 Lyon cedex 03, France
  2. CCPPRB Lyon B, Hôpital Hôtel-Dieu, place de l'Hôpital, 69002 Lyon
  3. French National Confederation of Research Ethics Committees, Hôpital Hôtel-Dieu, 69002 Lyon

  Correspondence to: F Chapuis
  • Accepted 9 May 2005

Abstract

Objectives To describe the fate of protocols approved by the French research ethics committees, a national system created by the French 1988 Huriet-Sérusclat Act; to assess publication bias at a national level.

Design Retrospective cohort study.

Setting Representative sample of 25/48 French research ethics committees in 1994.

Protocols 649 research protocols approved by committees, with follow-up information.

Main outcome measures Protocols' initial characteristics (design, study size, investigator) abstracted from committees' archives; follow-up information (rates of initiation, completion, and publication) obtained from mailed questionnaire to principal investigators.

Results Completed questionnaires were available for 649/976 (69%) protocols. Of these, 581 (90%) studies were initiated, 501/581 (86%) were completed, and 190/501 (38%) were published. Studies with confirmatory results were more likely to be published as scientific papers than were studies with inconclusive results (adjusted odds ratio 4.59, 95% confidence interval 2.21 to 9.54). Moreover, studies with confirmatory results were published more quickly than studies with inconclusive results (hazard ratio 2.48, 1.36 to 4.55).

Conclusion At a national level, too many research studies are not completed, and among those completed too many are not published. We suggest capitalising on research ethics committees to register and follow all authorised research on human participants on a systematic and prospective basis.

Introduction

Biomedical research protocols, once approved by a research ethics committee, do not have one typical fate. Some protocols have a linear course—approval, initiation, completion, and publication—whereas others may fail at any step. Information about the fate of studies is useful for funders, society, the scientific community, and patients.1 2 Whether publication is influenced by characteristics of the study such as the direction and strength of findings is of particular interest. Publication bias—defined as the tendency on the part of investigators, editors, and others to favour publication of research with confirmatory results over research with inconclusive or invalidating results3—threatens the reliability of reviews focusing on the published literature.4

Four papers have reported on follow-up of protocols approved by research ethics committees: in Barcelona, Oxford, Sydney, and Baltimore.5-8 In these studies, 79-93% of approved protocols were initiated and 64-74% of the initiated studies proceeded to completion. Two other studies have reported on follow-up of trials funded by the US National Institutes of Health.9 10

Three of the studies based on research ethics committees also assessed publication bias and showed that confirmatory results are associated with publication.6-8 Odds ratios were highly consistent, ranging from 2.32 to 2.93, and the main reason for non-publication was that investigators considered their results not interesting. A survey of authors publishing in psychology in 1973 showed that in the case of statistically non-significant results, the probability of submission was only 6%.11

In France, the 1988 Huriet-Sérusclat Act created a national system of 48 research ethics committees (committees for protection of human beings involved in biomedical research), which contribute to a national confederation of research ethics committees.12 Every protocol involving humans in France must be approved by one of the French committees. The network of structured committees provides prospective and exhaustive recording, but this information had not previously been used for research purposes. Our objective was to describe the fate of clinical protocols after approval and to assess publication bias.

Methods

We surveyed a sample of 25/48 (54%) committees, randomly chosen to ensure a geographical cross section representative of the French administrative areas (the number of committees in each area depends on population size). All invited committees agreed to participate in the study. We assessed three main outcomes: study initiation, study completion, and publication as a scientific paper (table 1). Our main hypothesis was that studies with confirmatory results were more likely to be published than those with either inconclusive or invalidating results. All protocols newly approved between 1 January 1994 and 31 December 1994 by any of the 25 participating French committees were eligible.

Table 1 Data collected for analysis

Definitions

We refer to “protocols” up until the time of initiation, from which time we refer to “studies.” We collected data either from the committee files or from questionnaires mailed to the principal investigator. We classified study results as “confirmatory,” “invalidating,” or “inconclusive” (table 1). When the investigator did not respond to questions about publication status, we considered this as missing data.

We classified studies published in formats other than scientific papers as “grey literature”—that is, not generally accessible through libraries (internal reports, theses, abstracts, posters).13 We classified as “confidential” protocols describing research that the investigator reported was not intended to be published.

Data collection

Research assistants attended a formal training session on abstraction of study characteristics in June 2000. We assigned an identification number to each protocol to ensure anonymity of the investigator, and completed forms were sent to the coordinating centre.

Research assistants were also locally responsible for obtaining follow-up data from the principal investigator of each protocol by using a mailed questionnaire. In the case of non-response, principal investigators were contacted up to six times by mail or phone. When no answer could be obtained, the local committee contacted the sponsor in summer 2002. When no follow-up response was obtained at all, we classified the reason (refusal, investigator retired, deceased, moved away).

Ethical considerations

We conducted this study according to the French law on epidemiological and descriptive studies. We collected data anonymously, and no consent was needed as we retrieved no individual information. For research confidentiality, we assigned an identification number and the researchers' names were not mentioned. Therefore, we did not check publication status on any bibliographic database.

Statistical methods

We obtained frequency distributions for all variables (means, percentages, and 95% confidence intervals). When assessment of association was needed, we used χ2 tests.
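
For illustration, the chi-squared test of association used here could be reproduced as follows (the analyses in this study were done with SAS; this Python sketch and its counts are hypothetical and only show the form of the test):

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = funding (private, other); columns = protocol initiated (yes, no)
table = np.array([[450, 30],
                  [131, 38]])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p_value:.4f}")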

To build explanatory models for the three outcomes (initiation, completion, and publication), we entered variables significant at the 0.25 level in univariate analysis into a forward stepwise logistic regression (P value for entry = 0.25, P value for remaining = 0.15).14 We restricted the analysis of publication to the cohort of completed studies. We excluded studies from the analysis when their results were not known by the investigator and when they were declared to be not aimed at publication (confidential results or phase I studies).
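
A minimal sketch of this selection procedure is given below (the analyses were done with SAS; this Python/statsmodels version is an illustrative equivalent and assumes the candidate variables have already been coded as numeric columns of a data frame):

import pandas as pd
import statsmodels.api as sm

def forward_stepwise_logit(df, outcome, candidates, p_enter=0.25, p_stay=0.15):
    """Forward stepwise logistic regression with P-to-enter and P-to-stay thresholds."""
    selected = []
    remaining = list(candidates)
    while remaining:
        # Fit a model adding each remaining candidate in turn; keep the one with the smallest P value.
        pvals = {}
        for var in remaining:
            model = sm.Logit(df[outcome], sm.add_constant(df[selected + [var]])).fit(disp=False)
            pvals[var] = model.pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        remaining.remove(best)
        # Backward check: drop selected variables whose P value now exceeds the stay threshold.
        while len(selected) > 1:
            model = sm.Logit(df[outcome], sm.add_constant(df[selected])).fit(disp=False)
            worst = model.pvalues.drop("const").idxmax()
            if model.pvalues[worst] <= p_stay:
                break
            selected.remove(worst)
    return selected

# Hypothetical usage: candidate predictors coded as 0/1 indicators, outcome "published"
# predictors = forward_stepwise_logit(data, "published", ["confirmatory", "international", "experimental", "interim_analysis"])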

We calculated the time between the date of approval by the committee and the date of first publication and did a Kaplan-Meier survival analysis.15 We used a log-rank test to compare survival curves. We excluded studies with an unknown date of first publication. We censored unpublished studies at the date when the questionnaire was completed; we analysed studies described as “in press” as if published at the date of the completion of the questionnaire. We used a Cox univariate analysis to obtain hazard ratios.16
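
The survival analysis described above could be sketched with the lifelines package in Python (the original analyses used SAS; the column names below are hypothetical, with unpublished studies censored at the date of questionnaire completion):

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

def time_to_publication(df: pd.DataFrame):
    # Hypothetical columns: years_to_event = time from approval to publication or censoring,
    # published = 1 if published (event) / 0 if censored, confirmatory = 1 if results confirmatory
    conf = df["confirmatory"] == 1

    # Kaplan-Meier curves by direction of results
    kmf = KaplanMeierFitter()
    for label, mask in [("confirmatory", conf), ("other", ~conf)]:
        kmf.fit(df.loc[mask, "years_to_event"], event_observed=df.loc[mask, "published"], label=label)
        kmf.plot_survival_function()

    # Log-rank test comparing the two curves
    lr = logrank_test(df.loc[conf, "years_to_event"], df.loc[~conf, "years_to_event"],
                      event_observed_A=df.loc[conf, "published"],
                      event_observed_B=df.loc[~conf, "published"])
    print("log-rank P =", lr.p_value)

    # Univariate Cox model; exp(coef) for "confirmatory" is the hazard ratio
    cph = CoxPHFitter()
    cph.fit(df[["years_to_event", "published", "confirmatory"]],
            duration_col="years_to_event", event_col="published")
    cph.print_summary()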

We used SAS software for all analyses. We considered associations to be statistically significant when P values were less than 0.05.

Results

In 1994 the 25 committees evaluated a total of 1143 protocols. We did not include protocols that were approved in 1993 (n = 19, 2%) or 1995 (n = 82, 7%), that were dropped by the investigator before formal approval (n = 48, 4%), or for which committee submission was not required by the law (n = 12, 1%). Among the 982 protocols included, initial characteristics (available on request) were fully described for 976 (99.4%) protocols.

We did not receive investigators' follow-up answers to the mailed questionnaire for 305 (31%) protocols, and 22 (2%) questionnaires were not suitable for statistical analysis (empty questionnaires or empty pages). This left 649 approved protocols for inclusion.

Study of non-responses

Seventeen volunteer committees provided complementary information on reasons for investigator's non-response and gathered data for 185/305 studies (61% of non-respondents). The reasons were refusal to fill in a follow-up form (n = 74, 40%), unable to find the original file (n = 56, 30%), and investigator not located because he or she had moved (n = 42, 23%) or had retired or died and nobody could locate the protocol archives (n = 13, 7%).

Protocols with missing follow-up data (n = 305) did not differ from the included protocols (n = 649) by either type of sponsor or study design, but they more often needed modifications to gain approval (relative risk 1.25, 95% confidence interval 1.01 to 1.55), were more often multicentre (2.04, 1.66 to 2.50), and were more often international (1.45, 1.18 to 1.78).
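
For illustration, relative risks of this kind and their 95% confidence intervals can be computed from a 2x2 table as follows (the counts below are hypothetical, not the study's data):

import math

def relative_risk(a, b, c, d):
    """a, b = events and non-events in group 1; c, d = events and non-events in group 2."""
    rr = (a / (a + b)) / (c / (c + d))
    # 95% confidence interval on the log scale
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lower, upper

rr, lo, hi = relative_risk(a=200, b=105, c=280, d=369)  # hypothetical counts
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")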

Characteristics of approved protocols

The most common characteristics of the 649 approved protocols were drugs as the study topic (68%), private funding (73%), and conduct at national level only (82%) (table 2). Experimental designs were the most frequent, and 62% of these were randomised. The planned study size was less than 20 patients in 34% of studies, and the expected study duration was less than 18 months in 56%.

Table 2 Characteristics of 649 protocols, by status at follow-up. Values are numbers (percentages)

Fate of approved protocols

Figure 1 shows the fate of biomedical research protocols for the three study outcomes. Ninety per cent (581/649) of approved protocols were initiated at the time of our study, and 86% (501/581) of these were completed. Protocols not initiated tended more often to be national (91% v 82%), to be testing medical devices (9% v 5%), and to have no funding (21% v 8%) (table 2).

Fig 1 Fate of biomedical research protocols

Initiation of protocols

Table 3 shows the factors associated with study initiation. Phase I protocols were about three times more likely to begin than others. Protocols with mixed funding were also most likely to be initiated, as were multinational ones.

Table 3 Multivariate analysis of study initiation, completion, and publication*

Among the 68 (10%) protocols that were not initiated, reasons given for non-initiation were refusal of the legal sponsor (n = 21, 31%), problems with recruitment of patients (n = 15, 22%), technical aspects and feasibility (n = 9, 13%), absence of funding (n = 8, 12%), decision of the investigator (n = 8, 12%), and a similar study having been published (n = 2, 3%). No reason was given for five studies (7%).

Completion of studies

Among the 581 protocols initiated, 16 were ongoing. We found in the logistic regression that phase I studies and studies without adverse effects were more likely to be completed (table 3). Investigators gave several reasons for stopping 64 studies before their planned completion, including patient recruitment problems (n = 28, 44%), results found in the interim analysis (n = 13, 20%), incidence of adverse effects (n = 8, 12%), sponsor's decision (n = 8, 13%), and other (n = 7, 11%).

Publication of results

Results were published in a scientific paper for 190/501 (38%) of completed studies, for 7/16 (44%) of ongoing studies, and for 8/64 (12%) of stopped studies. Among stopped studies, publication rates varied from 0% for studies with recruitment difficulties to 3/8 (37%) for studies with adverse events. Among the 501 completed studies, the publication rate was also heterogeneous; it was lower for the subgroup of phase I studies—21/127 (17%) compared with 169/374 (45%) for others.

Publication bias

Among the 501 completed studies, 127 (25%) phase I studies and 54 (11%) other studies were deemed confidential and were not included in our analyses of publication bias. We also excluded those protocols in which no hypothesis was tested (n = 32) and those for which the investigator did not know the study results (n = 20) or did not provide information about the direction of results (n = 8) or whether the results were published (n = 12). Thus 248 completed studies were included.

Four variables remained in the final model (table 3): direction of results, international versus national scope of the study, study design, and presence of an interim analysis. The stepwise regression confirmed the existence of publication bias; studies with confirmatory results were significantly more likely to be published (odds ratio 4.59, 95% confidence interval 2.21 to 9.54).

Investigators' reasons for non-publication

The main reason for non-publication given by the investigator was invalidating results (table 4). Some studies had manuscripts still in the writing or submission stage. Rejection of manuscript was cited for only 5% of unpublished studies. The reasons given by the investigator for non-publication corroborate the logistic regression results (confirmatory results were the strongest predictor of publication).

Table 4 Reasons given by investigators for not publishing (n=102)

Delayed publication of invalidating results

We estimated the effect of direction of results on time to publication for the 248 completed studies. For this analysis, we excluded 61 more protocols because of missing date of first publication. Mean time to publication was significantly associated with direction of results (P < 0.001; fig 2): 5.2 years (n = 139, 95% confidence interval 4.8 to 5.6) for confirmatory results compared with 6.9 years (n = 13, 5.9 to 7.9) for invalidating results, and 6.5 years (n = 35, 5.8 to 7.2) for inconclusive results. Cox univariate analysis yielded hazard ratios of 2.48 (1.36 to 4.55) for confirmatory results versus inconclusive results and 0.64 (0.18 to 2.27) for invalidating results versus inconclusive results.

Dissemination of results

Among the 248 protocols used for the analysis of publication bias, 146 (59%) led to scientific papers. Only 26% of these resulted in more than one paper (table 5), and 92% of studies with multiple publications had confirmatory results. However, the association between multiple publication and direction of results was not significant.

Table 5 Reporting of 248 completed studies

Ninety-one per cent of the studies published as scientific papers were reported to be in international journals. Moreover, 55% of the studies reported in scientific papers were also presented orally. The 102 remaining studies were not published as scientific papers. Forty (39%) resulted in neither publication nor oral presentation, 13 (13%) resulted in an oral presentation only, 23 (23%) appeared in the grey literature only, and 26 (25%) were reported in both an oral presentation and the grey literature. In total, 49 (48%) studies appeared at least in the grey literature (table 5).

Discussion

Only 38% of completed studies were published. We found evidence for publication bias, favouring publication of confirmatory results (odds ratio 4.59, 95% confidence interval 2.21 to 9.54). The data collected also showed that 90% of approved protocols were initiated, and 86% of these were completed. Such information has been unavailable until now for interventional research conducted on humans and contributes to the literature on publication bias. Previous studies used similar methods (a retrospective cohort of protocols approved by a research ethics committee, with a follow-up questionnaire to the investigator)6-8 17 but focused on one or two local committees, whereas we collected data from a sample of half the committees across a whole country. This may explain the lower publication rate seen in our study. Moreover, we added information on major steps in a protocol's life—from approval and initiation to completion and publication. Phase I studies were more likely to be initiated and completed than were others, probably because they are shorter and smaller.

Publication bias

The estimated odds ratio for the association between results and publication in our study was similar to, although higher than, those found in the other studies (range 2.32-2.93). This may be because our study population included 22% descriptive non-experimental protocols, which may be easier to do and more likely to be published. We also excluded the stopped studies and those considered to be confidential, which are less likely to be published. The combined effect of these factors is towards the null hypothesis; the true odds ratio is therefore at least as high as the estimated one. Investigators' decisions to declare a study as confidential were not linked to invalidating or inconclusive results: among non-phase I studies for which the direction of results was known (n = 292), results were confirmatory for 36/40 (90%) of confidential studies and 190/252 (75%) of non-confidential studies. In our study, the leading reason declared for failure to publish was that the investigator did not find the results interesting (26%), and this is similar to other studies (range 27-43%).7 9 17 We also found that only 5% of studies were not published because of rejection by a journal, again similar to the findings of other studies (range 5-10%).

What is already known on this topic

Three observational studies have shown evidence of publication bias in biomedical research approved by research ethics committees, but all were done at a local level

What this study adds

The fate of biomedical research, from acceptance to publication, has been shown at a national level

Publication bias has been confirmed; confirmatory results were 4.59 times more likely to be published than inconclusive results

Another kind of bias linked to statistical significance was recently reported in a follow-up of randomised controlled trials approved by two Danish research ethics committees18: investigators were more likely to report statistically significant outcomes and failed to report others (outcome reporting bias). The reasons given were similar to those explaining non-publication: 30% were not reported owing to the lack of statistical significance.

Non-response

The major limitation of our study was the non-response rate (31%), which was similar to those of other studies (range 22-30%),7 8 17 confirming how difficult it is to obtain answers from investigators, especially in a nationwide survey. In our study, the characteristics of protocols lost to follow-up were similar to those never initiated (multicentre and international studies). Non-response may thus be associated with never initiated protocols.

Registering trials

As publication bias is a major problem for science and for any type of review of available knowledge, we strongly support prospective registration of protocols—proposed in 1986 and supported by many authors4 19 20—for example, with a unique protocol identifier. In 1999 the editors of the BMJ and the Lancet affirmed the need to create a trial registry.21 In 2004 the International Committee of Medical Journal Editors decided to require prior recording in a protocol registry.22 23

We propose to take advantage of the work done by research ethics committees worldwide, as registers of human research protocols implicitly exist at this level, and we suggest capitalising on this. Moreover, the European clinical trials directive (2001/20/EC) tends towards standardising clinical trial files and procedures across Europe.24 Ethical review processes will almost certainly be standardised in the future.

Acknowledgments

We thank Kay Dickersin and Hervé Maisonneuve for their advice on this paper and Yves Matillon and Christian Hervé for their advice on drafting the protocol. We also thank Marie-Pierre Rochette and Françoise Leclet, administrative staff of research ethics committee of Lyon; Patricia Darnand and William Banga, members of the logistic staff; and the chairpersons and members of the participating French research ethics committees: Alsace, Aulnay, Auvergne, Bordeaux, Boulogne, Brest, Dijon, Loire, Lyon A, Lyon B, Lyon C, Marseille1, Marseille2, Montpellier, Nice, Normandie, Paris-Bicetre, Paris-Creteil, Paris-Hotel dieu, Paris-Necker, Paris-Versailles, Poitou, Toulouse1, Toulouse2, and Tours.

Footnotes

  • Contributors ED coordinated the study, managed the data, did the statistical analysis, and drafted the manuscript. VL participated in the design of the study, coordinated the study, and managed the data. FC designed, submitted, and coordinated the study, interpreted data, and helped to draft the manuscript.

  • Funding French Ministry of Health (Programme Hospitalier de Recherche Clinique 1998-065) and Hospices Civils de Lyon. French Ministry of Research and Higher Education and Claude Bernard University. None of the funding sources was involved at any stage.

  • Competing interests None declared.
