Objective
To examine the registration of noninferiority trials, with a focus on the reporting of study design and noninferiority margins.
Study Design and Setting
Cross-sectional study of registry records of noninferiority trials published from 2005 to 2009 and of records of noninferiority trials identified directly in the International Standard Randomized Controlled Trial Number (ISRCTN) or ClinicalTrials.gov trial registries. The main outcome was the proportion of records that reported the noninferiority design and margin.
Results
We analyzed 87 registry records of published noninferiority trials and 149 registry records describing noninferiority trials. Thirty-five (40%) of the 87 records of published trials described the trial as a noninferiority trial; only two (2%) reported the noninferiority margin. Reporting of the noninferiority design was more frequent in the ISRCTN registry (13 of 18 records, 72%) than in ClinicalTrials.gov (22 of 69 records, 32%; P = 0.002). Among the 149 records identified in the registries, 13 (9%) reported the noninferiority margin. Only one of the industry-sponsored trials, compared with 11 of the publicly funded trials, reported the margin (P = 0.001).
Conclusion
Most registry records of noninferiority trials do not mention the noninferiority design and do not include the noninferiority margin. The registration of noninferiority trials is unsatisfactory and must be improved.
Introduction
What is new?
• In registry records of published noninferiority trials, the noninferiority design was not mentioned in 60% of studies.
• The noninferiority margin was not reported in most registry records of noninferiority trials.
• This is in line with the finding that records from publicly accessible registries are often incomplete.
• Trial registries should search for ways to improve the reporting of noninferiority trials.
The International Committee of Medical Journal Editors (ICMJE) and others have been promoting the registration of clinical trials to prevent the selective publication of clinical trials depending on their results (i.e., publication bias) and to increase the transparency of the conduct and reporting of clinical trials [1]. Since 2005, journals that are members of the ICMJE have required registration in a public trials registry as a condition of consideration for publication [2], a policy that was followed by a substantial increase in the number of clinical trial registrations [3]. At present, authors are required to complete a web-based registration form covering 20 key items, including a description of the study design [1].
Noninferiority trials are increasingly used in the licensing of new drugs and published in the medical literature [4]. The choice of a noninferiority margin, which determines the boundary for deciding whether the result of a trial demonstrates noninferiority of an intervention, is crucial for noninferiority trials [5], [6], [7]. There are no fixed rules for choosing these margins [8], [9]. However, it is essential to know which margins were specified a priori, in the planning phase of a trial, and that they were not chosen or modified post hoc at the analysis stage [10]. We examined the reporting of noninferiority studies in publicly accessible trial registries.
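To illustrate the decision rule that a prespecified margin encodes, the following Python sketch (with purely hypothetical numbers and a simple Wald confidence interval; not the method of any trial discussed here) shows how the lower confidence limit of the treatment difference is compared with the margin.

    from math import sqrt

    def risk_difference_ci(events_new, n_new, events_ctrl, n_ctrl, z=1.96):
        # Two-sided ~95% Wald confidence interval for the risk difference (new minus control).
        p_new, p_ctrl = events_new / n_new, events_ctrl / n_ctrl
        diff = p_new - p_ctrl
        se = sqrt(p_new * (1 - p_new) / n_new + p_ctrl * (1 - p_ctrl) / n_ctrl)
        return diff - z * se, diff + z * se

    def noninferiority_conclusion(ci_lower, margin):
        # Higher outcome = better; noninferiority is shown only if the lower
        # confidence limit lies above -margin (the new treatment is at worst
        # 'margin' absolute percentage points worse than the control).
        return "noninferior" if ci_lower > -margin else "noninferiority not shown"

    # Hypothetical example: cure rates 85% (new) vs. 87% (control), margin 0.10
    lower, upper = risk_difference_ci(170, 200, 174, 200)
    print(round(lower, 3), round(upper, 3), noninferiority_conclusion(lower, margin=0.10))

If the margin were instead chosen after the results are known, it could always be widened just enough to "demonstrate" noninferiority, which is why post hoc changes undermine the design.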
Section snippets
Methods
We used a dual, complementary search strategy and identified registry records starting both from publications and from the registries themselves:
1. We examined the registry records of published noninferiority trials.
2. We also directly identified registry records of noninferiority trials in two large publicly accessible databases.
Results
The search strategy identified 133 noninferiority trials published from 2005 onward (Fig. 1). For 88 published noninferiority trials, an accompanying protocol in a trial registry was found. One study was registered in EudraCT, the clinical trial database of the European Medicines Agency, which is not publicly accessible. We identified 153 records from searching the two trial registries, of which one record did not relate to a noninferiority trial. Three records were excluded because they were
Discussion
Trial registration serves the goal of creating a record of all clinical trials, independently of publication, and thus to prevent publication bias. Moreover, the recording of essential methodological details early on allows the scientific community to detect and judge later changes in key aspects of the study design [12], [13]. The study design is one of 20 items covered by the minimal registration data set [1]; however, we found that the noninferiority design and margin were often not included
A survey of 21 clinical trial registries found that only 11 provided guidelines for registration, and that registries complied with a median of 14 of the 20 items in the minimum data set suggested by the World Health Organization (WHO) [9]. Other studies found that the noninferiority design [5] and subgroup analyses [10] are frequently not reported in registry records. Our study adds that trial status information is frequently misleading in the case of discontinued trials.
One quarter of randomized clinical trials (RCTs) are prematurely discontinued and frequently remain unpublished. Trial registries can document whether a trial is ongoing, suspended, discontinued, or completed and therefore represent an important source for trial status information. The accuracy of this information is unclear.
To examine the accuracy of the completion status and reasons for discontinuation documented in trial registries, as compared with corresponding publications of discontinued RCTs, and to investigate potential predictors of accurate trial status information in registries.
We conducted a cross-sectional study comparing information provided in publications (reference standard) to corresponding registry entries. First, we reviewed publications of RCTs providing information on both discontinuation and registration. We identified eligible publications through systematic searches of MEDLINE and EMBASE (2010–2014) and an international cohort of 1,017 RCTs initiated between 2000 and 2003. Second, pairs of investigators independently and in duplicate extracted data from publications and corresponding registry records. Third, for each discontinued RCT, we compared publication information to registry information. We used multivariable regression to examine whether accurate labeling of trials as discontinued (vs. other status) in the registry was associated with recent initiation of RCT, industry sponsorship, multicenter design, or larger sample size.
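As a rough illustration of the multivariable analysis described above, the following Python sketch fits a logistic regression of accurate "discontinued" labeling on candidate predictors. The data are simulated and the variable names are assumptions for illustration; this is not the authors' analysis code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 173  # same order of magnitude as the cohort described above
    df = pd.DataFrame({
        "start_year": rng.integers(2000, 2014, n),
        "industry_sponsor": rng.integers(0, 2, n),
        "multicenter": rng.integers(0, 2, n),
        "log_sample_size": rng.normal(5.0, 1.0, n),
    })
    # Simulated outcome: the registry labels the trial as discontinued, made more
    # likely here for recently initiated and industry-sponsored trials
    linpred = -0.5 + 0.15 * (df["start_year"] - 2007) + 0.7 * df["industry_sponsor"]
    df["accurate_label"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

    model = smf.logit(
        "accurate_label ~ start_year + industry_sponsor + multicenter + log_sample_size",
        data=df,
    ).fit(disp=False)
    print(np.exp(model.params))      # adjusted odds ratios
    print(np.exp(model.conf_int()))  # 95% confidence intervals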
We identified 173 publications of RCTs that were discontinued due to slow recruitment (55%), harm (16%), futility (11%), benefit (5%), other reasons (3%), or multiple reasons (9%). Trials were registered with clinicaltrials.gov (77%), isrctn.com (14%), or other registries (8%). Of the 173 corresponding registry records, 77 (45%) labeled the trial as discontinued and 57 (33%) provided a reason for discontinuation (of which 53 [93%] gave the same reason as the publication). Labeling of discontinued trials as discontinued (vs. another label) in the corresponding registry records improved over time (adjusted odds ratio 1.16 per year, confidence interval 1.04–1.30) and was possibly associated with industry sponsorship (2.01, 0.99–4.07), but was unlikely to be associated with multicenter status (0.81, 0.32–2.04) or sample size (1.07, 0.89–1.29).
Less than half of published discontinued RCTs were accurately labeled as discontinued in the corresponding registry records, and only one-third of registry records provided a reason for discontinuation. Current trial status information in registries should be viewed with caution.
Fig. 2 documents the disposition of the review process. We located 38 articles that analyzed the effectiveness of interventions to prevent or reduce publication bias [20–57]. The included research provides evidence on prospective trial registration, the peer review process, disclosure of conflicts of interest (CoIs), and open-access publishing.
To determine the effectiveness of interventions designed to prevent or reduce publication and related biases.
We searched multiple databases and performed manual searches using terms related to publication bias and known interventions against publication bias. We dually reviewed citations and assessed risk of bias. We synthesized results by intervention and outcomes measured and graded the quality of the evidence (QoE).
We located 38 eligible studies. The use of prospective trial registries (PTR) has increased since 2005 (seven studies, moderate QoE); however, positive outcome-reporting bias is prevalent (14 studies, low QoE), and information in nonmandatory fields is vague (10 studies, low QoE). Disclosure of financial conflict of interest (CoI) is inadequate (five studies, low QoE). Blinding peer reviewers may reduce geographical bias (two studies, very low QoE), and open-access publishing does not discriminate against authors from low-income countries (two studies, very low QoE).
The use of PTR and CoI disclosures is increasing; however, the adequacy of their use requires improvement. The effect of open-access publication and blinding of peer reviewers on publication bias is unclear, as is the effect of other interventions such as electronic publication and authors' rights to publish their results.
The Consolidated Standards of Reporting Trials (CONSORT) statement has been extended to improve the reporting of such trials [7]. The integrity of a noninferiority trial cannot be affirmed if authors do not accurately report the prespecified noninferiority margins and the relevant confidence intervals [11]. Authors must document the margins selected during the planning phase and ensure that these margins are not chosen or modified post hoc during analysis [4].
To compare noninferiority margins defined in study protocols and trial registry records with margins reported in subsequent publications.
Comparison of protocols of noninferiority trials submitted from 2001 to 2005 to ethics committees in Switzerland and The Netherlands with corresponding publications and registry records. We searched MEDLINE via PubMed, the Cochrane Controlled Trials Register (Cochrane Library issue 01/2012), and Google Scholar in September 2013 to identify published reports, and the International Clinical Trials Registry Platform of the World Health Organization in March 2013 to identify registry records. Two readers recorded the noninferiority margin and other data using a standardized data-abstraction form.
The margin was identical in study protocol and publication in 43 (80%) of 54 pairs of study protocols and articles. In the remaining pairs, reporting was inconsistent (five pairs, 9%), or the noninferiority margin was either not reported in the publication (five pairs, 9%) or not defined in the study protocol (one pair). The confidence interval or the exact P-value required to judge whether the result was compatible with noninferior, inferior, or superior efficacy was reported in 43 (80%) publications. Complete and consistent reporting of both noninferiority margin and confidence interval (or exact P-value) was present in 39 (72%) protocol-publication pairs. Twenty-nine trials (54%) were registered in trial registries, but only one registry record included the noninferiority margin.
The reporting of noninferiority margins was incomplete and inconsistent with study protocols in a substantial proportion of published trials, and margins were rarely reported in trial registries.
The complete and detailed registration of primary and secondary outcomes in trial registries would greatly support the appropriate interpretation of results reported in journal articles and help prevent outcome-reporting bias. Records from publicly accessible registries are, however, often incomplete [10,22]. A study of randomized clinical trials from cardiology, rheumatology, and gastroenterology indexed in MEDLINE in 2008 found that less than half of trials (147 of 323, 46%) were registered before the end of the trial, with the primary outcome clearly specified [10].
To identify factors associated with discrepant outcome reporting in randomized drug trials.
Cohort study of protocols submitted to a Swiss ethics committee from 1988 to 1998: 227 protocols and amendments were compared with 333 matching articles published during 1990–2008. Discrepant reporting was defined as the addition, omission, or reclassification of outcomes.
Overall, 870 of 2,966 unique outcomes (29.3%) were reported discrepantly. Among protocol-defined primary outcomes, 6.9% were not reported (19 of 274), whereas 10.4% of reported primary outcomes (30 of 288) were not defined in the protocol. Corresponding percentages for secondary outcomes were 19.0% (284 of 1,495) and 14.1% (334 of 2,375). Discrepant reporting was more likely if P values were <0.05 than if P ≥ 0.05 [adjusted odds ratio (aOR): 1.38; 95% confidence interval (CI): 1.07, 1.78], more likely for efficacy than for harm outcomes (aOR: 2.99; 95% CI: 2.08, 4.30), and more likely for composite than for single outcomes (aOR: 1.48; 95% CI: 1.00, 2.20). Cardiology (aOR: 2.34; 95% CI: 1.44, 3.79) and infectious diseases (aOR: 1.77; 95% CI: 1.01, 3.13) had more discrepancies than all specialties combined.
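As a worked arithmetic example of the reporting format used above, the Python snippet below converts a hypothetical logistic-regression coefficient and standard error into an adjusted odds ratio with a 95% confidence interval; the numbers are chosen only so that the output matches the first aOR quoted above and are not taken from the study itself.

    from math import exp

    beta, se = 0.322, 0.130  # hypothetical coefficient and standard error
    aor = exp(beta)
    ci_low, ci_high = exp(beta - 1.96 * se), exp(beta + 1.96 * se)
    print(f"aOR {aor:.2f} (95% CI {ci_low:.2f}, {ci_high:.2f})")  # aOR 1.38 (95% CI 1.07, 1.78)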
Discrepant reporting was associated with statistical significance of results, type of outcome, and specialty area. Trial protocols should be made freely available, and the publications should describe and justify any changes made to protocol-defined outcomes.
There is a prevalent concern that noninferiority (NI) trials pose a risk of a gradual degradation of treatment effects. We therefore aimed to determine the fraction of positive true effects (the superiority rate) and the average true effect of current NI trials, based on data from registered NI trials.
We studied all NI trials carried out between 2000 and 2007 that analyzed noninferiority of efficacy as the primary objective and were registered in one of the two major clinical trial registries. Having retrieved the results of these trials, we performed random effects modeling of the effect estimates to determine the distribution of true effects.
Effect estimates were available for 79 of the 99 eligible trials identified. For trials with a binary outcome, we estimated a superiority rate of 49% (95% confidence interval = 27–70%) and a mean true log odds ratio of −0.005 (−0.112, 0.102). For trials with a continuous outcome, the superiority rate was 58% (41–74%) and the mean true effect (Cohen's d) was 0.06 (−0.064, 0.192).
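The superiority rate can be read off the fitted random-effects distribution of true effects. The short Python sketch below illustrates this under an assumed normal model with illustrative values loosely matching the continuous-outcome estimates above; it is not the authors' exact model or code.

    from statistics import NormalDist

    mu = 0.06   # estimated mean true effect (Cohen's d), new treatment minus control
    tau = 0.30  # assumed between-trial standard deviation of true effects
    superiority_rate = 1 - NormalDist(mu, tau).cdf(0.0)  # P(true effect > 0)
    print(superiority_rate)  # ~0.58 with these illustrative values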
The unanticipated finding of a positive average true effect, with superiority of the new treatment in most NI trials, suggests that the current practice of choosing NI designs in clinical trials makes degradation unlikely on average. However, the distribution of true treatment effects demonstrates that, in some NI trials, the new treatment is distinctly inferior.