General considerations about adaptive trials

Reporting

The need for specific reporting demands for adaptive designs

Every trial should be reported transparently and adequately so that consumers of research findings can make informed judgements about the validity and trustworthiness of the results. Adaptive designs allow prespecified changes to be made to elements of an ongoing trial based on interim results, and this flexibility creates the potential for biases to be introduced at different stages of the trial. It therefore brings additional demands for transparency and adequate reporting compared with traditional fixed trial designs.

As with fixed trial designs, the conduct of an adaptive trial should be consistent throughout the trial (with the important exception of those aspects that the adaptive design allows to be modified following interim analyses). This includes the clinical management of patients and the assessment of outcomes before and after trial adaptations. Specifically, additional biases in adaptive trials can be introduced in several ways 1, 2, 3, 4. First, bias can arise when inappropriate statistical methods that do not account for trial adaptations are used in the design, monitoring, and analysis (see Analysis). Second, there is potential for researchers to act differently, consciously or otherwise, if they are aware of emerging data. For instance, researchers may be more willing to approach certain types of patients, or may make decisions that favour or disadvantage particular patients or treatment arms, if emerging data suggest a treatment effect in a particular direction 5, 6. Access to interim information such as data and results must be restricted to reduce the impact of this, and those making adaptation recommendations in light of interim data must be independent of the conduct of the trial.
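To illustrate the first point, the short Python sketch below (a hypothetical example, not taken from PANDA or any specific trial) simulates a two-arm trial under the null hypothesis that is analysed with a naive, unadjusted significance test at two equally spaced looks; the stage size, number of looks, and nominal 5% level are arbitrary assumptions. Unadjusted repeated testing inflates the overall type I error above the nominal level, which is why statistical methods that properly account for trial adaptations are needed.

```python
import numpy as np

# Hypothetical illustration: a two-arm trial with two equally spaced looks,
# simulated under the null hypothesis (no treatment effect). Testing at an
# unadjusted two-sided 5% level at every look inflates the overall type I error.
rng = np.random.default_rng(2024)

n_per_arm_per_stage = 100   # assumed number of patients per arm per stage
n_stages = 2                # assumed number of looks (one interim, one final)
z_crit = 1.96               # unadjusted two-sided 5% critical value
n_sims = 50_000             # number of simulated trials

rejections = 0
for _ in range(n_sims):
    treat = rng.normal(0.0, 1.0, size=(n_stages, n_per_arm_per_stage))
    control = rng.normal(0.0, 1.0, size=(n_stages, n_per_arm_per_stage))
    for stage in range(1, n_stages + 1):
        # Cumulative data up to and including this look
        t, c = treat[:stage].ravel(), control[:stage].ravel()
        se = np.sqrt(t.var(ddof=1) / t.size + c.var(ddof=1) / c.size)
        z = (t.mean() - c.mean()) / se
        if abs(z) > z_crit:     # naive test at every look, no adjustment
            rejections += 1
            break

print(f"Empirical type I error with unadjusted interim looks: {rejections / n_sims:.3f}")
# Roughly 0.08 rather than the nominal 0.05 in this two-look scenario.
```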

Finally, the potential bias that is most challenging to control and most difficult to quantify arises when researchers and patients can infer interim results from the adaptations made to the trial, such as stopping recruitment to a treatment arm or a patient subpopulation 6. This can be particularly detrimental in open-label trials, where researchers and patients are not masked to treatment allocation and differential management of patients driven by knowledge of interim information can therefore be substantial.

References

1. Chow et al. Adaptive design methods in clinical trials - a review. Orphanet J Rare Dis. 2008;3:11.
2. FDA. Adaptive designs for clinical trials of drugs and biologics guidance for industry. 2019.
3. Dimairo et al. The Adaptive designs CONSORT Extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. BMJ. 2020;369:m115.
4. Pallmann et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16:29.
5. Ventz et al. The effects of releasing early results from ongoing clinical trials. Nat Commun. 2021;12:801.
6. Huskins et al. Adaptive designs for clinical trials: Application to healthcare epidemiology research. Clin Infect Dis. 2018;66:1140–6.

Issues with reporting of adaptive trials

Suboptimal and inconsistent reporting of adaptive trials has been widely documented 1, 2, 3, 4. Common issues include:

  • a lack of detail on trial adaptations, making it difficult to distinguish between planned and unplanned changes and to assess the appropriateness of the statistical methods used;
  • a lack of detail on planned decision-making rules and on when adaptation decisions were planned to be made;
  • non-disclosure of interim results and a lack of clarity about which adaptations were triggered at interim analyses;
  • a lack of detail on measures put in place to maintain the confidentiality of interim information and minimise operational bias during the trial; and
  • a lack of detail on methods used to estimate treatment effects and related quantities such as confidence intervals and p-values, where appropriate, making it difficult to assess the validity of trial results (see Analysis; a brief simulation sketch after this list illustrates why this matters).
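As a brief, hypothetical illustration of the final point above (assumed design parameters and a known-variance z-test for simplicity, not drawn from PANDA or any real trial), the Python sketch below simulates a two-stage trial that stops early when an interim z-statistic crosses an assumed efficacy boundary. Averaged over simulated trials, the naive, unadjusted estimate of the treatment effect overstates the true effect, which is why readers need to know which estimation methods were used (see Analysis).

```python
import numpy as np

# Hypothetical two-stage design (assumed parameters, known-variance z-test for
# simplicity): if the trial stops early when the interim z-statistic exceeds a
# fixed efficacy boundary, the naive (unadjusted) estimate of the treatment
# effect overstates the true effect on average across repeated trials.
rng = np.random.default_rng(11)

true_effect = 0.2            # assumed standardised treatment effect
n_per_arm_per_stage = 100    # assumed number of patients per arm per stage
interim_boundary = 2.5       # assumed efficacy boundary at the interim look
n_sims = 20_000              # number of simulated trials

naive_estimates = []
for _ in range(n_sims):
    treat = rng.normal(true_effect, 1.0, size=(2, n_per_arm_per_stage))
    control = rng.normal(0.0, 1.0, size=(2, n_per_arm_per_stage))

    # Interim analysis on stage 1 data only
    interim_diff = treat[0].mean() - control[0].mean()
    interim_se = np.sqrt(2.0 / n_per_arm_per_stage)
    if interim_diff / interim_se > interim_boundary:
        naive_estimates.append(interim_diff)                   # stopped early
    else:
        naive_estimates.append(treat.mean() - control.mean())  # used full data

print(f"True effect: {true_effect:.2f}")
print(f"Mean naive estimate across simulated trials: {np.mean(naive_estimates):.3f}")
# Trials that stop early tend to do so when the interim estimate happens to be
# large, so the naive estimate is biased upwards on average.
```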
A lack of transparency and inadequate reporting make it difficult to interpret results from adaptive trials and to reproduce their methods, results, and inferences. This can create controversy, severely undermine confidence in research findings, and limit their influence on clinical practice; as a result, further trials addressing the same research questions may be conducted unnecessarily. Inadequate reporting also hampers evidence synthesis, which relies on results being easily interpretable.

Finally, transparently and completely reported adaptive trials are a useful resource for other researchers to learn from, and they can act as case studies that help bridge the knowledge gap around innovative and efficient trial designs.

References

1. Dimairo et al. Development process of a consensus-driven CONSORT extension for randomised trials using an adaptive design. BMC Med. 2018;16:210.
2. Stevely et al. An investigation of the shortcomings of the CONSORT 2010 statement for the reporting of group sequential randomised controlled trials: A methodological systematic review. PLoS One. 2015;10:e0141104.
3. Edwards et al. A systematic review of the “promising zone” design. Trials. 2020;21:1000.
4. Lin et al. CBER’s experience with adaptive design clinical trials. Ther Innov Regul Sci. 2015;50:195–203.

What has been done to address the reporting of adaptive trials?

An extension to the CONSORT 2010 guideline for reporting parallel-group randomised clinical trials was developed to address the issues highlighted above and to improve the reporting of randomised trials that use adaptive designs 1. This Adaptive designs CONSORT Extension (ACE) guidance was developed with input from multidisciplinary research stakeholders in the private and public sectors, including regulators and funders, under the oversight of the CONSORT Executive Group.

Two Delphi surveys were conducted to gather views on the importance of draft essential reporting items, which were compiled from a scoping review and expert opinion. Participants from 21 countries took part. Response rates were 66% (94/143) in round 1, 73% (114/156) in round 2, and 55% (79/143) across both rounds. This was followed by a consensus meeting to review the Delphi survey results, make recommendations on the minimum essential items to retain in the final checklist, and discuss what should be addressed in the explanation and elaboration text of the ACE guideline. The consensus meeting was attended by 27 delegates from the UK, the European Union, the USA, and Asia.

PANDA users may wish to read more on the whole development process 1, which resulted in the ACE guidance 2, 3.

References

1. Dimairo et al. Development process of a consensus-driven CONSORT extension for randomised trials using an adaptive design. BMC Med. 2018;16:210.
2. Dimairo et al. The Adaptive designs CONSORT Extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. BMJ. 2020;369:m115.
3. Dimairo et al. The adaptive designs CONSORT extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. Trials. 2020;21:528.

The Adaptive designs CONSORT Extension (ACE) guidance

Researchers are strongly encouraged to use the ACE guidance when reporting their trial results 1, 2. The ACE guidance is freely accessible via several websites.

The ACE guideline comprises two checklists of key items: one covering the abstract and one covering the main content of trial reports.

The abstract checklist covers the:

  • inclusion of the term “adaptive” in the abstract or at least as a keyword (where possible, it can also be included in the title);
  • description of the outcome(s) that inform trial adaptations, when relevant;
  • description of trial adaptation decisions made and their basis.

The main checklist covers aspects that include:

  • description of the type of adaptive design and trial adaptations, as well as statistical information used to inform adaptations; 
  • description of unplanned changes to the design or methods (outside the scope of adaptive features) with explanations; 
  • description of outcome(s) used to inform trial adaptations including how and when they were assessed; 
  • description of how the sample size and operating characteristics or statistical properties were determined; referencing relevant literature on methods and the statistical software package, program, or code used is encouraged (a simulation sketch illustrating this appears after this list);
  • description of prespecified decision-making criteria at interim analyses to guide trial adaptations. This covers decision rules, which decision rules were non-binding, when decisions were expected to be made, and when decisions were actually made; 
  • description of measures to safeguard the confidentiality of interim information such as data and interim results, and minimise operational bias during the trial; 
  • description of statistical methods used for interim and final analyses; 
  • description of statistical methods used to estimate treatment effects and for making inferences; 
  • specification of trial adaptations that were made in light of the prespecified decision rules and interim results. Any deviation from the planned decision-making criteria should be explained; 
  • disclosure of summary data, such as patient characteristics and demographics, to enable the similarity of the trial population across interim stages to be assessed;
  • presentation of interim results that informed trial adaptations; 
  • discussion of any limitations as a consequence of trial adaptations; 
  • clarity on whom the results or conclusions pertain to. For example, in some adaptive designs such as adaptive population enrichment (APE), the final results may only apply to the selected subpopulation rather than the full population the trial started with; 
  • access to other essential trial documents such as statistical analysis plans and statistical simulation report. 
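As an illustration of how the checklist items on sample size, operating characteristics, and prespecified decision rules might be supported by shareable code, the Python sketch below estimates the type I error, power, and expected per-arm sample size of a simple two-stage design by simulation. This is a hypothetical design, not taken from PANDA: the stage size and assumed effect are arbitrary, and the boundaries are the commonly quoted O'Brien-Fleming-type values for two equally spaced looks at a two-sided 5% level. In practice, researchers would typically use validated software for design calculations, but reporting code of this kind, as the checklist encourages, allows the stated operating characteristics to be reproduced.

```python
import numpy as np

# Hypothetical two-stage design (assumed parameters): estimate its operating
# characteristics (type I error, power, expected per-arm sample size) by
# simulating trials with prespecified efficacy stopping boundaries.
rng = np.random.default_rng(7)

n_per_arm_per_stage = 120      # assumed number of patients per arm per stage
assumed_effect = 0.35          # assumed standardised treatment effect
boundaries = (2.797, 1.977)    # O'Brien-Fleming-type bounds: 2 looks, two-sided 5%
n_sims = 20_000                # number of simulated trials per scenario

def simulate(true_effect):
    """Return (rejection rate, mean per-arm sample size) under a given true effect."""
    rejected, total_n = 0, 0
    for _ in range(n_sims):
        treat = rng.normal(true_effect, 1.0, size=(2, n_per_arm_per_stage))
        control = rng.normal(0.0, 1.0, size=(2, n_per_arm_per_stage))
        for stage, z_crit in enumerate(boundaries, start=1):
            t, c = treat[:stage].ravel(), control[:stage].ravel()
            se = np.sqrt(t.var(ddof=1) / t.size + c.var(ddof=1) / c.size)
            if abs((t.mean() - c.mean()) / se) > z_crit:
                rejected += 1
                break           # reject the null; no further looks
        total_n += t.size       # per-arm sample size actually used in this trial
    return rejected / n_sims, total_n / n_sims

type1, asn_null = simulate(0.0)
power, asn_alt = simulate(assumed_effect)
print(f"Type I error ~ {type1:.3f}, power ~ {power:.3f}")
print(f"Expected per-arm sample size: null ~ {asn_null:.0f}, alternative ~ {asn_alt:.0f}")
```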
Some authors have also discussed the reporting aspects of adaptive trials that are covered in the ACE guidance 3, 4, 5, 6, 7, 8. PANDA users should note that reporting issues specific to particular types of adaptive designs are discussed and reiterated in the reporting section for that design. For example, specific points to consider when reporting group sequential trials are reiterated in the group sequential design section.

Finally, there are additional specific requirements for adaptive design clinical trials that use Bayesian methods (see explanation and elaboration text of item 12b). Additional considerations when constructing flowcharts are discussed under item 13a. 

References

 1. Dimairo et al. The Adaptive designs CONSORT Extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. BMJ. 2020;369:m115.
2. Dimairo et al. The adaptive designs CONSORT extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. Trials. 2020;21:528.
3. Pallmann et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16:29.
4. Huskins et al. Adaptive designs for clinical trials: Application to healthcare epidemiology research. Clin Infect Dis. 2018;66:1140–6.
5. Stevely et al. An investigation of the shortcomings of the CONSORT 2010 statement for the reporting of group sequential randomised controlled trials: A methodological systematic review. PLoS One. 2015;10:e0141104.
6. Detry et al. Standards for the design, conduct, and evaluation of adaptive randomized clinical trials. Patient-Centered Outcomes Res Inst. 2012.
7. Angus et al. Adaptive platform trials: definition, design, conduct and reporting considerations. Nat Rev Drug Discov. 2019;18:797–807.
8. Park et al. An overview of platform trials with a checklist for clinical readers. J Clin Epidemiol. 2020;125:1–8. 

Tips for researchers when applying the ACE guidance

  1. The ACE guideline is generic, so it applies across adaptive designs and trial adaptations regardless of the statistical framework used for the design, monitoring, and analysis (e.g. frequentist or Bayesian methods);
  2. Researchers are encouraged to use the guideline from the planning stage throughout the trial and not to wait until the trial is completed;
  3. The guideline consists of “minimum essential items”, so researchers should feel free to include other reporting elements if they think their inclusion in a trial report enhances the interpretation of results;
  4. Researchers should use the guidance alongside the latest version of the CONSORT 2010 statement and other relevant extensions, depending on the other features of the adaptive trial. For example, if an adaptive cluster randomised trial was conducted (e.g. 1), then the extension for cluster randomised trials should also be used;
  5. Researchers are encouraged to read the explanation and elaboration text and examples in the guideline to help them when completing the checklist;
  6. Remember to complete both the abstract and main checklists downloadable here;
  7. Transparency is about access to information, so it is sufficient to cross-reference details provided in appendices such as the protocol and statistical analysis plan.

References

1. Choko et al. HIV self-testing alone or with additional interventions, including financial incentives, and linkage to care or prevention among male partners of antenatal care clinic attendees in Malawi: An adaptive multi-arm, multi-stage cluster randomised trial. PLoS Med. 2019;16(1):e1002719.

Case studies that have used the ACE guidance

This section provides examples of adaptive trials that have adhered to the ACE guideline and addressed most of the checklist items satisfactorily, from which other researchers can learn:

  • A phase 2 group sequential trial that stopped early for efficacy at the second interim analysis 1.

References

1. Barbui et al. Ropeginterferon alfa-2b versus phlebotomy in low-risk patients with polycythaemia vera (Low-PV study): a multicentre, randomised phase 2 trial. Lancet Haematol. 2021;8(3):e175–84.