Stepped-wedge trial

From HandWiki

In medicine, a stepped-wedge trial (or SWT) is a type of randomised controlled trial (RCT). An RCT is a scientific experiment that is designed to reduce bias when testing a new medical treatment, a social intervention, or another testable hypothesis.

In a traditional RCT, the researcher randomly divides the experiment participants into two groups at the same time: a treatment group, which receives the intervention under study, and a control group, which does not.

In a SWT, a logistic constraint typically prevents the simultaneous treatment of some participants, and instead, all or most participants receive the treatment in waves or "steps".

For instance, suppose a researcher wants to measure whether teaching college students how to make several meals increases their propensity to cook at home instead of eating out.

  • In a traditional RCT, a sample of students would be selected and some would be trained on how to cook these meals, whereas the others would not. Both groups would be monitored to see how frequently they ate out. In the end, the number of times the treatment group ate out would be compared to the number of times the control group ate out, most likely with a t-test or some variant.
  • If, however, the researcher could only train a limited number of students each week, then the researcher could employ an SWT, randomly assigning each student to the week in which they would be trained.

The term "stepped wedge" was coined by The Gambia Hepatitis Intervention Study due to the stepped-wedge shape that is apparent from a schematic illustration of the design.[1][2] The crossover is in one direction, typically from control to intervention, with the intervention not removed once implemented. The stepped-wedge design can be used for individually randomized trials,[3][4] i.e., trials where each individual is treated sequentially, but is more commonly used as a cluster randomized trial (CRT).[5]

Experiment design

The stepped-wedge design involves the collection of observations during a baseline period in which no clusters are exposed to the intervention. Following this, at regular intervals, or steps, a cluster (or group of clusters) is randomized to receive the intervention[5][6] and all participants are once again measured.[7] This process continues until all clusters have received the intervention. Finally, one more measurement is made after all clusters have received the intervention.[8]
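The resulting schedule can be sketched programmatically. The following minimal Python example (cluster and step counts are illustrative) builds the cluster-by-period treatment indicator matrix; printed as a matrix, the ones trace the characteristic wedge:

```python
import numpy as np

def stepped_wedge_schedule(n_clusters, n_steps):
    """Treatment indicator matrix Z for a stepped-wedge design.

    Rows are clusters (in randomized order), columns are time points:
    one baseline period in which no cluster is treated, then one group
    of clusters crosses over per step, and a final period in which all
    clusters are treated.  Assumes n_clusters is divisible by n_steps.
    """
    n_periods = n_steps + 1                     # baseline + one period per step
    clusters_per_step = n_clusters // n_steps
    Z = np.zeros((n_clusters, n_periods), dtype=int)
    for step in range(n_steps):
        rows = slice(step * clusters_per_step, (step + 1) * clusters_per_step)
        Z[rows, step + 1:] = 1                  # once treated, always treated
    return Z

print(stepped_wedge_schedule(4, 4))
```

With four clusters and four steps, the first column is all zeros (baseline), the last column is all ones, and each row switches to 1 exactly once and never switches back.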

Appropriateness

Hargreaves and colleagues offer a series of five questions that researchers should answer to decide whether SWT is indeed the optimal design, and how to proceed in every step of the study.[9] Specifically, researchers should be able to identify:

The reasons SWT is the preferred design
If measuring a treatment effect is the primary goal of research, SWT may not be the optimal design. SWTs are appropriate when the research focus is on the effectiveness of the treatment rather than on its mere existence. Overall, if the study is pragmatic (i.e. seeks primarily to implement a certain policy), logistical and other practical concerns are considered to be the best reasons to turn to a stepped wedge design. Also, if the treatment is expected to be beneficial, and it would not be ethical to deny it to some participants, then SWT allows all participants to have the treatment while still allowing a comparison with a control group. By the end of the study, all participants will have the opportunity to try the treatment. Note there may still be ethical issues raised by delaying access to the treatment for some participants.[citation needed]
Which SWT design is more suitable
SWTs can feature three main designs: a closed cohort, an open cohort, and continuous recruitment with short exposure.[10]
In the closed-cohort design, all subjects participate in the experiment from beginning to end. All outcomes are measured repeatedly at fixed time points, which may or may not be related to each step.[citation needed]
In the open-cohort design, outcomes are measured as in the closed cohort, but new subjects can enter the study, and participants who entered at an early stage can leave before its completion. Only some subjects are exposed from the start, and more are gradually exposed in subsequent steps; thus, the time of exposure varies across subjects.
In the continuous-recruitment design with short exposure, few or no subjects participate at the beginning of the experiment, but more become eligible over time and are gradually exposed to a short intervention. Each subject is assigned to either the treatment or the control condition; because no subject experiences both, the risk of carry-over effects, which can be a challenge for closed- and open-cohort designs, is minimal.[citation needed]
Which analysis strategy is appropriate
Linear Mixed Models (LMM), Generalized Linear Mixed Models (GLMM), and Generalized Estimating Equations (GEE) are the principal estimators recommended for analyzing the results. While LMM offers higher power than GLMM and GEE, it can be inefficient if cluster sizes vary or if the response is not continuous and normally distributed. When either assumption is violated, GLMM or GEE is preferred.[citation needed]
How big the sample should be
Methods for power analysis and sample size calculation are available. Generally, SWTs require a smaller sample size to detect effects, since they leverage both between- and within-cluster comparisons.[11][12]
Best practices for reporting the design and results of the trial
Reporting the design, sample profile, and results can be challenging, since no Consolidated Standards of Reporting Trials (CONSORT) guidelines have been designated for SWTs. However, some studies have provided both formalizations and flow charts that help report results and sustain a balanced sample across the waves.[13]

Model

While there are several other potential methods for modeling outcomes in an SWT,[14] the work of Hussey and Hughes[7] "first described methods to determine statistical power available when using a stepped wedge design."[14] What follows is their design.

Suppose there are [math]\displaystyle{ N }[/math] samples divided into [math]\displaystyle{ C }[/math] clusters. At each time point [math]\displaystyle{ t = 1, \ldots, T }[/math], preferably equally spaced in actual time, some number of clusters are treated. Let [math]\displaystyle{ Z_{ct} }[/math] be [math]\displaystyle{ 1 }[/math] if cluster [math]\displaystyle{ c }[/math] has been treated at time [math]\displaystyle{ t }[/math] and [math]\displaystyle{ 0 }[/math] otherwise. In particular, note that if [math]\displaystyle{ Z_{ct} = 1 }[/math] then [math]\displaystyle{ Z_{c, t+1} = 1 }[/math].

For each participant [math]\displaystyle{ i }[/math] in cluster [math]\displaystyle{ c }[/math], measure the outcome to be studied [math]\displaystyle{ y_{ict} }[/math] at time [math]\displaystyle{ t }[/math]. Note that the notation allows for clustering by including [math]\displaystyle{ c }[/math] in the subscript of [math]\displaystyle{ y_{ict} }[/math], [math]\displaystyle{ \alpha_{c} }[/math], [math]\displaystyle{ Z_{ct} }[/math], and [math]\displaystyle{ \epsilon_{ict} }[/math]. We model these outcomes as: [math]\displaystyle{ y_{ict} = \mu + \alpha_c + \beta_t + Z_{ct}\theta + \epsilon_{ict} }[/math]where:

  • [math]\displaystyle{ \mu }[/math] is a grand mean,
  • [math]\displaystyle{ \alpha_c \sim N(0, \tau^2) }[/math] is a random, cluster-level effect on the outcome,
  • [math]\displaystyle{ \beta_t }[/math] is a time point-specific fixed effect,
  • [math]\displaystyle{ \theta }[/math] is the measured effect of the treatment, and
  • [math]\displaystyle{ \epsilon_{ict} \sim N(0, \sigma^2) }[/math] is the residual noise.

This model can be viewed as a hierarchical linear model where, at the lowest level, [math]\displaystyle{ y_{ict} \sim N(\mu_{ct}, \sigma^2) }[/math] where [math]\displaystyle{ \mu_{ct} }[/math] is the mean of a given cluster at a given time, and, at the cluster level, each cluster mean [math]\displaystyle{ \mu_{ct} \sim N(\mu + \beta_t, \tau^2) }[/math].
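To make the model concrete, here is a minimal numpy sketch (all parameter values are hypothetical) that simulates outcomes from the Hussey and Hughes model and recovers the treatment effect θ with an ordinary least-squares fit on cluster and time dummies, a simple fixed-effects stand-in for the mixed model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical design: C = 6 clusters, T = 7 time points (baseline plus one
# period per step), n = 20 participants per cluster-period; one cluster
# crosses over at each step.  All parameter values are illustrative.
C, T, n = 6, 7, 20
mu, theta = 10.0, 2.0                 # grand mean and true treatment effect
tau, sigma = 1.0, 2.0                 # cluster-effect SD and residual SD
beta = np.linspace(0.0, 0.6, T)       # time-specific fixed effects beta_t
alpha = rng.normal(0.0, tau, size=C)  # random cluster-level effects alpha_c
Z = np.zeros((C, T))
for c in range(C):
    Z[c, c + 1:] = 1.0                # cluster c is treated from period c+1 on

# Simulate y_ict = mu + alpha_c + beta_t + Z_ct*theta + eps_ict, and build a
# design matrix with a treatment column plus cluster and time dummies.
X_rows, y = [], []
for c in range(C):
    for t in range(T):
        for _ in range(n):
            x = np.zeros(1 + C + T)
            x[0], x[1 + c], x[1 + C + t] = Z[c, t], 1.0, 1.0
            X_rows.append(x)
            y.append(mu + alpha[c] + beta[t] + Z[c, t] * theta
                     + rng.normal(0.0, sigma))
X, y = np.array(X_rows), np.array(y)
theta_hat = np.linalg.lstsq(X, y, rcond=None)[0][0]  # coefficient on Z_ct
print(f"estimated treatment effect: {theta_hat:.2f}")  # true value is 2.0
```

Because clusters cross over at different times, the treatment indicator is not collinear with the cluster and time dummies, so θ remains identified even though both sets of dummies are included.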

Estimate of variance

The design effect of a stepped-wedge design (the factor by which the unadjusted sample size must be multiplied) is given by the formula:[11]

[math]\displaystyle{ DE_{SW}=\dfrac{1+ \rho(ktn + bn -1)}{1+ \rho \left(\frac{1}{2}ktn + bn -1\right)} \cdot \dfrac{3(1-\rho)}{2t\left( k-\frac{1}{k}\right)} }[/math]where:

  • ρ is the intra-cluster correlation (ICC),
  • n is the number of subjects within a cluster (which is assumed to be constant),
  • k is the number of steps,
  • t is the number of measurements after each step, and
  • b is the number of baseline measurements.
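The formula translates directly into code. As a minimal sketch with illustrative parameter values:

```python
def design_effect_sw(rho, n, k, t, b):
    """Design effect DE_SW of a stepped-wedge design.

    rho: intra-cluster correlation (ICC); n: subjects per cluster;
    k: number of steps; t: measurements after each step;
    b: number of baseline measurements.
    """
    num = 1 + rho * (k * t * n + b * n - 1)
    den = 1 + rho * (0.5 * k * t * n + b * n - 1)
    return (num / den) * 3 * (1 - rho) / (2 * t * (k - 1.0 / k))

# Illustrative values: ICC 0.05, 10 subjects per cluster, 4 steps,
# 1 measurement after each step, and 1 baseline measurement.
print(round(design_effect_sw(0.05, 10, 4, 1, 1), 3))  # 0.535
```

A design effect below 1, as here, means the SWT needs fewer subjects than the corresponding individually randomized trial.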

The required sample size is then obtained by applying the simple formula:[11]

[math]\displaystyle{ N_{SW}=N_u \cdot DE_{SW} }[/math]

where:

  • Nsw is the required sample size for the SWT
  • Nu is the total unadjusted sample size that would be required for a traditional RCT.

Note that increasing k, t, or b reduces the required sample size for an SWT.

Further, the required number of clusters c is given by:[11]

[math]\displaystyle{ c = N_{SW}/ n }[/math]

The number of clusters cs that switch from the control to the treatment condition at each step is given by:[11]

[math]\displaystyle{ c_s= c / k }[/math]

If c and cs are not integers, they are rounded up to the next integer, and the clusters are distributed as evenly as possible among the k steps.
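Putting the formulas together, here is a minimal sketch of the full sample-size workflow, with illustrative, hypothetical inputs:

```python
import math

def sw_sample_size(n_u, rho, n, k, t, b):
    """Required SWT sample size, cluster count, and clusters per step.

    n_u: unadjusted sample size a traditional RCT would need;
    rho: intra-cluster correlation; n: subjects per cluster;
    k: steps; t: measurements after each step; b: baseline measurements.
    """
    de = ((1 + rho * (k * t * n + b * n - 1))
          / (1 + rho * (0.5 * k * t * n + b * n - 1))
          * 3 * (1 - rho) / (2 * t * (k - 1.0 / k)))
    n_sw = n_u * de              # N_SW = N_u * DE_SW
    c = math.ceil(n_sw / n)      # number of clusters, rounded up
    c_s = math.ceil(c / k)       # clusters switching per step, rounded up
    return n_sw, c, c_s

# Illustrative: an RCT needing 300 subjects, ICC 0.05, clusters of 10,
# 4 steps, 1 measurement after each step, 1 baseline measurement.
n_sw, c, c_s = sw_sample_size(300, 0.05, 10, 4, 1, 1)
print(f"N_SW = {n_sw:.0f}, clusters c = {c}, per step c_s = {c_s}")
```

In this example the design effect is about 0.54, so roughly 161 subjects in 17 clusters suffice, with 5 clusters crossing over at each of the 4 steps (the last step switching the remainder).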

Advantages

The stepped-wedge design offers several advantages over traditional RCTs.

  • First, SWTs are most appropriate, both ethically and practically, when the intervention is expected to produce a positive outcome. Since all subjects eventually receive the benefits of the intervention, ethical concerns can be allayed, and the recruitment of participants may become easier.[11]
  • Secondly, SWTs "can reconcile the need for robust evaluations with political or logistical constraints."[14] Specifically, it can be used to measure the effects of treatment when resources for performing an intervention are scarce.
  • Thirdly, since each cluster receives both the control and the treatment condition by the end of the trial, both between and within-cluster comparisons are possible. This way statistical power increases while keeping the sample significantly smaller than it would be needed in a traditional RCT.[11]
  • Fourth, a design effect (used to inflate the sample size of an individually randomized trial to that required in a cluster trial) has been established,[11] which has shown that the stepped wedge CRT could reduce the number of patients required in the trial compared to other designs.[11][15]
  • Finally, because each cluster switches randomly from the control to the treatment condition at a different time point, it is possible to examine time effects.[11] For example, it is possible to study how repeated or long-term exposure to the experimental stimuli affects the effectiveness of the treatment. Repeated measurements at regular intervals can average out noise, which in turn increases the precision of estimates. This advantage is most apparent when measurement is noisy and outcome autocorrelation is low.[16]

Disadvantages

SWT may suffer from certain drawbacks.

  • First, since in SWTs the study period lasts longer and all subjects eventually receive the treatment, costs may increase significantly.[11] Because the design can be expensive, SWTs may not be the optimal choice when measurement precision and outcome autocorrelation are high.[16] Moreover, since everyone is eventually treated, SWTs do not readily support downstream analyses that require an untreated comparison group.
  • Secondly, in an SWT, more clusters are exposed to the intervention at later than earlier time periods. As such, it is possible that an underlying temporal trend may confound the intervention effect, and so the confounding effect of time must be accounted for in both pre-trial power calculations and post-trial analysis.[5][17][14] Specifically, in post-trial analysis, the use of generalized linear mixed models or generalized estimating equations is recommended.[11]
  • Finally, the design and analysis of stepped-wedge trials is more complex than for other types of randomized trials. Previous systematic reviews highlighted poor reporting of sample size calculations and a lack of consistency in the analysis of such trials.[5][6] Hussey and Hughes were the first authors to suggest a structure and formula for estimating power in stepped-wedge studies in which data were collected at each and every step.[7] This has since been extended to designs in which observations are not made at every step, as well as to designs with multiple levels of clustering.[18]

Ongoing work

The number of studies using the design has been increasing. In 2015, a thematic series was published in the journal Trials.[19] In 2016, the first international conference dedicated to the topic was held at the University of York.[20][21]

References

  1. Wang, Mei; Jin, Yanling; Hu, Zheng Jing; Thabane, Alex; Dennis, Brittany; Gajic-Veljanoski, Olga; Paul, James; Thabane, Lehana (2017-12-01). "The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: A systematic survey of the literature". Contemporary Clinical Trials Communications 8: 1–10. doi:10.1016/j.conctc.2017.08.009. ISSN 2451-8654. PMID 29696191. 
  2. The Gambia Hepatitis Study Group (November 1987). "The Gambia Hepatitis Intervention Study". Cancer Research 47 (21): 5782–7. PMID 2822233. http://cancerres.aacrjournals.org/content/47/21/5782.long. 
  3. "Quasi-experimental trial of diabetes Self-Management Automated and Real-Time Telephonic Support (SMARTSteps) in a Medicaid managed care plan: study protocol". BMC Health Services Research 12: 22. January 2012. doi:10.1186/1472-6963-12-22. PMID 22280514. 
  4. "Do children with cerebral palsy benefit from computerized working memory training? Study protocol for a randomized controlled trial". Trials 15: 269. July 2014. doi:10.1186/1745-6215-15-269. PMID 24998242. 
  5. "The stepped wedge trial design: a systematic review". BMC Medical Research Methodology 6: 54. November 2006. doi:10.1186/1471-2288-6-54. PMID 17092344. 
  6. "Systematic review of stepped wedge cluster randomized trials shows that design is particularly used to evaluate interventions during routine implementation". Journal of Clinical Epidemiology 64 (9): 936–48. September 2011. doi:10.1016/j.jclinepi.2010.12.003. PMID 21411284. 
  7. "Design and analysis of stepped wedge cluster randomized trials". Contemporary Clinical Trials 28 (2): 182–91. February 2007. doi:10.1016/j.cct.2006.05.007. PMID 16829207. 
  8. "Cluster-randomised trial evaluating a complex intervention to improve mental health and well-being of employees working in hospital - a protocol for the SEEGEN trial". BMC Public Health 19 (1): 1694. December 2019. doi:10.1186/s12889-019-7909-4. PMID 31847898. 
  9. "Five questions to consider before conducting a stepped wedge trial". Trials 16 (1): 350. August 2015. doi:10.1186/s13063-015-0841-8. PMID 26279013. 
  10. "Designing a stepped wedge trial: three main designs, carry-over effects and randomisation approaches". Trials 16 (1): 352. August 2015. doi:10.1186/s13063-015-0842-7. PMID 26279154. 
  11. "Stepped wedge designs could reduce the required sample size in cluster randomized trials". Journal of Clinical Epidemiology 66 (7): 752–8. July 2013. doi:10.1016/j.jclinepi.2013.01.009. PMID 23523551. 
  12. "Sample size calculation for a stepped wedge trial" (in En). Trials 16 (1): 354. August 2015. doi:10.1186/s13063-015-0840-9. PMID 26282553. 
  13. "A stepped wedge, cluster-randomized trial of a household UV-disinfection and safe storage drinking water intervention in rural Baja California Sur, Mexico". The American Journal of Tropical Medicine and Hygiene 89 (2): 238–45. August 2013. doi:10.4269/ajtmh.13-0017. PMID 23732255. 
  14. "The stepped wedge cluster randomised trial: rationale, design, analysis, and reporting". BMJ 350: h391. February 2015. doi:10.1136/bmj.h391. PMID 25662947. 
  15. "A stepped wedge cluster randomized trial is preferable for assessing complex health interventions". Journal of Clinical Epidemiology 67 (7): 831–3. July 2014. doi:10.1016/j.jclinepi.2014.02.016. PMID 24774471. 
  16. "Beyond baseline and follow-up: The case for more T in experiments". Journal of Development Economics 99 (2): 210–221. November 2012. doi:10.1016/j.jdeveco.2012.01.002. http://www-wds.worldbank.org/external/default/WDSContentServer/WDSP/IB/2011/04/25/000158349_20110425104143/Rendered/PDF/WPS5639.pdf. 
  17. "A stepped wedge design for testing an effect of intranasal insulin on cognitive development of children with Phelan-McDermid syndrome: A comparison of different designs". Statistical Methods in Medical Research 26 (2): 766–775. April 2017. doi:10.1177/0962280214558864. PMID 25411323. 
  18. "Stepped-wedge cluster randomised controlled trials: a generic framework including parallel and multiple-level designs". Statistics in Medicine 34 (2): 181–96. January 2015. doi:10.1002/sim.6325. PMID 25346484. 
  19. "Stepped Wedge Randomized Controlled Trials". Trials 16: 350. 2015. http://www.biomedcentral.com/collections/SteppedWedge. Retrieved 17 February 2017. 
  20. "First International Conference on Stepped Wedge Trial Design". University of York. https://www.york.ac.uk/healthsciences/research/trials/sw_conf/. 
  21. Kanaan, M. et al. (July 2016). "Proceedings of the First International Conference on Stepped Wedge Trial Design : York, UK, 10 March 2016". Trials 17 (Suppl 1): 311. doi:10.1186/s13063-016-1436-8. PMID 27454562.