What is a double-baseline design?

30-Jul-18, 20:43
by Tudor_Queen

I am having one of those days where nothing makes sense to me anymore. Could anyone who knows about intervention research and is familiar with different intervention designs please explain in layman's terms (I can't handle anything else) what a double baseline design is? I've googled but it isn't making sense to me. I can see that it isn't like an RCT. There isn't a control group and treatment group to which participants are randomly assigned. And it looks like all the participants undergo the (same) treatment and are measured at baseline. So why is it called a "double" baseline design? Are outcomes compared from before and after treatment? And if so can placebo or other effects (e.g., spontaneous recovery) be ruled out, or can the design not deal with those things?

Thanks so much in advance to anyone who can answer!

31-Jul-18, 03:47
by abababa
Treatment is started and baselined at different times. Because there are two baselines, we can attempt to infer that the treatment is the cause of the effect. This assumes a hypothesis that the treatment's benefit is related to exposure time (a reasonable assumption in most forms of behavioural intervention).

e.g. - You run the same depression intervention on two cohorts, which start 1 month apart. You baseline each cohort at its own start point. Both cohorts report monthly depression-inventory scores. Cohort 1's scores markedly improve vs their baseline 3 months later. Cohort 2's markedly improve vs their baseline 4 months later, i.e. after the same 3 months of exposure, having started the intervention a month later. The fact that both cohorts improved after a similar duration of exposure to the intervention supports the hypothesis that the intervention is the causal factor.
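The staggered-cohort logic above can be sketched as a toy calculation. All the scores here are made up purely for illustration, and `change_vs_baseline` is a hypothetical helper, not anything from a real study:

```python
# Toy sketch of the staggered-baseline (double-baseline) logic.
# Monthly depression-inventory scores (lower = better), keyed by
# calendar month. Cohort 2 starts the intervention one month later.
cohort1 = {0: 30, 1: 28, 2: 24, 3: 18}   # baselined at month 0
cohort2 = {1: 31, 2: 29, 3: 25, 4: 19}   # baselined at month 1

def change_vs_baseline(scores, start_month, months_of_exposure):
    """Improvement relative to that cohort's own baseline score."""
    baseline = scores[start_month]
    return baseline - scores[start_month + months_of_exposure]

# Both cohorts show a similar improvement after the SAME exposure time,
# even though it occurs in different calendar months -- which is what
# points to the intervention (rather than calendar time) as the cause.
imp1 = change_vs_baseline(cohort1, start_month=0, months_of_exposure=3)
imp2 = change_vs_baseline(cohort2, start_month=1, months_of_exposure=3)
print(imp1, imp2)  # 12 12
```

If the improvement had instead tracked calendar time (say, both cohorts improving in month 3 regardless of start date), that would point to an extraneous cause rather than the intervention.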

Benefit - all participants are exposed to the intervention. This can be important both ethically and pragmatically. But the same could sometimes be achieved with a crossover design.

Drawback - you may still fail to account for extraneous factors. For example, if both cohorts' sessions are led by the same facilitator - is it the content of the intervention, or the skill of the facilitator? Similarly, depending on the timing and the power of your stats, something extraneous could still happen in week 2.5 (like a TV documentary on depression) with a sufficiently strong effect to influence both sets of responses.
31-Jul-18, 10:38
by Tudor_Queen
Thank you for the super helpful explanation!

Is the example you've provided just one variation of how it can be done? I've skim-read two papers that say they used a double-baseline design, and both of them just had one group of individuals who were measured twice at baseline (t1 and t2) and then once post-treatment (t3). Change from t1 to t2 was then compared with change from t2 to t3, to see whether the treatment-induced change (t2 to t3) was greater.

Your example, by contrast (if I've understood it correctly), doesn't involve taking two baseline measurements from the same group, but instead takes one baseline from each of two separate cohorts.

So both our examples take more than one baseline measurement, but the designs are actually quite different. Hmm!
01-Aug-18, 19:24
by Thesisfun
I am not sure I understand the point of either of these study designs.

The example by tudor_queen sounds like something between a glorified before-after study and a rather pathetic interrupted time series.

The example by abababa sounds like an extremely poor/basic stepped-wedge design.

Both approaches seem to have extremely significant limitations!!
01-Aug-18, 19:44
by Tudor_Queen
They do indeed have limitations. The most important I can think of is that there is no attention control to rule out placebo effects. But I think their use is justified in some scenarios, such as a) feasibility research, b) populations where it would be unethical to assign individuals to a control condition (or make them wait for treatment), or c) where the population is so few and far between that it is hard to get the numbers to be adequately powered for an RCT.

The example I gave was from a paper where they were testing the feasibility and efficacy of a treatment for children with a certain genetic phenotype, and they justified the design on those grounds (a and c) but did say that a larger trial would be needed before the treatment could be rolled out.

What is a wedge design? Sounds uncomfortable!


Copyright ©2018 Postgraduate Forum. All rights reserved.