Peter Simons has an excellent blog
post on
Mad in the UK. He makes the case that the apparent benefit of antidepressants in clinical trials is an artefact and that any apparent improvement in depression is due to the placebo effect, not any active effect of the antidepressant.
The effectiveness of treatment is assessed in randomised controlled trials comparing active drug with placebo (see my
OpenMind article). In the basic design, participants are randomly assigned to two groups and given either the active drug or a placebo, which is supposed to be inert and indistinguishable from the active drug. Participants should not know to which group they have been allocated; that is, they are blinded to their allocation. This matters because if they did know whether they had been given active drug or placebo, their expectations about the implications of that allocation could affect outcome. If they think, for example, that the drug being tested is likely to be effective, then merely knowing the allocation may produce or exaggerate any difference between the two groups. A significant difference between the groups may then be merely a self-fulfilling prophecy that the active drug is better than placebo.
As Justin Karter said in a recent
Mad in America blog
post, an individual’s subjective belief about receiving active or placebo treatment in a clinical trial can significantly influence the outcome of treatment. If participants are unblinded, or unmasked - another word for unblinded - as to which treatment they have received, then these expectations could still be affecting outcome. Justin reviews a paper by Fassi et al (
2023) that demonstrated in neurostimulation studies (see eg.
previous post about neurostimulation) that the belief of receiving the active or the placebo condition during a trial can explain the research outcome better than the actual treatment to which participants are assigned. In other words, individual differences in subjective treatment belief explained the variability in outcomes better than the objective treatment itself.
I participated in a
BMJ rapid response
discussion following what I think is the most definitive
paper about whether trials should obtain data, at the beginning of a trial, about participants’ guesses as to which treatment they were given. As explained above, trials are expected to be blinded/masked, so that neither patients nor doctors, including raters, know whether participants are in the treatment or placebo group. As I also said in my last
contribution to that rapid response discussion, there is a general positive gloss put on the problem of unblinding/unmasking in clinical trials. I went on:-
The general thought seems to be that measuring unblinding is difficult, so we may as well give up and carry on with our pretence [that trials are blinded]. This may be to continue "turning a blind eye", as used in the phrase in the title of the original paper.
I suggested:-
I think it may be possible to measure what the degree of unblinding should be from correct hunches from efficacy based on effect size, and if the actual degree of unblinding with correct guesses is significantly greater than this, it would surely imply that bias had been introduced. … I am reluctant to … be as negative about the implications [of measuring unblinding/unmasking] as some of my fellow rapid responders.
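The comparison I had in mind can be sketched numerically. The following is a minimal illustration (the guessing model and all the numbers are hypothetical, not taken from any of the papers discussed): if guesses were driven only by experienced benefit, with placebo outcomes distributed as N(0,1), drug outcomes as N(d,1), and a participant guessing "drug" exactly when their improvement exceeds the midpoint, a standardized effect size d would imply an expected correct-guess rate of Φ(d/2). An observed guess rate can then be compared against that expectation with an exact binomial test.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_correct_guess_rate(d: float) -> float:
    # Hypothetical model: placebo outcomes ~ N(0,1), drug outcomes ~ N(d,1),
    # and a participant guesses "drug" exactly when their own improvement
    # exceeds the midpoint d/2. Either group then guesses correctly with
    # probability Phi(d/2).
    return normal_cdf(d / 2.0)

def binomial_p_value(k: int, n: int, p: float) -> float:
    """Exact one-sided P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1.0 - p)**(n - i)
               for i in range(k, n + 1))

# An antidepressant-sized effect (d around 0.3) predicts only a modest
# excess over 50% correct guesses under this model.
expected = expected_correct_guess_rate(0.3)  # about 0.56

# Hypothetical trial: 140 of 200 participants (70%) guessed correctly.
# Is that compatible with guessing driven by efficacy alone?
p_value = binomial_p_value(140, 200, expected)
print(f"expected rate {expected:.2f}, observed 0.70, p = {p_value:.2g}")
```

On this reading, an observed rate of correct guesses far above what the effect size predicts would suggest that something other than efficacy, such as side effects, had broken the blind.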
I emphasised that in clinical trials it is not only participants but also raters who can become unblinded, and that raters’ guesses matter even more than patients’ guesses, certainly if the patients’ guesses are not directly communicated to the rater when the rater undertakes the assessment.
I concluded:-
If raters are able to be cued in to whether patients are receiving active or placebo treatment, their wish fulfilling expectancies could be affecting outcome ratings. How do we know that small effect sizes [as in antidepressant trials, for example] in particular are not due to this amplified placebo effect? I think we should stop turning a blind eye to this legitimate question. It does need to be answered to give confidence about the use of many medications that are endorsed in clinical practice.
Thankfully, the article by Jureidini et al (2023) reviewed by Peter Simons in his blog post did produce data on unblinding/unmasking in the Treatment for Adolescents with Depression Study (TADS) - see my BMJ letter. As I say in that BMJ letter, “Fluoxetine was not in fact statistically better than placebo in this study and only became so when added to cognitive behaviour therapy in an unblinded arm”. It is therefore wrong to conclude, as many people do, that the TADS study demonstrated that antidepressants, or at least fluoxetine, improve depression.
There were four treatment arms in the TADS study: fluoxetine (Prozac) only; cognitive-behavioural therapy (CBT) only; fluoxetine plus CBT; and placebo. In psychotherapy trials it is not possible to blind participants as to whether they are receiving the psychotherapy, such as CBT, or the control treatment (see my BMJ letter). They have to be told which group they are in, unless they are deceived about the nature of the trial, which is generally regarded as unethical. So it has to be explained to participants that they will be allocated either to the experimental therapy in the active condition or to a control group, which may merely mean being put on a waiting list. Obviously they might hope to be allocated the psychotherapy treatment rather than to continue on the waiting list, and are likely to be disappointed if not given that allocation, which could well affect how they rate their degree of improvement during the trial. Psychotherapy trials therefore cannot be conducted double blind, and there is always the methodological issue of whether the control groups in trials of such treatment are adequate for interpreting the effectiveness of psychotherapy.
All four groups in the TADS study guessed their treatment allocation more accurately than the 50% that would be expected by chance. Treatment guess had a substantial and statistically significant effect on outcome. The treatment effect itself was not significant, as indeed it had not been in the original analysis, and removing guesses from the analysis still did not make it so (p=0.06, where p<0.05 is the significance level standardly used in clinical trials). As Jureidini et al conclude for the TADS study, “treatment guesses strongly predicted outcomes and may have led to the exaggeration of drug effectiveness in the absence of actual effects”. Unblinding, which amplifies the placebo effect, may well be the reason for the small difference in clinical trials between antidepressants and placebo.
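The mechanism can be illustrated with a toy simulation (entirely hypothetical numbers, not the TADS data): suppose improvement depends only on what participants believe they are taking, but unblinding makes guesses track actual allocation. A naive drug-versus-placebo comparison then shows a spurious “drug effect” that vanishes once outcomes are grouped by guess.

```python
import random
import statistics

random.seed(0)

def simulate_arm(on_drug: bool, p_correct: float, n: int):
    """Simulate one trial arm under a pure expectation effect.

    Improvement depends ONLY on the participant's guess (mean 10 if they
    believe they are on the drug, 6 if they believe placebo); unblinding
    means guesses are correct with probability p_correct > 0.5.
    """
    arm = []
    for _ in range(n):
        guess_drug = random.random() < (p_correct if on_drug else 1.0 - p_correct)
        improvement = random.gauss(10.0 if guess_drug else 6.0, 4.0)
        arm.append((guess_drug, improvement))
    return arm

n = 1000  # hypothetical sample size per arm
drug = simulate_arm(True, 0.7, n)      # 70% of drug patients guess "drug"
placebo = simulate_arm(False, 0.7, n)  # 70% of placebo patients guess "placebo"

# Naive comparison: the drug arm looks better despite no active effect at all.
naive_diff = (statistics.mean(o for _, o in drug)
              - statistics.mean(o for _, o in placebo))

# Grouped by guess instead: believers in the drug improve more,
# whatever they actually received.
drug_believers = [o for g, o in drug + placebo if g]
placebo_believers = [o for g, o in drug + placebo if not g]

# Within those who guessed "drug", actual allocation makes almost no difference.
within_believers_diff = (statistics.mean(o for g, o in drug if g)
                         - statistics.mean(o for g, o in placebo if g))

print(f"naive drug-placebo difference: {naive_diff:.2f}")
print(f"mean if guessed drug: {statistics.mean(drug_believers):.2f}, "
      f"if guessed placebo: {statistics.mean(placebo_believers):.2f}")
```

This is of course only a sketch of the bias, not an analysis of real data, but it shows how an amplified placebo effect alone can manufacture a small apparent drug-placebo difference of the size seen in antidepressant trials.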
Participants in TADS improved more if they believed they had received the drug rather than placebo. Those who guessed placebo even though they had received fluoxetine actually improved more than those on the drug who guessed correctly. Although this was reversed for those who received placebo, i.e. those who guessed correctly when on placebo did worse than those actually on the drug, these findings highlight the importance of belief about treatment, rather than necessarily the treatment itself, in the outcome of treatment for depression. Indeed, the effect was stronger among those who were more confident in their guess.
Interestingly, Jureidini et al did not find much association between side effects of medication and unblinding. Side effects have typically been the explanation offered by Irving Kirsch, for example, of how people are cued in to their allocation in a clinical trial (see eg. previous post). There is evidence for this hypothesis, in that active placebos, which mimic the side effects of the trial drug, generally reduce the effect size. But it is not the only route to unmasking in clinical trials, which can even include fraud by raters who break the blind before making their assessment - holding the sealed envelopes containing patients’ coded allocations up to the light, so that the allocation shows through, has been highlighted as one such practice. And the guesses of trial participants can easily be communicated to raters in the assessment interview. Participants in antidepressant trials do seem to be significantly unblinded, even if they may not be in lithium trials (see my BJPsych 1996 letter).
TADS should never have been used to recommend fluoxetine for adolescent depression. Analysis of the guesses and subjective beliefs of participants merely reinforces this conclusion and highlights the obvious influence of placebo factors in any response to antidepressants. This placebo effect must not be ignored and the pretence that it has been eliminated in randomised controlled trials must stop. The fear that antidepressants may not be effective and that the modern basis of psychiatric practice in medication may collapse does not justify not taking the issue of bias in clinical trials seriously (see eg. previous post).