1978) called meta-analysis an exercise in mega-silliness. To quote: “A mass of reports - good, bad, and indifferent - are fed into the computer in the hope that people will cease caring about the quality of the material on which their conclusions are based.” Cipriani et al (2018), in their recent network meta-analysis of 21 antidepressant drugs, rated the risk of bias of the trials they put into their analysis: only 18% were rated as low risk. Yet they hoped the results would compare and rank antidepressants for acute treatment in adults.
The article does list some winners and losers, while accepting that there were "few differences between antidepressants when all data were considered". Parikh & Kennedy (2018) add vortioxetine to their list of winners, which must please the manufacturers, as it is not yet off patent. Amitriptyline actually had the highest efficacy but didn't reach the 'winners' list, I think because of poor acceptability (defined as dropout rates) in the head-to-head trials, and low certainty of evidence (and maybe some bias against a traditional tricyclic). Unlike Parikh & Kennedy, Cipriani et al don't make any recommendations about antidepressant choice, merely hoping that their "results will assist in shared decision making between patients, carers, and their clinicians".
Such a weak conclusion to their main study may help to explain why, in their publicity, which made the Sun, the Guardian and the front page of The Times, Cipriani et al concentrated on the statistically significant results for antidepressant efficacy, which actually aren't news (see my tweet), although they may be for reboxetine (see previous post). I suppose it's not seen as being ideological to create publicity to increase the citation index of a paper! Or to mislead and avoid dealing with the challenge of the placebo amplification hypothesis (e.g. see previous post). Engaging with this issue would actually be a more scientific way of proceeding, but the study by Cipriani et al doesn't have any bearing on it (even if they would like it to).
Actually the review paper itself (as opposed to the publicity) does recognise that the short-term benefits of antidepressants are "on average, modest" and that the "long-term balance of benefits and harms is often understudied". Several aspects of their findings reinforce that there are biases in the data: e.g. smaller and older studies have larger effect sizes against placebo, and a drug appears more effective when it is the novel or experimental comparator than when that same treatment is the older one (which they term the 'novelty effect'). I also wasn't sure whether they had received replies to all of their requests to the pharmaceutical companies for their data. Let's have a more measured debate about the evidence for antidepressant efficacy.