The first antipsychotic, chlorpromazine, was synthesized in 1950. Although it was still being used when I started my psychiatric training in the early 1990s, I don’t think I have prescribed it in a decade. Things have moved on, as they ought. But with more than 60 years having passed since the first antipsychotic was marketed, one can expect meta-analyses such as this one.
The analysis included 167 double-blind randomized controlled trials with 28,102 participants, most of them chronically ill. The standardized mean difference (SMD) for overall efficacy was 0.47 (95% credible interval 0.42 to 0.51), but accounting for small-trial effects and publication bias reduced the SMD to 0.38. At least a “minimal” response occurred in 51% of the antipsychotic group versus 30% of the placebo group, and 23% versus 14% had a “good” response. Positive symptoms (SMD 0.45) improved more than negative symptoms (SMD 0.35) and depression (SMD 0.27). Quality of life (SMD 0.35) and functioning (SMD 0.34) improved even in the short term. Antipsychotics differed substantially in side effects. Of the response predictors analyzed, 16 trial characteristics changed over the decades. However, in a multivariable meta-regression, only industry sponsorship and increasing placebo response were significant moderators of effect sizes. Drug response remained stable over time.
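To put those response rates in plainer clinical terms, they can be converted into a number needed to treat (NNT). The sketch below is my own arithmetic on the figures quoted above, not a calculation reported in the paper:

```python
# Rough number-needed-to-treat (NNT) arithmetic from the quoted response rates.
# The rates come from the meta-analysis; the NNT framing is my own illustration.
minimal_drug, minimal_placebo = 0.51, 0.30   # at least a "minimal" response
good_drug, good_placebo = 0.23, 0.14         # a "good" response

# NNT = 1 / absolute risk difference
nnt_minimal = 1 / (minimal_drug - minimal_placebo)
nnt_good = 1 / (good_drug - good_placebo)

print(f"NNT for a minimal response: {nnt_minimal:.1f}")   # ~4.8
print(f"NNT for a good response: {nnt_good:.1f}")         # ~11.1
```

An NNT of around five for at least a minimal response is comparable to many accepted treatments in general medicine, which puts the modest-sounding SMD in perspective.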
There is always the question of publication bias when dealing with any dataset of papers. Drug companies tend to publish the studies they submit to regulatory boards, and those are the ones that show their medication is effective. Editors, for their part, generally reject papers with negative results. The move to open access, open data, and trial registries is helping to ameliorate this. The funnel plot for this paper suggests that there is indeed some publication bias.
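For readers who have not met funnel plots, the sketch below shows the mechanics on simulated data: small trials with unimpressive results are selectively suppressed, and Egger’s regression test picks up the resulting asymmetry. Everything here, from the simulated trial set to the suppression rule, is illustrative rather than taken from the paper:

```python
# Illustrative Egger's test for funnel-plot asymmetry on simulated trials.
# None of these numbers come from the meta-analysis; the "true" SMD of 0.38
# is borrowed only as a plausible centre for the simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_smd, n_trials = 0.38, 300
se = rng.uniform(0.05, 0.40, n_trials)   # small trials have large standard errors
smd = rng.normal(true_smd, se)           # each trial's observed effect size

# Hypothetical publication bias: large trials are always published; small
# trials appear only when the result looks impressive.
published = (se < 0.15) | (smd > 0.35)
smd, se = smd[published], se[published]

# Egger's test: regress the standardized effect (SMD/SE) on precision (1/SE).
# With a symmetric funnel the intercept is near zero; bias shifts it away.
res = stats.linregress(1 / se, smd / se)
t = res.intercept / res.intercept_stderr
print(f"Egger intercept: {res.intercept:.2f} (t = {t:.1f}); near 0 means symmetric")
```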
However, it is the improvement with placebo that interests me. The same rise in placebo response is seen in antidepressant and anti-anxiety trials. Suggested explanations include:
- A decrease in the severity of the trial population: the older studies were done on inpatients, who generally had higher baseline scores on outcome scales.
- An increase in monitoring, driven by the need to reduce bias and by increasingly standardised clinical trial methodology.
- Changes in the standard of clinical care, reflecting the move from hospital to community treatment and, more recently, the reintroduction of talking therapies for psychosis.
The authors note:
> …industry sponsorship has not inflated effect sizes. But there was publication bias, because companies do not always publish inconclusive studies. Increasing placebo response, but not decreasing drug response, contributed to the decreasing effect sizes over time. Finally, sample size and related measures arose several times as significant moderators, and these are modifiable design features for drug development. There could be a vicious circle. Sample sizes have increased continually over the years (see online Figure S4). Companies conduct large trials to assure statistical significance. The inclusion of many patients and sites leads to more recruitment pressure and variability, which, by definition, reduces effect sizes (SMD=mean difference/standard deviation). The next sample size estimation will suggest an even larger sample. We recommend somewhat smaller studies, but with better selected patients, to reverse this trend.
This may leave us with a bleak conclusion: trial designs that do not account for the extra population variability that comes with larger samples may be making clinical trials not only far more expensive but increasingly futile.
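The vicious circle is easy to make concrete. Below is a back-of-the-envelope power calculation, my own sketch using the standard two-sample normal approximation rather than anything from the paper, showing how the required sample per arm grows as the expected SMD shrinks from the unadjusted 0.47 to the bias-adjusted 0.38, and then to a hypothetical further decline:

```python
# Per-group sample size for a two-sided test at alpha = 0.05, power = 0.80,
# using the standard normal approximation: n = 2 * ((z_alpha + z_beta) / SMD)^2.
from math import ceil
from scipy.stats import norm

def n_per_group(smd, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for a two-sided 5% test
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / smd) ** 2)

# 0.47 and 0.38 are the SMDs quoted above; 0.30 is a hypothetical further decline.
for smd in (0.47, 0.38, 0.30):
    print(f"SMD {smd:.2f}: ~{n_per_group(smd)} patients per arm")
# SMD 0.47: ~72, SMD 0.38: ~109, SMD 0.30: ~175
```

If recruiting those extra patients itself adds heterogeneity and pushes the SMD down further, the next power calculation will demand an even larger sample, which is exactly the circle the authors describe and their argument for smaller trials with better-selected patients.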