More than half of psychiatry studies spin conclusions

Online daters likely aren’t the only ones guilty of making themselves look better through a curated profile: authors of clinical studies spin their findings in the abstracts and conclusions of more than half of psychiatry trials, according to research published in BMJ Evidence-Based Medicine.

Researchers from Oklahoma State University Center for Health Sciences wanted to evaluate the prevalence of spin in the abstracts of randomized controlled trials (RCTs)––which are considered the “gold standard” of evidence in psychiatry––and explore any association with industry funding.

Authors of studies and reports “are free to choose how to report or interpret study results,” wrote first author Samuel Jellison, of the College of Osteopathic Medicine at Oklahoma State, and colleagues. Often, authors use sections of their report, such as the abstract, to highlight conclusions that do not actually represent the findings. This misrepresentation of study results is known as spin.

Spin takes many forms in clinical studies, from selectively reporting outcomes to inappropriately applying measures to manipulating figures or graphs. Industry-funded studies, the authors wrote, tend to contain more spin that casts conclusions in a favorable light.

To assess spin in RCTs, the research team examined psychiatry and behavioral treatment trials published in leading journals in the field from 2012 to 2017. The researchers considered spin present if the authors of a study:

  • focused on statistically significant results
  • interpreted statistically nonsignificant results as equivalent or noninferior
  • used favorable rhetoric in the interpretation of nonsignificant results
  • claimed benefit of an intervention despite statistically nonsignificant results

Overall, 56% of the RCTs included in the study had identifiable spin. Spin appeared in 2% of titles, 21% of abstracts and 49% of abstract conclusion sections. In addition, evidence of spin appeared in 15% of both the main-text results and conclusion sections of RCTs.

The prevalence of spin in the majority of RCTs was concerning, the authors said, noting that many physicians read only the abstract of a study.

“Researchers have an ethical obligation to honestly and clearly report the results of their research,” Jellison and colleagues wrote. “Adding spin to the abstract of an article may mislead physicians who are attempting to draw conclusions about a treatment for patients.”

However, the researchers found no link between spin and industry funding; in fact, spin was more common in publicly funded RCTs.

Given these findings, the researchers urged further investigation into the issue. They found only one other study that had examined the impact of spin on clinical decision-making or on the funding of other trials.

“While definitive conclusions about the effect of spin on real-life clinical practice are difficult to ascertain based on one trial of spin, the results of the Boutron trial suggest that there is significant potential for misinterpretation of results when spin is introduced to article abstracts,” Jellison et al. wrote of that impact study’s findings.

To combat the issue, journal editors could invite additional reviewers to comment specifically on spin, helping ensure that abstracts accurately and clearly reflect a study’s results.