Saturday, January 8, 2011

This article in The Atlantic mentions some numbers that skeptics should be aware of:
“80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.”

The latter two numbers deserve discussion at a later point, but for now I just want to highlight the first: 80% of non-randomized studies -- i.e., observational studies -- turn out to be wrong.

This should not be surprising. Unlike a randomized trial, an observational study is not necessarily proof of anything, which is why it is not part of the scientific method. Statisticians typically discount observational studies unless the relative risks they find are very large, e.g. at least a 50% or 100% increase in risk between the two groups compared. For a point of comparison, the observational studies of smoking showed smokers having roughly a 1000% increased risk of lung cancer.
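To pin down the arithmetic behind these percentages, here is a small Python sketch with made-up cohort counts (the relative_risk helper and the case numbers are hypothetical, just to show how a "50% increase" maps onto a relative risk of 1.5):

```python
# Hypothetical counts, purely to illustrate the arithmetic: a relative
# risk (RR) of 1.5 is a 50% increase; the smoking figure quoted above,
# a 1000% increase, corresponds to an RR of about 11.
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk in the exposed group divided by risk in the unexposed group."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Made-up cohort: 150 cases per 10,000 exposed vs. 100 per 10,000 unexposed.
rr = relative_risk(150, 10_000, 100, 10_000)
print(f"relative risk = {rr:.2f} -> {rr - 1:+.0%} increase in risk")
# relative risk = 1.50 -> +50% increase in risk
```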

However, the media often report the results of observational studies that found only a 10-20% (or even smaller) change in risk. Such small effects can easily be artifacts of confounding rather than genuine causation. For example, if a study finds a 5% decrease in cancer risk among people who eat Brussels sprouts, there is no reason to think it is the Brussels sprouts: those people probably take better care of themselves in lots of other ways as well.
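To make this concrete, here is a minimal Python simulation (every number in it is invented for the sketch). Brussels sprouts have zero causal effect on cancer in this model, yet because health-conscious people both eat sprouts more often and have a lower baseline risk, a naive comparison of eaters and non-eaters shows an apparent risk reduction of several percent:

```python
# A minimal confounding simulation; all numbers here are invented.
# Brussels sprouts have ZERO causal effect on cancer in this model:
# health-conscious people simply eat them more often and, for unrelated
# reasons (exercise, not smoking, etc.), get cancer less often.
import random

random.seed(0)
n = 1_000_000
cases = {True: 0, False: 0}   # cancer cases, keyed by eats_sprouts
totals = {True: 0, False: 0}  # group sizes, keyed by eats_sprouts

for _ in range(n):
    health_conscious = random.random() < 0.5
    # Health-conscious people are twice as likely to eat sprouts...
    eats_sprouts = random.random() < (0.6 if health_conscious else 0.3)
    # ...and have a lower baseline cancer risk. Sprouts do nothing here.
    cancer = random.random() < (0.08 if health_conscious else 0.10)
    totals[eats_sprouts] += 1
    cases[eats_sprouts] += cancer

risk_eaters = cases[True] / totals[True]
risk_others = cases[False] / totals[False]
print(f"risk among sprout eaters: {risk_eaters:.4f}")
print(f"risk among non-eaters:    {risk_others:.4f}")
print(f"apparent change in risk:  {risk_eaters / risk_others - 1:+.1%}")
# apparent change in risk: roughly -6%, with no causal effect at all
```

The entire apparent "benefit" here comes from the confounder; stratifying on health-consciousness within the simulation would make the effect disappear.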

The numbers cited above bear this out: 80% of observational results turn out to be wrong. It is therefore legitimate to be skeptical of observational studies, particularly, I would say, those with small effect sizes.
