Wednesday, January 12, 2011

Tax Realism

The numbers shown in this graph (from [1]) appeal to my skeptical mind. While Democrats believe that taxes (particularly on the rich) can always be raised to bring in more revenue, and Republicans argue that raising taxes (particularly on the rich) actually reduces revenue, the reality is that... tax rates don't have much effect at all:

[Graph: tax revenue as a fraction of GDP; see [1].]

Over the 60 years shown in the graph, tax rates fluctuated enormously (between roughly 90% and 25%). Yet tax revenue to the government almost always stayed within 15-20% of GDP.

One idea I like, at least to focus the debate properly, would be to mandate that tax revenues are to be fixed at, say, 18% of GDP. Some advantages of doing so would be the following.

1. It reminds everyone that we don't have a limitless supply of money. There is a fixed percentage of GDP that can be spent on services, and so the only question is how we will divvy that up.

2. On the other hand, we can increase the amount of tax revenue in absolute terms by increasing GDP. Hence, this gives even big-government types an incentive to be pro-growth.

3. One other approach to increasing tax revenue is to switch to more efficient taxes. Hence, this gives big-government types a reason to like some of the tax proposals typically supported by pro-growth types. It's worth pointing out that European countries tend to have more efficient tax schemes than the US already, while they also tend to have more government services, so this scenario is not as outlandish as it may sound.
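Point 2 is just compound-growth arithmetic, but it's worth seeing the magnitudes. The sketch below assumes a fixed 18% revenue share and entirely made-up illustrative figures (a $15T GDP and 3% annual growth); none of these numbers come from the graph.

```python
# With revenue fixed at 18% of GDP, the only way to raise revenue in
# absolute terms is to grow GDP. All figures below are illustrative.

REVENUE_SHARE = 0.18

def revenue(gdp: float) -> float:
    """Tax revenue in the same units as GDP, at the fixed 18% share."""
    return REVENUE_SHARE * gdp

gdp_now = 15.0       # trillions of dollars (assumed)
growth_rate = 0.03   # 3% annual growth (assumed)
years = 10

gdp_later = gdp_now * (1 + growth_rate) ** years
print(f"Revenue now:         ${revenue(gdp_now):.2f}T")
print(f"Revenue in {years} years: ${revenue(gdp_later):.2f}T")
```

Under these assumptions, a decade of 3% growth raises revenue by about a third in absolute terms without touching rates, which is the incentive point 2 is making.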

[1] http://www.deptofnumbers.com/blog/2010/08/tax-revenue-as-a-fraction-of-gdp/

Saturday, January 8, 2011

This article in the Atlantic mentions some numbers that skeptics should be aware of:
“80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.”

The latter two numbers deserve discussion at a later point, but for now, I just want to highlight the first number: 80% of non-randomized studies -- i.e. observational studies -- turn out to be wrong.

This should not be surprising. Unlike a randomized trial, an observational study is not necessarily proof of anything, which is why it is not part of the scientific method. Statisticians typically ignore observational studies unless the relative risks they find are very large, e.g. at least a 50% or 100% increase in risk between the two groups compared. For a point of comparison, observational studies showed smokers having a 1000% increased risk of cancer.

However, the media often report the results of observational studies that found a 10-20% (or even smaller) change in risk. Such small effects can easily be the product of confounding: if a study finds a 5% decrease in cancer risk among people who eat Brussels sprouts, there is no reason to think it is the Brussels sprouts. Those people probably take better care of themselves in lots of other ways as well.
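The Brussels-sprouts point can be made concrete with a toy simulation. In the sketch below, sprouts have zero effect on cancer by construction; a hidden "health-conscious" trait both makes people likelier to eat sprouts and lowers their cancer risk. All the probabilities are invented for illustration, yet a naive observational comparison still shows sprout-eaters with a modestly reduced relative risk.

```python
import random

random.seed(0)
N = 200_000

# Hidden confounder: health-conscious people eat sprouts more often
# AND have lower cancer risk. Sprouts themselves do nothing here.
eaters = eaters_cases = others = others_cases = 0
for _ in range(N):
    conscious = random.random() < 0.5
    eats_sprouts = random.random() < (0.6 if conscious else 0.4)
    cancer = random.random() < (0.08 if conscious else 0.10)  # sprouts ignored
    if eats_sprouts:
        eaters += 1
        eaters_cases += cancer
    else:
        others += 1
        others_cases += cancer

# Naive observational relative risk: comes out below 1 even though
# sprouts have no causal effect in this simulation.
rr = (eaters_cases / eaters) / (others_cases / others)
print(f"Observed relative risk for sprout-eaters: {rr:.3f}")
```

With these made-up probabilities the expected relative risk is about 0.96, i.e. roughly the 5% "protective effect" of the example, produced entirely by confounding.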

The numbers cited above back this up: 80% of observational studies report effects that turn out to be wrong. Hence, it is entirely fair to be skeptical of observational studies, and I would say particularly those with small effect sizes.