Why the FT is talking nonsense (again)

Yesterday, I read this article: Why the BoE is talking nonsense (Google). It’s all about a graph from the Bank of England and how the axes are defined. According to the author, the graph is complete bullshit because the observations are graphed in terms of the number of standard deviations from the mean.

[Figure: BoE November labour market slack chart]

Say you have time series for three variables. All these variables could have different means and different degrees of variability. One variable might go up and down all over the place, while another may vary just a small amount. You can put them all in one graph, but you run the risk that the graph will look like shit. If one variable has a mean of 100 and another a mean of 0, you can imagine you need a very large graph to show both variables. Or one variable might vary so much that another variable with smaller variability looks like a straight line.

A handy way to overcome this issue is to standardize all observations. For each variable, you first subtract the mean from all observations. This way, all observations will be centered around zero and all variables will have the same mean: zero. But there is still the issue of variability. To overcome that, you divide all the demeaned observations by the standard deviation of the variable. Now, each observation is expressed as the number of standard deviations from its mean. Standardizing data does not change the information content of the data. It is still the same data, with the same meaning.
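A minimal sketch of what standardizing does, in Python (the three series below are made up purely for illustration and have nothing to do with the BoE data):

```python
import numpy as np

def standardize(x):
    """Express each observation as standard deviations from the series mean."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Three hypothetical series with very different means and variability.
rng = np.random.default_rng(0)
a = 100 + 0.5 * rng.standard_normal(200)   # mean around 100, small swings
b = 10 * rng.standard_normal(200)          # mean around 0, large swings
c = 5 + 2 * rng.standard_normal(200)       # something in between

# After standardizing, every series is centered on zero with unit variance,
# so all three fit comfortably on one set of axes. The data itself is
# unchanged: standardizing is just a shift and a rescaling.
za, zb, zc = (standardize(s) for s in (a, b, c))
print(round(za.mean(), 10), round(za.std(), 10))   # ~0.0 and 1.0
```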

So, what’s wrong with this article? The author’s claim is that some observations on the graph are six standard deviations away from the mean, which would make such events extremely unlikely (once in 254 million years). Therefore, the BoE is talking nonsense. So what’s wrong here? It’s really quite simple. In order to express observations in terms of probabilities, you need a probability distribution. The author implicitly assumed a normal distribution for the underlying data. I don’t know why, but he did.

Different probability distributions have different parameters. For example, a normal distribution has two parameters: mean and standard deviation. So, if you know the mean and the standard deviation of a normal distribution, you can make statements about probabilities for normally distributed variables. But let’s take a look at other distributions. For example, a Beta distribution has two shape parameters (four in its generalized form) and a Poisson distribution has only one. They also have a mean and a standard deviation, but those two numbers alone don’t necessarily tell you anything about the probabilities. Just remember that you need to know the distribution and its parameters in order to calculate probabilities. In the special case of the normal distribution, the parameters just happen to be the mean and the standard deviation. But don’t think you can calculate a mean and a standard deviation from any kind of dataset and think you now have magical powers to infer probabilities. You don’t.
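To put a number on how much the distributional assumption matters, here is a small sketch (my own illustration, not taken from the article): the probability of an observation at least six standard deviations out under a normal assumption versus under a heavy-tailed Student's t. The choice of four degrees of freedom is arbitrary.

```python
import numpy as np
from scipy import stats

# Probability of an observation at least 6 standard deviations above the mean,
# under two different distributional assumptions.

# Under a normal distribution:
p_normal = stats.norm.sf(6)

# Under a Student's t with 4 degrees of freedom (an arbitrary stand-in for a
# heavy-tailed distribution). Its standard deviation is sqrt(df / (df - 2)),
# so "6 standard deviations" corresponds to a larger value on the t scale.
df = 4
sd_t = np.sqrt(df / (df - 2))
p_t = stats.t.sf(6 * sd_t, df=df)

print(f"Normal assumption: {p_normal:.2e}")  # on the order of 1e-9
print(f"t({df}) assumption:   {p_t:.2e}")      # orders of magnitude more likely
```

Same six-standard-deviation observation, wildly different probabilities, simply because the assumed distribution changed.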

I could show you a graph of standardized daily stock returns through history, and you would see a huge drop somewhere around October 1987, on Black Monday. An observation about 22 standard deviations below the mean. Would that graph be silly? Of course not. If you assume a normal distribution, you wouldn’t expect a 22 standard deviation observation anywhere in the history of the universe. But we know daily returns do not follow a normal distribution. In fact, they follow a distribution with much heavier tails (i.e. higher probability of extreme events). This distribution still has a mean and a standard deviation and you can still standardize observations, but you cannot use the mean and the standard deviation to infer probabilities. If you’d like to talk about probabilities, first find out which distribution describes the random process (this is not always easy!), then estimate the parameters (this is not always easy!) and then talk about probabilities (okay, this is pretty easy).
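A rough way to see this with simulated data (synthetic draws, not actual stock returns): generate the same number of "daily returns" from a normal distribution and from a heavy-tailed Student's t, standardize both, and compare the most extreme observation in each.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 15_000  # roughly the number of trading days in about 60 years

# Simulated "daily returns": one normally distributed series and one drawn from
# a heavy-tailed Student's t with 3 degrees of freedom (both purely illustrative).
normal_returns = rng.standard_normal(n)
fat_tail_returns = rng.standard_t(3, size=n)

def worst_move(x):
    """Most extreme observation, in standard deviations from the sample mean."""
    z = (x - x.mean()) / x.std()
    return np.abs(z).max()

# The fat-tailed series routinely produces standardized observations far beyond
# anything the normal series shows, even though both have a perfectly
# well-defined mean and standard deviation.
print(f"Worst standardized move, normal data:     {worst_move(normal_returns):.1f} sd")
print(f"Worst standardized move, fat-tailed data: {worst_move(fat_tail_returns):.1f} sd")
```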

It disturbs me a bit that a quality newspaper publishes these kinds of harsh articles. Don’t they have somebody over there who has elementary knowledge of statistics? By the way, I’ve seen this kind of stuff before at the FT. They claimed that markets are not efficient because the Capital Asset Pricing Model (CAPM) does not hold in reality. They even proclaimed that Gene Fama is crazy because of this. But market efficiency doesn’t require the CAPM to hold. So they’re really quite funny, those harsh articles. Personally, I don’t write about stuff I don’t understand. Criticizing others for making errors when you don’t really know what you are talking about is just horrible. It’s too bad that some journalists can get away with this, because surely some people out there will read their articles, believe them, and move on. And that’s just a shame.
