Implied Volatility: What the Central Limit Theorem Tells You

“VIX has reached an all-time high of 48.36,” said every financial broadcast station on September 29, 2008 as the market continued its avalanche. But what the heck is this VIX?? Is it that thing that you rub on your chest when you’re sick, and you forget that you have it on your hand when you rub your eye, and it burns so much that the next day your eye is more swollen than your hot neighbor’s new implants? Well…not really… In short, the VIX is the implied volatility (IV) measurement of the S&P 500 index (quoted in index points, not dollars).

So what is IV? It’s the market’s expectation of how much prices will vary around the mean over a particular future time period (in the VIX’s case, the next 30 days). It’s a predictive measurement backed out of option prices using the Black-Scholes model, giving us a reasonable forecast of the standard deviation of returns over that 30-day window. In general, the interpretive value for traders is that wider expected variation tends to accompany falling underlying prices (S&P 500 prices), and vice versa. But how do we determine the accuracy of the IV? How well does the IV predict the next 30 days of price variation for you?
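To make “backed out of option prices” concrete, here is a minimal sketch of recovering implied volatility by inverting the Black-Scholes formula for a European call with bisection. All the inputs (spot, strike, rate) are made-up illustrative numbers, not real market data:

```python
# Sketch: back implied volatility out of an option price by inverting
# Black-Scholes (European call, no dividends). Illustrative inputs only.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert Black-Scholes for sigma by bisection (price is monotone in sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price an at-the-money 30-day call at sigma = 0.40,
# then recover that sigma from the price.
sigma_true = 0.40
p = bs_call(100.0, 100.0, 30 / 365, 0.02, sigma_true)
iv = implied_vol(p, 100.0, 100.0, 30 / 365, 0.02)
```

In practice the VIX itself is computed from a strip of option prices rather than a single Black-Scholes inversion, but the inversion above is the textbook meaning of “implied volatility” for one option.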

That’s where we can validate past IV values (call it IV_past) against current population statistics using the Central Limit Theorem (CLT). The CLT assumes a finite population mean and variance, and that observations within the period are independent and identically distributed (i.i.d.). Basically, the observations must all come from the same distribution, have finite variance, and show no sample-to-sample correlation — note that the population itself does not have to be finite, only its mean and variance do. So if we look at IV_past under these assumptions, the entire past, present and future of the S&P 500 is a shaky population to work with, but since IV_past measurements are only useful for the short term, we can treat the population as a sliding window of 12 months, or whatever window keeps (a) the sample distribution the same and (b) sample-to-sample correlation near zero.
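One rough way to sanity-check the i.i.d. assumption inside a chosen sliding window is to estimate the lag-1 autocorrelation of daily returns. The sketch below uses simulated stand-in returns, not real S&P 500 data, and the 252-day window is just an assumed 12-month example:

```python
# Sketch: check a sliding window for sample-to-sample correlation via
# lag-1 autocorrelation. Simulated returns stand in for real data.
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

random.seed(7)
# ~12 months of simulated daily returns (i.i.d. by construction here).
window = [random.gauss(0.0, 0.01) for _ in range(252)]
rho = lag1_autocorr(window)
# For truly i.i.d. data rho should sit near 0; |rho| much beyond
# ~2/sqrt(n) hints that the window violates the no-correlation assumption.
```

If real returns in your chosen window show a large |rho|, that window fails assumption (b) above and a different width (or a different model entirely) is called for.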

Once these assumptions are fulfilled, we move to the next part…the Law of Large Numbers (LLN). The LLN states that as the number of samples grows, the sample mean converges to the population mean with probability 1; the familiar n ≥ 30 threshold is really just a rule of thumb for when the normal approximation starts to behave. In my eyes, this poses a potential problem for IV, because a 30-day window gives us only 30 samples…the bare minimum under that rule of thumb. In theory, it should work. But in practice, the aforementioned assumptions are never 100% true (some sample-to-sample correlation will always exist, and the distribution within whatever sliding window you pick will never be perfectly identical from point to point). Thus, more samples are a necessity rather than a luxury if we want to minimize the effect of these slight violations of our assumptions.
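A quick simulation shows why 30 observations is thin: the spread of the sample mean shrinks like 1/sqrt(n), so 30 points still leave a wide band around the truth. The returns below are simulated, not market data, and the window sizes are arbitrary illustrations:

```python
# Sketch: how the spread (standard deviation) of the sample mean shrinks
# with sample size n -- it falls like 1/sqrt(n). Simulated data only.
import random

random.seed(1)

def sample_mean_spread(n, trials=2000, sigma=0.01):
    """Std. dev. of the sample mean across many simulated n-day windows."""
    means = [sum(random.gauss(0.0, sigma) for _ in range(n)) / n
             for _ in range(trials)]
    m = sum(means) / trials
    return (sum((x - m) ** 2 for x in means) / trials) ** 0.5

spread_30 = sample_mean_spread(30)    # a single 30-day window
spread_480 = sample_mean_spread(480)  # 16x more observations
# Theory predicts spread_30 / spread_480 ~ sqrt(480 / 30) = 4.
```

The 30-day estimate is noisier by the predicted factor of sqrt(16) = 4, which is the quantitative version of “more samples are a necessity rather than a luxury.”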

Nonetheless, assuming n = 30 days satisfies the LLN, the CLT applies, and the standard error of the sample mean is sigma/sqrt(n). Knowing this, we can set up a Z-test to see whether the realized results differ significantly from what IV_past predicted, and thus whether IV_past was valid.
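One way to frame that validation step as a Z-test is to compare the realized standard deviation of the subsequent 30 daily returns against the daily standard deviation IV_past implied. The sketch below uses the large-sample approximation SE(sd) ≈ sd/sqrt(2n) for normal data, and the numbers are illustrative, not real VIX or S&P 500 figures:

```python
# Sketch: z-test comparing realized 30-day volatility to the level that
# IV_past forecast. Illustrative numbers; SE(sd) ~ sd/sqrt(2n) assumes
# approximately normal returns.
from math import sqrt

def z_score(sample_sd, iv_forecast_sd, n):
    """How many standard errors the realized sd sits from the forecast."""
    se = iv_forecast_sd / sqrt(2 * n)
    return (sample_sd - iv_forecast_sd) / se

# Suppose IV_past implied a daily sd of 1.2% and the realized 30-day
# sd came in at 1.5%.
z = z_score(0.015, 0.012, 30)
# |z| > 1.96 would reject at the 5% level: IV_past understated the move.
```

A z near zero says IV_past was a fair forecast; a large |z| says the market's implied number and the realized variation disagree, which is exactly the difference point 3 below asks you to interpret.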

To summarize…remember the following:

1. IV is an important measurement of expected variation around the population mean

2. In practice, choose your sliding window (population) such that i.i.d. is upheld as much as possible before estimating the standard error of the population.

3. Understand that IV is a theoretical calculation, so when we compare it to the realized standard error of our chosen population, the two can differ. Know how to interpret that difference and determine which measurement is the better guide for the future.

Visit www.marketheist.com for more tips on investing, trading and life.
