Monday, May 7, 2012

VaR She Blows!

It would be VaR-y funny if it weren't so VaR-y dangerous

How do we know if an asset is risky, and how do we measure it?  One of the most common metrics is something called VaR, or value at risk.  From RiskMetrics:
VaR (Value-at-Risk) is the loss in a future period associated with a given quantile or confidence interval. For example, if there is a 5% chance our portfolio will lose more than USD 1mm over the next day, then we would say the one day 5% quantile (or 95% confidence) VaR is USD 1mm.
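In symbols (my formalization, not RiskMetrics' notation): writing L for the portfolio loss over the horizon and alpha for the confidence level,

    \mathrm{VaR}_{\alpha}(L) = \inf\{\, \ell \in \mathbb{R} : P(L > \ell) \le 1 - \alpha \,\}

In the example above, alpha = 0.95 and VaR = USD 1mm: the definition pins down a single quantile of the loss distribution and says nothing about what happens beyond it.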
At first glance, this measure might seem reasonable.  The most common confidence level for VaR is actually 99%, which could naturally lead one to wonder why that wouldn't be enough.  But look at the definition carefully.  VaR only tells you the probability that the loss exceeds a certain threshold; it gives you no idea how much money could be lost beyond it.  Of course, there is a wide variety of criticisms of VaR based on its Gaussian methodology, but what I want to discuss here is how VaR functionally ignores high-impact events!  VaR tells you nothing about the nature of a portfolio because everything in the tails becomes opaque and unknowable.  This creates an incentive to collect pennies on the train track, because the impact of the train isn't inside the confidence interval.  We get blowup strategies, like the one shown below:

[Chart: equity curve of a blowup strategy, with steady small gains punctuated by a catastrophic loss]

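As a toy illustration (all numbers here are hypothetical), here is a minimal Python sketch of such a strategy: it collects a small premium on 99.5% of days and gives back fifty times that premium on the rest. Its one-day 99% VaR doesn't just look mild, it looks like a guaranteed profit:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "blowup" strategy: collect a premium of 1 on 99.5% of days,
    # lose 50 on the remaining 0.5% of days.
    days = 100_000
    pnl = np.where(rng.random(days) < 0.995, 1.0, -50.0)

    # One-day 99% VaR: the negated 1st percentile of the P&L distribution.
    var_99 = -np.percentile(pnl, 1)
    print(var_99)     # ~ -1.0: at the 99% level, the strategy "never loses"
    print(pnl.min())  # -50.0: the train sits outside the confidence interval

The metric reports a negative VaR, i.e. a guaranteed gain, while the worst day erases fifty days of premium.
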
VaR breaks down in a particularly tragic manner when the market is under stress.  Systemic crises are rare, but given the nature of our financial system, they're definitely something we need to worry about.  In response, the Basel Committee is considering an old alternative: Expected Shortfall (ES).  The difference, again from RiskMetrics:
VaR as a measure of the quantile of the P&L distribution has a history that extends back to at least the 1980s. The publication of the RiskMetrics Technical Document in 1994 established VaR's dominance over standard deviation as a measure of portfolio risk, particularly for portfolios with optionality. 
Expected shortfall incorporates more information than VaR. VaR tells you the loss at a particular quantile q. It therefore tells you nothing about what the distribution looks like below q. Expected shortfall gives the average loss in the tail below q. 
This is particularly important for portfolios that are short optionality. For such portfolios, as the market falls, losses accelerate. So VaR may look mild, but the average loss given that at least VaR is lost may be very large. 
Another major reason for preferring expected shortfall to VaR has to do with portfolio optimization. Portfolios with optimal VaR often exploit a VaR defect: its lack of subadditivity. In effect, an optimal VaR portfolio is likely to find a low VaR, high expected shortfall portfolio.
In short, VaR tells you the risk that you lose big.  ES tells you how much you are likely to lose if you lose big.
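Here is a minimal sketch of that difference (synthetic data; var_es is a hypothetical helper, not any standard methodology): historical VaR and ES computed side by side on two portfolios whose P&L distributions are forced to share the same 99% quantile but have very different tails.

    import numpy as np

    def var_es(pnl, level=0.99):
        """Historical VaR and expected shortfall at the given confidence level."""
        n_tail = max(1, int(len(pnl) * (1 - level)))
        tail_losses = -np.sort(pnl)[:n_tail]          # the worst (1 - level) of outcomes
        return tail_losses.min(), tail_losses.mean()  # (VaR, ES)

    rng = np.random.default_rng(1)
    thin = rng.normal(size=100_000)            # Gaussian daily P&L
    fat = rng.standard_t(df=2, size=100_000)   # heavy-tailed daily P&L
    fat *= np.percentile(thin, 1) / np.percentile(fat, 1)  # force the same 1% quantile

    for name, pnl in (("gaussian", thin), ("fat-tailed", fat)):
        var, es = var_es(pnl)
        print(f"{name:10s} VaR={var:5.2f}  ES={es:5.2f}")
    # Both portfolios report roughly the same 99% VaR (about 2.3), but the
    # fat-tailed one's expected shortfall comes out much larger: that is
    # exactly the information VaR throws away.
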

This is definitely an improvement, but how do the designers know what the ES is?  How would they even calculate it?  The methodologies behind the two metrics are still eerily similar: both rely on a normal world and reasonably predictable events.  But even in that world, a slight miscalibration of your standard deviation can create huge gaps in your perception of risk.  Considering how sensitive probability calculations are in the tails, why do we even try?  In his testimony, Richard Bookstaber is optimistic about the future of metrics like VaR:

I remember a cartoon that showed a man sitting behind a desk with a name plate that read ‘Risk Manager’. The man sitting in front of the desk said, “Be careful? That’s all you can tell me, is to be careful?” Stopping with the observation that extreme events can occur in the markets and redrawing the distribution accordingly is about as useful as saying “be careful.”  A better approach is to accept the limitations of VaR, and then try to understand the nature of the extreme events, the market crises where VaR fails. If we understand the dynamics of market crisis, we may be able to improve risk management to make it work when it is of the greatest importance.      

Perhaps you may one day understand those dynamics, but given the fragility of your tools, wouldn't it be better to build in robustness?  Small probabilities are nearly impossible to estimate in Gaussian worlds, much less in worlds shaped by low-probability, high-impact Black Swans.
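To put a rough number on that fragility (a toy calculation under a purely Gaussian model): suppose the true standard deviation is 20% higher than the one you estimated. Every tail probability in your report is wrong, and the error grows violently with distance into the tail:

    from math import erf, sqrt

    def tail_prob(threshold, sigma):
        """P(X < -threshold) for X ~ Normal(0, sigma^2)."""
        return 0.5 * (1 - erf(threshold / (sigma * sqrt(2))))

    for k in (2, 3, 4, 5):
        p_model = tail_prob(k, sigma=1.0)  # the loss probability your model reports
        p_true = tail_prob(k, sigma=1.2)   # reality, if sigma was underestimated
        print(f"{k} sigma: model {p_model:.1e}, actual {p_true:.1e}, "
              f"off by {p_true / p_model:.0f}x")
    # Roughly 2x at 2 sigma, 5x at 3 sigma, 14x at 4 sigma, 54x at 5 sigma:
    # the deeper into the tail, the more a small calibration error explodes.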
