The Man AHL Academic Advisory Board met in September 2014 to address two key questions: What is volatility? Is risk volatility?
- Volatility is not the same as risk - Volatility as a measure of risk is a good place to start, but is not valid if investors have a preference for skew or are concerned about tail risk.
- Volatility is too high to be explained by rational factors only - Rational factors such as news related to expected future cash flows, future interest rates, news about the amount of risk in the future, and changes in risk aversion cannot fully explain volatility. Behavioural effects, such as over-extrapolation, can help explain excess volatility.
- Statistical leverage can help explain why volatility rises as asset prices fall - Statistical leverage means that a fall in prices of equities tends to lead to higher future volatility, which, in turn, generates asymmetry in long term returns. The concept is not well understood fundamentally – none of the current economic theories seems satisfactory.
- Low volatility can be a risky time - At a macro level, one concern currently is that investors take excessive risks as volatility has been rather low, resulting in higher potential tail risks.
- Using high frequency data to measure volatility has been a big focus of academic research - Significant progress has been made in volatility forecasting over the last 20 years using statistical models. We have moved from ARCH- and GARCH-type models to models that use higher frequency data to measure past volatility and extrapolate what volatility may be in the future. Fundamental models have made less progress recently.
The Board, whose members bring diverse perspectives and deep expertise, consists of:
- Nick Barberis - Professor of Finance at the Yale School of Management – one of the world’s leading experts in behavioural finance.
- Campbell Harvey - Professor of Finance at the Fuqua School of Business at Duke University and Editor of the Journal of Finance from 2006-2012 – a leading financial economist with a focus on the dynamics and pricing of risk.
- Neil Shephard - Professor of Economics and of Statistics at Harvard University. He was the founding director of the Oxford-Man Institute of Quantitative Finance in Oxford and directed it from 2007-2011 – one of the top theoretical and applied econometricians.
These distinguished academics were joined by Tim Wong, Chairman of Man AHL; Sandy Rattray, CEO; Matthew Sargaison, CIO; Darrel Yawitch, CRO; Anthony Ledford, Chief Scientist at the Man Research Laboratory in Oxford; and Egon Ruetsche, a Quantitative Analyst in the Man AHL Volatility team.
Before launching straight into the discussion, we point the interested reader to the appendix for a very brief survey of academic research on volatility and risk.
1. Man AHL: Is volatility the same as risk?
Nick Barberis (NB): Volatility or standard deviation as a measure of risk is a good place to start, as Markowitz did in the 1950s (). We know that it can be justified under the expected utility framework. To evaluate whether it is a good measure of risk, we have to look at its predictions. Some of its predictions are good; in particular, volatility as a measure of risk predicts that investors want to diversify because they are worried about portfolio volatility.
We also know that volatility is a rather crude measure of risk because it penalizes upside moves as much as downside moves. This led to the introduction of downside risk measures. Even though the approach is appealing, what really matters is the predictions of these theories. If downside risk measures were correct, then models based on them would make good predictions. In particular, factors other than market beta should be priced in the cross-section, and we should find this confirmed in the data. In reality, there is not a lot of academic work on this. One paper () shows that stocks that covary a lot with the market when the market performs poorly have higher average returns. This can be viewed as a validation of some downside risk measures.
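The downside risk idea NB refers to can be made concrete with a small sketch: a "downside beta" estimated only over periods when the market is below its mean. The code below is purely illustrative and runs on synthetic data; the 1.5/0.5 split of up- and down-market betas is an assumption of the simulation, not an empirical claim.

```python
import random

def beta(asset, market):
    """Ordinary regression beta of asset returns on market returns."""
    n = len(market)
    ma, mm = sum(asset) / n, sum(market) / n
    cov = sum((a - ma) * (m - mm) for a, m in zip(asset, market)) / n
    var = sum((m - mm) ** 2 for m in market) / n
    return cov / var

def downside_beta(asset, market):
    """Beta estimated only over periods when the market return is
    below its mean (one common definition of downside beta)."""
    mu = sum(market) / len(market)
    sub = [(a, m) for a, m in zip(asset, market) if m < mu]
    return beta([a for a, _ in sub], [m for _, m in sub])

# Synthetic asset that covaries more with the market when the market
# falls (beta 1.5 on the downside, 0.5 on the upside, by construction).
random.seed(0)
mkt = [random.gauss(0.0, 0.01) for _ in range(5000)]
asset = [(1.5 if m < 0 else 0.5) * m + random.gauss(0.0, 0.002) for m in mkt]

print(round(beta(asset, mkt), 2))           # overall beta: close to 1.0
print(round(downside_beta(asset, mkt), 2))  # downside beta: close to 1.5
```

Under the downside-risk view, the asset above is riskier than its overall beta suggests, and the paper cited would predict it should earn a higher average return as compensation.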
Egon Ruetsche (ER): Isn’t part of the issue that investors focus too much on volatility as risk and the Sharpe ratio and care too little about other risk measures like skewness?
Cam Harvey (CH): It is rather naïve to assume that volatility is a total measure of risk. Markowitz emphasizes that the mean-variance approach is not valid if investors have a preference for skew.
Anthony Ledford (AL): You mentioned skewness, and I think it is important to take it into account. But even if we ignore skewness for a moment, volatility is not the same as risk because of the potential for heavy tails. In practice, it is also hard to estimate volatility because of these rare tail events. We all know that financial assets have heavy tails, and we have to keep reminding ourselves that just looking at realized volatility over a given period of time can be misleading.
Darrel Yawitch (DY): I agree that volatility is not the same as risk. There are several risk measures we should use, and volatility is only one of them. Other measures include expected shortfall and drawdowns, as well as measures focusing more on tail risk scenario analysis, e.g. looking at the impact an event like the collapse of Lehman Brothers can have on a portfolio.

NB: The framework of prospect theory (, ) says that people do pay a lot of attention to tail risks, and there is evidence in asset markets to support this. For example, the aggregate market has earned a high risk premium historically. It is also negatively skewed, so one interpretation of the high premium is that it is compensation for the negative skewness, which people find unappealing. On the other hand, IPOs and out-of-the-money options earn low average returns. These are positively skewed assets, so one interpretation of their low returns is that investors really like the positive skewness and are willing to pay a high price for it.
Tim Wong (TW): We also see this when we talk to investors. A lot of them care not only about volatility but also about drawdowns or state that they cannot accept a drawdown that is bigger than a given percentage.
Neil Shephard (NS): There is active research that looks at these asymmetries and at how to price left-hand tail risk. It is useful to introduce the concept of statistical leverage. By this we mean that a fall in the prices of equities tends to lead to higher future volatility, which, in turn, generates asymmetry in long term returns. In the very short term, however, investment opportunities are rather symmetrical. Understanding how symmetrical short term opportunities turn into asymmetrical long term opportunities is very important for making investment decisions. Many investors, Man AHL included, take volatility into account when scaling their positions and can in that way get rid of some of the asymmetries.

Matthew Sargaison (MS): Scaling by volatility results in more symmetric returns. Moreover, as investors do care about skewness, trend-following is appealing as it results in better skewness and drawdown characteristics than simple passive strategies.
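The volatility scaling mentioned here can be sketched in a few lines: size positions as a volatility target divided by trailing realized volatility. The example below is a toy illustration on simulated returns with a volatility regime shift; the 20-day window and 1% target are arbitrary assumptions, not Man AHL parameters.

```python
import math
import random

def vol_scaled_positions(returns, target_vol, window=20):
    """Each day, size the position as target_vol / trailing realized vol.
    Window length and target are arbitrary illustrative choices."""
    positions = []
    for t in range(len(returns)):
        if t < window:
            positions.append(0.0)  # no position until vol can be estimated
            continue
        recent = returns[t - window:t]
        mu = sum(recent) / window
        vol = math.sqrt(sum((r - mu) ** 2 for r in recent) / window)
        positions.append(target_vol / max(vol, 1e-8))
    return positions

def realized_vol(xs):
    mu = sum(xs) / len(xs)
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))

# Simulated returns with a volatility regime shift halfway through.
random.seed(1)
rets = ([random.gauss(0, 0.005) for _ in range(250)]
        + [random.gauss(0, 0.02) for _ in range(250)])
pos = vol_scaled_positions(rets, target_vol=0.01)
scaled = [p * r for p, r in zip(pos, rets)]

# Raw returns are about four times as volatile in the second regime;
# the vol-scaled series is far more uniform across the two regimes.
print(realized_vol(rets[:250]), realized_vol(rets[250:]))
print(realized_vol(scaled[50:250]), realized_vol(scaled[270:]))
```

Note the lag: in the days just after the regime shift, positions are still sized off the old, low volatility estimate, which is one way asymmetry can still leak into scaled returns.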
2. Man AHL: What drives volatility?
NB: In a rational framework, if we think of prices as discounted expected cash flows, we have four sources of volatility. On the one hand, volatility can be caused by news related to expected future cash flows. On the other hand, there are three sources of volatility linked to the discount rate: news about future interest rates, news about the amount of risk in the future which comes into the risk premium, and changes in risk aversion. However, in reality, we do not really understand volatility, and we have difficulty linking volatility and movements in prices to these four sources of volatility. This was analysed in the classic papers by Schwert () where he tried to link volatility in prices to news about economic fundamentals but only found weak links. Overall, it seems that we need other explanations for causes of volatility, for example behavioural factors.
CH: It is useful to decompose volatility into the part that can be explained by economic forces (such as variation in the common factors that move with the economy) and the part that is not explained by economic forces (in other words, idiosyncratic volatility). On the factor side, we have, for example, business-cycle variation, liquidity and sentiment or mispricing. Volatility in these common factors will drive the volatility of an asset via the asset’s exposures to these factors.
NS: I like Cam’s idea of decomposing volatility into the part that is explained by economic factors as well as the idiosyncratic part. I also think that understanding the driving factors is important. However, econometrically, those models do not seem to work very well – their explanatory power does not seem to be high. Recently, Nick Bloom and others () have constructed an uncertainty index based on newspapers, trying to measure how frequently policy changes and policy uncertainty are mentioned in the news. But it is difficult to covary levels and changes of volatility with macroeconomic uncertainty and policy uncertainty. The general approach would be to map policy uncertainty to uncertainty in financial markets which causes volatility.
CH: If you look at policy uncertainty, one question is whether it is positively or negatively related to volatility. On the one hand, one might think that policy uncertainty would obviously result in more volatility. On the other hand, uncertainty can lead companies to take less risk and hold more cash, thereby reducing volatility going forward.
NS: Looking at what Nick said about the discounted expected cash flow model, I would expect policy uncertainty to result in higher volatility. I agree that, in the short term, cash flows might be less volatile in periods of uncertainty as companies are not going to invest, but longer term cash flows will be much more uncertain as we don’t know what the discount rate or the growth rate a few years ahead are going to be.
AL: When thinking about what drives volatility, we should not only focus on what drives volatility up but also consider what drives it down. For the past few years, one obvious driver of volatility has been the central banks: through quantitative easing, they have been pushing volatility down. Central banks also intervene in order to support their currency when the exchange rate becomes too extreme in the central bank’s view. These interventions tend to reduce volatility.
MS: An extreme example of that was the Swiss National Bank, which drove the volatility of EURCHF to almost zero when it introduced a floor on the exchange rate in September 2011. And that gives you an altogether different issue: volatility potentially underestimating risk.
3. Man AHL: Why are markets so volatile in the first place? Is that consistent with any kind of efficiency?
CH: This is an old problem in finance and some very influential papers tried to explain the market premium looking at volatility. Shiller (), among others, could not reconcile observed volatility with changes in dividends. Individual equities appear to be very volatile. There are several sources that can cause high volatility, some of which are idiosyncratic and some of which are systematic. The idiosyncratic portion can be linked to information asymmetry. When someone is selling a stock, we don’t know whether she does so simply because of liquidity purposes or whether she has valuable information. Another idiosyncratic source is in processing of information; common information such as earnings announcements is subject to different interpretations by different investors. On the systematic side, as we discussed before, asset return volatility can be caused by factor exposures to a set of common factors. Finally, there is market impact where volatility is linked to the lack of market depth; for example, buy and sell orders may move prices away from fair value. The volatility due to market impact results in market inefficiency, however, the other sources mentioned above do not necessarily reflect inefficiency.
NB: I agree with Cam’s assessment of individual stocks. But let’s talk about the aggregate stock market. Of the four sources of volatility mentioned earlier, three of them – cash flows, interest rates, and the amount of risk in the economy – have trouble explaining the volatility of the aggregate market. For example, one hypothesis is that movements in the aggregate P/E ratio are due to rational forecasts of future cash flows. In other words, that when the P/E is high, this is because people rationally expect high future cash-flow growth. Shiller () pointed out that, if this were true, the P/E ratio would predict subsequent cash-flow growth. However, he found that in reality, it doesn’t; in other words, changes in P/E ratios can’t be easily explained through rational forecasts of cash flows. Similarly, if rational forecasts of risk were driving P/E volatility, we should see lower risk after times of high P/E ratios and higher risk after times of low P/E ratios, but we don’t see this either. And the same thing for interest rates. This means that we are left with the fourth source of volatility, which is changes in investor risk aversion. This is the basis of the so-called habit model of the aggregate market which says that investors don’t want to fall below a past level of consumption they’ve become accustomed to. Under this model, people become more risk averse when the market falls, leading to more selling pressure and price declines. But I would criticize even this model. The habit model predicts that high P/E ratios will coincide with investors expecting low returns in the future, because they are less risk averse. Looking at survey data, we find the opposite to be true, in other words, investors expect high returns when P/E ratios are high.
CH: Volatility can be induced by both rational as well as behavioural sources. The question then is how much of the total volatility is caused by behavioural factors. The survey data that I collected in the past shows that both rational and behavioural factors seem important. Asking CFOs about expected market returns and then plotting their answers against the P/E ratio of the S&P index, we see a V-shape pattern. On the one hand, the managers forecast high future returns when the P/E ratio is very high which points to the over-extrapolation explanation. On the other hand, they also forecast high future returns when the P/E is very low which puts them in the rational camp.
NB: I think that the bigger part is behavioural. Looking again at the trouble we have explaining market volatility through rational forecasts of cash flows, interest rates, risk, or even through changes in investor risk aversion suggests to me that rational factors explain a relatively small part of the volatility we observe. In my view, behavioural effects like over-extrapolation play a key role in explaining volatility. Investors tend to over-extrapolate past returns, in other words, after good past returns, they expect continued good returns. This can create excess volatility because, after good returns, investors become more optimistic about the future and keep pushing prices up. Eventually, the market corrects.
AL: When we talk about why markets are so volatile, people often only think of equities, but it is worth pointing out that there are markets with rather low volatility. For example, interest rate futures do not move very much and trading activity is subdued. In these markets, the tick size is a material friction in estimating volatility.
Sandy Rattray (SR): Another example is foreign exchange. As opposed to equities, the information for FX seems difficult to interpret, and it is hard to figure out what really drives the underlying. Foreign exchange volatility, though, very often is quite low. Looking at GBPUSD, for example, we see that it has been trading in a narrow band for several years.
CH: But isn’t that linked to interest rates and inflation expectations? As those quantities don’t move very much for G10 currencies, one would expect the exchange rate not to move very much either.
SR: Yes if you think that, in a rational framework, the forward should tell us where the spot will be in the future. On the other hand, exchange rates are also linked to factors like trade flows and money supply which are very hard to forecast.
NB: It is interesting to think about markets with insufficient volatility, but the majority of the literature and evidence points to excess volatility. The literature is not just about equities, but also looks at other asset classes, e.g. bonds and real estate. For real estate, the price / rent ratio should play a similar role as the P/E ratio does for equities; in other words, it should predict subsequent rent changes, but the data show that it doesn’t do so as much as it should.
4. Man AHL: How does volatility change over time? What about correlation?
NS: If we look at the time-series of volatility and how volatility changes over time, we see that volatility is driven up when stock prices are going down, resulting in a negative correlation between equity returns and volatility. This is the statistical leverage effect we already mentioned. The effect is more pronounced for equity indices compared to single stocks or exchange rates. In those markets changes are much more symmetric. It is also worth noting that equity volatility sometimes changes by an order of magnitude – we don’t just see small variations in volatility levels but sometimes very big changes. It is extraordinary that we see these kinds of shifts in the volatility of a diversified market index. Correlations, on the other hand, typically change more slowly.
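The statistical leverage effect NS describes can be reproduced on simulated data. The sketch below uses a GJR-GARCH(1,1)-style process in which negative returns raise next-period variance more than positive ones; the parameter values are illustrative assumptions, not estimates from any market.

```python
import math
import random

def corr(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# A GJR-GARCH(1,1)-style process: negative returns raise next-period
# variance more than positive ones. Parameter values are illustrative.
random.seed(2)
omega, alpha, gamma, beta = 1e-6, 0.05, 0.10, 0.85
var = 2e-5  # start at the unconditional variance
rets, next_vols = [], []
for _ in range(100_000):
    r = math.sqrt(var) * random.gauss(0, 1)
    var = omega + (alpha + gamma * (r < 0)) * r * r + beta * var
    rets.append(r)
    next_vols.append(math.sqrt(var))

# Statistical leverage: returns correlate negatively with next-period vol.
print(round(corr(rets, next_vols), 3))
```

The asymmetry term `gamma * (r < 0)` is what produces the negative return-volatility correlation; with `gamma = 0` the simulated correlation would be roughly zero.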
CH: I like your idea about statistical leverage, and we see higher volatility around economic recessions or crises. But why is it that volatility goes up when the equity market goes down? Leverage is mentioned regularly in the literature but the evidence to support this seems rather thin. Or is it mainly that during recessions there is more uncertainty or more disagreement among market participants?
NS: That’s why I call it statistical leverage, in order to say that there is no good economic explanation. Statistical leverage can help explain other interesting phenomena in the markets, for example the variance risk premium observed in option markets. But statistical leverage itself is not well understood fundamentally. There are a number of economic theories about this but none of them seems to be satisfactory.
NB: I agree that the leverage effect is not well understood. There is a paper by Veronesi () trying to explain why volatility is higher during recessions. He argues that uncertainty is higher during recessions, which implies that every bit of news carries more weight. With respect to correlations, in addition to rational and behavioural models, it is useful to look at a third framework which is called the frictional framework. Under this model, investors are by and large rational but there are frictions in the markets. For example, changes in institutional portfolios can have a considerable impact on correlations. If an asset or asset class becomes more widely held by institutions, it tends to covary more with other asset classes held as investors will increase or decrease the risk across their entire portfolio. For example, commodities became a much bigger part of institutional portfolios over the last 20 years, and indeed the correlations between commodities and equities went up. A similar effect can be seen when an asset becomes part of a category, for example if a commodity gets added to a commodity index or a stock becomes part of an equity index.
DY: Sometimes, we see correlations swinging from negative to positive in a rather short period of time or going from positive to much more positive. For example, in 2011 market participants were talking a lot about risk-on / risk-off and were characterising risk-off periods to be periods with higher correlations, at least in absolute numbers. Are there methods for detecting regime changes in correlations, going from positive to negative or from positive to more positive?
CH: This is an interesting idea but it usually is very difficult to define the regimes and the number of regimes. Are there two regimes, or three? Where are the cut-offs? At some point, this becomes arbitrary and for most models these regimes are picked ex-post and may not be helpful going forward.
5. Man AHL: Is higher volatility good or bad for investing?
MS: It really depends on the type of investor. For a long only equity investor, higher volatility usually is not desirable as it is typically associated with negative stock returns rather than positive returns. For other types of investors, including momentum investors, very low levels of volatility can be risky. This is because they tend to take higher risks in those periods and are more exposed to adverse moves in the markets or potential tail events. In addition to this, it also depends on what kind of volatility we are talking about. For trading volatility, an environment of high implied volatility and low realized volatility is very attractive.
AL: For momentum strategies, it is not only the level of volatility that matters but also the direction. For example, volatility could be high or rising as a consequence of a previous trend in which case a momentum portfolio should have profited. On the other hand, momentum tends to give better predictions when volatility is low.
CH: We should not only focus on higher volatility but also ask what causes volatility to be high. Looking at the sources of volatility, we have rational drivers as well as mispricing. If volatility is high because of mispricing, this can be good for investing as it presents opportunities. On the other hand, there may be few investment opportunities if the higher volatility is caused by the variation in the common factors that drive discounted cash flow valuations.
NS: I agree that one risk in a low volatility environment is that investors scale up their positions. In that case we see higher potential tail risk and portfolios are more exposed to model misspecifications. The last financial crisis can to some extent be viewed as an example of this. At a macro level, one concern currently is that investors take excessive risks as volatility has been rather low, in other words low volatility can be a risky time.
NB: There are conflicting forces at work here. On the one hand, when volatility is higher, investors may want to reduce their allocation to risky investments and increase their allocation to riskfree assets which could be bad for institutional investors. On the other hand, some of the demand for institutional investor services is about reducing anxiety, and in periods of high volatility a client may want to switch from passive investments to more actively managed investments. Moreover, higher volatility can be good for institutional investors if it is associated with uncertainty. In that case, a smart institutional investor can generate more alpha by better figuring out the uncertainty and better handling the volatility.
SR: In theory, if there is more volatility there are more opportunities. If there is no volatility, there are no opportunities. But in reality, more volatility usually is associated with more problems in the market and more things that can go wrong.
AL: It depends whether the opportunity scales with the level of volatility or, in other words, whether the signal to noise ratio stays the same as volatility changes. It is very hard to tell whether this is true or not. In a benign range of volatility, the opportunity may well scale with the level of volatility but when volatility is at extreme levels, high or low, this relation probably does not hold anymore.
6. Man AHL: Is market risk the only risk that matters?
CH: In the simple version of the capital asset pricing model (CAPM), the only risk that is rewarded is the volatility of the market portfolio. Expected returns from assets are determined by the contribution to the market portfolio. But volatility is not the only risk that matters. For example, the CAPM can be extended to account for the fact that investors care about mean-variance as well as skew, i.e. in addition to the traditional beta there would also be a skew beta as a compensation for contributing negative skew to the portfolio ().
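The skew beta CH mentions can be illustrated with a standardized coskewness measure, E[(r_i − μ_i)(r_m − μ_m)²] scaled by the asset's standard deviation and the market variance. This is one common definition; the example data below are synthetic and purely illustrative.

```python
import math
import random

def coskewness(asset, market):
    """Standardized coskewness, E[(ri-mi)(rm-mm)^2] / (sd_i * var_m).
    A negative value means the asset does badly in big market moves,
    i.e. it contributes negative skew to a market portfolio."""
    n = len(asset)
    mi, mm = sum(asset) / n, sum(market) / n
    num = sum((a - mi) * (m - mm) ** 2 for a, m in zip(asset, market)) / n
    sd_i = math.sqrt(sum((a - mi) ** 2 for a in asset) / n)
    var_m = sum((m - mm) ** 2 for m in market) / n
    return num / (sd_i * var_m)

# Synthetic "short volatility"-like asset: it loses on large market
# moves of either sign, so its coskewness with the market is negative.
random.seed(3)
mkt = [random.gauss(0, 0.01) for _ in range(20000)]
short_vol = [-5.0 * m * m + random.gauss(0, 0.001) for m in mkt]
print(round(coskewness(short_vol, mkt), 2))
```

In the extended CAPM CH describes, an asset with a negative coskewness like this would command extra expected return as compensation for the negative skew it adds to the portfolio.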
DY: I agree that market risk is certainly not the only risk that matters. We also have to be aware of liquidity risk and credit risk. Historically, when investors or asset managers got into big trouble, it was usually not caused by market risk only.

SR: It seems that we all agree that market risk is not the only risk that matters. But, somehow, it seems to be more or less the only risk that makes it into risk models.
AL: That is somewhat true and mainly due to the fact that extreme events are very hard to measure or predict. Events like war, terrorism or natural disasters are risks, too. From an investment point of view, they matter if they have an impact on markets. I believe that some of these do matter for investing but are completely unpredictable.
NB: We certainly need to care about tails and extreme events. However, if investors take these events into account, this should result in risk premia for those risks. In some sense, whether or not there is a risk premium does test whether or not investors take those events into account and take action to protect themselves against these risks. I don’t think that there is a lot of academic research on this question.
7. Man AHL: What progress in volatility forecasting has been made in the last 20 years?
NS: A lot of progress has been made. Over the last 20 years, we moved from ARCH- and GARCH-type models to models using higher frequency data to measure past volatility and extrapolate what volatility may be in the future (). The usual framework for these models is to calculate a daily realized volatility using 5 or 10 minute returns and use the daily measures to forecast future volatility. This means that the forecast depends on a much shorter look-back. For ARCH-type models, one usually uses daily returns over quite a long period of time, such as a month or a quarter. If there is a big shock or event in the market which results in going from a low volatility regime to a high volatility regime, using higher frequency data allows models to react much faster. This can be useful for scaling positions, for risk management or for trading volatility itself. Quite recently, Tim Bollerslev () and others have looked at high frequency data of both the asset itself as well as the options on that asset. This allows better estimation of implied left-hand tail risk and the associated risk premia.
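The realized volatility measure NS describes can be sketched directly: sum the squared 5-minute log returns over a day and take the square root. The simulation below is illustrative (a driftless random walk with a known per-bar volatility), not real market data.

```python
import math
import random

def daily_realized_vol(intraday_prices):
    """One day's realized volatility: the square root of the sum of
    squared intraday (e.g. 5-minute) log returns."""
    rv = sum(math.log(p1 / p0) ** 2
             for p0, p1 in zip(intraday_prices, intraday_prices[1:]))
    return math.sqrt(rv)

# Simulate one day of 5-minute prices (78 bars in a 6.5-hour session)
# from a driftless random walk with a known per-bar volatility.
random.seed(4)
per_bar_vol = 0.001
prices = [100.0]
for _ in range(78):
    prices.append(prices[-1] * math.exp(random.gauss(0, per_bar_vol)))

rv = daily_realized_vol(prices)
# The estimate should sit near the true daily vol, per_bar_vol * sqrt(78).
print(round(rv, 4), round(per_bar_vol * math.sqrt(78), 4))
```

A forecast then conditions on a short history of these daily measures rather than on a month or quarter of daily returns, which is why such models can react within a day or two of a volatility regime shift.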
MS: How much take up of the higher frequency volatility models has there been in the industry? It seems that most risk management models still estimate volatility using daily returns.
NS: There has been some take up. However, for risk management purposes it is a bit harder as you need a coherent covariance estimate. If a portfolio trades assets in different time zones, using intra-day data is a bit trickier as one needs some sort of linkage between markets or time zones.
DY: Has there been a lot of work on using higher frequency data for modelling correlations?
NS: There is a lot of theory on how to estimate correlations and covariances from high frequency data. One issue, as mentioned above, is asynchronous markets, which create new biases. How to overcome these biases is at the current academic frontier. Leaving aside the asynchronicity issue, there is an interesting effect we see when measuring high frequency correlations for synchronous assets. We find that correlation measures based on high frequency data tend to be significantly lower than correlations based on daily data, on average by about 10%. This means that we see faster price discovery in some markets than in others, and it looks like these price discoveries can take a long time, quite often a week or more. This is different for volatility; when we measure volatility using intra-day data, daily data or weekly data, we tend to get the same average level.
8. Man AHL: Can we better forecast volatility by understanding the fundamental drivers?
CH: We just heard that a lot of progress has been made on the statistical measurement of volatility. With respect to the economic factors, unfortunately, there has been little progress. Nick mentioned the work by Schwert () on the economic factors driving volatility, and there are a lot of factors and variables one can think of as driving volatility. But when it comes to using those factors to forecast volatility, very little academic work has been done so far. To that extent, it really seems that price based models are all we have at this time.
AL: I think it also comes down to data. For a lot of factors we think may drive volatility, there is not a lot of data available that could be systematically captured and used for building models.
CH: I agree. We mentioned Nick Bloom’s uncertainty index () before, and I think this is a case of a factor where data would be available on a daily basis. Moreover, several companies produce sentiment indices which could be used to build factor models.

SR: What if we move from volatility in general to periods of higher volatility, i.e. can we better forecast periods of higher volatility by understanding the fundamental drivers? Maybe fundamental drivers can help predict when investors become particularly irrational.
NB: Some academic research has looked at news releases and the impact of negative words. This approach may have some predictive power but so far there has not been much success using this kind of data for forecasting volatility.
9. Man AHL: Is there any value in implied volatility for forecasting future volatility?
NS: Yes, there is predictive power in implied volatility for forecasting future volatility, and this predictive power seems to be complementary to using high frequency data. The big problem with using implied volatility for forecasting future realized volatility is the variance risk premium, which makes implied volatility a biased predictor. As the variance risk premium changes over time, just using implied volatility for forecasting is problematic. There is also a lot of additional useful information in implied volatility, like the asymmetry in the volatility smile. The extent to which put implied volatility is higher than call implied volatility for equity indices tells us whether investors are more or less worried about left-hand tail risk.
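The bias NS describes can be made concrete with a toy calculation. All the numbers below are made up for illustration, and the "de-biasing" step (subtracting the average historical premium from implied variance) is a naive sketch, not a model of how the premium varies over time.

```python
# Hypothetical option-implied and subsequently realized variances
# (annualized); all numbers are made up for illustration.
implied_var = [0.040, 0.035, 0.060, 0.030, 0.045]
realized_var = [0.030, 0.028, 0.050, 0.024, 0.033]

# The gap is the variance risk premium; implied variance is biased upward.
premia = [iv - rv for iv, rv in zip(implied_var, realized_var)]
avg_premium = sum(premia) / len(premia)

# Naive de-biasing: forecast realized variance as implied variance
# minus the average historical premium.
forecasts = [iv - avg_premium for iv in implied_var]

naive_err = sum(abs(iv - rv) for iv, rv in zip(implied_var, realized_var))
adj_err = sum(abs(f - rv) for f, rv in zip(forecasts, realized_var))
print(round(avg_premium, 4), naive_err > adj_err)
```

Because the premium itself moves over time, a constant correction like this is only a first step, which is exactly the modelling gap discussed below.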
CH: Implied volatilities potentially contain valuable information. We looked at various trading strategies based on changes in implied volatilities and found some predictive power (). However, we found that after transaction costs the strategy was not sufficiently profitable to be viable.
ER: Given that implied volatility is a biased predictor of future realized volatility and given that the bias is driven by the variance risk premium, can we model and predict the variance risk premium over time in order to remove or reduce the bias?
NS: I don’t think we have good models for this. Bollerslev () and others have a paper on how the variance risk premium changes over time and how it relates to macroeconomic factors but they do not have a model that allows predicting the variance risk premium.
ER: One interesting aspect in the Bollerslev paper () is that they also look at the predictive power of the variance risk premium for subsequent market returns. They find that the variance risk premium helps predict stock market returns for subsequent months and that its predictive power is higher than that of the P/E ratio.
APPENDIX: VOLATILITY AND RISK - A BRIEF SURVEY
Markowitz (), in his 1952 work on portfolio selection, was one of the first to provide a quantitative framework for measuring portfolio risk and return. In his work, the investor faces a tradeoff between risk and return. Markowitz reasons that an investor should maximize his expected portfolio return while minimizing the portfolio’s variance of return, which means that variance, or volatility, is taken as a proxy for risk.
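Markowitz’s tradeoff can be made concrete with a two-asset sketch (all numbers hypothetical): expected portfolio return is a weighted average of the assets’ expected returns, while portfolio variance w'Σw also depends on the covariance between the assets, which is what makes diversification work.

```python
import numpy as np

# Two-asset illustration of Markowitz's framework; all inputs are
# made-up numbers for illustration only.
mu = np.array([0.08, 0.04])     # expected returns
sigma = np.array([0.20, 0.10])  # volatilities
rho = 0.3                       # correlation between the two assets
w = np.array([0.6, 0.4])        # portfolio weights

# Covariance matrix Sigma built from volatilities and correlation.
cov = np.array([[sigma[0]**2,              rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

port_return = w @ mu            # w' mu: weighted average return
port_var = w @ cov @ w          # w' Sigma w: portfolio variance
port_vol = np.sqrt(port_var)
```

With imperfect correlation, the portfolio volatility comes out below the weighted average of the individual volatilities, which is the diversification benefit at the heart of the framework.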
Academic research soon started questioning the use of variance or volatility as a measure of risk, as it penalizes upside moves as much as downside moves. Sortino and Price (), among others, made the argument for downside risk measures, and their work proposed replacing the Sharpe ratio with the Sortino ratio. A different way of thinking about risk was introduced by Kahneman and Tversky () in their work on Prospect Theory. They argued that people derive pleasure and pain from gains and losses, and that people exhibit a natural loss aversion.
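The difference between the two ratios can be illustrated with a short hypothetical return series and a target return of zero: the Sharpe-style ratio divides mean excess return by total volatility, while the Sortino ratio divides by downside deviation, which counts only returns below the target.

```python
import numpy as np

# Hypothetical return series and a target return of zero,
# purely for illustration.
returns = np.array([0.02, -0.01, 0.03, -0.02, 0.01, 0.04])
target = 0.0

excess = returns.mean() - target
sharpe_like = excess / returns.std(ddof=0)   # penalizes all variation

# Downside deviation: root mean square of shortfalls below the target.
downside = np.minimum(returns - target, 0.0)
downside_dev = np.sqrt(np.mean(downside**2))
sortino = excess / downside_dev              # penalizes only downside moves
```

For a series like this one, where most of the variation is on the upside, the downside deviation is smaller than the total volatility, so the Sortino ratio exceeds the Sharpe-style ratio.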
Looking a bit closer at volatility, both academic researchers and practitioners have observed interesting phenomena. One is the excess volatility puzzle which refers to the fact that stock markets seem more volatile than justified by economic fundamentals. Shiller () argues that stock market volatility is too high when compared to uncertainty about future dividends or discount factors. Behavioural aspects and biases can help explain the volatility puzzle. Variation in investor risk aversion allows returns to be much more volatile than the underlying dividends (). Moreover, investors tend to over-extrapolate past returns, which can create excess volatility.
As volatility is a measure of how much an asset moves and as assets are influenced by economic conditions, it seems natural to also study volatility by looking at its link to economic fundamentals. Schwert () analyses the relations between business cycles, financial crises and stock volatility and finds that volatility is higher on average during recessions. Overall, academic work that tries to explain volatility through economic conditions and fundamental drivers finds only weak links. Moreover, the literature on forecasting volatility from those drivers is rather scarce and relatively recent.
The advent of high frequency data allowed researchers to build models based on increasingly shorter return horizons (). This resulted in more reactive models that produce better forecasts than standard ARCH-type models.
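The core idea behind these realized-volatility measures can be sketched with simulated data (the sampling frequency and volatility level are assumptions for illustration): summing squared intraday returns over a day gives a realized variance that, with finer sampling, approximates the day’s integrated variance far better than a single daily squared return does.

```python
import numpy as np

# Sketch of realized variance from simulated intraday returns.
# Assumed setup: 78 five-minute bars in a 6.5-hour trading session.
rng = np.random.default_rng(42)
true_daily_vol = 0.01                 # assumed true daily volatility
n_bars = 78
# Simulated 5-minute returns whose variances sum to the daily variance.
r = rng.normal(0.0, true_daily_vol / np.sqrt(n_bars), n_bars)

realized_var = np.sum(r**2)           # realized variance for the day
realized_vol = np.sqrt(realized_var)  # realized volatility estimate
```

Feeding a time series of such daily realized measures into a forecasting model, rather than daily squared returns, is the basic recipe behind high-frequency-based approaches such as the HEAVY models cited below.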
 Harry Markowitz. ‘Portfolio Selection.’ Journal of Finance, 7(1): 77–91, 1952.
 Daniel Kahneman, Amos Tversky. ‘Prospect theory: An analysis of decision under risk.’ Econometrica, 47(2): 263–291, 1979.
 Robert Shiller. ‘Do stock prices move too much to be justified by subsequent changes in dividends?’ American Economic Review, 71(3): 421–436, 1981.
 Robert F. Engle. ‘Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation.’ Econometrica, 50(4): 987–1007, 1982.
 G. William Schwert. ‘Business Cycles, Financial Crises and Stock Volatility.’ Carnegie-Rochester Conference Series on Public Policy, 31: 83–125, 1989.
 Campbell R. Harvey, Robert E. Whaley. ‘Market volatility prediction and the efficiency of the S&P 100 index option market.’ Journal of Financial Economics, 31: 43–73, 1992.
 Frank A. Sortino, Lee N. Price. ‘Performance Measurement In A Downside Risk Framework.’ Journal of Investing, 3(3): 59–64, 1994.
 Pietro Veronesi. ‘Stock market overreactions to bad news in good times: a rational expectations equilibrium model.’ Review of Financial Studies, 12(5): 975–1007, 1999.
 Campbell R. Harvey, Akhtar Siddique. ‘Conditional Skewness in Asset Pricing Tests.’ Journal of Finance, 55(3): 1263–1295, 2000.
 Nicholas Barberis, Ming Huang, Tano Santos. ‘Prospect Theory and Asset Prices.’ Quarterly Journal of Economics, 116(1): 1–53, 2001.
 Andrew Ang, Joseph Chen, Yuhang Xing. ‘Downside risk.’ Review of Financial Studies, 19(4): 1191–1239, 2006.
 Neil Shephard, Kevin Sheppard. ‘Realising the future: forecasting with high-frequency-based volatility (HEAVY) models.’ Journal of Applied Econometrics, 25: 197–231, 2010.
 Tim Bollerslev, Michael Gibson, Hao Zhou. ‘Dynamic Estimation of Volatility Risk Premia and Investor Risk Aversion from Option-Implied and Realized Volatilities.’ Journal of Econometrics, 160: 102–118, 2011.
 Tim Bollerslev, Viktor Todorov. ‘Tails, Fears, and Risk Premia.’ Journal of Finance, 66(6): 2165–2211, 2011.
 Scott R. Baker, Nicholas Bloom, Steven J. Davis. ‘Measuring Economic Policy Uncertainty.’ Chicago Booth Research Paper No. 13-02.
* The external members of the Man AHL Academic Advisory Board are compensated for their membership of the board.