Broadly speaking, the hedge fund universe can be divided into two contrasting trading approaches: discretionary1 strategies and quantitative strategies. Discretionary hedge fund strategies rely directly on manager skill to analyze opportunities and make individual investment decisions. Quantitative hedge fund strategies, on the other hand, employ rule-based trading models and automated trade signals, rather than human discretion, to make their investment decisions. Relying on the trade signals produced by computer models removes human emotion from the trading process and can help to reduce behavioral biases.
A large proportion (34%2) of total hedge fund assets under management (‘AuM’) is currently managed by quantitative funds, amounting to approximately USD 1,019 billion2 in total AuM. Quantitative hedge funds also make up around 27%2 of the approximately 13,5002 managers that form the global hedge fund universe. However, according to Preqin’s 2017 Global Hedge Fund Report, this figure varies widely by strategy, since quantitative trading is more effective in some strategies than in others. For example, quantitative strategies have historically tended to be easier to implement in electronically traded markets, exploiting recurring pricing inefficiencies; these strategies work better when a large volume of independent trades can be identified, either in deep and liquid markets3 or over time and across assets (see ‘The law of large numbers’ segment below for more on this point). In their report, Preqin states that as many as 60% of relative value strategies are managed by quantitative funds, while the systematic trading approach accounts for only 12% of the managers running event driven strategies.
Why allocate to a quantitative hedge fund manager?
Allocating to quantitative hedge funds has the potential to offer diversification benefits, which may be particularly valuable in the context of portfolios of traditional assets such as bonds and equities. However, a commonly underappreciated characteristic of quantitative hedge funds is that they can also complement a portfolio of hedge funds or alternative investments more broadly, since their return streams, risk exposures and volatility characteristics have the potential to be notably differentiated.
Many of the best performing hedge fund managers are running quantitative trading models4, and as a result these strategies have increased in popularity relative to their peers. One of the key potential advantages of quantitative funds is that risk management is often embedded within the models, and responses to changing levels of risk can be quickly and automatically deployed. For example, the total level of exposure can be scaled up or down in response to the prevailing market regime in order to help control or target the overall level of volatility, other investor risk tolerances or other investment criteria. However, some investors find quantitative funds difficult to understand, or view them with suspicion. In this paper, we hope to familiarize investors with this type of investment strategy, and offer some perspectives on selecting the most appropriate one for their needs.
Types of quantitative hedge fund
The quantitative fund universe can be broken down into two broad strategy areas based largely on the source of individual trade ideas:
Micro-based strategies - including statistical arbitrage, equity market neutral, trade ideas and alpha capture strategies, that typically trade equities
Macro-based strategies - including CTAs, risk parity and managed futures, that trade in futures and related derivatives across a wide range of commodity and other financial markets
Of course, within each strategy type the assets traded, typical holding periods and the models used can vary widely. Moreover, the boundaries between these two strategy types are often blurry, and many larger firms will trade across both broad strategy areas. In particular, the wider use of ‘big data’ techniques often crosses the boundary.
In our experience, we have found that their potential to deliver more predictable returns, reduced behavioral bias, attractive liquidity characteristics and diversification benefits mean that quantitative funds have the potential to add material value to a client’s portfolio. But there are a lot of quantitative managers out there and the challenge for investors is in deciding how to pick a fund with the risk and return characteristics that they need to help complement their wider portfolio.
With a universe of approximately 3,6002 quantitative managers, investors need to narrow their list of potential investments down to a more manageable number. Conducting on-site visits and thorough analyses takes time, so before investors get to that stage it is often necessary to create a smaller group of potential investments to consider.
At Man FRM, we try not to let past performance dictate our conviction in a manager: past performance is no guarantee of future results, and we are acutely aware of the risks of the biases that can arise in historical performance data. However, since quantitative strategies use repeatable models seeking to generate return, we believe that analysis of a quantitative manager’s track record may offer more insight compared to an analysis of a discretionary manager’s track record. Ultimately, we believe this can provide investors with rich insights into the characteristics of the fund.
The law of large numbers
One of the clearest potential strengths of quantitative managers is their ability to diversify across trade signals and instruments. These managers are able to trade a broad range of instruments and ultimately identify a large volume of trades where the statistical probability of profiting from the trade is greater than the statistical probability of the trade making a loss according to the model used. By identifying hundreds, or potentially thousands, of trades with even a marginally attractive probability of being profitable (even as low as 51/49), a quantitative manager has the potential to generate positive returns at an aggregated level. This is because of the ‘law of large numbers’.
In practice, not all of a manager’s trades will profit; however, due to the high number of positions in the portfolio, a winning/losing trade ratio of 51/49 may be enough to offer a reasonably high likelihood of generating a positive return. This is, of course, heavily reliant on a manager’s ability to select a sufficient volume of trades with a greater than 51% chance of success. The greater the number of trades with even a marginal probability of generating a profit, the greater the chance of producing positive performance at a portfolio level.
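The effect of the law of large numbers can be sketched with a short Monte Carlo simulation. This is purely a toy illustration under assumed numbers (unit win/loss payoffs, independent trades, a 51% win rate), not any manager's actual model: it simply shows how the probability of a positive aggregate outcome rises as the number of trades grows.

```python
import random

def prob_positive(n_trades, p_win=0.51, n_sims=1000, seed=42):
    """Estimate the probability that a book of n_trades independent
    unit-payoff trades (win +1, lose -1) finishes with a net profit."""
    rng = random.Random(seed)
    positive = 0
    for _ in range(n_sims):
        pnl = sum(1 if rng.random() < p_win else -1 for _ in range(n_trades))
        if pnl > 0:
            positive += 1
    return positive / n_sims

# With only 10 trades a 51% edge barely shows; with thousands of trades
# the marginal edge compounds into a high chance of a profitable book.
for n in (10, 100, 2500):
    print(n, round(prob_positive(n), 3))
```

The same marginal per-trade edge that is nearly invisible over ten trades becomes a strong statistical tailwind over thousands, which is the essence of the argument above.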
Another way to understand this is to consider what is known as the ‘Fundamental Law of Active Management’5. This relates the profitability of a strategy (its Information Ratio (‘IR’)) to a manager’s skill at identifying profitable trades (the Information Coefficient (‘IC’)) and the breadth, or volume, of independent ideas they are able to identify (‘Breadth’):
IR = IC × √Breadth
It can be seen that for a quantitative manager who takes a large number of independent bets, the skill required on each one can be quite low while still generating an acceptable Information Ratio. Note that these ideas can be generated across multiple markets, across multiple trade models, or by executing trades at a higher frequency.
Now compare this to a discretionary manager who, due to the time to research each idea, is capable of generating far fewer independent trades. To make up for the fewer trade ideas, they need to possess far higher skill per trade idea which can be difficult to sustain.
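Plugging illustrative numbers into the formula above makes the trade-off concrete. The ICs and breadths below are hypothetical, chosen only to show how a small edge over many independent bets can match a large edge over few:

```python
import math

def information_ratio(ic, breadth):
    """Fundamental Law of Active Management: IR = IC * sqrt(Breadth)."""
    return ic * math.sqrt(breadth)

# A quant manager with a tiny edge (IC = 0.02) across 10,000 independent
# trades reaches the same IR as a discretionary manager who would need
# IC = 0.40 on just 25 ideas.
quant = information_ratio(0.02, 10_000)   # 0.02 * 100 = 2.0
disc = information_ratio(0.40, 25)        # 0.40 * 5 = 2.0
print(quant, disc)
```

Sustaining an IC of 0.40 per idea is a far harder ask than sustaining 0.02, which is the point of the comparison with the discretionary manager.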
To better understand a fund’s performance characteristics, our analysts apply a variety of standard tests including factor analysis and portfolio suitability analysis.
Factor analysis
This identifies whether the hedge fund has a certain ‘factor bias’ in its returns. For example, common factor biases that arise in quantitative equity hedge funds are ‘value’ (the fund has long positions in cheaper companies and short positions in expensive companies, under a suitable valuation framework), ‘size’ (the fund has long positions in smaller companies, and short positions in larger companies), or ‘momentum’ (the fund has long positions in securities that have risen recently and short positions in securities that have fallen).
These factor biases are common, since the quantitative models that hedge funds use tend to be built around exploiting broad market phenomena, such as the outperformance of cheap assets, or of smaller companies, or the autocorrelation of stock returns. There are valid reasons why managers may take exposure to these biases, however the manager must show that they can outperform these broad factors, as it is often much cheaper to buy a factor replication strategy than a hedge fund. And critically, we believe when building a portfolio of quantitative managers it becomes vitally important to understand how the managers’ biases can relate to one another.
Example factor analysis
As part of a standard factor analysis, we model returns against the Barra Global Equity Model factors to understand how much of a fund’s return profile can be explained by simple factor exposures: momentum, leverage, value and world equity.
Figure 1. Factor betas
Source: Man FRM, Barra, August 2017. For illustrative purposes only. Chart is not intended to represent the actual performance of any strategy or fund.
It is also possible to establish how various factors, including manager alpha, contribute to returns over time. Such analysis on return attribution can help explain the quality of a manager’s returns, and similar modelling can be conducted to identify where various factors contribute to a fund’s overall risk profile. Figure 2 shows the proportion of a hypothetical strategy’s return coming from each of these factor exposures.
Figure 2. Return attribution
Source: Man FRM, Barra, August 2017. For illustrative purposes only. Chart is not intended to represent the actual performance of any strategy or fund.
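At its core, a factor analysis of this kind is a linear regression of the fund's returns on the factor returns. The sketch below uses synthetic data and generic factor names, not the Barra model itself; it simply recovers the betas and the intercept (‘alpha’) by ordinary least squares:

```python
import numpy as np

def factor_betas(fund_returns, factor_returns):
    """Regress fund returns on factor returns (plus an intercept, the
    'alpha') via ordinary least squares; returns (alpha, betas)."""
    X = np.column_stack([np.ones(len(fund_returns)), factor_returns])
    coefs, *_ = np.linalg.lstsq(X, fund_returns, rcond=None)
    return coefs[0], coefs[1:]

# Synthetic example: a fund built to be 0.5x exposed to one factor and
# 0.2x to another, plus a constant alpha of 0.001 per period.
rng = np.random.default_rng(0)
factors = rng.normal(0, 0.01, size=(250, 2))     # two factor return series
fund = 0.001 + factors @ np.array([0.5, 0.2])    # exact linear combination
alpha, betas = factor_betas(fund, factors)
print(round(alpha, 4), np.round(betas, 2))
```

Real fund returns are of course noisy, so in practice the regression only explains part of the return profile, which is precisely what the return attribution above is designed to quantify.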
Portfolio suitability analysis
Correlation analysis can be used to identify and help understand the relationship between each individual hedge fund manager and the traditional assets in an existing portfolio. In particular, certain quantitative strategies such as CTAs have historically tended to perform well during periods of equity market stress. Conversely, some statistical arbitrage strategies running at very high levels of leverage have struggled in periods of market stress. In addition, certain quantitative strategies may be so focused on ‘hedging’ all of the risk from the strategy that the end return to the investor is insufficient.
Example portfolio suitability analysis
When building a portfolio, as well as examining the factor analysis of each manager in isolation, we feel it is important to consider the interaction between the factor analysis of different managers. Used alongside more standard correlation analysis, this can potentially give an insight into the likely behavior of a portfolio in various scenarios. It can also help give a more detailed perspective as to whether investors are adding a risk that a portfolio already carries, or a truly diversified return stream.
Figure 3 illustrates a standard correlation analysis for a number of strategies. Simple tools like this can be used as part of a wider analysis to identify how investments might fit together and how a new manager might complement an investor’s existing portfolio.
Figure 3. Correlation analysis
Source: Man FRM, Man Group database, August 2017. For illustrative purposes only. Chart is not intended to represent the actual performance of any strategy or fund.
In addition to the factor analysis and standard correlation analysis, when building our portfolios we look at the conditional returns of managers’ track records. While correlation is very important, many investors are particularly focused on limiting the downside risk in the portfolio. Historically, this risk occurs more often when correlations spike and multiple managers experience losses simultaneously. To try to understand this, we look at conditional returns of realized track records (and simulations), and use the factor analysis as a tool for understanding when this scenario may be most likely to occur.
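A conditional return analysis of this kind can be sketched simply: compare a manager's average return during the benchmark's worst periods with its unconditional average. The data below is synthetic, with a ‘CTA-like’ series deliberately constructed to be negatively correlated with equities, purely for illustration:

```python
import numpy as np

def conditional_mean(manager, benchmark, quantile=0.1):
    """Average manager return during the benchmark's worst `quantile`
    of periods, alongside its average over all periods."""
    cutoff = np.quantile(benchmark, quantile)
    stressed = manager[benchmark <= cutoff]
    return stressed.mean(), manager.mean()

rng = np.random.default_rng(1)
equity = rng.normal(0.005, 0.04, 240)            # 20 years of monthly returns
cta = 0.003 - 0.3 * equity + rng.normal(0, 0.01, 240)
stress_mean, overall_mean = conditional_mean(cta, equity)
print(round(stress_mean, 4), round(overall_mean, 4))
```

For this constructed series the stressed-period average exceeds the unconditional average, mirroring the historical tendency of some CTAs to perform well during equity market stress noted above.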
What makes a good quantitative fund?
Alongside analyzing the factor characteristics and portfolio suitability of a prospective investment, our analysts will typically hold a series of meetings with the fund manager, aiming to better understand what the manager is offering. Analyzing a quantitative hedge fund can be different to analyzing a discretionary hedge fund. For example, the funds in a specific strategy area may be running models that are similar to one another and this can lead to crowding. To try and mitigate the potential crowding effect and help identify a good quantitative fund more generally, we look into four distinct areas of fund management: (1) research process, (2) execution capabilities, (3) investment infrastructure and (4) risk management.
Too big to succeed
Within active management it’s well understood that a fund cannot grow indefinitely without sacrificing potential performance; quantitative hedge funds are no exception. We feel that, like other managers, those running quantitative funds need to be aware of the constraints on the capacity of their strategy, and show discipline by closing their fund to new investors before the fund’s AuM begins to constrain performance.
In our view, it’s important for a manager to be able to demonstrate how they manage capacity, and explain at what point they would plan to restrict new capital. This discipline of prioritizing fund performance over fund assets can be incentivized by the performance-based fees that we see in the hedge fund industry (funds that only charge management fees may be incentivized in the opposite way: to grow funds under management at the expense of fund performance).
1. Research process
Perhaps the most important area to assess is whether the firm follows a rigorous, scientific approach to designing and creating strategies.
To understand why this is important consider the following example. Suppose a researcher is seeking to create a strategy purely using historical data. They are provided with ten years’ worth of equity index data for a single index. The first strategy they develop has historically done well, except in certain market conditions. The researcher goes back to the drawing board and makes a few more tweaks: they may, for example, add an extra feature that reverses the positioning if certain market conditions prevail. This is slightly better, but still not good enough. So the researcher tries again and again, altering parameters, adding new ones, eventually generating a strategy that is consistently successful based upon historical data. The question to ask then is "will the strategy perform positively in the future, or does it look good purely by chance?". In this example the strategy is clearly successful based upon historical data, but how do we know if it will prevail in the future? Given the repeated attempts by the researcher to ‘fit’ performance to history, we would argue that it would not. Indeed, we would classify this as a ‘data mined’ strategy, and we believe such strategies offer significantly less chance of benefiting investors over the long term.
Now consider a firm that, rather than having one researcher as above, has 100 researchers, who each come up with a single strategy. The firm then chooses the strategy with the best historical performance, discarding the other 99. Is this any better than having one researcher backtest 100 strategies? Probably not, as the manager has introduced an element of ‘data mining’ even if the researchers themselves are very diligent.
In our view, the most common pitfalls in generating trading strategies based upon historical data are:
- i. Overfitting – using too many parameters in building a strategy, which increases the risk of spurious results
- ii. Data-mining – where a researcher has made multiple attempts to create a strategy and has found one that works by chance
- iii. Hindsight bias – using data that wouldn’t have been known at the time
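The ‘best of 100 researchers’ problem can be demonstrated in a few lines. This toy sketch (not a real backtesting framework) generates 100 strategies that are pure coin flips, with zero true edge, and shows that the best historical backtest still looks impressive purely by chance:

```python
import random

def backtest_random_strategy(n_days, rng):
    """A 'strategy' that flips a coin each day on a zero-drift market:
    its true expected P&L is exactly zero."""
    return sum(rng.choice([-1, 1]) * rng.gauss(0, 1) for _ in range(n_days))

rng = random.Random(7)
results = [backtest_random_strategy(252, rng) for _ in range(100)]

# Picking the best of 100 zero-edge backtests 'discovers' a strategy
# whose historical P&L looks strongly positive purely by luck, even
# though the average across all 100 hovers near zero.
print("best:", round(max(results), 1), "average:", round(sum(results) / 100, 1))
```

This is why a disciplined research process must account for how many strategies were tried, not just how good the surviving one looks in a backtest.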
Avoiding hedge funds that make these mistakes plays a major role in shortlisting and selecting managers at Man FRM. So what do we look for?
First, we seek out a team led by highly experienced managers. We have found that the most common reason for a lack of scientific rigor during the research process is a lack of knowledge. Academic excellence in a scientific discipline can help to avoid accidental errors, which is why many quantitative hedge fund managers and analysts have PhDs in Mathematics, Physics or a similarly rigorous subject. In addition, previous experience of creating models that haven’t worked is an important control in developing new strategies. Another focus is the economic reasoning behind a strategy, even if it takes sophisticated analysis to unearth. A manager who can offer only limited explanation for their signals, or who has not thought deeply about why they work, should set investor alarm bells ringing, in our view.
Second, we seek a well-organized and disciplined investment process. For example, are there clearly defined stages, including review and sign-off by senior managers and peers, and a solid challenge process? Without this, how can we be confident that a strategy really is up to the high standards required?
On this front it may be very useful to ask a manager about what action they took in some of their worst performing periods. When things go bad do they make radical changes, minor tweaks or do they maintain conviction in their process and not change at all? Investors can develop a better picture of how this works by looking into how and why a manager has changed their strategy in the past, and what the sign-off process is for making these changes. Can one person change the code, or is there a process of peer review? We believe it’s critical that the manager is able to demonstrate a robust set of controls around changes to their investment strategy and this can be a bellwether of the overall research process. Without this discipline, a quantitative process can potentially turn into an undisciplined discretionary one – not what the investor has actually bought.
Third, we look for good scientific methods. For example, is the hypothesis being tested clearly stated and understood upfront? Is the data set divided into an in-sample period (used to build the strategy) and out-of-sample (that is held back and only used to validate the research at the end of the process)? Are the assumptions used throughout the process robust (e.g. for trading costs)?
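The in-sample/out-of-sample discipline described above can be sketched as follows. This is a hypothetical toy trend strategy on synthetic data: parameters are chosen using only the earlier (in-sample) portion, and the held-back data is touched exactly once, to validate the single chosen parameter:

```python
import random

def strategy_pnl(returns, lookback):
    """Toy trend strategy: go long if the sum of the last `lookback`
    returns is positive, short otherwise; returns total P&L."""
    pnl = 0.0
    for t in range(lookback, len(returns)):
        signal = 1 if sum(returns[t - lookback:t]) > 0 else -1
        pnl += signal * returns[t]
    return pnl

rng = random.Random(3)
returns = [rng.gauss(0, 0.01) for _ in range(1000)]
in_sample, out_sample = returns[:700], returns[700:]

# Fit: pick the lookback that maximizes P&L on the IN-SAMPLE data only...
best = max(range(2, 50), key=lambda lb: strategy_pnl(in_sample, lb))
# ...then evaluate that single choice ONCE on the held-back data.
print("chosen lookback:", best)
print("out-of-sample pnl:", round(strategy_pnl(out_sample, best), 3))
```

The in-sample figure is flattered by the parameter search; only the out-of-sample figure is an honest estimate, and re-running the search after peeking at the validation set would destroy that honesty.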
Clearly investors’ confidence can be increased if there is a live track record over many years, but as we explained before investors should not just rely upon a track record alone, since past performance does not guarantee similar future results.
Finally, we believe that it can be important to allow research ideas to fail and to foster a culture that does not stigmatize this failure. A process that accepts failure as part of ongoing research can help to incentivize more radical innovation with potentially larger pay-offs than safer research into making more incremental improvements to a strategy; innovation can also be incentivized more explicitly through an appropriate rewards system, and in our experience, some of the best quantitative managers will use this approach to great effect.
This innovation facet to the research process should not be underestimated, in our view. Over time, models degrade and become less profitable. We believe it is important that the research process constantly drives forward and creates new ideas. Not all these ideas will succeed (due to the rigor of the research process), but some will. This innovation can include accessing new markets, accessing markets that have been difficult to trade, the creation of brand new trading concepts or the utilization of new data techniques.
For example, there has been an increase in trading less liquid markets outside the traditional futures and equities space, where greater inefficiencies may potentially be observed. Similarly, the growth in computing power has led to an increase in the use of ‘big data’ techniques. We believe it is only the best and most sophisticated managers that can successfully thrive in such circumstances.
2. Execution capabilities
Even a carefully designed strategy may not be able to produce returns in real trading: the interactions and frictions of dealing with markets, which can create a drag on performance, need to be considered. These frictions can occur at many levels.
Note that while the most explicit costs of trading are commissions, these tend to be smaller than some of the implicit costs of trading a quantitative strategy. These costs, sometimes called ‘slippage’, are outlined below in more detail:
i. Market impact – if a strategy wants to trade a large amount of a security immediately, it is possible that the price at which they can execute will be less attractive than the current market price; even small trades can lead to a crossing of the bid/offer spread when they need to be executed immediately. Most security exchanges operate as an order book, with a finite amount of liquidity available at the current price, and any excess demand is matched at a worse price as the hedge fund strategy has ‘moved the market’.
ii. Delay cost – in order to avoid market impact, many strategies will break large trades down into a series of smaller ones and execute over a longer period of time. However, while there may not be any discernible market impact of each of the smaller pieces, if the aggregate price achieved by all of the smaller pieces is worse than the price at the start of the trade, then this is a cost to the trading strategy. Typically these methods of breaking-up trades lead to a cost, as the aggregate impact of smaller trades can still move the market for each successive trade, especially if other market participants can detect that a large trade is being ‘worked’ through the market.
iii. Opportunity cost of not trading – one way of avoiding both market impact and delay cost is to wait for liquidity to become available to trade at the desired size. Suppose an investor wants to trade 1,000 shares of a small cap stock at a price of 10, but there are only 100 currently on offer at the market price. If the investor waits until 1,000 are on offer, they can trade in one go and apparently avoid any slippage. However, sometimes during the time spent waiting, the price will go up and investors may not be able to trade at all. In this scenario, the investor may have been better off trading immediately, taking some market impact but making a profit on the trade, than being left without a trade at all. These costs of not trading are crucial to the efficacy of many quantitative strategies, but it can be very difficult to quantify their impact when moving from a theoretical strategy to the real world.
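These costs can be made concrete with some simple implementation-shortfall arithmetic on the small-cap example above. The fill prices below are hypothetical, chosen only to contrast market impact against delay cost:

```python
def implementation_shortfall(decision_price, fills):
    """Total slippage versus the price at the moment of the trade
    decision. `fills` is a list of (shares, price) actually executed."""
    shares = sum(q for q, _ in fills)
    cost = sum(q * p for q, p in fills)
    return cost - shares * decision_price

# The investor decides to buy 1,000 shares at a decision price of 10.00.
# Trading immediately sweeps the order book at worsening prices (impact):
immediate = [(100, 10.00), (400, 10.02), (500, 10.05)]
# Working the order in slices gets smaller per-slice impact, but the
# market drifts up while the order is worked (delay cost):
worked = [(250, 10.00), (250, 10.01), (250, 10.03), (250, 10.04)]

print("immediate:", round(implementation_shortfall(10.00, immediate), 2))
print("worked:   ", round(implementation_shortfall(10.00, worked), 2))
```

Neither route is free: the immediate sweep pays impact up front, the worked order pays drift, and waiting for size risks the opportunity cost of not trading at all.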
Expertise in execution is also important because if other market participants can discern a strategy’s trading patterns, this can leave the strategy open to exploitation. For example, if the market realizes that a large order for a particular commodity’s futures is entered into the market at around 10am every day, then over time market participants may design strategies to exploit this pattern.
As the examples above show, the interaction between a strategy’s trading and other market participants is very important. Many quantitative firms invest heavily in this area, including the development of their own electronic trading algorithms, while ensuring that all trading is measured against realistic benchmarks with the goal of delivering improved performance to their investors.
Finally, expertise in execution not only allows a manager to trade at better prices, or with lower costs, it also opens up greater opportunities to trade: both in the form of new markets (i.e. having the expertise to trade slightly more challenging markets), and by reducing the cost of trading so that trades with a lower expected alpha can be executed.
In recent years we have found that expertise in execution has become one of the key differentiators between the best hedge funds and the rest of the hedge fund universe.
3. Investment infrastructure
In our view, quantitative hedge fund management is, at its core, really a technology business. We believe that the most successful managers will have a focus on maintaining and developing strong infrastructure while firms with weaker infrastructure will ultimately fail; examples of such failures abound. The best managers invest in infrastructure, they use some of the world’s most powerful computers and focus on gathering more and better quality data than their competition. These funds are some of the most prolific data science organizations on the planet, collecting data on anything from traditional price earnings data to alternative insights including historical credit scores and leveraging other data sources that are far more esoteric. In our experience it is the investment in a powerful computing platform that differentiates the very best managers.
Through our research process we seek to understand how a manager’s infrastructure allows them to go about their research, what tools they use and how they organize their data. Ultimately, only a strong infrastructure set-up will allow efficient research. This part of a manager’s capabilities is often overlooked, but it can be significant, in our view.
We consider these four important items:
- i. Data analysis and collection
- ii. The coding platform
- iii. Implementation
- iv. Teamwork
These may not sound like the cutting edge of quantitative research, but we believe they are fundamentally important in assessing a manager’s capabilities, so let’s look at each one in turn.
i. Data analysis and collection
Data is fundamental to quantitative research. It is used to test strategies, understand trading costs, and analyze existing strategies. Not surprisingly, this can represent an enormous amount of information, and as more sources become available, the amount of data that can be collected continues to grow rapidly. For example, several years ago a significant number of funds relied upon closing price data to calibrate their models. Today, our team sees some of the most successful funds collecting not just real-time data through the day for each security, but also more detailed data such as the depth of the order book. In addition, ever more esoteric data sets can be collected, for example satellite imagery and textual data from news feeds. The efficacy of trading signals based solely on these newer data sets is still in question (at least in our mind), but it’s clear that the total amount of data available will only continue to rise.
Having an organized process to collect this data, making sure it is clean, and presenting it to researchers is an important and specialized task. We have found that many of the most successful funds have an extremely organized process to facilitate this, which helps enable the research team to focus on executing their research in a reliable, repeatable and efficient manner. Such a process typically includes:
- A common database platform and scalable technology platform to store high volumes of data
- Specialists in data cleaning to ensure that data presented is accurate
- A mechanism to allow researchers to quickly and easily retrieve data
- The ability to incorporate new data sets as they are required
The last two items on this list should not be underestimated, in our view. We believe that the most successful hedge funds are often focused on producing results in a repeatable manner, supported by advanced database structures, as opposed to relying upon inefficient and error-prone spreadsheet-based processes. We look to exclude such ‘cottage industry’ processes, where one researcher is responsible for all stages from collecting the raw data through to implementation.
ii. The coding platform
It is not sufficient for researchers simply to have access to all the data they need; they also need a way to process this data and turn it into useful information. Given the huge range of data available, the tools required to do this need to be very efficient, powerful and capable of processing the largest data sets.
We believe it is equally important, wherever possible, to use a common platform. For example, we feel there should be one coding language used and understood by everyone involved, so that the research team can all use and access the same tool kit. We believe this leads to greater efficiency and a reduction in errors, and encourages the sharing of ideas. A joined-up approach also helps to prevent key-person risk and can allow for greater validation of any code produced. For example, if each researcher in a team were to use a separate platform to perform their research, how would anyone else be able to check their output, and what would happen if one of them left? Would anyone else understand their code? Would anyone be able to validate what they have done, or even review it?
iii. Implementation
The implementation of quantitative strategies requires close attention to detail. Even the smallest errors in implementation can lead to substantial downstream issues. Therefore, we believe that ensuring there is an efficient way to move strategies from the research arena to the live trading arena is an area that must not be underestimated.
iv. Teamwork
The final area to consider is how the team is structured overall. We have mentioned the importance of having experienced leadership in place. We believe it is also important to consider how the overall research and technology teams collaborate.
We often find that the best teams are those that have strong cultures that openly encourage collaboration, co-operation and a seamless interface between technology and research. This is not entirely surprising given that the ability for people to share ideas and learn from each other is at the core of research.
4. Risk management
As mentioned previously, having quantitative methods of risk management built into the investment process can be one of the biggest potential advantages of quantitative hedge funds, but not all quantitative models for managing risk are created equal.
To get comfortable allocating capital to a manager, we believe it’s important to first get comfortable with the way that they define, measure and manage risk. Do they define risk as volatility, or do they use a measure of downside risk such as expected shortfall or tail risk? For example, many investors are focused on a strategy’s drawdown characteristics, and as a result they will want to evaluate the manager’s risk/return profile to ensure that it’s aligned with their tolerances and expectations.
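The practical difference between volatility and a downside measure such as expected shortfall can be illustrated on a return series. The data below is synthetic, built to be fat-tailed (mostly small moves with occasional large losses); real risk systems are far more involved:

```python
import numpy as np

def volatility(returns):
    """Sample standard deviation of returns."""
    return returns.std(ddof=1)

def expected_shortfall(returns, level=0.95):
    """Average of the worst (1 - level) fraction of returns."""
    cutoff = np.quantile(returns, 1 - level)
    return returns[returns <= cutoff].mean()

rng = np.random.default_rng(2)
# A fat-tailed series: 98% ordinary small moves, 2% large losses.
returns = np.where(rng.random(2500) < 0.02,
                   rng.normal(-0.05, 0.02, 2500),
                   rng.normal(0.001, 0.01, 2500))

print("volatility:        ", round(volatility(returns), 4))
print("expected shortfall:", round(expected_shortfall(returns), 4))
```

For a series like this, the expected shortfall is much larger in magnitude than the volatility suggests, which is why two managers with the same headline volatility can present very different drawdown risks.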
However, risk management goes beyond the tracking of risk measures. It must also be embedded into how a manager approaches their day to day trading. For example, managers should be able to demonstrate to us exactly how they decide when the time is right to give up on a current research idea, or perhaps more importantly an active strategy that isn’t working. A strategy that has worked well in the past is unlikely to continue to profit ad infinitum. As market inefficiencies disappear and opportunity sets change, investment strategies will need to be adapted or discontinued entirely if they stop working. It is our view that turning off strategies when they stop working is a fundamental part of risk management for quantitative strategy.
A note on transparency
Quantitative funds are often accused of a lack of transparency. In our experience, the better performing funds offer a remarkable degree of transparency; but first, let us clarify what we mean by the term.
In a quantitative fund, transparency does not necessarily mean seeing positions every single day. Daily position data can sometimes be useful, for example in slower-moving CTAs, where it can help a fund researcher understand whether a fund is trading as expected or, conversely, is experiencing style drift. However, in some funds positions can change substantially within a single day, so this is often not the most useful data.
Shining light into the ‘black box’
For any investor, it’s important to understand your investments beyond simple risk and return characteristics, and this is achieved with reporting that is both detailed and frequent. At Man FRM, we’ve found that many managers can deliver position level data on a daily basis, providing a detailed account of a portfolio’s investment characteristics, which ultimately can enable the optimization of any portfolio construction or rebalancing decisions. But transparency isn’t simply about being shown every line of code in a trading model.
Transparency is about having light shed on what the manager is doing and how they work; this is akin to shining a light into what have traditionally been seen as the ‘black boxes’ of the investment management industry. In fact, we believe that once a manager explains their process, quantitative funds can arguably be more transparent than their discretionary peers. This is often the case since the systematic approach to their investment process can make their potential return stream more predictable and their responses to market conditions can be easier to anticipate.
What we believe is important is that a fund can explain its trading edge and the details of its research philosophy; how it approaches generating models; examples of models that have worked (and, even more importantly, ones that have failed); and enough information for an allocator to determine whether the fund has a potential edge and whether it is following the strategy it originally described. In our view, managers with a repeatable approach better understand how and why their models perform well, and will often, though not always, be more confident in committing a significant amount of their personal funds; we find that when managers are less confident in their own strategy, they are less likely to put 'skin in the game'. We believe that being able to discuss these points (and all of those raised in this paper) with the fund and receive clear answers is often more useful than complete transparency on the fund's underlying positions at any one moment in time.
We believe it is important to take a flexible approach to the implementation of hedge fund portfolios. When deciding on an allocation to quantitative managers, investors should consider their own investment criteria, as well as the criteria listed in this paper, to help define a strategy that is both objectively robust and appropriate for them. As we have discussed, quantitative hedge funds have the potential to provide a source of diversification and a differentiated return stream. These managers seek to create value for their investors that is sufficiently different from what they might achieve by investing solely in equities and bonds, or with a hedge fund allocation focused on discretionary managers.
With this paper we hope to have explained the potential benefits of a quantitative hedge fund allocation, and laid out a practical approach to either allocating directly or working with a hedge fund specialist. Readers should now have a clearer idea of what makes for a high-conviction selection of a quantitative fund, and a straightforward approach to some of the issues that may help them separate the good from the bad. As quantitative strategies continue to become a more prominent feature of the investment landscape, we hope that some investors will now feel more at ease understanding them.
Past performance is not indicative of future results. Returns may increase or decrease as a result of currency fluctuations.
1. Also known as fundamental strategies.
2. Source: 2017 Preqin Global Hedge Fund Report.
3. Please see further comments on the growth of activity in less liquid markets in the ‘What makes a good quantitative fund?’ section.
4. Source: Hedge Fund Research.
5. Grinold, Richard C. (1989) “The Fundamental Law of Active Management”, Journal of Portfolio Management, vol. 15, no. 3 (Spring): 30-37.