With markets increasingly dominated by algorithmic trading (Fed Chair Janet Yellen recently noted the growing influence of systematically driven traders and investors in her Jackson Hole speech), a meaningful topic long relegated to whisper conversations among systematic traders comes to the fore. I discussed the impact of passive forces on markets in my 2010 book *High Performance Managed Futures* and have since continued the conversation with regulators, HFT investigators and market participants. The capacity of algorithms and systematic models to move markets through their own force matters not only in crash modeling but also in understanding everyday market moves. That was true in 2010 and is more so today.

From a trading and investing standpoint, one angle of particular interest is the concept of “momentum exhaustion,” or as I prefer to term it in today’s market environment, “algorithmic exhaustion.”

Given the constraints of a diversified algorithmic portfolio (by definition of diversification, exposure to any one market or stock is capped as a percentage of the overall portfolio), there is a limit, a stopping point, beyond which systematic models can no longer influence price outcomes. Recognizing when algorithms still have “dry powder” and when they are out of bullets can prove useful to regulators, investors and traders.
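The diversification cap described above can be sketched in a few lines. This is a minimal illustration, not the author's model; the function name, the 2% cap and the dollar figures are all assumptions chosen for the example.

```python
# Sketch: how a diversification cap limits an algorithm's remaining
# buying power in a single market. All numbers are illustrative
# assumptions, not a published formula.

def remaining_dry_powder(portfolio_value, position_value, max_weight):
    """Capital an algorithm can still deploy in one market before
    hitting its diversification limit (max_weight of the portfolio)."""
    cap = portfolio_value * max_weight
    return max(0.0, cap - position_value)

# A $500M portfolio with a 2% per-stock cap and $8M already deployed
# has only $2M of capacity left -- it is nearly "out of bullets":
print(remaining_dry_powder(500e6, 8e6, 0.02))
```

Once `remaining_dry_powder` reaches zero, the model cannot press the trade further regardless of its signal, which is the exhaustion point the article describes.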

This article provides a brief review of academic literature on the topic and then highlights a formula to measure algorithmic exhaustion. But without actually tracking the trades of each algorithmic system, information to which only direct market regulators have access, such efforts are necessarily generalized in nature.

### HFT algorithms detect other algorithms as one of several methods to infer trend strength

The importance of recognizing how algorithms influence markets is more often discussed in private than in public. In fact, based on discussions with those regulating and investigating high-frequency trading abuses, it is now common for HFT and other systematic trading firms to identify what other HFT firms are doing in the market as one method of gauging potential trend strength.

The concept of tracking what drives market momentum as it relates to exhaustion theory has been used more by practitioners than by academics, but there is academic work on the topic.

In November 2010, in the *Wiley-Blackwell Journal of Time Series Analysis*, researchers Qi Tang of the University of Sussex and Danni Yan considered the deeper meaning of the standard deviation function in pricing models to indicate when a price trend is “exhausted” and a reversal might take place.* They concluded that a model based on random walk theory could yield an “autoregressive trend reversing indicator.” Building on volatility modeling concepts used in Black-Scholes option pricing theory, they adapted risk models to provide indications of trend exhaustion based primarily on observable price behavior.
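The intuition behind a standard-deviation-based exhaustion flag can be sketched simply. This is not Tang and Yan's actual estimator; it only illustrates the idea of comparing a realized trend against what a random walk with the observed volatility would plausibly produce. The window length and threshold are assumptions.

```python
# Sketch (not Tang and Yan's estimator): flag a trend as "exhausted"
# when the cumulative move over a window exceeds what a random walk
# with the observed per-bar volatility would plausibly produce.
import statistics

def trend_exhausted(prices, window=20, k=2.0):
    """True when the net move over `window` bars exceeds k standard
    deviations of a random walk built from per-bar price changes."""
    if len(prices) < window + 1:
        return False
    recent = prices[-(window + 1):]
    steps = [b - a for a, b in zip(recent, recent[1:])]
    sigma = statistics.stdev(steps)
    net_move = recent[-1] - recent[0]
    # A random walk's net move over n steps scales like sigma * sqrt(n).
    return abs(net_move) > k * sigma * (window ** 0.5)

# A steady drift with tiny noise trips the flag; pure noise does not:
steady = [100 + 0.5 * i + 0.01 * (-1) ** i for i in range(30)]
print(trend_exhausted(steady))
```

A persistent drift produces a net move far larger than the random-walk bound, which is the signature the reversal indicator looks for.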

Nearly one year later, Wilhelm Berghorn, CEO of Munich-based Mandelbrot Asset Management, penned a piece in the *Journal of Quantitative Finance* that validated, to various degrees, the performance drivers that can be used to measure trend strength.** Using a wavelet trend decomposition scheme, he attempted to measure trend size, trend drift and trend volatility. His work characterized “scaling laws” and noted that this effect provides clues to the strength of the “momentum effect.” Unlike Tang and Yan, Berghorn not only considered pricing data but also examined the longest held positions.
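The idea of measuring trend size at multiple scales can be illustrated without a full wavelet transform. Below, plain moving averages at doubling window sizes stand in for Berghorn's wavelet scheme; the scales chosen and the "net move of the smoothed series" measure are assumptions made purely for illustration.

```python
# Crude stand-in for a multi-scale trend decomposition (Berghorn used
# wavelets; simple moving averages at doubling windows are used here
# only to illustrate the idea of per-scale trend size).

def smooth(prices, window):
    """Simple moving average; shorter windows are used at the start."""
    return [sum(prices[max(0, i - window + 1):i + 1]) /
            len(prices[max(0, i - window + 1):i + 1])
            for i in range(len(prices))]

def trend_sizes(prices, scales=(2, 4, 8, 16)):
    """Net move of the smoothed series at each scale: a rough
    per-scale measure of trend size."""
    return {w: round(smooth(prices, w)[-1] - smooth(prices, w)[0], 4)
            for w in scales}

prices = [100 + i for i in range(32)]  # a pure linear trend
print(trend_sizes(prices))
```

For a clean linear trend the measured trend size stays large across scales; for choppy data it collapses at the longer windows, which is the kind of scale-dependent clue the decomposition exploits.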

Tobias Moskowitz of Yale University, along with Yao Hua Ooi and Lasse Pedersen, affiliated with AQR Capital Management, later outlined the impacts and drivers behind time series momentum strategies and observed how results vary with when a strategy is executed.
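The core rule behind the time series momentum strategies they studied is simple to state: go long a market whose trailing return is positive, short one whose trailing return is negative. The sketch below shows that signal only; the paper's volatility-based position scaling is omitted for brevity, and the 12-bar lookback is its headline choice.

```python
# Sketch of the core time-series momentum rule: position is the sign
# of the trailing return. Volatility scaling is omitted for brevity.

def tsmom_signal(prices, lookback=12):
    """+1 (long), -1 (short) or 0 from the sign of the trailing return."""
    if len(prices) <= lookback:
        return 0  # not enough history to form a signal
    trailing = prices[-1] / prices[-1 - lookback] - 1.0
    return (trailing > 0) - (trailing < 0)

uptrend = [100 * 1.01 ** i for i in range(20)]
print(tsmom_signal(uptrend))  # prints 1 (long)
```

Because so many systematic programs run variants of this rule, the moment they have all flipped the same way is precisely when their collective buying or selling power is closest to exhaustion.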

Enter into this a formula that considers both price indicators and potential position sizing to determine an estimated level of algorithmic exhaustion.

This “algorithmic exhaustion” measurement considers multiple factors. It examines the range of possible algorithmic triggers that have been executed by considering their beta market environment correlations. For instance, once a beta market environment of price persistence has been identified and adjusted for a range of time-series and Kalman filter effects, the thesis assumes that when such persistence is strongly present, algorithmic triggers are executed across various time frames. This analysis is carried across the three primary beta market environments below, considering price volatility as measured by standard deviation (similar to Tang and Yan). Assumptions are then made about position size limits in a portfolio, similar to Berghorn’s method. Finally, the volume in a particular market or stock is calculated and the algorithmic share of that volume is estimated as a percentage.
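The two ingredients above, trigger saturation and volume saturation, can be blended into a single score. This is an illustrative composite only: the equal weighting, the capacity input and the 0-to-1 scale are assumptions, not the article's published formula.

```python
# Illustrative composite of the article's two ingredients:
# (1) share of assumed execution triggers already fired across the
#     beta market environments, and
# (2) estimated algorithmic volume relative to an assumed capacity.
# Weights and inputs are assumptions, not a published formula.

def algorithmic_exhaustion(triggers_hit, triggers_total,
                           algo_volume, algo_capacity,
                           trigger_weight=0.5):
    """Blend trigger saturation and volume saturation into a 0..1 score,
    where 1.0 means the algorithms are fully 'out of bullets'."""
    trigger_ratio = triggers_hit / triggers_total
    volume_ratio = min(1.0, algo_volume / algo_capacity)
    return trigger_weight * trigger_ratio + (1 - trigger_weight) * volume_ratio

# 70 of 100 assumed triggers fired, 30M shares of an assumed 40M capacity:
print(algorithmic_exhaustion(70, 100, 30e6, 40e6))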

One example is trading in the stock of Kroger (KR) on June 15 and 16:

1) Overall trade volume on June 15 and 16 was beyond historic levels. While the stock’s volume averages under 8 million shares on a near-term rolling basis, it spiked to over 76 million on June 15 and over 106 million on June 16. Assuming a low estimate of 30% of that volume was due to algorithmic activity, and based on the market cap and assumed diversification limits guiding allocation to an individual stock, it is deduced that this market move involved a significant degree of algorithmic exhaustion (depending on the exact triggers and their time-series delays, potentially involving 35% to 45% algorithmic portfolio involvement).

2) Beyond trading volume, algorithmic exhaustion is also determined by considering the percentage of potential trading algorithms that were likely hit relative to their beta market environment factors. On June 15 and 16, a large majority of execution triggers across the three primary beta market environments were hit to various degrees.

2A) In the instance of Kroger, momentum and trend indicators were triggered during June across a wide variety of time frames and time-series systems, particularly mid-term. Some of the longer-term time frames had issued sell signals earlier in the year. This can be determined using the standard time frames of the Société Générale Trend Indicator as well as a three-factor formula. (For further explanation see this foundational article.) Further, the differential between the slow and fast moving averages in many of the trend signals is, to an important extent, an indication of the degree to which momentum signals have been hit: a tight differential indicates that fewer moving-average triggers have fired, while a wide differential indicates that a wider range of execution triggers has been hit. Trend and momentum signals are by far the most popular execution triggers, assumed to account for near 70% of total CTA strategy trading volume based on a strategy-level analysis of the BarclayHedge CTA database.

2B) Volatility triggers were also hit, as the standard deviation of price volatility on June 15 and 16 was beyond historical levels. On June 16, for instance, implied volatility hit 40 as the stock lost a third of its value, peak to trough, over two trading days, a standard deviation sell signal.

2C) Relative value signals were only partially hit, depending on the configuration, indicating that not all execution triggers fired. As noted above, there is no definitive method to determine the percentage of algorithmic trading in an individually named issue, and this is an estimate.
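The volume side of the Kroger estimate is straightforward arithmetic and can be reproduced directly from the figures above. As the article stresses, the 30% algorithmic share is a low-end assumption, not a measured quantity.

```python
# Back-of-the-envelope Kroger (KR) volume arithmetic from the article:
# volume spiked from a sub-8M rolling average to 76M and 106M shares,
# with an assumed (low-end) 30% of the spike attributed to algorithms.

AVG_VOLUME = 8e6               # near-term rolling average, shares
spike = {"June 15": 76e6, "June 16": 106e6}
ALGO_SHARE = 0.30              # assumed algorithmic share of volume

for day, vol in spike.items():
    algo_vol = vol * ALGO_SHARE
    print(f"{day}: volume {vol / AVG_VOLUME:.1f}x normal, "
          f"~{algo_vol / 1e6:.0f}M shares estimated algorithmic")
```

Even at the low-end 30% assumption, the implied algorithmic share of the two-day move dwarfs a normal day's entire volume, which is what motivates the 35% to 45% portfolio-involvement range cited above.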
