Part 3: Quantifying and enhancing value

Chapter 7 – Measuring and interpreting the performance of broker algorithms
Chapter 8 – Making the most of third-party transaction analysis: the why, when, what and how?
Chapter 9 – Enhancing market access
Chapter 7
Measuring and interpreting the performance of broker algorithms

What does transaction cost analysis tell us about the performance of disparate algorithms?

Ian Domowitz, managing director, ITG Inc., and Henry Yegerman, director, ITG Inc.
The advantages of algorithmic trading have popularised algorithms to the extent that most major brokers, and some technology providers, offer the service in some form. Recent studies suggest that anonymity, reduced market impact, trading efficiency and a lower overall cost of trading are the drivers of the trend from the customer perspective.1 This list of attributes suggests that the link between cost measurement, both pre-trade and post-trade, and algorithmic trading
activity should be strong. In fact, firms report that algorithm usage frequently is triggered by some form of cost analysis. Further, transaction cost technology that supports algorithmic trading is found to be the leading factor influencing growth in algorithmic trading.2

The focus of this chapter is on information and measurement leading to informed choice and evaluation of algorithmic trading engines. Choice and evaluation are closely linked in our minds. The number of vendors is growing quickly, but not half as fast as the alphabet soup of labels for the strategies available. Evaluation, and hence measurement, must include the choice of strategy as well as a preferred set of vendors. We begin with some remarks relating to the strategies themselves.

1 See, for example, 'Institutional Equity Trading in America 2005: A Buy-Side Perspective', Adam Sussman, Tabb Group, June 2005.
2 Thirty-eight percent of firms responded that algorithm usage is triggered by cost measurement, while 35% identified cost analysis supporting usage as the major driver. See 'Marching Up the Learning Curve: The First Buy-Side Algorithmic Trading Survey', Randy L. Grossman, Financial Insights, May 2005.
Trade structure

Algorithm construction is an exercise in structuring a sequence of trades, and the choice of an algorithm follows the same basic principles. At the most abstract level, trade structure is a continuum, ranging from unstructured, opportunistic liquidity search to highly structured, precisely scheduled sequences of trading activity, generally linked to a benchmark such as VWAP (volume-weighted average price). Unstructured liquidity search is associated with real-time information and raw or adjusted decision-price benchmarks. Structured approaches embody tight tracking strategies, based on historical data, measured relative to participation-strategy benchmarks. Although there are many ways to describe structure, the following factors illustrate the nature of the pre-trade information required.

■ Trade horizon. Shorter horizons require less structure. For example, a half-hour VWAP trade and a similarly timed pegging-and-discretion strategy will not yield wildly different outcomes. Inputs into the horizon choice include volatility and the order's percentage of average daily volume (ADV), in addition to any specific portfolio instructions.

■ Need to finish. The higher the need to finish an order, the more
structure is needed in order to avoid falling behind schedule. The type of pre-trade information here relates more to portfolio manager instructions than to specific analytics.

■ Predictability. Predictability considerations cover alpha estimates, volume profiles and transaction costs. The degree of predictability governs the degree to which a horizon and a schedule should be adhered to. This consideration requires properties of the distribution of estimates beyond simple averages, such as standard deviation measures.

■ Price sensitivity. As price sensitivity increases, structure becomes less useful, due to the need to advertise willingness to trade. Short-term volatility history, as well as real-time deviations, are inputs along this dimension.

■ Risk tolerance. This item refers to execution risk versus the benchmark. Greater tolerance generates less need for a structured horizon and schedule. Judgements with respect to risk tolerance are informed by pre-trade information in the form of 'trading efficient frontiers', mapping out optimal trade-offs between risk, cost and alpha for varying trade horizons.
■ Need for list integrity. The greater the need for time-uniform execution across a list, the more structure is required. Portfolio risk increases as one moves, say, from Time-Weighted Average Price strategies to liquidity search, the performance of which is measured in terms of Implementation Shortfall. Pre-trade inputs are similar to those expected in standard portfolio analysis: stock correlations are needed, as well as acceptable tracking error to an index. Constraints with respect to dollar and/or sector imbalance are factored in through optimisation engines integrated with the pre-trade analytics. This also suggests choices (and trade-offs to be considered) between minimisation of total risk, minimisation of the tracking error of portfolio residuals against index benchmarks, and minimisation of expected cost. A toy sketch combining these factors is given below.
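To make the interplay concrete, here is a toy scoring rule that maps the six factors above onto the trade structure continuum. The encodings, weights and cut-offs are our own illustrative assumptions, not a production model; the point is only the direction in which each input pushes the choice.

```python
from dataclasses import dataclass

@dataclass
class OrderContext:
    horizon_hours: float       # intended trade horizon
    must_finish: bool          # hard completion requirement from the manager
    predictability: float      # 0-1 confidence in volume/cost forecasts
    price_sensitivity: float   # 0-1 short-term volatility / signalling risk
    risk_tolerance: float      # 0-1 tolerance for execution risk vs the benchmark
    list_integrity: float      # 0-1 need for time-uniform execution across a list

def structure_score(ctx: OrderContext) -> float:
    """Map the six factors to [0, 1]: 0 = unstructured liquidity search,
    1 = tightly scheduled, benchmark-tracking execution.
    Weights are illustrative assumptions only."""
    score = 0.3                                        # neutral starting point
    score += 0.2 * min(ctx.horizon_hours / 6.5, 1.0)   # longer horizon -> more structure
    score += 0.2 * (1.0 if ctx.must_finish else 0.0)   # completion pressure -> schedule
    score += 0.2 * ctx.predictability                  # reliable forecasts -> adhere to schedule
    score -= 0.2 * ctx.price_sensitivity               # structure advertises willingness to trade
    score -= 0.1 * ctx.risk_tolerance                  # tolerance -> less need for a schedule
    score += 0.2 * ctx.list_integrity                  # list integrity -> uniform schedule
    return max(0.0, min(1.0, score))
```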
Informing the trade structure: aggressiveness and opportunity

There are a variety of factors influencing the choice of trade structure; hence there are many different requirements for pre-trade information. As an illustration, we concentrate here, and later in the chapter, only on volatility and the percentage of ADV represented by the order. The former is a market-wide proxy for market conditions, while the latter is a function of the chosen portfolio.

Figure 1 depicts the two characteristics for a recent large transition. The original strategy contemplated breaking the list into as many equal slices as days in the horizon, as set by the transition manager. The goal was to hit VWAP for each name on each day. This strategy produced transaction costs roughly double those expected on the basis of a pre-trade cost estimate. When costs were measured against VWAP as a benchmark, the expense of the transition increased another 32% relative to the expected cost calculation.

Figure 1 suggests, however, that different trade structures might usefully have been applied to elements of the overall list, as opposed to 'one size fits all'. Although the horizon governing the entire transition is fixed by the manager, not all pieces need be completed according to that schedule. For example, lower-ADV orders in names exhibiting high price volatility justify an aggressive stance, arguably with minimal trade structure, exploiting the lower market impact expected for small orders while avoiding the opportunity costs associated with greater volatility and price sensitivity.
[Figure 1: Volatility vs % of ADV. Scatter of the transition's orders by daily price volatility (vertical axis, 0% to 11%) against order size as a percentage of ADV (horizontal axis, 0% to 300%), with regions marked 'aggressive strategy' (low % of ADV, high volatility), 'passive strategy' (high % of ADV, low volatility) and 'opportunistic strategy' (low % of ADV, low volatility).]

3 'The Cost of Algorithmic Trading: A First Look at Comparative Performance', Ian Domowitz and Henry Yegerman, http://www.itginc.com/research/whitepapers/domowitz/algorithmictrading_2.24.2005.pdf
The pre-trade view also suggests orders for which a passive structure might be useful, namely large orders exhibiting small price volatility, and hence low price sensitivity. Market impact costs are expected to be high, while opportunity cost is low. Lack of price sensitivity suggests the ability to signal willingness to trade at low cost, and the passive stance might then involve a high degree of structure in order to ensure completion. Uncertainty with respect to transaction cost estimates for extremely large order sizes argues for less structure, depending now on factors such as the accuracy of volume distributions for those names, which are not summarised in the preliminary analysis outlined in Figure 1. This point is especially important in considering the lower-volatility, lower-ADV orders labelled 'opportunistic strategy' in Figure 1. True opportunism, in the interest of lower costs, might well be justified, in the sense of less structure, for the higher-ADV orders in
this category, while a more structured approach, combined with some aggressiveness, may work for the truly low-ADV/low-volatility segments of the transition. Although additional pre-trade information and research are clearly needed for this particular case, transaction costs may be cut considerably by even such a rough cut at the data, relative to a simple uniform strategy over a fixed horizon.
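The bucketing suggested by Figure 1 can be sketched in a few lines, assuming only the two pre-trade dimensions discussed here. The numeric cut-offs are invented for illustration; in practice they would come from calibrated pre-trade cost and volatility models.

```python
def suggest_stance(pct_adv: float, volatility: float) -> str:
    """Classify an order into one of the Figure 1 regions from its size
    (as a fraction of ADV) and its daily volatility.
    Thresholds are illustrative assumptions only."""
    if volatility >= 0.06 and pct_adv <= 0.50:
        return "aggressive: minimal structure; small order, high opportunity cost"
    if volatility <= 0.04 and pct_adv >= 1.00:
        return "passive: structured schedule to ensure completion at low impact"
    if volatility <= 0.04:
        return "opportunistic: loose structure; take liquidity as it appears"
    return "scheduled: structured participation over the fixed horizon"

# e.g. a name trading at 30% of ADV with 8% daily volatility
print(suggest_stance(0.30, 0.08))   # -> aggressive
```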
Measurement and interpretation

In a recent study3, we analyse the cost of algorithmic trading based on a sample of 2.5 million orders, comprising almost 10 billion shares traded from January through December 2004. The data come from over 40 institutions and cover the performance of six broker-provided algorithmic trading systems. In the aggregate, we find that algorithmic trading has lower transaction costs than the alternative means represented in the sample, based on a measure of Implementation Shortfall. Controlling for trade difficulty, differences in market, size of trade and volatility regime does not change the result. Algorithmic trading performance relative to a volume participation measure, VWAP, also is quite good, averaging only about two basis points off the benchmark, although certainty of outcome declines sharply with order size. These differences in uncertainty are highlighted in the comparison of performance across vendors of the service, in which equality of average performance also breaks down once order sizes exceed 1% of ADV.

The study sheds light on various aspects of algorithm performance, and provides qualitative lessons as well as quantitative conclusions. Our interest here, however, is in the potential interpretation and use of such results from an integrated trading perspective. As an example, we follow the previous discussion of trade structure choice, expanding the two-dimensional view offered in Figure 1 to three dimensions in Exhibit 1, which is based on the same data used in our earlier report. Transaction cost results in basis points, broken down by order percentage of ADV and volatility, are further disaggregated by the broker providing the algorithmic trading strategy. The benchmark used in the table is the midpoint of the bid and ask prices at the time the order is received by the trading desk, a measure of the implementation shortfall variety.4
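For concreteness, the cost measure behind the exhibit can be sketched as follows: the order's average execution price is compared with the quote midpoint at order receipt, signed so that negative numbers are costs (the parenthesised entries in Exhibit 1). The function is our own illustrative framing of that benchmark.

```python
def cost_vs_mba_bps(side: str, avg_exec_price: float, arrival_mid: float) -> float:
    """Signed execution cost in basis points versus the bid-ask midpoint at
    the time the trading desk receives the order (an implementation
    shortfall style measure). Negative values are costs."""
    if side == "buy":
        signed = arrival_mid - avg_exec_price   # paying above the midpoint is a cost
    else:
        signed = avg_exec_price - arrival_mid   # selling below the midpoint is a cost
    return 10_000.0 * signed / arrival_mid

print(round(cost_vs_mba_bps("buy", 25.015, 25.00), 1))   # -> -6.0 bps
```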
Systematic regularities are difficult to discern, beyond the obvious conclusions that costs rise with volatility for given order sizes, and that costs increase with size for comparable volatility conditions. The irregularities suggest the real message: the relative performance ranking of vendors varies depending on order type and trading environment, and this information ought to be taken into account in choosing vendors for a given trade structure. This view is consistent with the study cited above, in which a clear relationship between algorithmic trading cost and certainty of outcome relative to a benchmark across brokers is found to be lacking.

The leading example illustrated in the table relates to performance in different volatility regimes. If one were to look only at low-volatility outcomes, brokers 3, 4 and 5 are roughly equivalent with respect to performance in the aggregate. This result is maintained for the lowest-ADV trades, which dominate the sample here. On the other hand, for the next ADV category, ranging from 1% to 5% of volume, only brokers 3 and 5 appear to be doing a good job on a relative basis. As trades move into a range defined by relative order ADV of 5% to 10%, broker 3 clearly outperforms its peers.
4 Low volatility is defined as less than 125 basis points of price movement per day; medium volatility as 125–200 basis points; and high volatility as greater than 200 basis points.
Exhibit 1: Algorithmic trading costs by broker, ADV and volatility
Cost vs. the midpoint bid–ask (MBA) benchmark, in basis points; parentheses denote negative values (costs).

Volatility     Broker       <1% of ADV   1–5%    5–10%    Total
Low            Broker 1     (5)          (9)     (14)     (6)
               Broker 2     (8)          (18)    (26)     (10)
               Broker 3     (2)          (5)     (1)      (2)
               Broker 4     (2)          (22)    (16)     (3)
               Broker 5     (3)          (6)     (23)     (4)
               Broker 6     (9)          (31)    (26)     (16)
Low total                   (5)          (11)    (14)     (6)
Medium         Broker 1     (7)          (13)    (9)      (8)
               Broker 2     (7)          (15)    (21)     (9)
               Broker 3     (4)          (46)    (72)     (11)
               Broker 4     (5)          (22)    4        (6)
               Broker 5     (8)          (28)    (52)     (13)
               Broker 6     (10)         (25)    31       (13)
Medium total                (7)          (15)    (13)     (9)
High           Broker 1     (11)         (11)    (31)     (11)
               Broker 2     (9)          (5)     (45)     (9)
               Broker 3     (14)         (38)    (206)    (20)
               Broker 4     (16)         (24)    4        (17)
               Broker 5     (15)         (38)    (38)     (24)
               Broker 6     (19)         (37)    (8)      (23)
High total                  (12)         (17)    (34)     (14)
Total                       (7)          (14)    (19)     (9)
In medium volatility environments, the cast of 'good' vendors changes, even in the aggregate. Broker 4 remains a top performer, but brokers 3 and 5 appear to be replaced by numbers 1 and 2. The aggregate ranking again is only slightly changed in the lowest ADV category. As trades in the range of 1% to 5% of ADV are considered, brokers 1 and 2 appear to survive, while number 4 slips. The latter result may be a consequence of some vagaries in this particular sample, since in the range of 5% to 10% of ADV, broker 4 is back, with results even better than number 1, while broker 2 no longer appears to be performing as well.
As we move to the highest volatility environments, brokers 1 and 2 still dominate the aggregate rankings. Interestingly, this performance comparison survives through order sizes up to 5% of ADV, after which broker 4 again appears to dominate in terms of performance.

Exhibit 2 uses the same set of data to measure the risk associated with each broker's algorithms across different levels of volatility and demand for liquidity.
Exhibit 2: Standard deviation of algorithmic trading costs by broker, ADV and volatility
Standard deviation of cost vs. the MBA benchmark, in basis points.

Volatility     Broker       <1% of ADV   1–5%    5–10%    Total
Low            Broker 1     30           51      63       31
               Broker 2     25           40      32       26
               Broker 3     20           39      9        20
               Broker 4     21           31      64       21
               Broker 5     23           36      37       23
               Broker 6     31           44      20       32
Low total                   27           48      58       28
Medium         Broker 1     42           67      75       44
               Broker 2     36           69      50       38
               Broker 3     26           31      52       26
               Broker 4     29           48      57       29
               Broker 5     31           49      93       33
               Broker 6     38           63      68       40
Medium total                38           65      74       39
High           Broker 1     60           97      113      64
               Broker 2     62           93      137      64
               Broker 3     35           73      133      36
               Broker 4     40           76      86       42
               Broker 5     49           79      89       52
               Broker 6     61           83      164      63
High total                  56           93      115      60
Total                       40           75      89       42
Risk is defined here as the standard deviation of cost outcomes versus the benchmark. A greater standard deviation indicates greater volatility in the cost outcomes of a given broker algorithm within a given ADV and stock-price-volatility category. The dispersion of cost outcomes follows common-sense intuition: more volatile orders carry more risk than less volatile orders, and orders
demanding more liquidity embody more risk than orders demanding less. Analogous to the transaction cost results, there is a significant difference between brokers within the different categories, ranging from 11 bps for the easiest orders (low volatility and less than 1% of ADV) to 78 bps for the most difficult orders (high volatility and 5–10% of ADV). However, unlike the performance averages, there appears to be a
certain consistency of results across brokers with respect to the distribution of cost outcomes. Broker 3 ranked either first or second in eight of the nine ADV/volatility categories, and broker 4 ranked either first or second in seven of the nine. Conversely, broker 1 ranked either fifth or sixth in six of the nine categories. In all of these cases, the extra information is important in differentiating providers.

We have focused on two of the factors, volatility and order percentage of ADV, that determine how different brokers model the optimal trade structure in their algorithmic servers. The data suggest there is a difference between the algorithms of various brokers with respect to both trading cost results and the risk associated with achieving those results. This points to the possibility that some broker algorithms may be better suited to specific order types and trading environments than others, and that users of algorithms are not yet identifying which broker algorithms are preferable in different situations.
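A consistency comparison of this kind can be assembled from order-level records. The sketch below assumes a hypothetical DataFrame with broker, volatility-regime, ADV-bucket and cost columns; the grouping mirrors the construction of Exhibits 1 and 2.

```python
import pandas as pd

def broker_consistency(orders: pd.DataFrame) -> pd.DataFrame:
    """orders columns (assumed): broker, vol_regime, adv_bucket, cost_bps.
    Returns mean cost, cost dispersion and a within-cell consistency rank
    (1 = tightest distribution of outcomes) for each broker."""
    g = orders.groupby(["vol_regime", "adv_bucket", "broker"])["cost_bps"]
    out = g.agg(mean_cost="mean", risk="std", n="count").reset_index()
    out["risk_rank"] = (out.groupby(["vol_regime", "adv_bucket"])["risk"]
                           .rank(method="min"))
    return out
```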
Tying the pieces together

In order to develop a feedback loop that allows algorithm users to make informed choices and evaluate trade-offs between different broker algorithms, a body of historical performance data is required. Such data may include order information, market information, additional pre-trade analytics, and order characteristics such as the trade strategy, the broker and the actual algorithm employed. An example of the role of data inputs and of historical TCA measurement in the analysis of trade structure is illustrated in Exhibit 3.

The information flow begins with input from the execution management system (EMS). This includes the trading characteristics of individual names in the list, as well as submission times, performance benchmarks and any specific trade instructions, such as urgency levels. Historical broker performance is matched with transaction cost distributions across alternative algorithms. User-defined parameters and historical cost information are complemented by inputs from pre-trade analytics, including, as in our previous examples, relative order size and volatility information.

The result is information that may be used to judge trade-offs. A 'best fit' strategy recommendation is a place on the trade structure continuum. At this level, the recommendation might be as simple as, for example, 'unstructured liquidity search, based on a combination of pegging and discretion.'
Exhibit 3: Transaction cost inputs to strategy recommendations

Inputs from the EMS: symbol; side; shares; order submission time; performance benchmark; trade instructions.
Inputs from historical TCA: broker performance aggregates; cost distributions across alternative algorithms; median percentage of ADV.
Inputs from pre-trade data: relative order size; intra-day volatility; daily volatility.

→ Trade structure analysis via web services →

Outputs to the EMS: broker algorithm performance; 'best fit' strategy recommendation; projected cost; standard deviation of cost estimate; favourable and unfavourable indicators; summary market information.
More specific recommendations depend on the granularity of strategy information available by broker. The projected cost and risk of the general strategy are provided. Broker performance is matched against strategy recommendations, producing indicators of 'favourable' and 'unfavourable' broker choice for each strategy option.
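As a rough sketch of that trade-off judgement, the recommendation step can be caricatured as picking the candidate that minimises projected cost plus a risk charge. The candidate records, numbers and penalty form are assumptions for illustration; real engines optimise the full cost/risk/alpha frontier along the trade structure continuum.

```python
# Candidate (strategy, broker) pairs with projected cost and dispersion in
# bps, of the kind Exhibit 3 returns to the EMS. All numbers are invented.
candidates = [
    {"strategy": "VWAP",                     "broker": "Broker 3", "cost_bps": 5, "std_bps": 20},
    {"strategy": "Implementation shortfall", "broker": "Broker 4", "cost_bps": 3, "std_bps": 29},
    {"strategy": "Liquidity search",         "broker": "Broker 1", "cost_bps": 6, "std_bps": 31},
]

def best_fit(cands, risk_aversion=0.5):
    """Pick the pair minimising projected cost plus a risk charge."""
    return min(cands, key=lambda c: c["cost_bps"] + risk_aversion * c["std_bps"])

print(best_fit(candidates, risk_aversion=0.1)["broker"])   # cost-focused -> Broker 4
print(best_fit(candidates, risk_aversion=1.0)["broker"])   # risk-averse  -> Broker 3
```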
Real-time broker measurement and analysis of algorithmic trading costs

Although historical TCA is valuable for analysing performance trends, the immediacy of the trading environment often demands that information be readily available directly on the trade blotter. A fully structured approach integrates historical
trading cost information with real-time trading data to provide effective measurement, analysis and decision-support tools for broker and algorithm choice. This, in turn, provides its own short-term feedback loop of information. Integration of trading cost analysis tools into the trading blotter makes it possible to use real-time inputs to refine algorithmic strategy selection dynamically. Real-time inputs can be compared with historical norms for the stocks on the blotter, producing 'alerts' that signal deviations from historical pre-trade cost trends. It is then possible to run multiple hypothetical strategies, measuring estimated cost and risk at different confidence levels, and to determine the algorithmic strategy that provides the 'best fit' given current market conditions.

It should be noted that trade cost models do not actually specify the preferred strategy, although they may provide relatively clear guidance in that respect. It is the trader's decision which of the set of cost/risk/confidence-level trade-offs is most appropriate, given their appetite for price impact and opportunity cost risk. Once the 'best fit' strategy has been identified, it is possible to analyse
how different broker algorithms have performed historically for different stocks at different order sizes. Figure 2 illustrates how this type of information might be incorporated into a trade blotter. A tool of this type shows traders which brokers and algorithms have performed best for them on similar orders, displaying the historical cost and standard deviation of different algorithms for that stock and strategy. The intra-day trade cost results of the selected algorithmic strategy can also be displayed as executions occur, showing whether the strategy is meeting its benchmark or should be modified to adapt to current market conditions.
[Figure 2: Trade blotter window with historical broker algorithm costs]
Investment and the human element

Incorporating cost and risk controls into the algorithmic trading process is an investment. These controls and informational guidance must be tailored to fit a trading organisation's style and goals in terms of an overall portfolio of trading possibilities. Measurement and analytics integration are only the first steps. The investment consists, in part, in educating the business side involved in the investment process, not just the trading process, and in structuring systems which efficiently process data and feed information back to decision-making support tools.

It is easy to lose sight of the human element when speaking of algorithmic trading. In our minds, this would be a critical error. A structured approach to measurement and analysis in the algorithmic trading space is a piece of the overall portfolio management puzzle. It is an asset that is not easily replicated, even in a world of potentially commoditised strategies. Once put in place, the process can provide competitive advantage to managers across all phases of planning and execution, producing superior investment performance over time. ■
We thank Scott Baum of ITG for his insights with respect to trade structure. The information contained herein has been taken from trade and statistical services and other sources we deem reliable, but we do not represent that such information is accurate or complete, and it should not be relied upon as such. Any opinions expressed herein reflect our judgment at this date and are subject to change. All information, terms and pricing set forth herein are indicative, based on, among other things, market conditions at the time of writing, and are subject to change without notice. This report is for informational purposes and is neither an offer to sell nor a solicitation of an offer to buy any security or other financial instrument in any jurisdiction where such offer or solicitation would be illegal. No part of this report may be reproduced in any manner without permission. © 2005, ITG Inc. Member NASD, SIPC. All rights reserved. Compliance #72905-68726
Chapter 8
Making the most of third-party transaction analysis: the why, when, what and how?

How can independent third-party transaction cost analysis be used alongside other data streams to offer a complete view of algorithmic trading performance?

Robert Kay, managing director, GSCS Information Services
Algorithmic capabilities are now well established as an important and rapidly growing part of institutional trading. They offer an apparently low-cost way for buy-side clients to complete both straightforward and more complex trades without requiring the same level of support from personnel employed by executing brokers. The combination of low cost and 'high tech' has created an appeal that has overcome concerns about whether the algorithms deliver 'good execution', or even whether they deliver the results that they claim. This did not matter when utilisation was limited. However, with greater usage should come
greater scrutiny, and only independent research can realistically provide the comfort that market participants need.

Why independent measurement?

The goal of algorithmic trading is to achieve a specific execution result through the application of technology rather than reliance on human skills. Because the goal is specific and established in advance, some of the more naive providers and users of algorithms believe that evaluation of results is straightforward and independent assessment unnecessary. For these individuals, sophisticated analysis is rendered superfluous by the
belief that all the algorithms that produce the results are essentially indistinguishable. Those who think about the subject more deeply, however, whether on the buy-side or the sell-side, know that in fact all algorithms are different. By understanding just 'how different', as well as the financial impact that the differences have on trading results, it is possible to make a strong case in support of independent measurement of the performance of algorithms, of the kind provided by GSCS Information Services to its clients within its itero service.

In this, independent providers are simply following the approach adopted in most other areas of financial services, most obviously measurement of investment
performance. It is recognised that the combination of performance and marketing is what enables investment managers to be successful. Such is the importance of performance measurement to continued success that usually at least two institutions provide measures to the ultimate clients. First, the investment managers themselves assess their own investment performance. Second, an independent entity, usually a custodian or fund accountant, provides an objective third-party assessment. The ultimate clients rely on the latter as reflecting the true value of their portfolio and the fairest way to compare performance across different managers.

Similarly, when it comes to execution, whether algorithmic trading, program trading, direct market access or voice trading between buy-side and sell-side individuals, performance allied to commission levels is what determines success. In execution, performance is now becoming the most critical determinant of where orders flow, as transparency reduces the relevance of research and other qualitative factors in the execution decision. As the importance of performance grows, so does the value of and need for independent assessment as a supplement to, and validation of, the information that the
individual executing broker may provide. There are two particular values of an independent assessment. The first is the fact that it is genuinely independent. While most brokers provide accurate comparisons with different performance benchmarks, it is clear that some do not provide any analysis at all, while others use their own benchmarks, which may or may not exactly match the manager's or those used by other brokers. An independent provider can offer a comprehensive service and has no interest in under- or overstating execution results. The second value is that, in terms of the approach to measuring execution results (benchmark definition, market data sources, etc.) and of reporting layout, style and analytical approach, an independent provider can offer consistency as well as a focus on accuracy and peer-group comparison. This makes analysis both easier for the manager and fair, since comparisons across brokers have been generated consistently.

Just as managers would like clients to rely on their valuations, so brokers would like managers to rely on their presentation of execution results from algorithmic trading. Just as managers are frustrated by independent analysis of performance which does not
match their own, so brokers are concerned that independent review of algorithmic trading will suggest that results do not match expectations, for technical reasons which independent analysis does not account for. It is of course quite reasonable to raise these kinds of concerns. However, in the long run none of them will prevail, for the simple reason that evaluation of execution performance generally, and of algorithmic trading in particular, is too important to be left to parties who have clear conflicts of interest and economic incentives to produce the 'right result'.

When and what to measure?

Algorithms, unlike simple trades, have two features that a buy-side trader is interested in.
Table 1: Multi-benchmark report – by trading strategy
Trade date from 10/1/04 to 31/12/04. Currency: euros.

Strategy         # Trades   Value           Av value/trade   Res comm    Res comm %
Algo Interval    179        85,376,572      476,964          51,225      0.06
Algo Close       97         42,378,578      436,892          25,427      0.06
Algo Arrival     550        254,886,258     463,430          152,932     0.06
Direct market    118        18,952,489      160,614          5,685       0.03
Program          3,141      720,723,360     360,362          1,191,851   0.05
Voice trading    15,486     3,679,374,675   237,594          4,985,592   0.14
Totals           19,571     4,801,691,932   –                6,177,443   0.13

Strategy         Ex value 1    EV 1 %   Ex value 2            EV 2 %   Ex value 3    EV 3 %   Ex value 4   EV 4 %
                 (Int VWAP)             (Arrival Time Price)           (TD Close)             (Av VWAP)
Algo Interval    8,962         0.01     n/a                   –        n/a           –        n/a          –
Algo Close       n/a           –        n/a                   –        1,031         0.02     n/a          –
Algo Arrival     n/a           –        525,001               0.21     n/a           –        n/a          –
Direct market    -4,189        -0.02    17,531                0.09     -20,757       -0.11    6,364        0.03
Program          -39,992       -0.01    2,543,719             0.35     -958,197      -0.13    -50,159      -0.01
Voice trading    481,283       0.01     -4,110,460            -0.11    3,297,812     0.09     660,769      0.02
Totals           441,291       0.01     -1,566,741            -0.03    2,339,615     0.05     610,610      0.01
The first is the outcome in terms of the execution price achieved on actual transactions. The second is the way in which the algorithm achieves the intended outcome.

Measuring the outcome on individual trades is no more or less complex for algorithmic trades than for any other kind. By definition, measurement can only take place after the client has used the algorithm to execute one, or preferably a series, of trades with the broker. What is different, however, is that in the case of algorithmic trades the intended outcome is known in advance, and hence the relevant benchmark cannot be a source of disagreement. For example, if the target on some trades is to achieve 'interval VWAP' while on others 'Arrival Price' is the benchmark, then the analysis can easily be tailored within the GSCS itero service database to distinguish between
the different types of trade and to use a different benchmark for each. Obviously in these situations the notion of a 'single right benchmark' for all trades makes no sense, as brokers and managers would undoubtedly agree. Table 1 shows the kind of report that GSCS produces for clients based on different benchmarks for different execution strategies.
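A report of this kind can be computed once each trade is tagged with the benchmark its strategy targets. The sketch below is illustrative only: the column names and the sign convention (positive 'ex value' = outperformance) are our own assumptions.

```python
import pandas as pd

def multi_benchmark_report(trades: pd.DataFrame) -> pd.DataFrame:
    """trades columns (assumed): strategy, value, side (+1 buy / -1 sell),
    avg_price, bench_price (the benchmark that trade's strategy targets).
    Returns per-strategy counts, traded value and the signed gain
    ('ex value') versus each trade's own benchmark, as in Table 1."""
    t = trades.copy()
    t["ex_value"] = (t["side"] * (t["bench_price"] - t["avg_price"])
                     / t["bench_price"] * t["value"])
    rep = t.groupby("strategy").agg(n_trades=("value", "size"),
                                    value=("value", "sum"),
                                    ex_value=("ex_value", "sum"))
    rep["ex_value_pct"] = 100.0 * rep["ex_value"] / rep["value"]
    return rep
```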
The second measure is arguably more important, as well as being much more sensitive: to what extent do clients want, or does their business merit, involvement with how each algorithm actually works? Most providers of algorithmic trading offer a series of more or less standard algorithmic outcomes: VWAP (Volume-Weighted Average Price within some period); TWAP (Time-Weighted Average Price, again within some agreed period); Arrival Time Price; and Closing Price. However, quite clearly the algorithms do not operate in exactly the same way. Indeed, the brokers offering algorithms emphasise the superiority of their particular algorithms, a superiority based on the amount of money spent and the results achieved within the firm when it has used the algorithms to support proprietary trading, and usually stated in terms of consistency in achieving the desired outcome.

From a client perspective it is reasonable to believe that the algorithms used by each broker will be similar structurally; for example, they will break down large orders into a series of much smaller trades to be executed electronically. However, at the detail level (e.g. how many smaller orders, of what minimum/maximum size, traded
at what precise frequency) they will operate differently. Brokers do not want to give away the intellectual property contained within their specific algorithms, but need to be able to justify why they are able to achieve superior performance. This offers the opportunity for brokers to make maximum use of independent cost-analysis consultants like GSCS. Actual trades, non-client-specific of course, can be provided to an independent service together with the precise nature of how the algorithm should have operated (i.e., exactly what partial fills were completed at what times to achieve the outcome). This data can easily be verified by GSCS to confirm what the prices were when each particular 'partial fill' was completed.
Table 2: Trade breakdown

Summary, 09:40 to 11:20
Av price    £6.3852
Adj VWAP    £6.3848
Std VWAP    £6.3861

Detail (traded prices in pence)
Time of trade   Traded price   Shares traded   Cost (£)
9:40:14         638.50         20,000          127,700.00
9:42:13         638.00         19,000          121,220.00
9:43:25         638.00         21,000          133,980.00
9:46:02         637.50         21,000          133,875.00
9:50:45         638.00         38,400          244,992.00
9:51:56         638.50         19,700          125,784.50
9:54:46         638.50         21,300          136,000.50
9:58:34         638.00         33,500          213,730.00
10:00:03        638.50         32,600          208,151.00
10:01:58        638.50         19,000          121,315.00
10:03:56        639.00         18,650          119,173.50
10:06:45        639.50         19,600          125,342.00
10:08:34        639.50         19,600          125,342.00
10:10:00        639.50         21,400          136,853.00
10:12:54        640.00         17,600          112,640.00
10:13:34        639.50         19,200          122,784.00
10:16:45        639.00         19,800          126,522.00
10:17:43        639.00         17,500          111,825.00
10:20:23        639.50         15,460          98,866.70
10:21:56        639.00         23,450          149,845.50
10:26:00        638.50         41,200          263,062.00
10:27:48        638.50         18,750          119,718.75
10:30:01        638.50         18,750          119,718.75
10:33:15        638.00         28,540          182,085.20
10:38:04        638.50         42,140          269,063.90
10:39:56        638.50         43,120          275,321.20
10:44:17        638.00         23,450          149,611.00
10:45:57        638.50         21,670          138,362.95
10:48:02        638.50         22,340          142,640.90
10:49:45        638.00         21,900          139,722.00
10:52:45        638.50         18,750          119,718.75
10:56:58        639.00         28,950          184,990.50
10:57:34        638.50         33,250          212,301.25
11:00:01        638.00         21,760          138,828.80
11:01:23        638.50         18,500          118,122.50
11:05:14        639.00         25,600          163,584.00
11:08:14        638.50         28,600          182,611.00
11:12:14        638.00         29,540          188,465.20
11:13:29        638.13         19,740          125,966.86
11:16:04        638.00         18,560          118,412.80
11:17:45        638.18         17,540          111,936.77
11:19:57        638.15         19,590          125,013.59
The independent 'imprimatur' can then be offered to the broker, establishing that not only does the algorithm produce the intended outcome, but that it does so using the methodology the broker says it is using. This analysis and imprimatur can be made available before any trades are completed for a specific client. An example of the kind of reporting is shown in Table 2.
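The arithmetic behind such a check is straightforward. The sketch below recomputes the average achieved price from the first few partial fills in Table 2 and compares it with the interval VWAP from the summary; in practice the VWAP figure would be recomputed from independently sourced market data.

```python
def average_fill_price(fills):
    """fills: (time, price_in_pence, shares) partial executions."""
    total_shares = sum(shares for _, _, shares in fills)
    return sum(price * shares for _, price, shares in fills) / total_shares

# First three partial fills from Table 2
fills = [("09:40:14", 638.50, 20_000),
         ("09:42:13", 638.00, 19_000),
         ("09:43:25", 638.00, 21_000)]
std_vwap = 638.61   # the interval 'Std VWAP' from the Table 2 summary, in pence

avg = average_fill_price(fills)
print(f"average fill {avg:.2f}p vs interval VWAP {std_vwap:.2f}p")
```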
It remains in the brokers' hands whether, or to what extent, they share the precise details of the functioning of the algorithm with clients as a means of either winning new business or retaining existing clients in the face of competition. Independent verification can be applied to different algorithmic outcomes and across an array of different 'real life' situations, without causing any concern about client confidentiality.
In conclusion, therefore, both the actual outcome of algorithmic trading and the basis on which it is conducted can be measured – the former only after the event, but the latter in advance of any actual trading with a specific client. Any client approached by a broker offering algorithms who is unwilling to be independently assessed should be naturally suspicious, and predisposed not to consider them a suitable counterparty.

How to measure?

Unfortunately, in spite of the common terminology used by providers, not even the four 'standard' outcomes noted above can be defined to a universally agreed standard. For example, Bloomberg allows dozens of different ways to calculate VWAP, depending on what trades might be excluded from the calculation. In highly liquid securities the exclusions may make only a marginal difference to the VWAP calculation, but in some cases the changes deliver a noticeably different result. Similarly, when assessing the Arrival Time price, should it be the bid, ask or mid price, and should that determination depend on whether the trade is a buy or a sell, and on whether the market in the security is rising or falling?
Should any attempt be made to take account of the volume being traded at or around the Arrival Time, or should the last tick be the price chosen? All this assumes there are no errors in any of the data from market data vendors or exchange feeds. The important point for buy-side traders to bear in mind is that the proponents of algorithmic trading make the case that a particular result is 'assured' to within a very small error tolerance. It is suggested that this result will be achieved without the buy-side trader having to spend time and effort 'monitoring' the progress of the transaction; rather, they can leave it to the algorithm to deliver the chosen result. However, if the chosen result is not defined with sufficient precision, almost any outcome can be justified as having met the objective of using an algorithm. This is clearly unsatisfactory.
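The sensitivity is easy to demonstrate: the same interval of prints yields two different 'VWAPs' under two exclusion rules. The condition codes below are invented for illustration.

```python
def vwap(prints, exclude=frozenset()):
    """Volume-weighted average price over (price, size, condition) prints,
    skipping any print whose condition code is in `exclude`."""
    kept = [(p, s) for p, s, cond in prints if cond not in exclude]
    shares = sum(s for _, s in kept)
    if shares == 0:
        raise ValueError("no eligible prints in the interval")
    return sum(p * s for p, s in kept) / shares

prints = [(6.3850, 20_000, "auto"),
          (6.3800, 19_000, "auto"),
          (6.4100, 50_000, "block")]   # one large off-book print

print(f"{vwap(prints):.4f}")              # 6.3980, block print included
print(f"{vwap(prints, {'block'}):.4f}")   # 6.3826, block print excluded
```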
However, while there may be differences in the definitions of the benchmarks being targeted and in the way the programs actually operate, from a client perspective the important question is the extent to which this makes a difference to the outcome (i.e., the actual price achieved on the trade). As well as considering the difference on any particular trade, clients will also want to assess the extent to which 'outperformance' by an algorithm is consistent, or simply reflects the specifics of an individual trade.

In terms of impact on performance, most firms engaged in the business accept that differences in definitions can be meaningful in terms of outcomes. Data from within the GSCS universe, which covers millions of trades in thousands of different equity securities, suggest that the bid/offer spread is seldom less than 15 bps, and in some countries (not merely emerging markets) more than 50 bps. Clearly, defining the price target in precise terms is critical when measuring Arrival Time or Closing Price targets. Similarly, while minor changes to VWAP definitions may make only a few basis points' difference across daily VWAP data, the impact on VWAP during particular intervals of time may be much more substantial. Again, the GSCS universe of data confirms that variations of up to 50 bps should not
be regarded as uncommon, even though overall the figure is lower. In the context of commission savings of 10 bps or less for algorithmic trading, the potential loss of value caused by the application of an inappropriate 'benchmark' is clearly significant. To embark on algorithmic trading without a plan and process to monitor the outcome effectively is potentially costly to the buy-side firm.

What is important within any measurement system is consistency of approach across all brokers, an ability to 'tailor' analysis of the trades to suit the particular circumstances of the client, and an interactive approach that allows assessment methodologies to be tried and adjusted as and when necessary. An ability to compare algorithmic trading outcomes against those achieved by other managers using the same brokers and algorithms is an added bonus. Only independent analysis can offer the possibility of achieving these diverse but critical objectives. ■
Chapter 9
Enhancing market access

Integrating algorithmic trading strategies with the appropriate technology to optimise execution

Mark Muñoz, senior vice president, Corporate Development, Nexa Technologies, and Mark Ponthier, director – Engineering, Automated Trading Systems, Nexa Technologies
There are a number of different ways that traders engage with algorithms. In some – if not most – cases, they may simply send an order through to a broker on the understanding that the latter will run it through an appropriate algorithm. If an institutional trader simply wishes to achieve VWAP on the order, he is likely to leave the manipulation of the order to the service provider. If the trader wishes to adopt a more hands-on strategy for a particular trade, he may run the algorithm on his own desktop – perhaps using a third-party application – and then employ a broker's DMA capability or a FIX engine to reach the market. There are, however, traders for whom neither of these options is
appropriate. These traders build their own algorithmic trading engines to send orders for their own book. Their main requirement is simply to get their trading decisions to the appropriate destination in as short a time as possible. Such traders are also looking to absorb more market data than their peers. Real-time and historical data vendors have recognised this and now make more data readily available to the buy-side, allowing traders to hone their trading strategies by back-testing them against some defined set of criteria. In short, as traders have evolved in terms of their own sophistication, so has the trading technology, so have the exchanges, and so have supporting products, such as market
data. All of these components of the trading process have their own unique requirements, and each is somewhat dependent on the others. The trader depends on the technology provider for execution and data services, just as the technology provider depends on the exchange for a flexible and efficient architecture. Each player must be aware of the others' diverse requirements, many of which are based on high volume, time-sensitivity and high reliability. Understanding how each player came into the automated trading arena provides the information needed to deliver on current requirements, as well as to gauge how each player needs to grow and adapt.

Trading styles

At the 'high end' of the community in terms of technology are the so-called black-box traders. There
is another set of traders who might be called grey-box traders: they let their models run, but like to intervene on occasion. They may range from institutions with stand-alone portfolio management systems to individual day traders using Excel to drive their trading. What unites this otherwise diverse group is a lack of reliance on traditional sell-side providers for stock recommendations or, indeed, for execution.

Many of these traders do not want to depend on a third party for any aspect of their investment or trading strategies, which are integrally linked. Everything they do is proprietary. They are aware that the moment their strategy gets out in the market, the moment they capitalise on a particular opportunity, it is gone. They will therefore not send their orders through a broker, as they believe they have to remain proprietary throughout the entire process.

These traders are working in milliseconds. By looking at market data and processing it in real time, they notice inefficiencies that occur for a fraction of a second, and their goal is to take advantage of them. To do so, they have to get to the market exceptionally quickly. It is not unheard of for such traders to
blast through 2,000 to 4,000 orders in the second after the opening bell. Latency is therefore a crucial consideration. Some will go so far as to co-locate their trading technology as close to the exchange as possible. If, for example, you are in Vancouver and trying to reach the Toronto exchange through an automated trading system, you may be looking at 40–100 milliseconds; such traders will look to cut that down to 12–15 milliseconds by relocating their trading technology (i.e., the servers running the trading algorithms) to Toronto.

Back to basics

To capitalise on the opportunities that only technological sophistication can allow, these traders face two key issues. The first is throughput: how many orders can I put through my system – or the one I choose to use – at any one time, in a way that will process them, vet them, and get them to the exchange and back to me? There is a real question of scale to be addressed. The second issue is time to market: how fast can I get the order to the exchange? These two issues are really inseparable: it doesn't pay to have a system that can get an order to an exchange in 15 milliseconds but handle only one order at a time. A crude probe of both quantities is sketched below.
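In the sketch, `send_order` stands in for a real, blocking gateway call; both the name and the measurement approach are assumptions for illustration.

```python
import time

def probe(send_order, orders):
    """Measure sustained throughput and worst-case round-trip latency of a
    blocking order-submission callable."""
    start = time.perf_counter()
    worst = 0.0
    for order in orders:
        t0 = time.perf_counter()
        send_order(order)                 # blocking round trip to the venue
        worst = max(worst, time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    print(f"{len(orders) / elapsed:,.0f} orders/s, "
          f"worst round trip {worst * 1e3:.1f} ms")
```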
There are, for example, a number of trading systems that arbitrage between exchanges: they watch the market data move in one particular direction, and must then make trading decisions, and execute the resulting trades, within a fraction of a second.

Technology choices

Algorithmic trading strategies cannot therefore be assessed in isolation, without considering the technology that will be used to get the trades to where they are going to be executed. Here, proprietary traders hit a potential obstacle. In order for any automated trading system to execute efficiently, it must be able to execute on multiple exchanges. Given the latency issue, this means multiple lines, which is a huge financial and technical burden, and not one that speaks to a trader's critical skill. In addition, the complexity of connectivity to
those different exchanges can be overwhelming. Traders therefore need to consider the option of an aggregator, a FIX hub, which allows them to reach their chosen destinations through a single pipe.

If I am a buy-side trader running my strategy on my own desktop, why not simply use the DMA capability of one of my existing brokers to get to the market? That does indeed happen, but it raises again the question of throughput: is that DMA system designed to handle diverse levels of order flow? If, for instance, a trader were to route 3,000 orders at one time through a broker's DMA service, the chances are that they would soon receive a call
from the broker concerned! Most DMA services may well be excellent at handling a limited number of orders issuing from a trader's front-end system. But can they deal with thousands of orders being blasted simultaneously through the system, not through a front-end terminal? By definition, these trades do not require the traditional expertise of the broker to work the order. The traders originating them simply want to go directly to the market. They do not want any further manipulation at that point; they have already determined how, when and where they want to go.

Strategic choices

If latency and throughput are such key issues – and these require proprietary control of the end-to-end process – how can traders ensure that they are deploying the best available algorithmic strategies? After all, brokers have invested significant resources and intellect in developing and refining their algorithms. The answer is that it is not an 'all-or-nothing' scenario. It is important that any trader running their own trading system should have access to a full range of destinations, including their traditional broker. If, for example, a trader is attempting to arbitrage
between two exchanges and their trading engine determines that they need to buy 100,000 shares of Microsoft, the system needs to be able to assess instantly that the best way to get the best price on that order is to route it to a large broker for manipulation by their algorithmic engine. Alternatively, it may go out and buy the 100,000 shares directly on an ECN, because it sees the volume and the price level that the trader wants. Yet again, it may route an order to an electronic exchange, but through a large broker. The key point is that the system, rather than the end user, is making these determinations according to a pre-defined, built-in algorithm of its own. In some cases, the trader may wish to override that, but the option of full automation must be there.
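A toy version of such a built-in routing rule follows. The order fields, the 5%-of-ADV threshold and the venue labels are invented; a production router would weigh displayed liquidity, fees and expected impact across many destinations.

```python
def route(order, displayed_ecn_size, adv):
    """Choose a destination from pre-defined rules, without trader
    intervention (an override hook could sit above this)."""
    if displayed_ecn_size >= order["shares"]:
        return "ECN"                  # the size is already displayed at the price
    if order["shares"] / adv > 0.05:
        return "broker_algorithm"     # large order: hand it to a broker engine
    return "exchange_via_dma"         # small order: straight to the market

order = {"symbol": "MSFT", "side": "buy", "shares": 100_000}
print(route(order, displayed_ecn_size=120_000, adv=60_000_000))   # -> ECN
```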
A typical black-box group, for example, will have a mathematician who understands the market and has defined the algorithms, a developer who has actually coded the strategies into the system, and IT experts to support the hardware, connections and other technical details. If these systems are not tightly managed in all these aspects, they can end up costing the investor hundreds of thousands or even millions of dollars. But at the same time, if they are finding inefficiencies in the market that exist only for milliseconds, and are able to act on them, they can make hundreds of thousands, if not millions, of dollars for their owners. ■