STATISTICAL ARBITRAGE MODELS OF THE FTSE 100

A. N. BURGESS
Department of Decision Science, London Business School
Sussex Place, Regents Park, London, NW1 4SA, UK
E-mail: [email protected]

In this paper we describe a set of statistical arbitrage models which exploit relative value relationships amongst the constituents of the FTSE 100. Rather than estimating cointegration vectors of high dimensionality, a stepwise regression approach is used to identify the most appropriate subspace for the stochastic detrending of each individual equity price. A Monte Carlo simulation is used to identify the empirical distribution of the Variance Ratio profile of the regression residuals, under the null hypothesis of random walk behaviour. Both a chi-squared test on the joint distribution of the Variance Ratio profile, and additional tests based on its eigenvectors, indicate that as a whole the stochastically detrended stock prices deviate significantly from random walk behaviour and hence may contain predictable components. A combined cross-sectional and time-series model indicates that the relative "mispricing" of the equities tends to trend in the short term and revert in the longer term. The out-of-sample performance of the models is consistently profitable using a simple trading rule, with the combined portfolio suggesting a possible annualised Sharpe Ratio of over 7 for a trader with costs of 50 basis points. Furthermore, information derived from the in-sample variance ratio profile is shown to be significantly correlated with the out-of-sample profitability of the individual models, suggesting that the performance may be improved further by modelling the time-series properties conditionally on such information.

1 Introduction

In many cases the volatility in asset returns is largely due to movements which are market-wide or even world-wide in nature rather than to specific characteristics of the particular asset; consequently there is a risk that this "market noise" will overshadow any predictable component of asset returns. A number of authors have recently suggested approaches which attempt to reduce this effect by suitably transforming the financial time-series. Lo and MacKinlay (1995) create "maximally predictable" portfolios of assets with respect to a particular information set. Bentz et al (1996) use a modelling framework in which prices are measured relative to the market as a whole, and returns are also calculated on this basis; this "detrending" typically removes 90% of the volatility of asset returns, consistent with the Capital Asset Pricing Model (CAPM) of finance theory. Burgess and Refenes (1996) use a cointegration framework in which FTSE returns are calculated relative to a portfolio of international equity indices, with the weightings of the portfolio given by the coefficients of the cointegrating regression. Steurer and Hann (1996) also adopt a cointegration framework, modelling exchange rates as short-term fluctuations around an "equilibrium" level dictated by monetary and financial fundamentals. This type of approach is characterised in general as "statistical arbitrage" in Burgess (1996), where a principal components analysis is used to create a Eurodollar portfolio which is insulated from shifts and tilts in the yield curve and optimally exposed to the third, "flex" component; the returns of this portfolio are found to be partly predictable using neural network methodology but not by linear techniques.

We define statistical arbitrage as a generalisation of traditional "zero-risk" arbitrage. Zero-risk arbitrage consists of constructing two combinations of assets with identical cash-flows and exploiting any discrepancies in the prices of the two equivalent assets. The portfolio Long(combination1) + Short(combination2) can be viewed as a synthetic asset, of which any price-deviation from zero represents a "mispricing" and a potential risk-free profit [1]. In statistical arbitrage we again construct synthetic assets in which any deviation of the price from zero is still seen as a "mispricing", but this time in the statistical sense of having a predictable component to the price-dynamics. Our methodology for exploiting statistical arbitrage consists of three stages:

• constructing "synthetic assets" and testing for predictability in the price-dynamics
• modelling the error-correction mechanism between relative prices
• implementing a trading system to exploit the predictable component of asset returns

[1] Subject to transaction costs, bid-ask spreads and price slippage.

In this paper we adopt an approach to statistical arbitrage which is essentially a generalisation of the econometric concept of cointegration. We modify the standard cointegration methodology in two main ways: firstly, we replace the cointegration tests for stationarity with more powerful variance ratio tests for "predictability"; secondly, we construct the cointegrating regressions by a stepwise approach rather than the standard regression or principal components methodologies which are found in the literature. These two innovations are easily motivated. Firstly, variance ratio tests are more powerful against a wide range of alternative hypotheses than are standard cointegration tests for stationarity, and hence are more appropriate for identifying statistical arbitrage opportunities. Secondly, the high dimensionality of the problem space (approximately 100 constituents of the FTSE 100 index) necessitates a methodology for reducing the models to a manageable (and tradable!) complexity in a systematic and principled manner, for which the "subset" approach of stepwise regression is ideally suited.

The predictive model is simply a linear error-correction model using the cointegration residuals (asset "mispricings") and lagged relative returns to forecast future relative returns on a one-day-ahead basis. The trading system described in this paper is very simple, taking offsetting long and short positions which are proportional to the forecasted relative return. For a discussion of more sophisticated trading rules for statistical arbitrage, see Towers and Burgess (1998, 1999).

The paper is organised as follows. Section 2 describes the stepwise cointegration methodology and the Monte Carlo experiments to determine the distribution of the variance ratio profile under the null hypothesis that the variables are all random walks. Section 3 describes the tests for predictability which are based on the variance ratio analysis, and the results of applying these tests to the statistical "mispricings" obtained from the stepwise regressions. Section 4 describes the time-series model for forecasting changes in the mispricings, and section 5 analyses the out-of-sample performance of this model. Section 6 explores the relationship between the characteristics of the variance ratio for a given mispricing and the profitability of the associated statistical arbitrage model. Finally, a discussion and brief conclusions are presented in section 7.

2 Distribution of the Variance Ratio profile of stepwise regression residuals

Our methodology for creating statistical arbitrage models is based on the econometric concept of cointegration. Cointegration can be formally defined as follows: if a set of variables y are integrated of order d (i.e. must be differenced d times before becoming stationary) and the residuals of the cointegrating regression are integrated of order d - b where b > 0, then the time-series are said to be cointegrated of order (d, b); i.e. if each y_i is I(d) and ε_t is I(d - b), b > 0, then y ~ CI(d, b).

The most common and useful form of cointegration is CI(1,1), where the original series are random walks and the residuals of the regression are stationary according to a "unit root" test such as the Dickey-Fuller (DF) or Augmented Dickey-Fuller (ADF) tests suggested by Engle and Granger (1987), or the cointegrating regression Durbin-Watson (CRDW) test proposed by Sargan and Bhargava (1983). Tests based on a principal components or canonical correlation approach have been developed by Johansen (1988) and Phillips and Ouliaris (1988), amongst others. In our case, however, the data consist of 93 constituents [2] of the FTSE 100 together with the index itself, giving a dimensionality of 94, much higher than is normal for cointegration analysis, and large relative to the sample size of 400 (see section 3 for a description of the data).

[2] The remaining FTSE constituents were excluded from the analysis due to insufficient historical data being available (e.g. for newly quoted stocks such as the Halifax building society).

In order to reduce the dimensionality of the problem we decided to identify relationships between relatively small subsets of the data. There remains the problem of identifying the most appropriate subsets to form the basis of the statistical arbitrage models. In order to ensure a reasonable span of the entire space, we decided to use each asset in turn as the dependent variable of a cointegrating regression. To identify the most appropriate subspace for the cointegrating vector we use a stepwise regression methodology in place of the standard "enter all variables" approach.

Before moving on to analyse these models further, we will describe the basis of the "Variance Ratio" methodology which we use to test for potential predictability. The variance ratio test follows from the fact that the variance of the innovations in a random walk series grows linearly with the period over which the increments are measured. Thus the variance of innovations calculated over a period τ should approximately equal τ times the variance of single-period innovations. The basic VR(τ) statistic is thus:

$$ VR(\tau) = \frac{\sum_t \left( \Delta_\tau d_t - \overline{\Delta_\tau d} \right)^2}{\tau \sum_t \left( \Delta d_t - \overline{\Delta d} \right)^2} \qquad (1) $$

where $\Delta_\tau d_t$ denotes the τ-period difference of the series and $\Delta d_t$ the single-period difference.
The variance ratio is thus a function of the period τ. For a random walk the variance ratio will be close to 1, and this property has been used as the basis of statistical tests for deviations from random walk behaviour by a number of authors since Lo and MacKinlay (1988) and Cochrane (1988). Rather than testing individual VR statistics, we prefer to test the variance ratio profile as a whole, firstly because there is no a priori "best" period for the comparison and secondly because it can summarise the dynamic properties of the time series: a positive gradient to the variance ratio function (VRF) indicates positive autocorrelation and hence trending behaviour; conversely, a negative gradient to the VRF indicates negative autocorrelation and mean-reverting or cyclical behaviour. Figure 1, below, shows the VRFs for the Dax and Cac indices together with the VRF for the relative value of the two indices.

[Figure 1: the Variance Ratio profiles of the Dax and Cac indices individually and in relative terms, together with the flat profile of a random walk (RW). The x axis is the period over which asset returns are calculated (in days); the y axis is the normalised variance of the returns. In this case, the fact that the relative price deviates further from random-walk behaviour suggests that it may be easier to forecast than the individual series.]

The usefulness of the variance ratio profile lies in the fact that it indicates the degree to which a time-series departs from random walk behaviour, which may be taken as a measure of its potential predictability. This is unlike standard tests for cointegration, which are concerned with the related but different issue of stationarity: a series may be nonstationary yet still contain a significant predictable component, so the variance ratio will identify a wider range of opportunities than the more restrictive approach of testing for stationarity. For both the Dax and the Cac the VRFs fall below 1, suggesting a certain degree of predictability, even though both series are nonstationary. Note also that the VRF for the relative price series is consistently below those of the individual series, indicating that the relative price exhibits a greater degree of potential predictability than either of the individual assets.
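To make the calculation concrete, a minimal sketch of the variance ratio profile computation is shown below. This is our own illustration rather than code from the original study; the function name is hypothetical, and the use of overlapping differences without a small-sample bias correction is an assumption consistent with equation (1).

```python
import numpy as np

def variance_ratio_profile(x, max_tau=50):
    """Variance Ratio profile VR(tau), tau = 1..max_tau, of a level series x
    (a price or a statistical mispricing), following equation (1): the variance
    of tau-period differences divided by tau times the variance of one-period
    differences."""
    x = np.asarray(x, dtype=float)
    d1 = np.diff(x)                                # one-period differences
    var1 = np.mean((d1 - d1.mean()) ** 2)
    profile = np.empty(max_tau)
    for tau in range(1, max_tau + 1):
        dtau = x[tau:] - x[:-tau]                  # overlapping tau-period differences
        profile[tau - 1] = np.mean((dtau - dtau.mean()) ** 2) / (tau * var1)
    return profile                                 # profile[0] = VR(1) = 1 by construction

# e.g. for a pure random walk the profile should stay close to 1 at all horizons
rw = np.cumsum(np.random.default_rng(1).standard_normal(400))
print(variance_ratio_profile(rw, max_tau=50)[:5])
```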

A problem with using the Variance Ratio test in conjunction with a cointegration methodology is that the residuals of a cointegrating regression (even when the variables are random walks) will not behave entirely as a random walk; for instance, they are forced, by construction, to be zero mean. More importantly, the regression induces a certain amount of spurious "mean-reversion" in the residuals, and the impact of this on the distribution of the VR function must be taken into account. In our case there is one further complication in that we are using stepwise regression, and hence the selection bias inherent in choosing m out of n > m regressors must also be accounted for. This is akin to the "data snooping" issue highlighted by Lo and MacKinlay (1990).

We thus performed a Monte Carlo simulation to identify the joint distribution of the variance ratio profile under the null hypothesis of regressing random walk variables on other random walks (i.e. no predictable component), accounting in particular for the impact of (a) the mean-reversion induced by the regression itself, and (b) the selection bias introduced by the use of the stepwise procedure. The distribution was calculated from 1000 simulations; in each case the parameters of the simulation match those of the subsequent statistical arbitrage modelling: namely, a 400 period realisation of a random walk is regressed upon 5 similarly generated series from a set of 93 using a forward stepwise selection procedure, and the variance ratio profile is calculated from the residuals of the regression [3]. The variance ratio is calculated for returns varying from one period up to fifty periods. Note, however, that by construction the value of VR(1) can only take the value 1. From these 1000 simulations, both the average variance ratio profile and the covariance matrix of deviations from this profile were calculated. As we are interested in the "shape" of the VR profile we also conducted a principal component analysis to characterise the structure of the deviations from the average profile. The scree plot of the normalised eigenvalues is shown below:

[Figure 2: the scree plot of normalised eigenvalues for the covariance matrix of the variance ratio profile, together with the cumulative proportion of variability explained. The fact that almost the entire variability can be represented by the first few factors (out of a total of 49) shows that deviations from the average profile tend to be highly structured and can be characterised by only a small number of parameters.]

The average profile and selected eigenvectors are shown in figure 3, below. The average profile shows a significant negative slope which would imply a high degree of mean reversion if this were a standard variance ratio test. In our case it merely represents an artefact of the regression methodology which can be taken as a "baseline" for comparing the variance ratio profiles of actual statistical "mispricings". Note also the highly structured nature of the eigenvectors, indicating that deviations from the average profile have a tendency to be correlated across wide regions of lag-space rather than showing up as a "spike" in the VR profile. The first eigenvector represents a low frequency deviation in which the variance is consistently higher than the average profile: patterns with a positive projection on this eigenvector will tend to be trending, whilst a negative projection will tend to indicate mean-reversion. The second eigenvector has a higher "frequency" and characterises profiles which mean-revert in the short term and trend in the longer term (or vice versa). Similarly, the third eigenvector represents a pattern of trend-revert-trend. The higher-order eigenvectors (not shown in the figure) tend to follow this move towards higher frequency deviations. The fact that the associated eigenvalues are large only for the first few components tells us that the residuals derived from random walk time-series tend to deviate from the average profile only in very simple ways, as represented by the low-order eigenvectors shown in the diagram.

[3] Clearly it would be straightforward to repeat the procedure for other experimental parameters (sample size, number of variables, etc.), but the huge number of possible combinations leads towards recalibrating only for particular experiments rather than attempting to tabulate all possible conditional distributions.

[Figure 3: Variance Ratio profiles for the average residual of a regression on simulated random-walk data, together with the characteristic deviations from the average profile as represented by selected eigenvectors (EigenVec1 to EigenVec3).]

3 Analysis of Variance Ratio profiles of statistical "mispricings" of FTSE 100 stocks

Given the average profile and covariance matrix of the profile under the null hypothesis of random walk behaviour, we can test the residuals of actual statistical arbitrage models for significant deviations from these profiles. The data used consist of daily closing prices of the FTSE 100 and 93 of its constituent stocks. The prices were obtained from the Reuters TS1 database and in total consist of 500 observations from 13 June 1996 to 13 May 1998. Of these, 400 observations were used to estimate the cointegrating regressions and the final 100 observations were reserved for the purposes of out-of-sample evaluation. Each asset in turn was used as the dependent variable in a stepwise regression, with a constant term and five regressors selected from the possible 93, and the VR profile of the resulting statistical mispricing was tested for potential predictability in the form of deviation from random walk behaviour.

Two types of test were used. The first treats the distribution of the VR profile as multivariate normal and measures the Mahalanobis distance of the observed profile from the average profile under the null hypothesis; this approach to joint testing of VR statistics has previously been used by Eckbo and Liu (1993), and it is easy to show that the test statistic should follow a chi-squared distribution with degrees of freedom equal to the dimensionality of the test. The second set of tests is designed to identify different types of deviation from the average profile and is based on the projection of the deviation onto the different eigenvectors; under the null hypothesis these statistics should follow a standard normal distribution. Figure 4, below, shows Variance Ratio profiles of the mispricings for selected statistical arbitrage models:

[Figure 4: Selected variance ratio profiles for statistical mispricings obtained through stepwise regression of each asset on the remaining assets in the FTSE 100 universe: Model 1 (FTSE+), Model 2 (ABF+), Model 76 (SDRt+), Model 87 (ULVR+).]
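Before turning to the results, the two tests might be computed along the following lines. This is a sketch under our own assumptions: in particular, the paper does not state how the eigenvector projections are standardised, so here each projection is divided by the square root of its variance under the null so that it is approximately standard normal.

```python
import numpy as np
from scipy import stats

def vr_profile_tests(profile, mean_profile, cov, eigvecs, n_eig=5):
    """Joint chi-squared test (Mahalanobis distance from the simulated null mean)
    and component-wise z tests (projections onto the leading eigenvectors) of a
    single VR profile; VR(1) is dropped, as in the calibration."""
    dev = np.asarray(profile)[1:] - np.asarray(mean_profile)[1:]
    chi2_stat = dev @ np.linalg.solve(cov, dev)        # Mahalanobis distance
    chi2_p = stats.chi2.sf(chi2_stat, df=len(dev))     # dof = dimensionality of the test
    V = eigvecs[:, :n_eig]                             # leading eigenvectors
    lam = np.array([v @ cov @ v for v in V.T])         # their variances under the null
    z_scores = (V.T @ dev) / np.sqrt(lam)              # approximately N(0, 1) under the null
    return chi2_stat, chi2_p, z_scores
```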

The test results are shown in the tables below; in order to account for deviations from (multivariate) normality, we report not only the nominal size of each test but also its empirical size, calculated both from the calibration data of the original simulation and from a test set generated by a second, similar but independent simulation. Eigenvectors derived from both the correlation and the covariance matrices are used in the analysis.

        Chi-sq  EigCov1 EigCov2 EigCov3 EigCov4 EigCov5 EigCor1 EigCor2 EigCor3 EigCor4 EigCor5
Cal      1.8%    1.6%    1.4%    1.4%    0.9%    1.4%    1.7%    1.1%    1.7%    1.5%    1.2%
Test     4.3%    1.2%    0.9%    1.8%    1.3%    1.2%    1.3%    0.9%    1.3%    1.3%    1.6%
Model   36.2%    8.5%    1.1%    2.1%    3.2%    3.2%    8.5%    4.3%    3.2%    4.3%    8.5%

Table 1: Comparison of VR tests for random-walk simulations (calibration and test sets) and actual mispricings; nominal size of test = 1%

        Chi-sq  EigCov1 EigCov2 EigCov3 EigCov4 EigCov5 EigCor1 EigCor2 EigCor3 EigCor4 EigCor5
Cal      6.6%    4.5%    5.1%    4.8%    5.2%    6.0%    4.7%    5.8%    4.0%    4.1%    4.8%
Test     9.9%    3.9%    5.5%    4.6%    4.8%    5.4%    4.1%    4.2%    4.3%    5.6%    6.2%
Model   53.2%   11.7%    8.5%    7.4%   12.8%   11.7%   11.7%    9.6%    8.5%   14.9%   13.8%

Table 2: Comparison of VR tests for random-walk simulations (calibration and test sets) and actual mispricings; nominal size of test = 5%

        Chi-sq  EigCov1 EigCov2 EigCov3 EigCov4 EigCov5 EigCor1 EigCor2 EigCor3 EigCor4 EigCor5
Cal     11.7%    8.7%    9.8%    9.5%   10.6%   10.5%    8.3%   10.0%    8.4%    9.4%    9.7%
Test    14.5%    8.2%   10.5%    9.3%   10.9%   10.7%    7.5%   10.1%    8.5%   12.1%   10.4%
Model   59.6%   20.2%   13.8%   14.9%   18.1%   18.1%   19.1%   14.9%   16.0%   19.1%   23.4%

Table 3: Comparison of VR tests for random-walk simulations (calibration and test sets) and actual mispricings; nominal size of test = 10%

The tests indicate that the mispricings of the statistical arbitrage models deviate significantly from the behaviour of the random data, suggesting the presence of potentially predictable deviations from randomness. The table below shows 'z' tests of the average scores of the true mispricings when compared to the simulated test data:

          Chi-sq  EigCov1 EigCov2 EigCov3 EigCov4 EigCov5 EigCor1 EigCor2 EigCor3 EigCor4 EigCor5
AveTest    50.99   -0.01   -0.01    0.00    0.01    0.00   -0.15    0.03    0.10    0.08   -0.07
VarTest   135.19    0.22    0.04    0.01    0.01    0.00   33.57    5.56    2.72    1.57    0.83
AveModel   70.79   -0.23    0.01    0.07   -0.03   -0.03   -2.34    0.32    1.02   -0.65    0.61
VarModel  676.34    0.41    0.04    0.02    0.01    0.00   62.67    7.45    3.83    1.86    1.06
z' stat      7.3    -3.2     0.5     4.0    -4.1    -5.0    -2.6     1.0     4.4    -5.0     6.3
p-value  0.00000 0.00129 0.61169 0.00006 0.00004 0.00000 0.00890 0.32816 0.00001 0.00000 0.00000

Table 4: Comparison of average values of the various VR tests for random-walk simulations and actual mispricings
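As an aside, the 'z' statistics in Table 4 correspond to a simple two-sample comparison of means, which might be sketched as follows. The model sample contains the 94 actual mispricings; we assume the independent test simulation also comprises 1000 runs (its size is not stated in the paper). With these inputs the first column works out to roughly 7.3, in line with the table.

```python
import numpy as np
from scipy import stats

def two_sample_z(ave_model, var_model, n_model, ave_test, var_test, n_test):
    """z test of the difference between the average VR score of the actual
    mispricings and that of the simulated test data (as reported in Table 4)."""
    z = (ave_model - ave_test) / np.sqrt(var_model / n_model + var_test / n_test)
    return z, 2.0 * stats.norm.sf(abs(z))              # two-sided p-value

z, p = two_sample_z(70.79, 676.34, 94, 50.99, 135.19, 1000)   # Chi-sq column of Table 4
```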

This result reinforces the finding that the actual mispricings deviate from random behaviour. In the next section we describe a forecasting model based on these mispricings.

4 Modelling the dynamics of the statistical mispricings

In this section we describe the error-correction model which forecasts one-day-ahead changes in the statistical mispricings of the FTSE 100 stocks. A single "pooled" model was estimated across the cross-section of 94 mispricing models and the sample period of 400 observations. In order to capture any "mean reversion" effects, the one-day-ahead changes in the mispricings were regressed on the current level of the mispricing:

$$ MIS_{s,t} = P_{s,t} - \left( \sum_{i=1}^{5} w_{s,i} P_{c(i,s),t} + c \right) \qquad (2) $$

where $P_{c(i,s),t}$ is the price of the i'th constituent asset selected for the model of stock s, and $w_{s,i}$ is the associated regression coefficient (portfolio weighting). The remaining independent variables were selected in order to capture the properties of different segments of the lag-space of mispricing dynamics and are of the form:

$$ L(n,m)_{s,t} = MIS_{s,t-n} - MIS_{s,t-m} \qquad (3) $$

with the resulting regression of the form:

$$ MIS_{s,t+1} - MIS_{s,t} = \alpha + \beta_0 MIS_{s,t} + \beta_1 L(0,1)_{s,t} + \beta_2 L(1,2)_{s,t} + \beta_3 L(2,5)_{s,t} + \beta_4 L(5,10)_{s,t} + \beta_5 L(10,20)_{s,t} + \varepsilon_{s,t+1} \qquad (4) $$
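A sketch of how the mispricing of equation (2), the lag variables of equation (3), and the pooled regression of equation (4) might be assembled is given below. This is our own illustration: the function and variable names are hypothetical and a plain least-squares fit is used.

```python
import numpy as np

def mispricing(price, constituent_prices, weights, intercept):
    """Equation (2): statistical mispricing as the deviation of the stock price
    from the fitted value of its stepwise cointegrating regression."""
    return price - (constituent_prices @ weights + intercept)

def ecm_design(mis, pairs=((0, 1), (1, 2), (2, 5), (5, 10), (10, 20))):
    """Build, for one mispricing series, the regressors MIS(t) and L(n, m) of
    equation (3) together with the one-day-ahead target of equation (4)."""
    T = len(mis)
    start = max(m for _, m in pairs)                   # first fully observable lag
    X, y = [], []
    for t in range(start, T - 1):
        X.append([mis[t]] + [mis[t - n] - mis[t - m] for n, m in pairs])
        y.append(mis[t + 1] - mis[t])                  # target: change in the mispricing
    return np.array(X), np.array(y)

def fit_pooled_ecm(mispricing_series):
    """Pool the design matrices of all mispricing series and fit equation (4) by
    ordinary least squares; returns [alpha, beta_0, beta_1, ..., beta_5]."""
    parts = [ecm_design(m) for m in mispricing_series]
    X = np.vstack([p[0] for p in parts])
    y = np.concatenate([p[1] for p in parts])
    X = np.column_stack([np.ones(len(X)), X])          # add the intercept term
    return np.linalg.lstsq(X, y, rcond=None)[0]
```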

In total, 94 × 400 = 37,600 observations were used to estimate the model, leaving 94 × 100 = 9,400 for out-of-sample evaluation. The regression output is shown below:

SUMMARY OUTPUT

Regression Statistics
  Multiple R           28.6%
  R Square              8.2%
  Adjusted R Square     8.2%
  Standard Error        0.016
  Observations          37600

ANOVA
                 df        SS       MS        F        Significance F
  Regression        6      0.83     0.14      559.02    0
  Residual      37593      9.31     0.0002
  Total         37599     10.14

              Coefficients  Standard Error   t Stat    P-value   Lower 95%   Upper 95%
  Intercept       0.000         0.0001        -2.05     0.0401      0.000       0.000
  MIS            -0.188         0.0043       -43.48     0.0000     -0.197      -0.180
  L1020           0.021         0.0027         7.99     0.0000      0.016       0.027
  L510            0.030         0.0036         8.24     0.0000      0.023       0.037
  L25             0.037         0.0043         8.76     0.0000      0.029       0.046
  L12             0.018         0.0060         2.96     0.0031      0.006       0.029
  L01             0.107         0.0060        17.97     0.0000      0.096       0.119

The model shows significant predictability in future changes of the statistical mispricings. This predictability derives from two sources: firstly, a short-term trend, as represented by the positive coefficients for the lagged difference terms L(n,m); and secondly, a long-term error-correction, as represented by the negative coefficient for the mispricing MIS. Given the size of the dataset from which the model was estimated, the results are all highly significant, and the adjusted R² suggests that the predictable component accounts for 8.2% of the total variability in the mispricings. In spite of this, it is unclear how much of this effect is spuriously induced by the cointegrating regression methodology which was used to generate the mispricings; the true test of the model is its out-of-sample performance, an evaluation of which is presented in the following section.

5 Performance Analysis

Firstly let us consider the aggregate performance which is achieved by averaging the cross-section performance of the models - this is equivalent to trading a portfolio with an equal weight in each of the individual statistical arbitrage models. The out-of-sample aggregate equity curve is shown in figure 5, below:

[Figure 5: Aggregate out-of-sample equity curve (cumulative profit against time in days), averaged across the performance of the 94 statistical arbitrage models, shown both without costs and with costs of 50bp.]

A set of performance metrics for the aggregate performance is reported in table 5 below:

               Profitable   Ave Ret   SD Ret   Ret (Annual)   SD (Annual)   Sharpe
No costs          85%        0.16%     0.14%      31.75%         2.03%       15.7
Costs = 50bp      67%        0.08%     0.14%      15.73%         2.02%        7.8

Table 5: Aggregate cross-section performance of the statistical arbitrage models. The first row shows performance excluding trading costs; the second row shows performance with trading costs assumed equal to 50 basis points (0.5%). The metrics are directional ability (percentage of periods in which profits are positive), daily and annualised return and risk (measured as standard deviation of return), and the Sharpe Ratio of annualised return to annualised risk.
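For reference, metrics of the kind reported in Table 5 can be computed from a daily P&L series roughly as follows. This is a sketch: the paper does not state how transaction costs are charged per trade or its exact annualisation convention, so costs proportional to daily turnover and a 250-day year are assumed here.

```python
import numpy as np

def performance_metrics(daily_returns, turnover=None, cost_per_unit=0.005, periods=250):
    """Directional ability, daily and annualised return/risk, and Sharpe Ratio of
    a daily return series; if turnover (fraction of the position traded each day)
    is supplied, costs of cost_per_unit (e.g. 50bp = 0.005) are deducted."""
    r = np.asarray(daily_returns, dtype=float)
    if turnover is not None:
        r = r - cost_per_unit * np.asarray(turnover, dtype=float)
    ave, sd = r.mean(), r.std(ddof=1)
    return {
        "profitable": (r > 0).mean(),                # fraction of profitable days
        "ave_ret": ave,
        "sd_ret": sd,
        "ret_annual": ave * periods,
        "sd_annual": sd * np.sqrt(periods),
        "sharpe": (ave * periods) / (sd * np.sqrt(periods)),
    }
```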

The trading performance suggests that the model is highly successful - the diversification across models means that on this aggregate level the strategy is profitable in 85% of the out-of-sample periods (falling to 67% when costs are included). After costs the annualised return is just over 15%, which is very satisfactory given that the trading is market neutral and could be overlaid on an underlying long position in the market. Alternatively, the Sharpe Ratio suggests that the returns are large when compared to the capital requirements of covering the associated risks, and that in this risk-adjusted sense the system is highly attractive. Note that the performance is highly sensitive to the assumed level of trading costs - one-way costs of 50bp reduce the return by half, with the break-even point lying close to transaction costs of 1%. From this perspective the usefulness of such a system is conditional on the circumstances of the user - whilst a bank may have costs as low as 10-20 basis points, the equivalent cost for an individual is likely to be over 1%, hence negating the information advantage provided by the model. The table below summarises the performance metrics of the individual models; the detailed results are presented in Appendix C.

       Correlation  Direction   Return    Risk   Sharpe  Direction(adj)  Return(adj)  Risk(adj)  Sharpe(adj)
Min       0.006        46%      -7.0%     5.8%    -0.3        26%          -38.0%       5.8%       -6.5
Max       0.386        66%     184.4%    67.6%     5.4        59%          160.3%      67.2%        4.0
Ave       0.224        56%      58.2%    21.6%     2.8        44%           25.3%      21.4%        1.0

Table 6: Summary of the performance metrics evaluated for the individual models. The table reports the min, max and average values of: predictive correlation (between actual and forecasted returns), directional forecasting ability, annualised return, risk and Sharpe Ratio, and the equivalent figures adjusted for transaction costs at a level of 50 basis points (0.5%). Note that the figures in a given row may be derived from different models.

The key feature of the results in table 6 is the wide range of performance across the individual models. Note that, after adjusting for transaction costs, the models are profitable in only 44% of the out-of-sample periods and yet still return positive profits, suggesting that the models are better at forecasting the larger moves. The average Sharpe Ratio of the models is only 1.0, but notice that by aggregating across the models the average return is unaffected whilst the average risk is significantly reduced. From this perspective, the improvement from a Sharpe Ratio of 1.0 on an individual basis to 7.8 on an aggregate basis (see table 5) would be expected only from models which are almost uncorrelated and hence can significantly reduce risk by means of diversification.
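As a rough back-of-envelope check (an argument of ours, not one made explicitly in the paper), if N models with equal individual Sharpe ratio S and equal risk were exactly uncorrelated, an equally weighted combination would have

$$ S_{agg} \approx S\sqrt{N} = 1.0 \times \sqrt{94} \approx 9.7, $$

so the observed aggregate figure of 7.8 is indeed consistent with models that are close to, but not completely, uncorrelated.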

6 Investigation of the relationship between Variance Ratio profile and profitability

In the final phase of the analysis, we investigated the relationship between the in-sample properties of the variance ratio profiles of the different models and the variability in their profitability during the out-of-sample period. This analysis consisted of regressing the out-of-sample Sharpe Ratios of the individual models on their VR statistics (Mahalanobis distance and eigenvector projections). A stepwise regression procedure resulted in the model shown below:

Multiple Regression Analysis
Dependent variable: adjSharpe

Parameter      Estimate    Standard Error   T Statistic   P-Value
CONSTANT       0.330649       0.166633        1.9843       0.0503
EigCor3        0.281245       0.075705        3.71502      0.0004
EigCor5        1.35602        0.316422        4.28548      0.0000
EigCov5       14.3265         5.00645         2.86161      0.0052

Analysis of Variance
Source           Sum of Squares    Df    Mean Square    F-Ratio    P-Value
Model                61.4094        3      20.4698       12.67      0.0000
Residual            145.455        90       1.61617
Total (Corr.)       206.864        93

R-squared = 29.6858 percent
R-squared (adjusted for d.f.) = 27.342 percent
Standard Error of Est. = 1.27129
Mean absolute error = 0.915465
Durbin-Watson statistic = 1.85346

The regression diagnostics indicate a significant relationship between certain aspects of the variance ratio profile during the in-sample period and risk-adjusted return during the out-of-sample period. In particular, the projections of the deviations from the average profile onto eigenvectors 3 and 5 were found to be significantly related to profitability. In principle this information could be used in two ways: firstly, to identify the models which are more likely to be profitable and weight them appropriately; secondly, as additional conditioning information in an appropriate nonlinear error-correction model equivalent to (4) but allowing for interaction effects between the time-series dynamics and the information derived from the variance ratio profile. This aspect is the subject of current research.

7 Conclusion

The concept of cointegration provides a suitable basis for statistical arbitrage models but, being motivated by the search for theoretical understanding, is rather too restrictive if applied in the standard manner. This paper introduces two general directions in which cointegration analysis can be generalised to statistical arbitrage: the first is the method which is used to generate the "mispricing relationship", in this case stepwise regression rather than standard regression; the second is the nature of the tests employed, in our case tests for predictability which are based on the variance ratio profile of the mispricing time-series. In this case Monte Carlo analysis is required in order to estimate the joint distribution of the individual variance ratio statistics, and the test results indicate that the assumption of multivariate normality is almost, if not quite, accurate.

In this paper we have hardly touched upon the many options which are available for building the error-correction models, and for implementing trading systems which best exploit the information in the forecasts which they generate. In spite of the fact that the focus of the work is placed elsewhere, the trading performance of the system appears to be impressive, returning an annualised Sharpe Ratio of 7.8 at a realistic transaction cost level of 50 basis points. From this perspective, the key feature of the system is the benefit of combining diversified models in terms of reducing the aggregate risk: the maximum Sharpe Ratio of the individual models is 4.0, and the average only 1.0, much less impressive than the overall figure!

Finally, the underlying approach is equally applicable to a wide range of asset classes where assets share common stochastic trends and hence can be rendered more predictable by modelling in terms of relative prices rather than in raw form. Current research projects are concerned with a range of equity, fixed-income and derivatives markets, with sampling frequencies ranging between 10 minutes and daily.

References

Bentz, Y., Refenes, A. N. and De Laulanie, J-F., 1996, Modelling the performance of investment strategies: concepts, tools and examples, in Refenes et al (eds), Neural Networks in Financial Engineering, World Scientific, Singapore, 241-258

Burgess, A. N., 1996, Statistical yield curve arbitrage in eurodollar futures using neural networks, in Refenes et al (eds), Neural Networks in Financial Engineering, World Scientific, Singapore, 98-110

Burgess, A. N. and Refenes, A. N., 1996, Modelling non-linear cointegration in international equity index futures, in Refenes et al (eds), Neural Networks in Financial Engineering, World Scientific, Singapore, 50-63

Cochrane, J. H., 1988, How big is the random walk in GDP?, Journal of Political Economy, 96(5), 893-920

Eckbo, B. E. and Liu, J., 1993, Temporary components of stock prices: new univariate results, Journal of Financial and Quantitative Analysis, 28(2), 161-176

Engle, R. F. and Granger, C. W. J., 1987, Cointegration and error-correction: representation, estimation and testing, Econometrica, 55, 251-276

Johansen, S., 1988, Statistical analysis of cointegration vectors, Journal of Economic Dynamics and Control, 12, 131-154

Lo, A. W. and MacKinlay, A. C., 1988, Stock market prices do not follow random walks: evidence from a simple specification test, The Review of Financial Studies, 1(1), 41-66

Lo, A. W. and MacKinlay, A. C., 1990, Data-snooping biases in tests of financial asset pricing models, Review of Financial Studies, 3(3)

Lo, A. W. and MacKinlay, A. C., 1995, Maximizing predictability in the stock and bond markets, NBER Working Paper #5027

Phillips, P. C. B. and Ouliaris, S., 1988, Testing for cointegration using principal components methods, Journal of Economic Dynamics and Control, 12, 105-130

Sargan, J. D. and Bhargava, A., 1983, Testing residuals from least squares regression for being generated by the Gaussian random walk, Econometrica, 51, 153-174

Steurer, E. and Hann, T. H., 1996, Exchange rate forecasting comparison: neural networks, machine learning and linear models, in Refenes et al (eds), Neural Networks in Financial Engineering, World Scientific, Singapore, 113-121

Towers, N. and Burgess, A. N., 1998, Optimisation of trading strategies using parametrised decision rules, International Symposium on Intelligent Data Engineering and Learning (IDEAL) 1998, Hong Kong, October 14-16, 1998

Towers, N. and Burgess, A. N., 1999, Implementing trading strategies for forecasting models, Computational Finance 99, New York, January 1999
