# Monthly Archives: October 2009

## Dealing with Outliers in a markovian SDE

I was developing a state model similar to the ADS (Aruoba-Diebold-Scotti) index to provide a daily indication of the economic cycle for the Canadian market.   Economic data sourced from news can be particularly noisy (i.e. data-entry errors, etc.).   I encountered instability in the state system.

The instability was due to bad data points in the series.  Cleaning the bad data solved the problem.  However, detecting unintended outliers at runtime is especially important for real-time strategies.

A markovian state system, where each state depends on the previous, is particularly sensitive to data issues.   Such a system is typically set up as:

X[t] = Fx(X[t-1]) + Ex
Y[t] = Hx(X[t]) + Ey

where X[t] is the hidden state, Y[t] is the observation, Fx and Hx are the transition and observation functions, and Ex and Ey are the innovation (error) terms.

In particle filtering or Kalman filtering one is estimating the hidden state based on observations and using the hidden state to make statements about the market.   An outlier is an observation whose probability given the projected state is essentially zero:

p(Y[t] | X[t]) ≈ 0

In such a situation a Kalman-filtered state system will collapse: the state exhibits a huge spike and then oscillates as the filter attempts to minimize the disturbance in subsequent iterations.   Assuming the “outlier” is in fact correct data, our state system suffers from one of a number of possible problems:

1. The distribution does not have proper (fat enough) tails
2. The covariance of the innovation is too small, causing the system to be numerically unstable in the presence of a near-zero-probability observation.
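To see the failure mode concretely, here is a minimal scalar Kalman filter fed a single data-entry error; the parameters and series are hypothetical (not the ADS model), but the filtered state spikes far away from its steady level exactly as described:

```python
import numpy as np

def kalman_1d(ys, f=0.9, q=0.01, h=1.0, r=0.01):
    """Scalar Kalman filter for X[t] = f*X[t-1] + Ex, Y[t] = h*X[t] + Ey."""
    x, p = 0.0, 1.0
    out = []
    for y in ys:
        x_pred, p_pred = f * x, f * p * f + q     # predict
        k = p_pred * h / (h * p_pred * h + r)     # Kalman gain
        x = x_pred + k * (y - h * x_pred)         # update on the innovation
        p = (1 - k * h) * p_pred
        out.append(x)
    return np.array(out)

# a flat series with one bad data point (a "fat finger" of 50.0)
ys = np.array([0.1] * 20 + [50.0] + [0.1] * 5)
xs = kalman_1d(ys)
print(abs(xs[20]) > 10 * abs(xs[19]))  # the bad point drags the state far off
```

With a small innovation covariance the gain still admits a large fraction of the 50.0 shock, and the next few updates oscillate back toward the level rather than ignoring the point.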

There are approaches for dealing with this, such as adopting a weighted least squares approach or Bayesian weighting of observations (see “Learning an Outlier-Robust Kalman Filter”).

In our case, however, the outliers are data errors.   For non-realtime systems, the data can be cleaned.  In the case of a real-time strategy we don’t have the option of manually cleaning data, so we may need to resort to a heuristic to determine whether the data is an “outlier” or not.

One heuristic approach is to determine the expected next state:

E(X[t] | X[t-1])

and then evaluate the probability of the observation Y[t] given X[t].   This can be accomplished by evaluating the pdf on the residual of the projected versus observed values.

If the probability is 0 or near 0 then the data is clearly an outlier.  The expected value of X[t] can be determined by estimating the distribution of the state.

How does one evaluate E(X[t] | X[t-1])?   A simple (though expensive) way to evaluate this is to draw noise samples from the innovation (error) distribution, evaluate Fx(X[t-1]) + Ex for each draw, and take the robust mean of those samples weighted by the probability of the draw.   The number of samples needed can be reduced by using a kernel estimator when determining the expectation.
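A minimal sketch of this heuristic, assuming Gaussian innovations and observation noise (the transition `F`, observation `H`, and all parameter values below are hypothetical placeholders):

```python
import numpy as np

def expected_next_state(x_prev, F, innov_std, n_samples=10_000, rng=None):
    """Monte Carlo estimate of E(X[t] | X[t-1]): draw innovation noise,
    push it through the transition, and take a robust mean (median)."""
    rng = rng or np.random.default_rng(0)
    eps = rng.normal(0.0, innov_std, size=n_samples)
    return np.median(F(x_prev) + eps)

def is_outlier(y_obs, x_prev, F, H, innov_std, obs_std, threshold=1e-6):
    """Flag Y[t] as an outlier if its pdf value, evaluated on the residual of
    projected versus observed, is near zero."""
    x_pred = expected_next_state(x_prev, F, innov_std)
    resid = y_obs - H(x_pred)
    # Gaussian pdf of the residual under the observation-noise distribution
    p = np.exp(-0.5 * (resid / obs_std) ** 2) / (obs_std * np.sqrt(2 * np.pi))
    return p < threshold

# hypothetical linear system: X[t] = 0.9 X[t-1] + Ex, Y[t] = X[t] + Ey
F = lambda x: 0.9 * x
H = lambda x: x
print(is_outlier(0.5, 0.4, F, H, innov_std=0.1, obs_std=0.1))   # plausible point
print(is_outlier(25.0, 0.4, F, H, innov_std=0.1, obs_std=0.1))  # data error
```

For a nonlinear F the sampling step is doing real work; for a linear-Gaussian system the expectation is of course available in closed form.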

Finally, if an outlier is detected, we can treat it as a missing value.  Information has been lost, but at least we have preserved the stability of our state system.   This is a reasonable approach, I think, for the occasional outlier.

Leave a Comment

Filed under state-space-models, statistics, stochastic

## Intraday volatility prediction and estimation

GARCH has been shown to be a reasonable estimator of variance for daily or longer-period returns. Some have adapted GARCH to use intraday returns to improve daily variance estimates. GARCH does very poorly in estimating intra-day variance, however.

The GARCH model is based on the empirical observation that there is strong autocorrelation in the square of returns for lower frequencies (such as daily). This can be easily seen by observing clustering and “smooth” decay of squared returns on daily returns for many assets.

σ²[t] = ω + α r²[t-1] + β σ²[t-1]

L(ω, α, β) = -½ Σ ( log σ²[t] + r²[t] / σ²[t] )

where the first equation is the GARCH(1,1) variance recursion on returns r[t] and the second equation is the ML optimization for the parameters.
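A minimal sketch of the GARCH(1,1) variance filter and its Gaussian likelihood, with illustrative (not fitted) parameter values:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Filter the GARCH(1,1) conditional variance through a return series."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def neg_log_likelihood(params, returns):
    """Gaussian negative log-likelihood; minimize this for ML estimation."""
    omega, alpha, beta = params
    sigma2 = garch11_variance(returns, omega, alpha, beta)
    return 0.5 * np.sum(np.log(sigma2) + returns ** 2 / sigma2)

# hypothetical daily log returns and placeholder parameters
r = np.random.default_rng(0).normal(0.0, 0.01, size=500)
s2 = garch11_variance(r, omega=1e-6, alpha=0.05, beta=0.9)
print(np.all(s2 > 0))  # conditional variance stays positive
```

In practice one would hand `neg_log_likelihood` to a constrained optimizer (ω > 0, α, β ≥ 0, α + β < 1) rather than fix the parameters.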

Here is an example for the daily Canadian 2Y CMT yield.  The red is the GARCH(1,1) variance, the black is the series, the grey is the log return, and the green circle is the predicted variance for the next period.

Contact me if you would like to get the R source code for the above.

Intra-day squared returns, however, have many jumps, with little in the way of an autocorrelated decay pattern. Looking at the EUR/USD series, the squared returns have jumps that reduce the ML to the point where the GARCH parameterization does not converge.  There does appear to be a longer-term pattern, though, allowing for a model, though not GARCH.

With expanded processing power and general access to tick data, research has begun to focus on intra-day variance estimation. In particular, expressing variance in terms of price duration has become an emergent theme. Andersen, Dobrev, and Schaumburg are among a growing community developing this in a new direction.

At this point I have disqualified GARCH as a useful measure for my intra-day strategies but am planning to use it for a daily strategy. I am investigating a formulation of a duration-based measure for intra-day volatility.

Leave a Comment

Filed under statistics, volatility

## Hawkes Process & Strategies

Call me unread, but I had not encountered the Hawkes process before today. The Hawkes process is a “point process” that models event intensity as a function of the empirical history of event occurrences: it is self-exciting.

The discrete form of the process is:

λ(t) = μ + Σ_{ti < t} g(t − ti)

where ti is the ith occurrence at time ti < t for some t. The form of the decay function g is typically an exponential, but it can be any function that models decay as a counting process:

g(t − ti) = α e^(−β(t − ti))
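Evaluating the intensity is just a sum over past events; a minimal sketch with an exponential kernel and illustrative parameter values:

```python
import math

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity mu + sum over past events of alpha*exp(-beta*(t - ti))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

# hypothetical event times: a cluster near t = 0.6 excites the intensity
events = [0.1, 0.5, 0.6, 0.65]
print(hawkes_intensity(0.7, events, mu=0.5, alpha=0.8, beta=2.0))  # elevated
print(hawkes_intensity(5.0, events, mu=0.5, alpha=0.8, beta=2.0))  # decayed to ~mu
```

The second call shows the self-excitation decaying back toward the baseline μ once the cluster is far in the past.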

Ok, that’s great but what are the applications in strategies research?

Intra-day Stochastic Volatility Prediction
The recent theme in the literature has been to replace the quadratic-variance approach with a time-based approach. The degree of movement within an interval of time is equivalent in measure to the amount of time required for a given movement, and can be interchanged easily as Andersen, Dobrev, and Schaumburg have shown in “Duration-Based Volatility Estimation”.
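The interchangeability can be sketched under a strong simplifying assumption (a driftless Brownian price, for which the expected first-exit time of a ±δ band is δ²/σ²); everything below, including the parameter values, is illustrative and not from the paper:

```python
import numpy as np

def duration_vol(durations, delta):
    """Duration-based variance estimate: for driftless Brownian motion,
    E[first-exit time of (-delta, +delta)] = delta^2 / sigma^2."""
    return delta ** 2 / np.mean(durations)

# simulate a hypothetical driftless price path and collect +/- delta passage times
rng = np.random.default_rng(1)
sigma, delta, dt = 0.02, 0.01, 1e-3
price = np.cumsum(sigma * np.sqrt(dt) * rng.normal(size=200_000))
durations, start, ref = [], 0, 0.0
for i, p in enumerate(price):
    if abs(p - ref) >= delta:           # price has moved the target amount
        durations.append((i + 1 - start) * dt)
        start, ref = i + 1, p
est = duration_vol(np.array(durations), delta)
print(0.5 * sigma ** 2 < est < 1.5 * sigma ** 2)  # recovers sigma^2 roughly
```

The discrete-time overshoot past the ±δ barrier biases the estimate slightly low, which is one of the practical corrections the duration literature deals with.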

Cai, Kim, and Leduc in “A model for intraday volatility” approached the problem by combining an Autoregressive Conditional Duration (ACD) process and a Hawkes process to model decay. They express the duration model equivalently in terms of intensity (where N represents the number of events of size dY), relate the intensity back to a volatility measure, and compose the intensity process from an ACD part and a Hawkes part.

They claim to model the intra-day volatility closely and propose a long/short straddle strategy to take advantage of the predictive ability.

High Frequency Order Prediction Strategy
The literature suggests the use of Hawkes processes to model the buying and selling processes of market participants.

John Carlsson in “Modeling Stock Orders Using Hawkes’s Self-Exciting Process” suggests a strategy: if the Hawkes-predicted ratio of buy to sell intensity exceeds a threshold (say 5), buy (or sell, for the inverse) and exit the position within N seconds (he used 10).

This plays on the significant autocorrelation (i.e. non-zero decay time) of the intensity back to the mean. A skewed ratio of buy vs sell orders will surely influence the market in the direction of the order skew.

The strategy can be enhanced to include information about volume, trade size, etc. We can also look at the buy/sell intensity of highly correlated assets and use it to enhance the signal.
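A toy version of the intensity-ratio signal, with two independent Hawkes intensities for the buy and sell streams (all names, thresholds, and parameter values are hypothetical, not Carlsson's):

```python
import math

def intensity(t, events, mu, alpha, beta):
    """Hawkes conditional intensity with an exponential kernel."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

def order_signal(t, buys, sells, threshold=5.0, mu=0.2, alpha=1.0, beta=5.0):
    """Return +1 (buy), -1 (sell), or 0 based on the buy/sell intensity ratio."""
    ratio = intensity(t, buys, mu, alpha, beta) / intensity(t, sells, mu, alpha, beta)
    if ratio > threshold:
        return 1
    if ratio < 1.0 / threshold:
        return -1
    return 0

# hypothetical order timestamps: a burst of buys just before t = 1.0
buys = [0.90, 0.93, 0.95, 0.97, 0.98, 0.99]
sells = [0.10]
print(order_signal(1.0, buys, sells))  # the buy-side burst dominates
```

In a real implementation the parameters would be fitted per asset by maximum likelihood, and the exit would be time-boxed as the paper suggests.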

10 Comments

Filed under point processes, statistics, strategies, volatility

## What multivariate approaches can tell us

I’ve focused on “univariate” strategies in the high-frequency space for the last few years. Recently I did some work on medium/long-term strategies for the Canadian market. In the course of that investigation I realized that I’ve been ignoring information by focusing on signals from a single asset and not looking at related assets to provide additional signal.

Of course it has always been in the back of my mind to diversify into strategies of more than one asset. But even if one intends to trade single securities, the information that other related assets or indicators can provide gives us an edge. In particular I need to be looking at:

• Multivariate SDEs with jointly distributed series
In the simplest case this is expressed in covariance, but covariance is just one of the moments of relationships.
• Economic signals (for medium / long term)
• Cointegration relationships
Not only linear relationships but quadratic ones. These need to be tested carefully in and out of sample.

The Canadian research results underscored how effective the multivariate can be.

Leave a Comment

Filed under Uncategorized