Many measures work best in a homoscedastic volatility regime. This is not a big secret. Most regressors, the simplest of which are the ever-popular moving averages, are especially biased in the context of a heteroscedastic series.

Probably the best way of normalizing a heteroscedastic series into one with near-constant variance is to observe the following. If we assume our process is roughly an SDE with normally distributed innovations (or, alternatively, a Hurst exponent close to 1/2), we know that the variance of increments grows linearly with the horizon:

$$\operatorname{Var}\!\left[X_{t+\Delta t} - X_t\right] = \sigma^2\,\Delta t$$
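A quick simulation check of that scaling, assuming unit-variance Gaussian increments: the variance of a dt-step move divided by dt stays near one for every horizon.

```python
import numpy as np

# Random walk with unit-variance Gaussian innovations (Hurst ~ 1/2)
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0.0, 1.0, 200_000))

ratios = []
for dt in (1, 4, 16, 64):
    v = np.var(x[dt:] - x[:-dt])  # variance of dt-step increments
    ratios.append(v / dt)         # should hover near 1.0 for all dt
```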

As a rough measure, we can remove much of the vol-of-vol by scaling our time axis in proportion to the variance. I use a duration-based local volatility measure with smoothing, or, alternatively for daily data, an EWMA-based evaluation of:

$$\hat{\sigma}^2_t = \lambda\,\hat{\sigma}^2_{t-1} + (1-\lambda)\,r_t^2$$
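In code, the EWMA recursion might look like this (λ = 0.94 is the common RiskMetrics default, assumed here rather than taken from the post):

```python
import numpy as np

def ewma_vol(returns, lam=0.94):
    """EWMA volatility: var_t = lam*var_{t-1} + (1-lam)*r_t^2."""
    returns = np.asarray(returns, dtype=float)
    var = np.empty_like(returns)
    var[0] = returns[0] ** 2  # seed with the first squared return
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t] ** 2
    return np.sqrt(var)
```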

We can then change measure from calendar time t to volatility-scaled time τ:

$$\tau(t) = \int_0^t \psi(s)\,\hat{\sigma}^2(s)\,ds$$

where ψ(t) is a smoothing / scaling function. An example of such a scaling is shown in the accompanying figure (the red curve in the upper pane indicates the degree of scale from the baseline).
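For concreteness, here is a minimal numerical sketch of the idea; ψ is folded into a plain normalization by the mean variance, which is an assumption of this sketch, not necessarily the smoothing used above:

```python
import numpy as np

def vol_scaled_clock(prices, lam=0.94):
    """Build a volatility-scaled clock tau that advances in proportion
    to local EWMA variance: busy stretches are dilated, quiet stretches
    compressed. Returns tau with tau[0] = 0, same length as prices."""
    r = np.diff(np.log(prices))
    var = np.empty_like(r)
    var[0] = r[0] ** 2
    for t in range(1, len(r)):
        var[t] = lam * var[t - 1] + (1 - lam) * r[t] ** 2
    steps = var / var.mean()  # normalize so tau spans roughly the same range as t
    return np.concatenate(([0.0], np.cumsum(steps)))

def resample_to_even_tau(prices, tau, n):
    """Interpolate prices onto an even grid in tau; the resampled
    series is closer to homoscedastic in the new time."""
    grid = np.linspace(tau[0], tau[-1], n)
    return np.interp(grid, tau, prices)
```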


What do you think of this time dilation algorithm:

http://www.olsen.ch/fileadmin/Publications/Working_Papers/090106-thetaAlgorithm.pdf

I’m thinking of implementing it.

I’ve seen that you have also investigated HHT. I’m also planning on giving it a spin.

Hi, I briefly skimmed part of the paper. It looks like they are blending intra-day seasonality (measured over a week or so) and an IV measure. The seasonal vol is definitely important and can offer better predictions compared to measures that do not take it into account. The problem with most vol measures on a window of data is that they lag, due to the averaging in the expectations used. So that part is definitely useful. I did not spend enough time to understand their approach beyond that.
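As an aside, a crude version of that seasonal profile is easy to compute. This is a sketch assuming `hour_of_day` is a parallel array of integers 0–23:

```python
import numpy as np

def seasonal_vol_profile(returns, hour_of_day, n_bins=24):
    """Mean absolute return per hour-of-day bucket: a rough
    intraday seasonal volatility profile."""
    returns = np.asarray(returns, dtype=float)
    hour_of_day = np.asarray(hour_of_day)
    prof = np.zeros(n_bins)
    for h in range(n_bins):
        mask = hour_of_day == h
        if mask.any():
            prof[h] = np.abs(returns[mask]).mean()
    return prof
```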

As for HHT, its upside is that it captures all components of the spectrum, particularly at high frequency, whereas other measures tend to drop aspects of the HF envelope. I've looked at partial recompositions on another basis function, with mixed success …

By IV measure I assume you mean implied volatility. I didn’t notice anything like that in the paper. However, I’m probably wrong.

I am not a quant; I can barely follow the heavily explained math in the paper I linked to. From your time dilation post I didn't understand anything, I have no idea what your method does 😦 But I will someday search the net trying to figure out what each variable of each equation means. I've been collecting papers on various time dilation algorithms; I have around 5 of them.

I’m hoping to compensate my lack of math skills with my excellent programming skills. I’ve worked with everything, from creating an audio editor, to a mobile Google Maps like application, to computing intersections between height fields and moving solid objects on the GPU (for CAD/CAM), low level C++ optimizations using multi-threading/SSE/assembly, a 100% shader based OpenGL 3D engine in Python which I then converted to C++, web programming, and now downloading and storing between 1-4 billion Forex ticks (didn’t count them yet) from 4 brokers (FXCM, dbFX, Dukascopy, GAIN Capital) with 3 different APIs in a custom compressed binary format using just 10 GB (average compressed tick size: 2 bytes).

I have a little more work to do on the tick management code, then I'll start work on a simple backtester. The best one I found, RightEdge, which allows you to write your own tick data plugin, still has some issues (for example, it only stores bids, not asks). Then I want to implement a time dilation algorithm, to see how much it improves the mean-reversion strategies I want to test. Based on the papers I've read, I'm expecting between 0% and 7%.

I have a few colleagues who might help me on the math side, but while they have expressed some interest, they are certainly not that into it. The other traders I know are not interested in science-based trading at all. They are very happy with their heteroskedastic candlestick charts and their SMAs, even after being shown books that say these methods are sub-optimal.

So thanks a lot for your blog, it’s great to see how a real quant approaches the problem, and hopefully I will be able to use some of your writings in an actual trading system.

One more thing, don’t get too hung up on volatility scaled time. There are more fundamental things to pursue before you consider looking at this.

If you have not already, learn more about mean-reversion models / trading. Get a thorough understanding of AR models and autocorrelation. Most people don’t really use AR models, but they form a conceptual basis for the notion of persistence in price series.
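As a concrete starting point, here is a tiny simulate-and-refit sketch of an AR(1), where the single coefficient φ is the persistence:

```python
import numpy as np

rng = np.random.default_rng(1)

phi = 0.8                            # persistence: how much of x_{t-1} carries over
eps = rng.normal(size=20_000)
x = np.empty_like(eps)
x[0] = eps[0]
for t in range(1, len(x)):
    x[t] = phi * x[t - 1] + eps[t]   # AR(1): x_t = phi * x_{t-1} + noise

# OLS estimate of phi from the lag-1 regression recovers the persistence
phi_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
```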

Other concepts to explore are “stationarity” and cointegration.
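A toy illustration of the two ideas together (an invented two-asset setup, not data from the post): two non-stationary series that share a random-walk factor leave a stationary, mean-reverting spread once the OLS hedge ratio is removed.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
w = np.cumsum(rng.normal(size=n))             # shared random-walk factor
x = w + rng.normal(scale=0.5, size=n)         # both "prices" load on w,
y = 2.0 * w + rng.normal(scale=0.5, size=n)   # so x and y are cointegrated

beta = (x @ y) / (x @ x)   # OLS hedge ratio (intercept omitted for brevity)
spread = y - beta * x      # stationary residual

def lag1_autocorr(s):
    s = s - s.mean()
    return (s[:-1] @ s[1:]) / (s @ s)

# The prices behave like random walks (lag-1 autocorrelation near 1),
# while the spread mean-reverts (autocorrelation far below 1).
```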

Going beyond that, one can look at state-space models using Kalman or particle filters, maximum likelihood analysis, etc.
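To give a flavor, a minimal local-level Kalman filter is only a few lines (q and r below are illustrative variances, not tuned values):

```python
import numpy as np

def kalman_local_level(y, q=1e-4, r=1e-2):
    """Filter y_t = m_t + obs noise (var r), m_t = m_{t-1} + proc noise (var q).
    Returns the filtered estimates of the latent level m_t."""
    y = np.asarray(y, dtype=float)
    m, p = y[0], 1.0               # initial state estimate and its variance
    out = np.empty_like(y)
    for t, yt in enumerate(y):
        p = p + q                  # predict: state variance grows by q
        k = p / (p + r)            # Kalman gain
        m = m + k * (yt - m)       # update toward the new observation
        p = (1.0 - k) * p
        out[t] = m
    return out
```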

Finally, closer to your CS roots is the field of machine learning. ML is used extensively in models …

It just takes some time and exposure. I am a CS / Physics guy who loves programming and only gradually moved into quantitative trading some years ago. So I am coming from a hybrid background …

The reality is that most of the math in algo strategies is pretty straightforward once you’ve become accustomed to the way it is presented. Mathematicians would tell you that wall street math is easy. I won’t go that far, but I would say that the most important thing is creativity blended with the right tools.

Given your creativity in the CS space, you can translate that into something successful in this space.

As for traders, every successful trader trades with a “system” and “world view” that works for them. It is dangerous for a trader to rely on a model he/she doesn’t understand, so for that reason they will tend to stick with basic approaches they are comfortable with.

Given my CS / numerical background, I am more comfortable with a model-based approach, and not comfortable with the gut / technicals approach. I can't say that one is superior or inferior. Andrew Lo, for instance, showed that technical analysis has statistical significance.

I already know about stationarity, co-integration, AR, ARMA, ARIMA, and GARCH, but I have never really studied any of them.

The way I found out about these concepts was either by starting from some Wikipedia page like Time Series or by finding mentions of them in the ton of financial papers I have downloaded and then searching further. This is how I discovered HHT: by searching for "non-stationary data analysis" or something like that.

This is one of my biggest problems: I don't know what I don't know. Non-stationarity was pretty much a revelation to me once I figured out what it means and what its implications are.

Thanks for the state-space models hint and for recommending AR and autocorrelation. While I have seen mentions of AC, I wasn't aware it was so important to understand.

I am not dismissing technical analysis; the basis of my mean-reversion strategy is the RSI(2) indicator, for which I'll try to find a more scientific version.
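For reference, a bare-bones RSI in the simple-average (Cutler) form; note that Wilder's original uses an exponential average, so values will differ slightly:

```python
import numpy as np

def rsi(prices, period=2):
    """RSI(period), simple-average (Cutler) variant. NaN until enough data."""
    prices = np.asarray(prices, dtype=float)
    delta = np.diff(prices)
    gains = np.clip(delta, 0.0, None)
    losses = np.clip(-delta, 0.0, None)
    out = np.full(len(prices), np.nan)
    for t in range(period, len(prices)):
        g = gains[t - period:t].mean()   # avg gain over the last `period` moves
        l = losses[t - period:t].mean()  # avg loss over the last `period` moves
        out[t] = 100.0 if l == 0.0 else 100.0 - 100.0 / (1.0 + g / l)
    return out
```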

I have some ideas based on spline smoothing (or some other averaging process). One of them I consider novel (probably because it's stupid): instead of measuring the simple difference between the price and the smoothing spline, I want to measure the geometric distance to the spline (the length of the perpendicular from the price to the spline). Before finding out about time dilation strategies (or non-linear time scales, as I call them) I was thinking about using this smoothing spline as the basis for a new time series. I'm not sure if I'm using the right language here; the idea is to "stretch" the spline until it is flat, which will deform the time series and remove the trend of the spline. This can be done in two ways: using the simple difference or the geometric distance. I would really like your opinion on these spline ideas of mine, if possible.
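One cheap way to approximate that geometric distance is to project the vertical residual onto the normal of the smooth curve's local tangent. This is a sketch assuming a unit time step, with `smooth` standing in for any smoothed series (spline or moving average):

```python
import numpy as np

def perpendicular_distance(prices, smooth):
    """Signed distance from each price to the local tangent line of the
    smoothed curve. On a flat curve this equals the plain residual;
    on a steep trend it shrinks by the factor 1/sqrt(1 + slope^2)."""
    prices = np.asarray(prices, dtype=float)
    smooth = np.asarray(smooth, dtype=float)
    slope = np.gradient(smooth)   # local slope, unit time step assumed
    return (prices - smooth) / np.sqrt(1.0 + slope ** 2)
```

One caveat with this construction: it mixes price and time units, so the result depends on the time scale you choose for the horizontal axis.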

By observing the market I discovered some behaviors which I don't quite know where to fit, for example round prices acting like magnets (1.3600), or the observation that every new local maximum/minimum (local being at least 1-2 weeks) will be reverted a bit during the night of its main market. These have psychological/behavioral explanations. I'm not sure if there is some quant version/special name for this kind of strategy.

Cool, sounds like you are already immersed.

As for autocorrelation: it is a measure of the degree to which the past has relevance to the future price. You'll see that AR(p) and ARMA models are very straightforward (though they are used more as a reference than as a model in practice). Assets with strong autocorrelation are more likely to be predictable with momentum indicators, for instance.
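Computing a sample autocorrelation function is only a few lines, and it is worth running on any series before reaching for momentum or mean-reversion indicators:

```python
import numpy as np

def acf(x, max_lag=20):
    """Sample autocorrelation for lags 1..max_lag (biased normalization)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = x @ x
    return np.array([(x[:-k] @ x[k:]) / denom for k in range(1, max_lag + 1)])
```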

As for non-linear time scales, that is more precise. I just like to play with titles.

With regard to price attractors and other phenomena, you might find some literature here and there on behavioral modeling. My own view on round prices in the equities market (just guessing) is that one is seeing more trade activity around stops and limit orders. Naturally, manual traders like to use round numbers.

Nice post on an underappreciated topic. Do you have R code for this, which you are willing to share?