I’ve been keeping a close eye on the costs of the various clouds versus the cost of an internal CPU farm. Amazon EC2’s pricing for high-CPU map-reduce instances appears to be roughly at parity with my cost to host internally.
I calculated this by depreciating Core i7 920s over a 3-year period and accounting for $0.14/kWh at 150 watts of continuous draw. I arrived at a lower cost than Amazon’s; however, when adjusting for CPU performance, the performance per dollar ties out or is bettered by the Amazon proposition.
Amazon’s rates for map-reduce calculations are 1/5th the cost of a normal instance. I’m estimating that the high-CPU 8-core instance performs at a SPECfp_rate2006 of approximately 150, twice the Core i7 920. The cost is $0.12/hr versus $0.68/hr for a non-map-reduce instance.
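As a sanity check on those numbers, here is the back-of-the-envelope arithmetic in Python. The node hardware cost is a placeholder of my own, and the SPECfp_rate2006 scores are just the estimates above, so treat the output as indicative rather than definitive:

```python
# Back-of-the-envelope: internal i7 920 node cost vs. EC2 high-CPU map-reduce.
HW_COST = 1200.0                 # $ per node -- an assumed figure, swap in your own
LIFETIME_HOURS = 3 * 365 * 24    # 3-year straight-line depreciation
POWER_KW = 0.150                 # 150 W continuous draw
KWH_RATE = 0.14                  # $ / kWh

internal_cost_hr = HW_COST / LIFETIME_HOURS + POWER_KW * KWH_RATE

# Performance adjustment using the SPECfp_rate2006 estimates above:
# ~150 for the EC2 high-CPU 8-core, roughly twice the i7 920.
SPEC_I7, SPEC_EC2 = 75.0, 150.0
EC2_COST_HR = 0.12

print(f"internal: ${internal_cost_hr:.3f}/hr -> ${internal_cost_hr / SPEC_I7:.5f} per SPEC-unit-hour")
print(f"EC2:      ${EC2_COST_HR:.3f}/hr -> ${EC2_COST_HR / SPEC_EC2:.5f} per SPEC-unit-hour")
```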
This is great news for those doing transient large-scale scientific computations (such as myself). I now need to look at mapping my machine learning and strategy evaluation algorithms onto map-reduce.
In a number of my systematic strategies I evaluate hundreds or thousands of sub-strategies, using a blend of these to decide on the next period’s trade.
Assume we are trading from a portfolio of possible assets, going fractionally long or short on a sparse subset of them. For example, I may be working with an equity portfolio of 2300 possible equities; from period to period I may trade anywhere from 0 to 40 of these.
At time t we want to know the a priori “optimal” weights for time t+1 across the 2300 assets. Each of our sub-strategies predicts the best weights (trades) for time t+1, and we know the cumulative return, the average return, and the risk-adjusted return for each of these “paper” strategies.
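To make the setup concrete, here is a minimal sketch of how a “paper” strategy and the final blend could be represented. The field names and the equal-weight blend are illustrative choices of mine, not a fixed implementation:

```python
from dataclasses import dataclass

@dataclass
class PaperStrategy:
    name: str
    weights: dict[str, float]  # sparse: ticker -> fractional long/short weight for t+1
    cum_return: float          # cumulative historical return
    avg_return: float          # average per-period return
    risk_adj: float            # risk-adjusted return (e.g. a Sharpe-like figure)

def blend(selected: list[PaperStrategy]) -> dict[str, float]:
    """Equal-weight blend of the selected strategies' sparse weight vectors."""
    out: dict[str, float] = {}
    for s in selected:
        for ticker, w in s.weights.items():
            out[ticker] = out.get(ticker, 0.0) + w / len(selected)
    return out
```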
Here are some approaches:
- use the top N strategies by highest cumulative historical return, risk-adjusted
This works reasonably well if every strategy trades at the same frequency. However, with one of my strategies, some “paper” strategies trade 85% of the time and others maybe 5% of the time; the cumulative returns of the less frequently trading strategies will be much lower.
- use the top N strategies by average return, risk-adjusted
This corrects for the trading-frequency problem; however, low-frequency strategies can dominate the selection if they carry much higher average returns. The aggregate strategy will then rarely trade, because the top N is dominated by infrequently trading strategies.
- use the top N strategies with non-zero weights, by average return, risk-adjusted
This avoids the dominance of the rarely trading strategies above. However, there may be periods where the best trade is no trade at all.
- use the top N strategies with non-zero weights whose average return falls in the top X percent
This fixes the above problem by evaluating at most the top X percent in search of the top N trading strategies (a sketch of this rule follows the list).
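Continuing the sketch above, the last rule might look like the following; the function name and the use of the risk-adjusted figure as the ranking key are my own choices:

```python
def select_top_n(strategies: list[PaperStrategy], n: int, top_x: float) -> list[PaperStrategy]:
    """Top N strategies with non-zero weights, drawn only from the top X
    fraction of strategies ranked by risk-adjusted return."""
    ranked = sorted(strategies, key=lambda s: s.risk_adj, reverse=True)
    cutoff = max(1, int(len(ranked) * top_x))       # e.g. top_x = 0.10 for the top 10%
    candidates = ranked[:cutoff]
    trading = [s for s in candidates if any(w != 0.0 for w in s.weights.values())]
    return trading[:n]
```

The next period’s trade would then be something like `blend(select_top_n(paper_strategies, n=10, top_x=0.10))`, with N and X as tuning parameters.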
One of my strategies has a further complication. Many of the “paper” strategies are non-orthogonal, in the sense that some strategies have overlapping stimuli and responses. For example, if two strategies are very similar, they may respond to the same events 80% of the time and have similar performance. Including both strategies in the aggregate response will double-count.
To avoid this I have resorted to a modified blending rule that discards strategies subsumed by higher performers in the selection.
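A minimal version of that pruning rule, assuming pairwise response overlap can be measured. The sign-agreement measure and the 80% threshold below are stand-ins; a proper measure would compare responses over the historical event stream:

```python
def overlap(a: PaperStrategy, b: PaperStrategy) -> float:
    """Fraction of a's current positions that b also holds with the same sign.
    A crude proxy for 'responds to the same events'."""
    if not a.weights:
        return 0.0
    shared = sum(1 for t, w in a.weights.items()
                 if t in b.weights and w * b.weights[t] > 0)
    return shared / len(a.weights)

def prune_subsumed(ranked: list[PaperStrategy], max_overlap: float = 0.8) -> list[PaperStrategy]:
    """Walk strategies best-first; drop any that overlaps too heavily with one
    already kept, i.e. any strategy subsumed by a higher performer."""
    kept: list[PaperStrategy] = []
    for s in ranked:
        if all(overlap(s, k) < max_overlap for k in kept):
            kept.append(s)
    return kept
```

The pruning is applied before taking the top N, so selection draws from `prune_subsumed(ranked)` rather than from the raw ranking.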
There is a lot of nervousness over both the bullish market recovery of recent months and the credit issues in the eurozone. Given this lack of confidence, the market is easily manipulated on the downside. That is not to say the market does not require a correction (I think it does), but yesterday’s drop appears to be more of a manufactured event.
I was watching a number of stocks (such as AAPL) yesterday and saw a 16% drop on the back of no additional news. It appears that multiple 1-cent sell orders for 100 shares each were put in across a basket of popular stocks. Naturally this knocked over many algos, prompting dramatic selling for a short period.
The claim is that this was a fat-finger exercise, but I think it could just as easily be an extreme case of the sort of manipulation that occurs every day: enticing algorithms and traders to react to small price shocks, revealing their hands or pushing the stock in a given direction.
The timing, during one of the least liquid periods of the day, made the orders all the more effective. Algos beware.
It appears that there were more than 30 sell orders at $0.0001 on at least one stock, and probably a similar pattern on the others. This would clean out the order book and, as separate orders, push it lower with each fill. See this link. The author speculates that dramatic yen buying was the trigger. It is possible that an algo reacted to strong yen buying and put in limit orders to liquidate a basket of equities (though that would be a very poorly thought out exit strategy).
Errant algo or manipulation to create a buying opportunity? Hard to say; either seems equally likely.