
Algorithmic trading -- the positive side

Wednesday, August 10, 2011

Ref: The Physics of Finance

In researching a forthcoming article, I happened upon this recent empirical study in the Journal of Finance looking at some of the benefits of algorithmic trading. I've written before about natural instabilities inherent to high-frequency trading, and I think we still know very little about the hazards presented by dynamical time-bombs linked to positive feedbacks in the ecology of algorithmic traders. Still, it's important not to neglect some of the benefits that algorithms and computer trading do bring; this study highlights them quite well.

This paper asks the question: "Overall, does AT (algorithmic trading) have salutary effects on market quality, and should it be encouraged?" The authors claim to give "the first empirical analysis of this question." The headline message is that "algorithmic trading improves liquidity and enhances the informativeness of quotes." In what follows I've picked out a few highlights -- some obvious, others less so:
From a starting point near zero in the mid-1990s, AT (algorithmic trading) is thought to be responsible for as much as 73% of trading volume in the U.S. in 2009.
That's no longer news, of course. By now, mid-2011, I expect that percentage has risen closer to 80%.

Generally, when I think of automated trading, I think of two kinds of players: market makers (such as GETCO) and statistical-arbitrage high-frequency traders, of which there are several hundred firms. But this article rightly emphasizes that automated trading now runs through the markets at every level:

There are many different algorithms, used by many different types of market participants. Some hedge funds and broker-dealers supply liquidity using algorithms, competing with designated market-makers and other liquidity suppliers. For assets that trade on multiple venues, liquidity demanders often use smart order routers to determine where to send an order (e.g., Foucault and Menkveld (2008)). Statistical arbitrage funds use computers to quickly process large amounts of information contained in the order flow and price moves in various securities, trading at high frequency based on patterns in the data. Last but not least, algorithms are used by institutional investors to trade large quantities of stock gradually over time.
One very important point the authors make is that it is not at all obvious that algorithmic trading should improve market liquidity. Many people seem to think this is obvious, but there are many routes by which algorithms can influence market behaviour, and they work in different directions:
... it is not at all obvious a priori that AT and liquidity should be positively related. If algorithms are cheaper and/or better at supplying liquidity, then AT may result in more competition in liquidity provision, thereby lowering the cost of immediacy. However, the effects could go the other way if algorithms are used mainly to demand liquidity. Limit order submitters grant a trading option to others, and if algorithms make liquidity demanders better able to identify and pick off an in-the-money trading option, then the cost of providing the trading option increases, and spreads must widen to compensate. In fact, AT could actually lead to an unproductive arms race, where liquidity suppliers and liquidity demanders both invest in better algorithms to try to take advantage of the other side, with measured liquidity the unintended victim.
This is the kind of thing most participants in algorithmic trading do not emphasize when raving about the obvious benefits it brings to markets.

However, the most important part of the paper comes in an effort to track the rise of algorithmic trading (over roughly a five year period, 2001-2006) and to compare this to changes in liquidity. This isn't quite as easy as it might seem because algorithmic trading is just trading and not obviously distinct in market records from other trading:
We cannot directly observe whether a particular order is generated by a computer algorithm. For cost and speed reasons, most algorithms do not rely on human intermediaries but instead generate orders that are sent electronically to a trading venue. Thus, we use the rate of electronic message traffic as a proxy for the amount of algorithmic trading taking place.
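To make that proxy concrete, here is a rough sketch of the calculation -- my own illustration, not the authors' code; the column names and the per-$100 normalization are assumptions on my part, not the paper's exact definition:

```python
import pandas as pd

def at_proxy(panel: pd.DataFrame) -> pd.Series:
    """Rough algorithmic-trading proxy: electronic messages per $100 of dollar volume.

    Assumes one row per stock-month with columns 'messages' (order
    submissions, cancellations and trades) and 'dollar_volume'.
    Both column names and the per-$100 scaling are illustrative.
    """
    return panel["messages"] / (panel["dollar_volume"] / 100.0)

# Hypothetical numbers for two stocks in one month
panel = pd.DataFrame({
    "stock": ["AAA", "BBB"],
    "messages": [120_000, 15_000],
    "dollar_volume": [5e8, 2e8],
})
panel["at_proxy"] = at_proxy(panel)
print(panel)
```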
The figure below shows this data, recorded for stocks with differing market capitalization (sorted into quintiles, Q1 being the largest fifth). Clearly, the amount of electronic traffic in the trading system has increased by a factor of at least five over a period of five years:


The paper then compares this to data on the effective bid-ask spread for this same set of stocks, again organized by quintile, over the same period. The resulting figure indeed shows a more or less steady decrease in the spread, a measure of improving liquidity:


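For reference, the effective spread is a standard quantity: twice the distance between a trade's price and the prevailing quote midpoint, usually expressed relative to the midpoint. A minimal sketch of the calculation (the column names are mine, not the paper's):

```python
import pandas as pd

def effective_spread_bps(trades: pd.DataFrame) -> pd.Series:
    """Proportional effective spread in basis points: twice the distance
    between the trade price and the prevailing quote midpoint, scaled by
    the midpoint.  Assumes columns 'price', 'bid', 'ask' (my naming)."""
    mid = (trades["bid"] + trades["ask"]) / 2.0
    return 2.0 * (trades["price"] - mid).abs() / mid * 1e4

# Hypothetical trades
trades = pd.DataFrame({
    "price": [100.02, 99.99, 100.05],
    "bid":   [100.00, 99.98, 100.01],
    "ask":   [100.03, 100.01, 100.06],
})
print(effective_spread_bps(trades).mean())   # average effective spread in bps
```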
So, there is a clear correlation. The next question, of course, is whether this correlation reflects a causal process or not. I won't get into the details, but what perhaps sets this study apart from others (see, for example, any number of reports by the Tabb Group, which monitors high-frequency markets) is an effort to get at this causal link. The authors do this by studying a particular historical event that increased the amount of algorithmic trading in some stocks but not others. The results suggest that there is a causal link.
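To give a flavour of how such a test works, here is a generic difference-in-differences sketch on simulated data -- my own toy illustration of the logic, not the authors' actual specification (they do something more sophisticated) and not their data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: effective spreads for 40 stocks over 24 months, with a
# 'treated' flag that switches on once the event (which reaches some stocks
# earlier than others) has hit that stock.
rng = np.random.default_rng(0)
stocks, months = np.arange(40), np.arange(24)
panel = pd.DataFrame(
    [(s, m) for s in stocks for m in months], columns=["stock", "month"]
)
event_month = rng.integers(6, 18, size=len(stocks))            # staggered rollout
panel["treated"] = (panel["month"] >= event_month[panel["stock"]]).astype(int)
panel["spread_bps"] = 20 - 3 * panel["treated"] + rng.normal(0, 2, len(panel))

# Difference-in-differences with stock and month fixed effects,
# standard errors clustered by stock.
fit = smf.ols("spread_bps ~ treated + C(stock) + C(month)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["stock"]}
)
print(fit.params["treated"])   # negative => spreads fell where AT arrived first
```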

The conclusion, then, is that algorithmic trading (at least in the time period studied, in which stocks were generally rising) does improve market efficiency in the sense of higher liquidity and better price discovery. But the paper also rightly ends with a further caveat:

While we do control for share price levels and volatility in our empirical work, it remains an open question whether algorithmic trading and algorithmic liquidity supply are equally beneficial in more turbulent or declining markets. Like Nasdaq market makers refusing to answer their phones during the 1987 stock market crash, algorithmic liquidity suppliers may simply turn off their machines when markets spike downward.

This resonates with a general theme across all finance and economics. When markets are behaving "normally", they seem to be more or less efficient and stable. When they go haywire, all the standard theories and accepted truths go out the window. Unfortunately, "haywire" isn't as unusual as many theorists would like it to be.

** UPDATE **

Someone left an interesting comment on this post, which for some reason hasn't shown up below. I had an email from Puzzler183 saying:

"I am an electronic market maker -- a high frequency trader. I ask you: why should I have to catch the falling knife? If I see that it isn't not a profitable time to run my business, why should I be forced to, while no one else is?

You wouldn't force a factory owner to run their plant when they couldn't sell the end product for a profit. Why am I asked to do the same?

During normal times, bid-ask spreads are smaller than ever. This is directly a product of automation improving the efficiency of trading."

This is a good point and I want to clarify that I don't think the solution is to force anyone to take positions they don't want to take. No one should be forced to "catch the falling knife." My point is simply that in talking about market efficiency, we shouldn't ignore the non-normal times. An automobile engine which uses half the fuel of any other when working normally wouldn't be considered efficient if it exploded every few hours. Judgments of the efficiency of the markets ought to include consideration of the non-normal times as well as the normal.

An important issue to explore is whether there is a trade-off between efficiency in "normal times", as reflected in low spreads, and episodes of explosive volatility (the mini flash crashes which seem ever more frequent). Avoiding the latter (if we want to) may demand throwing some sand into the gears of the market, with trading speed limits or similar measures.

But I certainly agree with Puzzler183: no one should be forced to take on individual risks against their wishes.

Steve Keen on "bad weathermen"

Monday, January 14, 2013

by Mark Buchanan / The Physics of Economics Blog

 

I've made quite a lot of the analogy between the dynamics of an economy or financial market and the weather. It's one of the basic themes of this blog, and the focus of my forthcoming book FORECAST. I don't pretend to be the first to think of this, of course. I know that the head of the Bank of England, Mervyn King, has talked about this analogy in the past, as have many others.

But the idea now seems to be gathering more popularity. Steve Keen writes here specifically about the task of economic forecasting, and about the entirely different approaches used in weather science, where forecasting is now quite successful, and in economics, where it is not:
Conventional economic modelling tools can extrapolate forward existing trends fairly well – if those trends continue. But they are as hopeless at forecasting a changing economic world as weather forecasts would be, if weather forecasters assumed that, because yesterday’s temperature was 29 degrees Celsius and today’s was 30, tomorrow’s will be 31 – and in a year it will be 395 degrees.

Of course, weather forecasters don’t do that. When the Bureau of Meteorology forecasts that the maximum temperature in Sydney on January 16 to January 19 will be respectively 29, 30, 35 and 25 degrees, it is reporting the results of a family of computer models that generate a forecast of future weather patterns that is, by and large, accurate over the time horizon the models attempt to predict – which is about a week.
Weather forecasts have also improved dramatically over the last 40 years – so much so that even an enormous event like Hurricane Sandy was predicted accurately almost a week in advance, which gave people plenty of time to prepare for the devastation when it arrived:

Almost five days prior to landfall, the National Hurricane Center pegged the prediction for Hurricane Sandy, correctly placing southern New Jersey near the centre of its track forecast. This long lead time was critical for preparation efforts from the Mid-Atlantic to the Northeast and no doubt saved lives.

Hurricane forecasting has come a long way in the last few decades. In 1970, the average error in track forecasts three days into the future was 518 miles. That error shrunk to 345 miles in 1990. From 2007-2011, it dropped to 138 miles. Yet for Sandy, it was a remarkably low 71 miles, according to preliminary numbers from the National Hurricane Center.

Within 48 hours, the forecast came into even sharper focus, with a forecast error of just 48 miles, compared to an average error of 96 miles over the last five years.

Meteorological model predictions are regularly attenuated by experienced meteorologists, who nudge numbers that experience tells them are probably wrong. But they start with a model of the weather that is fundamentally accurate, because it is founded on the proposition that the weather is unstable.

Conventional economic models, on the other hand, assume that the economy is stable, and will return to an 'equilibrium growth path' after it has been dislodged from it by some 'exogenous shock'. So most so-called predictions are instead just assumptions that the economy will converge back to its long-term growth average very rapidly (if your economist is a Freshwater type) or somewhat slowly (if he’s a Saltwater croc).

Weather forecasters used to be as bad as this, because they too used statistical models that assumed the weather was in or near equilibrium, and their forecasts were basically linear extrapolations of current trends.
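To make the contrast concrete, here is a toy sketch (entirely my own, with made-up numbers and parameter names) of the two styles of naive forecast being described: pure trend extrapolation, and the equilibrium-style assumption that everything decays back to a long-run average, quickly or slowly:

```python
import numpy as np

def trend_extrapolation(series, horizon):
    """Keen's 'bad weatherman': assume the latest one-step trend continues forever."""
    trend = series[-1] - series[-2]
    return series[-1] + trend * np.arange(1, horizon + 1)

def equilibrium_forecast(last_value, long_run_mean, speed, horizon):
    """Equilibrium-style forecast: decay back toward an assumed long-run level.
    speed near 1 -> rapid convergence ('Freshwater'); near 0 -> slow ('Saltwater')."""
    path, x = [], last_value
    for _ in range(horizon):
        x = long_run_mean + (1.0 - speed) * (x - long_run_mean)
        path.append(x)
    return np.array(path)

temps = np.array([27.0, 28.0, 29.0, 30.0])          # hypothetical recent temperatures
print(trend_extrapolation(temps, 365)[-1])          # -> 395.0, Keen's joke in code
print(equilibrium_forecast(temps[-1], long_run_mean=25.0, speed=0.3, horizon=5))
```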
 
How did weather forecasters get better? By recognizing, of course, the inherent role of positive feedbacks and instabilities in the atmosphere, and by developing methods to explore and follow the growth of such instabilities mathematically. That meant modelling in detail the actual fine-scale workings of the atmosphere and using computers to follow the interactions of those details. The same will almost certainly be true in economics. Forecasting will require both lots of data and much more detailed models of the interactions among people, firms and financial institutions of all kinds, taking the real structure of networks into account, using real data to build models of behaviour, and so on. All this means giving up tidy analytical solutions, of course, and even computer models that insist the economy must sit in a nice tidy equilibrium. Science begins by taking reality seriously.
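To see what taking instability seriously looks like in the simplest possible setting, here is a minimal sketch of the Lorenz (1963) convection model -- the textbook toy example of atmospheric instability, not anything drawn from Keen's article or from a real forecasting code. Two trajectories that start almost identically separate exponentially fast, which is precisely why modern weather models track the growth of small disturbances instead of extrapolating trends:

```python
import numpy as np

def lorenz_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz (1963) convection equations."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

# Two nearly identical initial states...
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])
for step in range(15001):
    if step % 3000 == 0:
        # ...diverge exponentially: a tiny perturbation grows to macroscopic size
        print(f"t = {step * 0.002:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
    a, b = lorenz_step(a), lorenz_step(b)
```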