With much fanfare, the joint CFTC and SEC staffs reported their findings on the market events of May 6, 2010, now popularly known as the Flash Crash. It acquired this moniker because, during a roughly 20-minute period in the afternoon, major equity indices in both the futures and securities markets plummeted 5 to 6% in a matter of minutes before reversing and rebounding just as quickly.
I will let readers familiarize themselves with the Report. There has been sufficient press coverage that most should have a good idea of the CFTC/SEC's interpretation of events. The report is long, detailed, and supported by pages of colourful charts, similar to a physicist's analysis of the aftershocks of a major earthquake (not an unreasonable analogy).
But, according to one prominent real time market data feed firm, Nanex, the report did not have enough detail and because of that shortfall, failed to discover the real reasons for the flash crash. Their analysis makes very interesting reading. It feeds into the major point of several of my previous blogs.
Early on, Nanex contacted the investigating staff committee and offered its services and software to help the committee understand events. The staffers took Nanex up on the offer; subsequently, the Nanex website logged more visits from the SEC than from any other source. Yet in their ensuing analysis, the staffers elected to use one-minute data increments. The real action, according to Nanex, was occurring in millisecond increments!
This is a massive difference. Let's rescale it to make the context apparent: looking at trade data in minutes when the action is in milliseconds is like looking at two-month intervals when the action is in minutes (a minute contains 60,000 milliseconds, and 60,000 minutes is about six weeks). We all know a lot of trees would be missed in the woods under that level of scrutiny!
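To see how much disappears at one-minute resolution, consider a toy illustration (invented numbers, not real May 6 data): a price series sampled every millisecond containing a sharp dip that fully reverses within half a second. A one-minute bar erases the event entirely.

```python
# One minute of millisecond-resolution prices, with a hypothetical
# 5% plunge lasting 500 ms in the middle of the interval.
prices_ms = [100.0] * 60_000          # 60,000 ms = 1 minute of ticks
for i in range(30_000, 30_500):       # a 500 ms plunge...
    prices_ms[i] = 95.0               # ...down 5%, then instant recovery

# The one-minute view: open and close of the interval look identical.
minute_open, minute_close = prices_ms[0], prices_ms[-1]
print(minute_open, minute_close)      # 100.0 100.0 -- the dip is invisible

# The millisecond view: the plunge is obvious.
print(min(prices_ms))                 # 95.0
```

At one-minute granularity the interval looks perfectly calm, which is exactly the blind spot Nanex is pointing to.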
Reading their analysis left me breathless. I come from the old school of trading that prevailed until about a decade ago (and yes, I do remember ticker tape). The world of high frequency trading (HFT) is somewhere out in the cosmos as far as I am concerned.
Important for this discussion are two oft-used concepts in HFT: latency and co-location. Latency is the time it takes to transmit an order to the exchange's computer and get a response. Latency is bad in this competitive world, so HFTs do everything possible to reduce it; the gains are measured in milliseconds or less. One popular means of doing so is co-location, which means HFTs set up their computers as physically close to the exchange's computer as possible. This, however, presents a problem: there are nine exchanges on which these trades can occur, and a computer can only be co-located in one place. So at which exchange would you co-locate? Naturally the largest, the NYSE.
What did Nanex find when it reduced the time slices from minutes to milliseconds? A very curious phenomenon was occurring within 50-millisecond intervals: NYSE trades were consistently printing at slightly higher prices than those on the other exchanges, which, upon further analysis, turned out to be the result of its quote time stamps being ever so slightly delayed.
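A hypothetical sketch of the kind of cross-exchange check this implies: in a fast decline, a venue whose quotes are stamped late will show prices systematically above the rest of the market at the "same" timestamp. The venue names, timestamps, and prices below are invented for illustration.

```python
from collections import defaultdict

# (venue, timestamp_ms, bid) -- made-up quotes during a falling market.
# "NYSE" quotes here are stamped at the same times as "BATS" quotes but
# reflect an earlier, higher market, mimicking a time-stamp delay.
quotes = [
    ("NYSE", 1000, 100.10),
    ("BATS", 1000, 100.00),
    ("NYSE", 1050, 100.08),
    ("BATS", 1050, 99.95),
]

# Average bid per venue over the window; the delayed venue's average
# sits persistently above the others while prices are dropping.
sums = defaultdict(lambda: [0.0, 0])
for venue, _, bid in quotes:
    sums[venue][0] += bid
    sums[venue][1] += 1
avg = {venue: total / n for venue, (total, n) in sums.items()}
print(avg)  # the "NYSE" average bid exceeds the "BATS" average
```

A persistent gap like this is the signature that drew sell orders to the apparently better prices, as described next.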
The effect of this was to cause a rush of sell orders to flow towards the NYSE and buy orders to flow to all the other exchanges where prices were ever so slightly lower but correct in real time. This caused liquidity to flow away from all these exchanges towards the NYSE, quickly choking up the system.
What was causing this delay? Further analysis down at the millisecond level showed massive order bursts hitting the NYSE at the bid (and therefore certain to be executed). These bursts were so large that they slowed the exchange's processing times. Several HFTs reported that the time delays caused them to cease trading, which further drained liquidity.
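A minimal sketch, with made-up message counts, of how such bursts stand out once you look at 50-millisecond windows: count messages per window and flag any window whose message rate dwarfs the typical one.

```python
from collections import Counter

# Hypothetical message arrival times in milliseconds: a few messages per
# 50 ms window, except one window stuffed with 50 messages.
msg_times_ms = [0, 10, 20, 30] + list(range(50, 100)) + [100, 120, 140]

WINDOW = 50
counts = Counter(t // WINDOW for t in msg_times_ms)

# Use the median window count as a baseline, and flag windows whose
# message count exceeds five times that baseline.
baseline = sorted(counts.values())[len(counts) // 2]
bursts = [w * WINDOW for w, c in counts.items() if c > 5 * baseline]
print(bursts)  # [50] -- the start time (ms) of the stuffed window
```

The detection itself is trivial; the point is that it is only possible at millisecond resolution, which one-minute data cannot provide.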
Eric Hunsader, the founder of Nanex, has proposed a disturbing scenario. It is his view that certain HFTs are 'order stuffing' for the sole purpose of slowing the NYSE down, thereby giving themselves the opportunity to do profitable arbitrages on the smaller exchanges. This is not simply a theory: he has the analytical evidence and data to back up his statements, all of which can be found on the aforementioned website. In fact, he outlines a number of toxic strategies that certain HFTs engage in to gain an advantage over their competitors. No doubt some of these algorithmic traders are pursuing these strategies to make up for their relatively high latency rates.
Hunsader points out that there have been several instances of flash-crash-like effects in the past. Luckily, the environment in which they occurred was not as volatile as the market on May 6th. That is not to say they won't recur in the future.
It is necessary to understand that HFT is a product of the regulators; it is unlikely to have arisen without their apparently overzealous rush to reduce dark trading (see earlier blogs). Now a whole series of new problems has cropped up, and unfortunately that reinforces the theme that Wunsch and Bookstaber speak to in my previous four blogs.
It's no wonder that the noble objective of "ethics on Wall Street" has become an oxymoron!