“Transaction Cost Theory to assess the impact of High-Frequency Trading systems on market performance”
Algorithmic Trading, or High-Frequency Trading (HFT), systems in capital markets may objectively be seen today as a symbol of a world seeking ever more engineering in its processes and execution. Indeed, the booming propensity of financial entities to develop mathematical algorithms embedded in automated decision-making machines leads people to imagine a bright future for such technologies: HFT firms represented 2% of the approximately 20,000 US trading firms operating on the markets in 2007 (Lebreton), but accounted for more than 60% of all US equity trading volume and more than 50% of all European equity trading volume in 2012, partially outperforming the predictions of Pflimlin & Checola (2011).
Realizing this on one hand, and given on the other that no entrepreneur in the world could decently accept the Efficient Market Hypothesis as a true reflection of market reality, I got interested in thinking about HFT in relation to a (not so) theoretical concept that stands at a very fundamental level of our economy and business administration: transaction costs.
Being a pillar of the competitive dimension of any business, costs appear to me to be an appropriate lens for assessing market performance. Publications around transaction costs earned several contributors a Nobel Prize and led to the progressive formalization of the Transaction Cost Theory (TCT) over the last 75 years.
A big deal indeed. And I invite you to have a look at it so you can catch the trend with us.
Transaction costs are the costs occurring in an economic exchange. They can take various forms and can be described as the costs a company or entity has to bear around any financial or commercial exchange, from the very beginning of the operation until its end, including any later extra fees. Search costs, brokers’ commissions or quality controls are considered transaction costs as they are directly (stock commissions) or indirectly (prospection costs) linked to a transaction.
The idea of a cost of using the price system was first evoked by the economist Ronald Coase in his article The Nature of the Firm (1937), where he wondered about the origins of economic organizations. According to him, the presence of transaction costs leads every economic actor engaged in a transaction to seek the arrangement that reduces transaction costs the most, whenever the loss this arrangement implies in the following steps of the transaction process is lower than the transaction costs saved. In this regard, Coase states that companies or firms are a way to limit transaction costs by creating collaboration between employees (Coase, 1937), thus implicitly identifying the TCT as part of Organizational Theory.
Various minds contributed to the TCT’s elaboration, such as Oliver Williamson, still considered today the father of the theoretical stream called Transaction Cost Theory. Inspired by Ronald Coase’s work on transaction costs, Herbert Simon’s on bounded rationality and Kenneth Arrow’s definitions, Williamson defines transaction costs as the functioning costs of the economic system (1985). His central assumption is that any economic activity of a company automatically carries associated costs to be minimized through governance, in order to decide whether to use the market or the firm to produce its goods. Williamson then proposes two categories of costs: ex ante transaction costs, inherent to the negotiation, administration, meetings and discussions that lead the different parties to consider different types of contract; and ex post transaction costs, inherent to a system’s implementation and support, commonly related to the bargaining costs incurred when bilateral efforts are made to correct bad ex ante contracts after the fact.
Carl Dahlman was the last major contributor to the TCT, through his analysis of the generation of externalities (1979). He underlines three interpretations of the transaction, consistent with his conclusion that transaction costs are the fundamental cause of externalities on a market. Based on Coase’s definition of transaction costs (1960), Dahlman identifies, firstly, the “search and information costs”, inherent to the lack of information faced when collecting all the existing market opportunities at a given time, or other characteristics of the transacted items such as their quality. Secondly, the “bargaining and decision costs”, defined as the resources allocated to identify and motivate the agents’ propensity to trade under certain conditions, such as a defined price and time. And finally, the “policing and enforcement costs”, grounded in the lack of knowledge or assurance regarding the probability that one or both parties will eventually break their contractual obligations. These interpretations categorize transaction costs along the natural path of a transaction process, from preliminary information search and gathering to the eventual policing costs of the post-transactional phase. Dahlman called this “the natural classification of transaction costs”.
So deep. But why am I considering this?
Well, in my opinion, as suggested above, the existence of costs in general, and of transaction costs specifically as briefly exposed here, is de facto consistent with what any market operator observes of reality, first in terms of costs and, by extension, in terms of market efficiency when put in perspective with theory. Indeed, market efficiency is one of the fundamental hypotheses of financial theory. It presumes the absence of transaction costs and free, homogeneous information. It is mainly based on the works of Fama (1965, 1970), who defined an efficient market, in the informational sense, as a market where any pertinent information is completely and instantly reflected in the price; a market where arbitrage is structurally impossible. But this hypothesis has been frequently tested and contested, including by Fama himself, on the FX and equity markets (Fama, 1965, 1970, 1991; Fama & French, 1988a; Jensen, 1969; Summers, 1986; Mignon, 1998; etc.). The definition of market efficiency finally appeared to be incompatible with how markets actually function, mainly because of the extent of market heterogeneities such as the irregularity of actors’ anticipations, the importance of market imperfections and, wait for it, the presence of distinct transaction costs.
Ok, but what about the specific applications on capital markets?
In addition, TCT has been put into perspective with financial markets along various dimensions. Firstly, on market structure: Dumas (1992) and Anderson (1997) stated that transaction costs not only vary from one agent to another and across transaction orders, but also define specific thresholds for every investor. Anderson (1997) then suggested that market price deviations from fundamental value could create arbitrage opportunities, whereas the irrational behaviours of agents make the price tend towards its equilibrium. He identified transaction costs as responsible for the asymmetry of market price adjustments. A fact supported by Jawadi & Koubbaa, as the presence of heterogeneous transaction costs can dissuade investors from financial arbitrage, preventing them from trading financial securities when the predicted potential benefit is inferior to these costs (Jawadi & Koubbaa, 2006). Transaction costs are in this sense considered market frictions: imperfections eventually leading to speculation on the gap between the fundamental value of a security and its actual market price, seized by the propensity to opportunism characterizing most actors on financial markets.
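The threshold logic described by Anderson and Jawadi & Koubbaa can be sketched as a simple decision rule. All the numbers below are purely hypothetical, invented for illustration:

```python
def should_arbitrage(market_price, fundamental_value, transaction_cost):
    """Trade only if the predicted gain from the price/value divergence
    exceeds the transaction costs of capturing it."""
    predicted_gain = abs(fundamental_value - market_price)
    return predicted_gain > transaction_cost

# A 0.40 mispricing is not worth capturing if the round trip costs 0.50...
print(should_arbitrage(24.60, 25.00, 0.50))  # False
# ...but it becomes worth it if costs fall to 0.25.
print(should_arbitrage(24.60, 25.00, 0.25))  # True
```

This is exactly the sense in which heterogeneous costs create investor-specific trading thresholds: each agent plugs in a different `transaction_cost` and therefore reacts to a different band of mispricings.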
Which leads us to the second application of transaction costs on capital markets: the establishment of market patterns and modelling. As stated in the last paragraph, linear modelling techniques cannot reproduce the effects inherent to the presence of costly information sources, variable transaction costs and heterogeneous anticipations of investors on capital markets (Jawadi & Koubbaa, 2006). On the other hand, it has been shown that transaction costs were partly responsible for failures in asset pricing models based on Fama and French’s (1996) Three-Factor Model (Chae & Yang, 2007). They showed that in any market, the US market as well as emerging markets, transaction costs are negatively correlated with the performance of an asset pricing model. They additionally stated that if transaction costs are not minimized, any pricing model will show various cross-sectional differences in its performance.
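To make this modelling point concrete, here is a minimal sketch of a three-factor regression in the spirit of Fama and French’s model, run on synthetic data (the seed, factor volatilities and loadings are all invented for illustration, not taken from any study):

```python
import numpy as np

# Synthetic monthly factor returns: market excess return, SMB, HML.
rng = np.random.default_rng(0)
factors = rng.normal(0.0, 0.04, size=(60, 3))
true_betas = np.array([1.1, 0.4, -0.2])

# Asset excess returns = factor exposures plus noise; in Chae & Yang's
# argument, higher transaction costs would inflate this residual term
# and degrade the model's cross-sectional performance.
noise = rng.normal(0.0, 0.01, size=60)
excess_returns = factors @ true_betas + noise

# Estimate the intercept ("alpha") and the three factor loadings by OLS.
X = np.column_stack([np.ones(60), factors])
coef, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
alpha, betas_hat = coef[0], coef[1:]
print(betas_hat.round(2))
```

With low noise the recovered loadings sit close to the true ones; inflating the noise term, as unminimized transaction costs would, widens the gap between estimated and true loadings, which is the failure mode described above.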
TCT has also been studied in relation to market liquidity and volatility, as transaction and execution costs move with market fluctuations: trading costs fluctuate according to these two parameters, liquidity and volatility (Domowitz, Glen & Madhavan, 2001). The same authors stated that innovation and investment in new technologies are often conditioned by cost observations. Indeed, traders look for reliable cost predictions that definitely impact their portfolio strategies: they appear more likely to adapt their trading strategies towards initiatives that give them better prediction and control of costs (Domowitz, Glen & Madhavan, 2001). However, it seems important to highlight that back in 2001, HFT systems as used nowadays were not yet operational on any market and, I assume, differ from the (older) automated order systems mentioned in their work as relevant examples of cost-forecasting tools.
The bravest of you, once having reviewed all this, may be interested in using their brains to assess the implications of the use of HFT systems on market performance through the TCT. As far as I am concerned, proceeding in this sense would imply first extracting from all the statements above the questions that may be relevant to ask, and then thinking about them.
I would like to insist on the fact that the following statements and hypotheses are merely subjective, independent thoughts and have nothing to do with any published research. However, I like to think that every research work began with this kind of modest, curiosity-based reflection, sometimes leading to the elaboration of a research agenda. This is what stimulates me, and I would like to make people feel the same by catching the trend through their own intellectual aspirations.
So here are my propositions, based on Dahlman et al.:
Are HFT systems lowering research and information costs? – before a transaction;
Are HFT systems lowering negotiation and decision costs? – during a transaction;
Are HFT systems lowering control and execution costs? – after a transaction.
And based on Domowitz et al.:
Are HFT systems significantly predicting transaction costs?
Because we are certainly open-minded people, and also maybe because formally answering these simple questions would be of tremendous difficulty, we are going to focus on the different dimensions comprised in the questions: research, information, negotiation, decision, control, execution and prediction. In this regard, we will mainly focus on the first two questions, around the dimensions of research, information, negotiation and decision costs. The other dimensions underlined by Dahlman, Domowitz et al., namely execution, control and cost prediction, appear to me as redundant or inconsistent with a reflection under a transaction costs perspective. Shall we?
(1) HFT systems are machines configured by traders and quants, the action of the machine itself actually being limited to the execution of automated, controlled operations. In this regard, the implementation of such technology does not contribute to the reduction of fundamental investigation costs, as these are still borne by human teams. It therefore seemed important to me to detail the difference between the two terms “research” and “information” comprised in the question. “Research” refers to exogenous information: global information external to the automated system itself. Quants and traders identify all the information needed to calibrate their algorithmic machine so it can act, with all its characteristics, according to the strategy input by humans. This type of external (global) investigation is not processed by the machine itself but computerized into it as the strategic frame of its actions on the markets. Exogenous information is thus collected with the aim of building a concrete action strategy for automated systems on markets. On the other hand, “information” corresponds to endogenous information: information collected by the system itself during its action on markets, such as spreads or price limits from other counterparties, and even public news.
In this regard, I would make the assumption that the research and collection of exogenous information should last at least as long as standard pre-trade data research and collection, given that the human-based process is the same, apart from the time used to calibrate the machines based on some of the information collected. And, on the other hand, that the research and collection of endogenous data is significantly faster when run by an HFT system, execution speed being the central characteristic of HFT. These systems can be seen as the fundamental interface between humans and internal, short-term or immediate financial information on markets such as spreads, counterparties’ bid/ask or price limits. The millisecond-to-nanosecond HFT execution timescale corresponds to an execution speed 1,000 to 10,000 times faster than humans. This is actually so reactive that HFT systems give access to gains usually inaccessible to standard traders. Put this way, any extra exogenous research time would be far outweighed by how much faster endogenous research is compared with standard, human-based-only pre-trade research. Consequently, I will assume that globally lowering research and execution timescales in fine implies lowering the costs, despite significant initial investments in the HFT system itself: it will break even at some point.
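The break-even claim can be made concrete with a trivial calculation, using purely hypothetical figures for the upfront investment and the per-trade saving:

```python
import math

def break_even_trades(initial_investment, saving_per_trade):
    """Number of trades needed before cumulative per-trade cost savings
    offset the upfront investment in the HFT infrastructure."""
    return math.ceil(initial_investment / saving_per_trade)

# Hypothetical: USD 5m of hardware and colocation, USD 2 saved per trade.
print(break_even_trades(5_000_000, 2.0))  # 2500000 trades
```

Given the volumes HFT systems process daily, even a small per-trade saving can amortize a large fixed investment quickly, which is the sense of the "it will break even at some point" assumption.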
(2) Similarly, when thinking of decision-making through HFT actions on markets, the execution speed factor of such systems comes up again, giving HFT agents an instant edge over non-HFT agents from a pricing perspective. Even if one could argue that HFT systems do not take any instinctive decision per se, their actions being purely and simply consequences of preliminary decisions taken by humans, the machines do execute their “algo-logic” decisions according to ongoing market conditions. We are talking here about specific algorithmic decisions humans can imagine, forecast and sometimes determine through statistics, but could not execute themselves at these timescales.
Now, looking at HFT negotiation processes through the TCT implies, in addition to the central characteristic of execution speed, considering the technical characteristics of HFT systems. Indeed, when considering “negotiation” from a technical point of view as “the ability to determine the best price”, HFT technologies appear very efficient through two different strategies, roughly explained below.
When buying on markets, “order fragmentation” consists in splitting an initial order into various smaller ones with the same target price, with the aim of reducing investment risk while also giving a significant advantage: acquiring as many shares as possible at the best market price at a given moment. Counterparty [A], instead of taking a unique long position of USD 100m with a maximum target price of USD 25, will split it into 10 long positions of USD 10m each. These 10 long positions will be diffused on the market seeking the best deal price through a bottom-up approach, some of them hitting USD 23.5 for example, and so on up to the maximum acceptable target of USD 25. Human traders commonly use this technique, but again the difference in timescale allows HFT systems to process this operation almost instantly, with much more efficiency than human capacities allow;
“Market maker fragmentation” is about arbitraging the first strategy. The machine acting on the market may be able to detect that a counterparty is processing a split-order strategy by identifying several packets with the same target price and volume characteristics. In this situation, counterparty [B], selling on the market, will send sell offers to buying counterparty [A] with a top-down approach, from higher prices to lower ones, in order to discover the maximum target price of counterparty [A]. [B] may then get a hit at USD 25 before [A] has bought its full USD 100m. If [A] has only acquired USD 30m and is still processing its bottom-up, split-order strategy, [B] goes to the market, buys as many of the underlying as possible at USD 24.5 and sells them to [A] at USD 25, entirely filling [A]’s demand.
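The two strategies can be sketched together in a toy simulation. The figures reuse the hypothetical USD 100m / USD 25 example above; the detection threshold is an invented heuristic, not a real trading rule:

```python
from collections import Counter

def fragment_order(total_notional, n_slices, max_price):
    """[A]'s order fragmentation: split one large parent order into
    equal child orders sharing the same maximum target price."""
    child = total_notional / n_slices
    return [{"notional": child, "limit": max_price} for _ in range(n_slices)]

def looks_fragmented(order_flow, min_repeats=5):
    """[B]'s detection heuristic: flag a split-order strategy when many
    incoming orders share the same (size, limit price) signature."""
    signatures = Counter((o["notional"], o["limit"]) for o in order_flow)
    return any(count >= min_repeats for count in signatures.values())

# [A] splits USD 100m into 10 x USD 10m children, all capped at USD 25.
children = fragment_order(100_000_000, 10, 25.0)
print(len(children))               # 10
print(looks_fragmented(children))  # True

# Having discovered [A]'s USD 25 ceiling, [B] buys the underlying at
# USD 24.5 and fills [A] at USD 25; its gross margin per unit is:
print(25.0 - 24.5)  # 0.5
```

Real detection logic is of course far subtler (randomized slice sizes and timings exist precisely to defeat signatures like this one), but the sketch captures the mechanics of the example.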
That is what these machines do: they identify and act on the best possible prices. Accordingly, negotiation, still considered from this technical point of view, appears to me as more efficiently executed by HFT systems than by humans, given the two efficient strategies processed on markets on one hand, and the extra-low latencies on the other.
(3) No need to come back to the dimension of execution by HFT systems, as it has been clearly exposed in the present article: thanks to their execution speed, they are better than human traders at arbitraging noise across a multi-asset universe at very short timescales.
However, another dimension which, I think, deserves our reflection is the control of costs, though from a slightly different point of view than control as stated by Dahlman. Indeed, when identifying control costs he was referring to post-transactional costs of control, a dimension on which I do not see HFT systems acting at all in terms of customer service. We will discuss control in the next part.
Consequently, and as already suggested in the introduction of this reflection, the questions around HFT systems impacting execution and control costs appear to me as already answered on one hand, and inconsistent on the other.
(4) As with control, the cost prediction dimension appears to me as inconsistent with the study of whether HFT systems impact market performance. Predictions of costs around financial operations are essentially made by humans. A good example may be the use of extra-low-latency processes for arbitrage operations on capital markets: HFT systems will execute large volumes of buy/sell operations on the basis of preliminarily known and controlled arbitrage spreads. But humans are the ones who computed the exact spreads to be executed, not the machines.
This, folks, is how it is done. Thanks to our brilliant intuitions, we managed to imagine capital market performance being impacted by HFT systems, under a transaction cost perspective, for five out of the seven dimensions stated by the TCT contributors: execution, research, information, negotiation and decision. Unfortunately, and as always, reality cannot be embraced by a unique theoretical model, even one as brilliant as the TCT, especially when applied by the limited imagination of a poor lad. Indeed, we can clearly note the limits of our approach when assessing HFT systems’ impact on market performance through the TCT.
First, we could impute it to the fact that even if costs are a relevant way of assessing market performance, they do not cover all its dimensions. In this sense, market liquidity for example is a main argument of HFT partisans, as the robots engage huge volumes on markets each day, favouring the availability of stocks, which directly benefits investors (Pflimlin & Checola, 2010). But the fact commonly opposed to this statement is the number of orders cancelled by HFT systems. The reality is that only a fraction of all orders actually lead to a real transaction. For example, the NYSE observed that between 2007 and 2011, real transactions represented about 6.5% of all orders on the market, and that the cancelled-to-booked orders ratio was about 11.6 times higher in 2011 than in 2007 (Mattern & Cvetkovic, 2013). In other words, while the fundamental objective of HFT systems is arbitraging noise on short timescales, they are actually creating more noise on the market by faking close to 93% of their orders.
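The NYSE figures above translate into a quick bit of arithmetic, taking the 6.5% execution share at face value:

```python
def cancelled_share(executed_fraction):
    """Share of orders that never become real transactions."""
    return 1.0 - executed_fraction

def cancel_to_execute_ratio(executed_fraction):
    """Unexecuted orders per executed order."""
    return (1.0 - executed_fraction) / executed_fraction

# With ~6.5% of orders executed, ~93.5% never trade...
print(round(cancelled_share(0.065) * 100, 1))  # 93.5
# ...i.e. roughly 14 unexecuted orders for every real transaction.
print(round(cancel_to_execute_ratio(0.065)))   # 14
```

This is the arithmetic behind the "close to 93%" figure: the cancellation share is simply the complement of the execution share.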
A second reason limiting our approach may be imputed to the fact that some assessment levers provided by the TCT are simply not applicable to HFT systems’ specifications, such as post-transactional costs of control. As seen before, control considered under a transaction costs perspective cannot be applied to HFT systems, as no customer relation service belongs to these automated machines’ quantitative objectives (yet). However, control may be observed under a different perspective, which allows us to assess HFT systems’ performance not in terms of cost management but of reliability. Indeed, the truth is that the main reason for the recent public awareness of HFT systems is their increasing presence in the literature and in the news. Various dramatic examples tarnish the short existence of new-generation HFT systems. On September 8th, 2008, a misinterpretation by HFT systems of market information around a United Airlines bankruptcy made the stock fall by more than 75% in one day. On May 6th, 2010, the Dow Jones Index fell by 9% in a few minutes, corresponding to a global loss in asset value of about USD 1,000bn. On August 1st, 2012, Knight Capital Group lost control of a trading robot for 45 minutes, at the end of which it recorded a global loss of USD 440m, meaning almost USD 10m a minute. The company’s stock fell by 80% in two days.
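As a sanity check on the Knight Capital figures just quoted:

```python
total_loss_musd = 440  # reported loss, in USD millions
duration_min = 45      # minutes during which the robot ran out of control

loss_rate = total_loss_musd / duration_min
print(round(loss_rate, 1))  # 9.8, close to the "USD 10m a minute" quoted
```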
What happens here is that, however brave you have been in going through all this development, and whatever efforts you made to understand it expecting some kind of knowledge in return, a much higher degree of complexity around its variables is pointed out, preventing anyone from mastering it. It seems solvable only through a rigorous research agenda that may be able to help us understand, as future responsible market operators and perhaps managers, the global implications of such practices. Meanwhile, I will keep on feeling that in this environment of millisecond or nanosecond reactivity, the frontier between the three main steps of information search, decision-making and execution is somehow fading. And I will not be able to help thinking about this sort of (dangerous) “paradigm distortion” resting on the obvious difference observed between HFT and non-HFT agents’ operational timescales, eventually promising a world in which no human would trade on markets anymore. A terrible caricature, I will admit, however consistent with the common observation that automated strategic systems applied on longer timescales arguably create a behavioural edge over human traders, as they eliminate the emotional factors largely responsible for the fact that, in their vast majority, traders demonstrably underperform a simple buy-and-hold strategy over long periods of time. Disembodied actors with fast execution abilities apparently being the key for surviv… I mean, profit.