Big data’s main promise is to help companies improve their operations: by collecting and analyzing data, they can make faster, better and more intelligent decisions, gain a useful edge and increase revenues. Are finance and trading exempt from this massive leap in data science?
Big Data… What’s that?
The term “big data” has been in use since the 1990s, with computer scientist John Mashey credited with popularizing it. It refers to data sets (specific collections of data) whose sizes exceed the ability of commonly used tools to handle them. Its core philosophy is to encompass all data, be it unstructured (texts and images), semi-structured, or structured (numerical inputs and tables). To work, big data requires a set of techniques with more advanced forms of integration, able to reveal insights from the massively scaled, complex and diverse collected data sets.
By the 21st century, researchers and reports had come to associate big data and its data-growth challenges with a stack of characteristics.
- Volume: The quantity of generated or stored data. The value and potential insights often depend on the size of the collected data.
- Velocity: The speed at which data is processed. Thresholds are often set to meet the requirements and challenges that lie in the path of a company’s growth.
- Variety: The nature and type of the data used. Classifying the collected data often helps analysts determine an effective use of the resulting insights.
- Veracity: Accurate analysis heavily depends on the quality of the data sets. As such, captured data undergo extensive veracity checks with the aim of improving the resulting insights.
- Machine Learning: Often the gateway to big data, it covers the study and implementation of algorithms able to learn from and make predictions on data. In other words, machines are given the ability to learn without being explicitly programmed.
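To make “learning without being explicitly programmed” concrete, here is a minimal sketch in Python. The data and the rule behind it are entirely hypothetical; the point is that the program is never told the rule, it infers it from examples via an ordinary least-squares fit:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b from example pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical examples generated by the hidden rule y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
a, b = fit_line(xs, ys)
print(a * 6 + b)  # predicts 13.0 for the unseen input x = 6
```

Real machine learning models are of course far richer than a fitted line, but the workflow is the same: learn parameters from past data, then predict on data the model has never seen.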
As a result, big data is taking on a predominant role in feeding computers and servers with useful knowledge, enabling companies to maintain a competitive edge in their respective environments.
That’s great… And what about Financial Trading?
Like any other form of trading, financial trading is all about buying and selling financial instruments, be it shares, forex, bonds or derivatives such as CFDs, futures, swaps and options. Whichever instrument is involved, the goal is the same: to make a profit, which is easier said than done! In the financial markets, millions of firms, individuals and even governments are simultaneously attempting to profit from trading.
However, with all these traders competing against one another, instrument prices tend to move at a rather random pace, making future prices very hard to predict, at least with conventional methods. Some markets are very volatile, in the sense that prices move a lot, bringing more profit opportunities but also increasing the risk…
…Which brings us to the enigma of risk! No matter what instrument is being traded, who is trading it or where the trade takes place, it all comes down to balancing the potential profit against the risk involved.
Big Data and Financial Trading… How do they correlate?
Good question. As financial markets are among the most dynamic entities in existence, trading methods must match that dynamism in order to generate profits consistently. Traders therefore keep developing methods that are temporarily profitable under the prevailing market conditions and constraints. But what happens when the conditions change? The methods ultimately show their flaws.
This leads us to the traders’ infamous enigma: is there a way to build and implement a system able to consistently calculate the optimal probability of executing profitable trades? It has become almost impossible for a trader to keep up with the overwhelming surge of incoming market data, especially with classic methods based on manual market monitoring.
This is where big data analytics comes to the rescue. Traders are switching from classic, manual trading strategies to what we now call quantitative trading. Exactly as the name states, it consists of trading strategies based on quantitative analysis, which in turn relies on mathematical computations and number crunching in the hope of identifying trading opportunities. Because quantitative trading works well for extremely large transactions, it has mostly been used by hedge funds and financial institutions, but that is changing: even individual investors are getting used to it!
For now, let’s break quantitative trading down. The very first thing a trader needs is data inputs; for a quantitative analyst, the most commonly used are price and volume. Next, the trader selects the technique they wish to use, such as high-frequency trading or statistical arbitrage, and couples it with quantitative tools like moving averages, stochastic indicators and oscillators.
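As a taste of what such a tool looks like, here is a minimal moving-average crossover rule, one of the simplest quantitative signals: go long when a fast average rises above a slow one, and exit on the reverse. The price series and window lengths are hypothetical choices for illustration:

```python
def sma(prices, window):
    """Simple moving average over a sliding window."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def crossover_signal(prices, fast=3, slow=5):
    """'buy' when the fast SMA rises above the slow SMA, 'sell' on the reverse."""
    f, s = sma(prices, fast), sma(prices, slow)
    f = f[len(f) - len(s):]          # align both series on the same dates
    prev_diff = f[-2] - s[-2]
    curr_diff = f[-1] - s[-1]
    if prev_diff <= 0 < curr_diff:
        return "buy"
    if prev_diff >= 0 > curr_diff:
        return "sell"
    return "hold"

# Hypothetical closes: a dip followed by a sharp recovery.
prices = [12, 12, 12, 12, 12, 11, 12, 14]
print(crossover_signal(prices))  # → buy
```

Production systems layer many such indicators together, but each one boils down to this kind of mechanical rule over price and volume inputs.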
But here’s where it gets more complicated. The trader creates a mathematical model, then develops a computer program that simulates the model against historical data. Depending on the results, the model may undergo further backtests and optimizations; once validated, it is implemented in real-time markets. This shows what quantitative trading does best: it uses every available analogy, pattern and trend to predict the outcome of a specific event, which in our case is the future price.
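The backtesting step above can be sketched in a few lines: replay the historical prices, ask the model for a signal at each step using only the data known at that point, and track the resulting portfolio value. Both the toy momentum model and the price series are hypothetical; real backtests also account for fees, slippage and position sizing:

```python
def backtest(prices, signal_fn, capital=1_000.0):
    """Replay historical prices, applying the model's signals day by day."""
    cash, units = capital, 0.0
    for i in range(1, len(prices)):
        signal = signal_fn(prices[:i + 1])   # only data known so far
        price = prices[i]
        if signal == "buy" and cash > 0:
            units, cash = cash / price, 0.0
        elif signal == "sell" and units > 0:
            cash, units = units * price, 0.0
    return cash + units * prices[-1]         # final portfolio value

def momentum(history):
    """Toy model: buy after an up day, sell after a down day."""
    return "buy" if history[-1] > history[-2] else "sell"

final = backtest([100, 101, 103, 102, 104], momentum)
print(round(final, 2))  # → 1009.9
```

If the simulated performance holds up across many historical periods, the model graduates to optimization and, eventually, live markets; if not, it goes back to the drawing board.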
Thanks to the characteristics of big data mentioned above, financial organizations and retail traders alike can finally extract a great deal of information to support their trading decisions.
Quantitative traders, rejoice! Thanks to the predictive capabilities big data now offers, historical data (prices) can easily be crunched with advanced machine learning and artificial intelligence techniques, then explored to identify patterns, allowing traders to move away from manual order punching toward the more creative work of estimation. This notably helps traders park their capital at the right time and place.
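One of the simplest pattern-identification ideas is nearest-neighbor matching: normalize the most recent window of prices into a shape, search history for the window most similar to it, and assume what followed then will follow now. This is a deliberately tiny, hypothetical sketch of the idea, not a method any institution is claimed to use:

```python
def predict_next_move(prices, window=3):
    """1-nearest-neighbor pattern matching on normalized price windows."""
    def shape(seg):
        # Convert to returns so only the pattern's shape matters, not its level.
        return tuple((b - a) / a for a, b in zip(seg, seg[1:]))

    target = shape(prices[-window:])
    best, best_dist = None, float("inf")
    for i in range(len(prices) - window):        # scan historical windows
        dist = sum((x - y) ** 2
                   for x, y in zip(shape(prices[i:i + window]), target))
        if dist < best_dist:
            best_dist, best = dist, i

    # Report the move that followed the most similar historical pattern.
    return "up" if prices[best + window] > prices[best + window - 1] else "down"

# Hypothetical oscillating series: the current [1, 2, 1] pattern was
# historically followed by a rise.
print(predict_next_move([1, 2, 1, 2, 1, 2, 1]))  # → up
```

Scale the same idea to petabytes of tick data and far richer similarity measures, and you have the flavor of what machine-learning-driven pattern mining does for quantitative desks.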
A simple proof that big data is extremely useful for automated trading is that it is widely used by the biggest financial institutions, such as J.P. Morgan, which are, for the record, mass-recruiting data scientists with a solid grasp of machine learning and big data analysis.
Going even further, some financial institutions have begun to use sentiment analysis, a form of data mining. Also known as opinion mining, it involves computationally identifying and categorizing opinions (buy at a specific timespan, sell at a specific timespan, indifferent, waiting for the market to move, etc.), usually expressed as text. The aim is to determine whether a specific population of traders’ attitude toward a specific financial instrument at a specific timespan is positive, negative or neutral. This technique can produce very interesting results when coupled with the predictive models described above.
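In its most basic form, opinion mining can be done with a keyword lexicon: count bullish versus bearish words per message, then aggregate over the population. The word lists and messages below are entirely hypothetical; real systems learn these weights from labeled data rather than hard-coding them:

```python
import re

# Hypothetical lexicon; a real system would learn these from labeled data.
BULLISH = {"buy", "long", "bullish", "breakout", "undervalued"}
BEARISH = {"sell", "short", "bearish", "crash", "overvalued"}

def sentiment(text):
    """Classify a single message as positive, negative or neutral."""
    words = re.findall(r"[a-z]+", text.lower())
    score = sum(w in BULLISH for w in words) - sum(w in BEARISH for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def crowd_mood(messages):
    """Aggregate mood of a population of traders over a timespan."""
    scores = {"positive": 1, "neutral": 0, "negative": -1}
    total = sum(scores[sentiment(m)] for m in messages)
    return "positive" if total > 0 else "negative" if total < 0 else "neutral"

# Hypothetical trader chatter about one instrument.
posts = ["Looks undervalued, time to buy",
         "Staying long on this breakout",
         "I would sell before the crash"]
print(crowd_mood(posts))  # → positive
```

Feeding a signal like this alongside price-based predictive models is exactly the coupling the paragraph above describes: the crowd’s mood becomes one more input to the trading decision.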
In a nutshell
Big data is starting to prove its worth in quantitative and high-frequency trading, whether practiced by financial firms or private investors. As firms receive petabytes of live tick data from electronic transactions and feed them to dedicated servers, that data becomes the historical input from which quantitative models, algorithms and the resulting trade decisions are built.
As mouth-watering as it may sound, the approach has its imperfections. Big data and machine learning have drastically reduced the margin of error caused by human decisions and made it possible to trade more accurately, dramatically changing the way transactions are executed. However, traders need to understand that not every market scenario can be predicted, or even recreated.
You could have all the possible data sets, coupled with the best patterns big data can generate, and use the best quantitative model in existence, yet still end up with a losing trade! This can be explained by the incompleteness of big data patterns: they do not capture sudden market surges caused by human errors and/or false rumors. Nevertheless, it won’t be long before big data becomes a mainstream necessity for financial institutions… Or has it already?