Alex Talei of Price Markets Prime Brokerage UK shares a white paper below regarding the events of the 15th of January caused by the Swiss National Bank (SNB). The drastic move is making people in the FX industry think twice about the way they operate. Whether a Broker/Dealer, HFT or individual trader, everyone has lessons to learn, in a situation very much reminiscent of May 6th, 2010, when the Dow Jones Industrial Average dropped 560 points in 4 minutes.
The firm would also like to inform everybody that its brokerage model proved resilient even in times such as January 15th, stating: “We ended the day and the week at a profit group-wide with no changes to our business model or credit line extensions in the foreseeable future.”
Alex outlined some of the issues related to Retail Broker/Dealers (B/Ds) in a White Paper prepared for clients. Some of his findings, shared below, relate to the industry in general. Note that this piece is aimed at Retail Broker/Dealers (B/Ds).
The Aftermath of the CHF Spike: A Prime Broker's Perspective.
The Event.
The SNB surprised many market participants by removing the cap on EURCHF and cutting interest rates by 50bp to -0.75%.
These two moves left most price-makers unable to quantify a fair price for CHF in their models. It started with most liquidity being withdrawn, which then led to circuit-breakers being hit on many internal models, causing them to remove liquidity as well due to the higher-than-“normal” spike in CHF. Essentially, the event can be described as a very quick avalanche of sell-side withdrawal from market participation.
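To make that mechanism concrete, below is a minimal sketch of the kind of internal circuit-breaker that pulls quotes once a move exceeds a multiple of recent volatility. The class name, window size and threshold are illustrative assumptions, not any particular firm's logic.

```python
# Minimal circuit-breaker sketch: halt quoting when a move exceeds a
# multiple of "normal" volatility. All names and thresholds illustrative.
from collections import deque

class CircuitBreaker:
    def __init__(self, window: int = 100, max_sigma: float = 8.0):
        self.returns = deque(maxlen=window)  # rolling sample of mid-price returns
        self.max_sigma = max_sigma           # trip beyond this many sigmas
        self.last_mid = None
        self.tripped = False

    def on_tick(self, mid: float) -> bool:
        """Returns True if quoting should be halted for this instrument."""
        if self.last_mid is not None and self.last_mid > 0:
            r = (mid - self.last_mid) / self.last_mid
            n = len(self.returns)
            if n >= 20:  # need a minimal sample before judging "normal"
                mean = sum(self.returns) / n
                var = sum((x - mean) ** 2 for x in self.returns) / n
                sigma = var ** 0.5
                if sigma > 0 and abs(r - mean) > self.max_sigma * sigma:
                    self.tripped = True  # withdraw liquidity: stop publishing
            self.returns.append(r)
        self.last_mid = mid
        return self.tripped

breaker = CircuitBreaker()
for mid in [1.2010, 1.2009, 1.2011, 1.2010, 1.2008] * 10 + [1.05]:
    halted = breaker.on_tick(mid)
print("quoting halted:", halted)  # the EURCHF-sized gap trips the breaker
```

When enough participants run logic of this shape, one large print cascades: each breaker that trips removes more liquidity, which makes the next print larger still.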
Flash-crash.
Our finding is that the intermarket effect was much more dampened than during the so-called “flash-crash” of May 6th, for a number of reasons which we have detailed briefly below:
- Interest in CHF was much lower than interest in EUR on May 6th.
- The CHF spike occurred early in the European session, which prevented immediate spillover onto US futures/equities markets.
- The issue started in the cash markets at a time when US futures markets were closed (our finding is that the May 6th events also started in the cash markets, contrary to popular belief). This prevented more sell-side participants from reacting, limiting the event mainly to FX participants.
- The correlation between CHF and other major assets or markets is lower than that of EUR.
- May 6th was caused by “unknowns”. Jan 15th was caused by “knowns”, but with outcomes that proved too high-risk to price immediately.
- Liquidity was taken up mainly by buy-side margin calls, stop-outs and other emergency flow requirements.
- Sell-side participants with low inventory initially provided liquidity only at bargain rates.
Conclusion: Intermarket relations are very difficult to manage on an automated basis for small- and mid-sized market-making broker/dealers (B/Ds). The expenditure on technology to do so will normally prove to be a bad investment. For non-client-serving market makers (HFTs), not much of a surprise here, so I'll cut this topic short.
Technology.
Note that we are not going to discuss how to implement logic here, just the infrastructure behind it.
At first sight, the issue at hand was not mainly technology-related for those who were stung hardest. However, the architecture implemented, covering market data/execution logic and the redistribution of liquidity into internal retail FX ecosystems, proved to be quite crucial. We will focus here on the activities of B/Ds.
So let’s start with the “Bridge”.
Those Broker/Dealers that lost a great deal of money mainly did so because they pass liquidity through from one or more sources onto their own or third-party retail technology using a simple bridge.
The process is generally as follows:
Liquidity (single or aggregated) -> Bridge -> Retail Front-end.
Issues:
- A Bridge is generally a simple piece of software enabling communication between liquidity providers and front-ends via FIX.
- Retail front-ends are generally not built to deal with risk on a significant level.
From our experience of building bridges: even if we custom-build our software, cut our execution logic down to 2 microseconds and strip down the hardware to process it in 3-4 microseconds, we are still light years ahead of the processing done by retail front-ends (data overflow, slow but simple credit checks, etc.). This creates a bottleneck which forces us to queue so that the front-end can process our flow.
If we think, “since there is a bottleneck, why should I cut latency further?”, we may face trouble down the line. Our idea for bottlenecks in these scenarios is simple: implement more logic while you “wait”. Some say it's stupid, but we believe it lets us improve performance rather than accepting the natural fact of being only as strong as our weakest node. The simple idea is to use any latency advantage we have to improve our own position, rather than “saving” on resources we are already “paying” for.
We already know what is going on in our own ecosystem, as well as externally where we can cover our risks; hence the importance of being able to process some logic while we are forced to “wait”.
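A minimal sketch of the idea, assuming a front-end that takes roughly 10 milliseconds per update (the function names, timings and positions are all illustrative): rather than idling while the slow node digests our message, we spend the dead time on useful work such as recomputing exposure.

```python
# Sketch of "implement more logic while you wait": do useful work during
# the dead time imposed by a slow retail front-end. Illustrative only.
import concurrent.futures
import time

def push_to_front_end(update: dict) -> str:
    time.sleep(0.010)  # the bottleneck: front-end takes ~10 ms per update
    return f"ack {update['id']}"

def recompute_exposure(positions: dict) -> float:
    # Useful work done during the wait, e.g. refreshing net exposure
    return sum(positions.values())

positions = {"EURCHF": -2_500_000, "USDCHF": 1_000_000}

with concurrent.futures.ThreadPoolExecutor() as pool:
    pending = pool.submit(push_to_front_end, {"id": 1, "px": 1.2010})
    while not pending.done():  # the "wait" imposed by the weakest node
        exposure = recompute_exposure(positions)
    print(pending.result(), "| net CHF exposure:", exposure)
```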
Another issue is that nearly all B/Ds use FIX to retrieve market data.
Most FX venues can provide data in more “economical” formats such as ITCH or similar binary protocols, provided the rest of the infrastructure can handle the bandwidth and other costs involved. By retrieving market data in such binary protocols, you decrease the size of messaging between your machines and those of the venue, enabling you to process data unthrottled and with access to full market data in real time. This lets you discover risks faster and access liquidity more efficiently.
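To illustrate the difference in message size, here is a toy comparison of a FIX-style text quote against a compact binary encoding. The binary field layout is invented for the example and does not reflect any venue's actual wire format.

```python
# Toy comparison: FIX-style tag=value text vs a compact binary encoding.
# The binary layout here is invented for illustration, not a real format.
import struct

# A FIX-like market data snippet: ASCII tag=value pairs, SOH-delimited
fix_msg = "35=W\x0155=EUR/CHF\x01270=1.20100\x01271=1000000\x01".encode()

# The same content packed as binary: 8-byte symbol, price in 1e-5 units, qty
BINARY_FMT = "<8sqI"  # little-endian: symbol, price (int64), qty (uint32)
binary_msg = struct.pack(BINARY_FMT, b"EURCHF\x00\x00", 120100, 1_000_000)

print(len(fix_msg), "bytes as FIX vs", len(binary_msg), "bytes binary")

symbol, price_e5, qty = struct.unpack(BINARY_FMT, binary_msg)
print(symbol.rstrip(b"\x00").decode(), price_e5 / 1e5, qty)
```

Halving or better the bytes per message compounds quickly at full-depth update rates, which is where the “unthrottled” processing above comes from.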
For some retail B/Ds that provide a larger product offering, we see an additional risk, as they process market data through third parties. Once again, it may be a budget or know-how issue, but we too often see risk management being done on filtered prices (to us this means prices NOT sourced from the absolute source, with absolute quality and at absolute speed).
The more products you price, the more data you need to collect, and data storage can hit extreme levels very quickly in such scenarios. Some therefore take an alternative route to avoid spending money on processing and storing vast amounts of “clean” market data: they prefer cheaper, filtered indicative prices from which to derive their own price-making and, further to the inefficiencies mentioned above, with which to manage internal risk.
We will raise the Liquidity issue below and then draw a joint Technology and Liquidity conclusion.
Liquidity.
In the end, what generates a risk like that of Jan 15th in CHF is the ability, or inability, to source liquidity within a level that forms part of our risk management formula.
Figuring out what can happen in the future is generally done by going back and looking at what has happened; Jan 15th is already behind us, and it has provided a lot of data to use for future research.
Before we deal with the Credit (Leverage) issue, we believe there is a greater issue in managing liquidity risks.
One issue we keep seeing is the non-usage of “big-clip” venues for risk management purposes (generally price taking), as the general consensus is that they are not suitable for retail price distribution (price making). In this specific case, there is one single big-clip venue where CHF liquidity is sourced; everything on any other venue we see internally as “cash derivative pricing”.
This method is rarely applied by retail B/Ds, because the back-end technology (normally a retail bridge) can only process orders directly across the same market data provider and order execution venue with simple logic. It has no separate set of logic for risk management, and even where the ability exists, the second venue is used merely as a backup: the system waits for other feeds to drop before attempting to access liquidity there.
Let's look at the workflow here: there is a reason the first feed dropped, and if you know it, most other participants know it too. But if you execute your risk management orders as soon as the risk event is discovered (before wasting time on other venues failing), you will generally be able to absorb the liquidity you need, earlier.
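A sketch of that routing decision follows, with hypothetical venue names and a stubbed send() in place of real connectivity: on a known risk event, going straight to the big-clip venue avoids the timeouts spent walking dead secondary feeds.

```python
# Routing sketch: on a risk event, hedge straight at the primary
# ("big-clip") venue rather than walking failing secondaries first.
# Venue names and the send() stub are hypothetical.
import time

VENUES = ["secondary_ecn_1", "secondary_ecn_2", "primary_big_clip"]

def send(venue: str, order: dict) -> bool:
    # Stub: in the event, secondary venues had already pulled liquidity
    return venue == "primary_big_clip"

def hedge_waterfall(order: dict) -> str:
    """Naive approach: try venues in list order, waiting on each failure."""
    for venue in VENUES:
        if send(venue, order):
            return venue
        time.sleep(0.05)  # time lost while a dead feed times out
    raise RuntimeError("unfilled")

def hedge_direct(order: dict) -> str:
    """On a known risk event, go straight to the venue most likely to fill."""
    if send("primary_big_clip", order):
        return "primary_big_clip"
    raise RuntimeError("unfilled")

order = {"side": "SELL", "symbol": "EURCHF", "qty": 5_000_000}
t0 = time.perf_counter(); hedge_waterfall(order); t1 = time.perf_counter()
hedge_direct(order); t2 = time.perf_counter()
print(f"waterfall {t1 - t0:.3f}s vs direct {t2 - t1:.6f}s")
```

In a market where everyone is reaching for the same shrinking pool, the tens of milliseconds saved here decide who absorbs the remaining liquidity.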
There are many other practices that we generally see implemented in some other industry sectors but not in the retail price-making sector.
Conclusion:
- When you build a pricing engine or “retail matching engine”, an important aspect is the ability to implement smart commercial logic (whether to make you money or to prevent you from losing it).
- A matching engine should target on average 0.8 microseconds. A pricing engine, which is essentially what a “retail matching engine” is for some B/Ds, and which in our view was the greatest inefficiency for B/Ds, normally turns 2 microseconds and above. But once the credit check is done through a dark layer of software components plugged into each other, it is not unusual for us to see some B/Ds turning at above 10 milliseconds, even at tech-savvy firms. Such numbers are not sustainable for this type of ecosystem (see the sketch after this list).
- Most retail matching engines have the ability to source healthy amounts of liquidity (limited mainly by the amount of credit, location, etc.) within sub-1-millisecond ranges. The rest of the internal workflow has to be done by the time the required liquidity can be accessed.
- Although post-event liquidity was scarce, an overall strategy should be implemented to improve risk management order execution logic.
- There has to remain a healthy balance between the quality of market data and the willingness to budget for it. It is important that the data is continuously used for research.
- Enable more order execution gateways, not just for the simple reason of price deriving or offsetting, but also to fulfil other internal workflow requirements, mainly risk management.
- Liquidity aggregation is pointless if there is no clear rule-based purpose and no in-depth understanding of matching engine processes.
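A back-of-envelope check of the latency budget implied by the figures above (0.8 microseconds matching, roughly 2 microseconds pricing, 10 milliseconds and above for layered credit checks, against a sub-1-millisecond window to reach liquidity):

```python
# Latency budget check using the stage figures cited in the text.
STAGES_US = {
    "matching_engine": 0.8,            # microseconds, per the text
    "pricing_engine": 2.0,
    "layered_credit_check": 10_000.0,  # 10 ms when done via stacked software
}
BUDGET_US = 1_000.0  # sub-1 millisecond window to access required liquidity

total = sum(STAGES_US.values())
print(f"total workflow: {total:.1f} us vs budget {BUDGET_US:.0f} us")
for stage, t in STAGES_US.items():
    print(f"  {stage}: {t} us ({t / total:.1%} of the path)")
print("within budget:", total <= BUDGET_US)  # False: the credit layer alone blows it
```

The point is not the exact numbers but the proportions: the layered credit check dominates the path by four orders of magnitude, so that is where the architectural work belongs.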
Credit.
The main issue for the retail B/D community in this instance was Credit (or Leverage).
As the retail B/D world joins the “risk-offset game” (essentially passing client flow through onto other venues or single LPs), we believe credit procedures should match not only their commercial expectations but also the risk factors they create for the rest of the infrastructure. We are not going to discuss capital requirements here.
There are some general ways retail B/Ds offset risk:
- Passing on all risk to a clearer directly (no middleware).
- Passing on risk after internal processing (middleware implemented).
- Internalising and offsetting flow based on proprietary logic (no end-to-end network).
Passing on all risk to a clearer directly (no middleware).
This is the model implemented by Price Markets Prime Brokerage (for the sake of disclosure). The greatest risk here is generally counterparty risk: the ability of counterparties to honor their commitments to you and your clients. There is no middleware in place, no extra layer of software or node in the network, no filtered logic, etc. Generally, the prudent way to do this is to spread across custodians, clearers, venues and ISVs in a purposeful manner to mitigate risk. The greatest issue is the commercial aspect, so it requires a very specific business model.
Passing on risk after internal processing (middleware implemented).
This is the most popular architecture implemented by retail B/Ds where there is no manual processing of order flow. It can carry all of the risks mentioned in this article, depending on the architecture implemented, since there is at least one additional layer or node in the setup: a front-end, credit check, risk management, order execution gateway, etc. Generally, we see overall performance being reduced to that of the least efficient node in the architecture unless logic is implemented to manage it.
Internalising and offsetting flow based on proprietary logic (no end-to-end network).
This is “classic” FX. Order flow is built up and offset very rarely, or only when regulatory requirements or resource limitations are reached. Normally the only logic implemented is the decision-making process of one or more individuals. No real node is added, as there is no functioning network for order execution; but there are extra nodes in market data processing and, in most cases we see, no logic at all implemented for risk management purposes and extremely simple logic for price-making purposes. Pricing logic is as simple as Derive -> Arithmetic Filter -> Publish.
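As a minimal sketch, that Derive -> Arithmetic Filter -> Publish pipeline might look like the following; the best-bid/offer derivation, median smoothing and fixed markup are illustrative choices, not a description of any specific dealer's logic.

```python
# The "classic" pipeline named above, Derive -> Arithmetic Filter -> Publish,
# as a minimal sketch. Markup and smoothing choices are illustrative.
from statistics import median

def derive(lp_quotes: list) -> tuple:
    """Derive a raw price from LP (bid, ask) quotes: best bid / best offer."""
    bid = max(q[0] for q in lp_quotes)
    ask = min(q[1] for q in lp_quotes)
    return bid, ask

def arithmetic_filter(bid: float, ask: float, history: list,
                      markup: float = 0.00005) -> tuple:
    """Simple smoothing plus a fixed markup; note: no risk logic at all."""
    mid = median(history[-5:] + [(bid + ask) / 2])  # crude spike damping
    half_spread = (ask - bid) / 2 + markup
    return mid - half_spread, mid + half_spread

def publish(bid: float, ask: float) -> None:
    print(f"retail quote {bid:.5f} / {ask:.5f}")

lp_quotes = [(1.20098, 1.20104), (1.20100, 1.20103), (1.20097, 1.20105)]
history = [1.20101, 1.20100, 1.20102, 1.20101, 1.20100]
publish(*arithmetic_filter(*derive(lp_quotes), history))
```

Note what is absent: no exposure check, no circuit-breaker, no hedge trigger. On a day like Jan 15th, a pipeline of this shape keeps publishing until a human intervenes.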
We don't believe there is one preferred model for all retail B/Ds to implement; it mainly has to match the commercial model and make sense given the risk capital utilised, how the business is positioned and the stage of its growth. Commercially, however, the decision-making process generally begins once the spread on “income per million” lines starts narrowing.
It's a difficult change from one business model to another, especially if it is done just to keep up with new commercial conditions affecting the marketplace rather than by growing into it organically.
Conclusion.
We believe these occurrences will take place again; whether in a more “populated” or liquid market or not does not matter much.
But what would be the consequences if this event took place in EUR/USD or USD/JPY, or in both markets at the same time? What potential event could cause it?
As these questions are being worked through and we implement ways to protect ourselves and our clients, all we can know is that we can't rule it out.
We need to look at all aspects of our architecture and the way we conduct our operations to ensure they fit our business model. Those willing to take the step from a “classical FX Dealer” business model to a more technical one have to remember that it is not as simple as plugging in a piece of software somewhere and running it on auto-pilot. It is a more demanding task, and for it to work for you, operational requirements must be met with quality and built in a competitive manner. The reality is that this industry is in a state of constant competition, and there will always be winners and losers.
So the short conclusion is:
- Go through the entire architecture, from the ground up.
- Spot the inefficiencies.
- Run the numbers.
- Fix it.