Alexander Sokol, founder of software company CompatibL, has seen plenty of risk modelling changes during his 25 years in the business. In the 1990s, when US rates spent most of the decade between 3% and 6%, the common model used a lognormal distribution – excluding the possibility of rates falling below zero seemed fair enough at the time. As rates fell, lognormal was ditched in favour of normal and square root distributions.
In each period – as Sokol now recalls it – modelling practices shifted naturally from one approach to another, along with prevailing market conditions. The past couple of years have felt different, he says, with no one distribution, model or philosophy fully able to address the spectrum of possibilities.
This new market environment and unprecedented volatility has really stressed traditional models to the limit
Alexander Sokol, CompatibL
“Traditional models look for steady state and ‘business as usual’,” he says. “But this new market environment and unprecedented volatility has really stressed traditional models to the limit.”
After a decade of relative calm, the market risk landscape changed dramatically in 2022. In the developed world, inflation hit a 40-year high; central banks began the long process of ending quantitative easing and raising rates, some moving more rapidly than others; stocks and bonds sold off in tandem; and Russia’s invasion of Ukraine was a long-signalled horror that many believed would never come to pass.
The resulting volatility and uncertainty have put a wide variety of pricing, valuation and risk models to the test, prompting many users to question established tools and practices. Parameters are being reset, reporting is being revamped, and flexibility – in the form of scenario analysis and stress-testing – has become ever more important.
For vendors, it has been a stressful time – but also a creative one, as the more than 140 submissions to this year’s Markets Technology Awards made clear (the full list of winners, plus the judging panel and awards methodology, can be found below).
The great re-examination
Sean Deutsch, director of risk strategy at FactSet, has “absolutely, 100%” seen the firm’s buy-side clients looking afresh at how they measure risk over the past year – continuing and intensifying scrutiny that began during the pandemic.
“As the market is changing, clients are questioning figures, saying things like ‘Hey, we’re seeing the number of breaches of our model exceed what we expected’,” says Deutsch. “I think clients are now doing a more holistic review of the information they pull from our platform, whether it’s exposures, leverage, or how those numbers are changing over time.”
In some contexts, chopping and changing can be tricky – audit and compliance functions may frown on it – but Deutsch says some users are now almost constantly “tweaking and adjusting” a wide range of parameters in their models. As one common example, in FactSet’s default multi-asset class market risk models, Deutsch has seen clients re-examine the time horizons for the data they use and recalibrate the decay factor – the weighting of historical data.
“Clients may say ‘I’m really a long-term investor. That [model is] way too reactive for me, I want to use a decay factor of one, I want to have a five-year history, and for it to be less sensitive’,” he says. “Then I have clients who are saying, ‘Whoa, I see too many breaches, I need a higher decay factor and I want to use the last three weeks of data’.”
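The decay factor these clients are recalibrating can be illustrated with a minimal sketch of exponentially weighted volatility estimation. This is not FactSet's actual model – just a generic exponentially weighted moving average, in which a decay factor near one weights all history almost equally (the long-term investor's preference), while a lower decay factor leans heavily on the most recent data:

```python
import numpy as np

def ewma_volatility(returns, decay=0.94):
    """Exponentially weighted volatility estimate.

    A decay factor close to 1 weights all observations near-equally
    (a slow, long-horizon model); a lower decay factor reacts faster
    to recent moves, at the cost of more frequent recalibration.
    """
    returns = np.asarray(returns, dtype=float)
    n = len(returns)
    # Older observations receive geometrically smaller weights
    weights = decay ** np.arange(n - 1, -1, -1)
    weights /= weights.sum()
    return float(np.sqrt(np.sum(weights * returns ** 2)))

rets = [0.01, -0.02, 0.015, -0.03, 0.02]
reactive = ewma_volatility(rets, decay=0.94)   # short-memory calibration
long_run = ewma_volatility(rets, decay=1.0)    # equal-weighted history
```

With a decay factor of exactly one, the estimate collapses to the equal-weighted standard calculation over the chosen history window.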
Inflation has been a predictable focus for the insurance clients of Conning. The firm releases a quarterly update to its model parameters reflecting the latest market conditions, including inflation. But recent volatility means some clients are now revisiting their inflation assumptions on a near-monthly basis.
I think clients are now doing a more holistic review of the information they pull from our platform
Sean Deutsch, FactSet
They may also be looking for more specificity. While many insurers were previously happy to model the standard US benchmark – the consumer price index – the past year has seen some clients looking at inflation by underlying sector. The goal is for insurers with a large portfolio of healthcare or motor vehicle assets – for example – to more accurately model risks for their own business.
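The idea of sector-level inflation can be sketched in a few lines. The function and numbers below are hypothetical – an exposure-weighted inflation rate built from assumed sector-level figures, rather than a single headline CPI print:

```python
def portfolio_inflation(sector_inflation, exposures):
    """Exposure-weighted inflation rate for a book of assets,
    rather than a single headline CPI figure."""
    total = sum(exposures.values())
    return sum(sector_inflation[s] * w / total for s, w in exposures.items())

# Illustrative sector inflation rates and a motor-heavy insurance book
rates = {"healthcare": 0.045, "motor": 0.12, "housing": 0.07}
book = {"healthcare": 10.0, "motor": 60.0, "housing": 30.0}
book_inflation = portfolio_inflation(rates, book)
```

An insurer whose liabilities skew towards motor claims would, under these assumed numbers, see an effective inflation rate well above headline CPI.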
Market dynamics and the transition to risk-free rates have also pushed clients to re-examine which models are important to them, says Satyam Kancharla, chief product officer at Numerix.
“It means a measure that might not have been material a year or so ago is suddenly on the radar,” he says.
Kancharla points to funding valuation adjustment as an example – a component of derivatives pricing that reflects the costs and benefits associated with uncollateralised and partly collateralised trades. If a market-maker is in-the-money on a trade with a corporate client that does not post collateral, for instance, then it may need to fund collateral to post to its counterparty in any offsetting hedge. Since rising to prominence during the last decade, FVA has been a concern primarily for big dealers, but it becomes a more material issue as interest rates rise – making funding more expensive – and markets become more volatile, increasing the size of collateral calls.
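The interest rate sensitivity Kancharla describes can be seen in a back-of-the-envelope calculation. The sketch below is a first-order approximation only – real FVA models simulate exposure paths – with hypothetical exposure, spread and discount numbers chosen to show how wider funding spreads inflate the charge:

```python
def simple_fva(expected_exposure, funding_spreads, discount_factors, dt):
    """Back-of-the-envelope FVA: the funding spread charged on the
    expected uncollateralised exposure in each future period,
    discounted back to today."""
    return sum(ee * fs * df * dt
               for ee, fs, df in zip(expected_exposure, funding_spreads, discount_factors))

# Hypothetical five-year annual exposure profile on an uncollateralised trade
exposure = [10e6, 12e6, 11e6, 9e6, 6e6]

# Low-rate regime: 20bp funding spread
fva_low = simple_fva(exposure, [0.002] * 5, [0.99, 0.97, 0.95, 0.93, 0.91], dt=1.0)

# High-rate regime: 80bp funding spread, lower discount factors
fva_high = simple_fva(exposure, [0.008] * 5, [0.96, 0.92, 0.88, 0.84, 0.80], dt=1.0)
```

Even though higher rates also mean heavier discounting, the widening of the funding spread dominates, and the adjustment grows several-fold – which is why a number that was immaterial a year ago is suddenly on the radar.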
This has made it much more likely that clients will take action based on the numbers the model spits out, for instance by revisiting hedges or attempting to renegotiate the documentation that governs collateral posting – one of last year’s trends was for big derivatives users to seek greater latitude in lists of eligible collateral.
Kancharla says Numerix has had to do a lot of work helping clients to both validate and adapt these models to get a better understanding of the numbers.
What if …?
In other cases, models are not the answer – particularly if senior management, board members or risk committees are worrying about a specific event, or a market move of a specific magnitude.
Matt Lightwood, director of risk solutions at Conning, says stress testing is “much more ad hoc at the moment … people are being asked by their boards, what would happen in a particular scenario around inflation? And then they want to answer that question very quickly”.
Some Conning clients are turning to increasingly complex stress tests, using a combination of deterministic and stochastic approaches. For instance, they may start with a particular shift in the yield curve, and then leave the model to generate the behaviour of other variables – such as equity returns or foreign exchange – contingent on the change in rates.
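A minimal sketch of this hybrid approach might look as follows. This is not Conning's model: the rate sensitivity (beta), residual volatility and shock size are all assumed, purely to show a deterministic curve shift driving a conditional stochastic distribution for another variable:

```python
import numpy as np

def hybrid_stress(rate_shift, beta=-5.0, resid_vol=0.10, n=10_000, seed=42):
    """Hybrid stress test sketch: a deterministic parallel shift in the
    yield curve, with equity returns generated stochastically conditional
    on that shift via an assumed rate sensitivity (beta) plus noise.
    All parameter values here are illustrative."""
    rng = np.random.default_rng(seed)
    return beta * rate_shift + resid_vol * rng.standard_normal(n)

# A +200bp deterministic rate shock; equities respond stochastically
equity_returns = hybrid_stress(rate_shift=0.02)
```

Under these assumptions, a 200bp shock centres the simulated equity distribution around a 10% fall, while the stochastic term preserves a realistic spread of outcomes around it.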
More clients are leveraging sophisticated PnL-explain tools that relate daily PnL changes to market risk
Rohan Douglas, Quantifi
At Bloomberg, market shocks have also pushed clients to seek far more sophisticated custom stress tests of their portfolios, much of it driven by the concerns of senior managers and investors. For example, in January and February 2022, clients were simulating the potential impact of Russia invading Ukraine on market liquidity, while in June they were simulating the impact of potential interest rate hikes, inflation – and of broader recession scenarios.
Senior management isn’t just asking for pictures to be painted of specific events. They also want to be spoken to in their own language, and the language of investors. In other words, they want the worlds of profit-and-loss (PnL) and risk to be tied together – so PnL developments are explained in terms of risk, and vice versa.
This trend has taken off in the past 12 months, says Rohan Douglas, chief executive of Quantifi. Used well, it can help demonstrate the value of risk management.
“More clients are leveraging sophisticated PnL-explain tools that relate daily PnL changes to market risk,” he says. “These methodologies provide both explanatory power as well as a strong validation of risk management accuracy.”
For vendors, meeting some of these demands is client- and case-specific, but there are also broader responses.
One common thread is the attempt to give users more control – to make parameters more customisable, to create their own visualisations and reports, or even to add custom applications and workflows using no- or low-code systems.
Quantifi’s Douglas recalls one client whose goal was to allow its traders and risk managers to create their own portfolio-level scenarios as required, via a simple programming interface. As a result of this conversation – and others – Quantifi spent the past year developing a data science platform that allows clients to quickly whip up bespoke portfolio-level analysis in Python and other languages. The aim was to offer what Douglas calls “extreme flexibility”.
It is a similar story at Numerix, where Kancharla recalls an episode in which one client had to quickly introduce bespoke stress tests in an attempt to understand their near-term liquidity and funding risks – looking 30 days ahead.
“I think a significant element of our technology that we’re really proud of is that the system has been able to morph and adapt as things change,” he says, adding that having a flexible programming interface as well as being on the cloud enabled Numerix to get the job done in a relatively painless way.
CompatibL’s Sokol believes now is the time for a more radical change to the way firms analyse risks – taking the power laws and other simple maths that constrain conventional models, and replacing them with machine learning.
He and his team have spent the past three and a half years developing a new piece of software that uses machine learning to create yield curve models using a wider array of parameters. Instead of leaning on traditional mathematical modelling, the tool uses ‘variational auto-encoders’ (VAEs), a type of neural network that has been applied in the field of image recognition and manipulation – for example, to recognise and change human smiles. CompatibL is teaching its VAEs to optimally represent a different kind of smile – volatility smiles and surfaces, and yield curves. CompatibL claims “dramatically lower error rates” for the new approach, when compared to conventional models.
“I think there’s a confluence of factors that make now the right time for machine learning to really make inroads in traditional trading and risk management,” says Sokol, pointing to last year’s market turmoil as well as the adoption of cloud technology and the widespread use of AI across all aspects of society. He says machine learning can help “cut out some very cumbersome aspects of model construction” and integrate more factors to create more accurate models.
A few years ago, the idea was greeted with blank stares, Sokol says, but clients are now much more open to the idea of innovating their risk management models.
“There are some very famous cases where some of the popular models failed us. Classical models have a lot of shortcomings,” he says.
Risk Markets Technology Awards 2023: The winners
In total, there are 25 awards in this year’s MTAs. Entries were invited for a further nine categories, but these attracted either too few entries or no compelling entrant.
Counterparty risk product of the year: Quantifi
Market liquidity risk product of the year: Bloomberg
Market risk management product of the year: SS&C Algorithmics
Best support for risk-free rates: Bloomberg
Best UMR: Adenza
Execution management system provider of the year: FactSet – Portware EMS
FRTB product of the year: Opensee
Regulatory reporting product of the year: Droit
XVA calculation product of the year: Numerix
Pricing and analytics: fixed income, currencies, credit: Quantifi
Pricing and analytics: structured products/cross-asset: Bloomberg
Trading systems: fixed income, currencies, credit: Murex
Electronic trading support product of the year: TransFICC
Best execution product of the year: Tradefeedr
Buy-side market risk management product of the year: FactSet
Market scenario generator of the year: Conning
DATA AND OTHER SPECIALIST CATEGORIES:
Best vendor for system support and implementation: RiskVal Financial Solutions
Risk data repository and data management product of the year: Moody’s Analytics
Central counterparty clearing support product of the year: Adenza
Collateral management and optimisation product of the year: CloudMargin
Best use of cloud: Adaptive Financial Consulting
Best use of machine learning/AI: Riskfuel Analytics
Best modelling innovation: CompatibL
Best use of natural language processing: Mirato
Best user interface innovation: OpenFin
Methodology and judges
Technology vendors were invited to pitch their products and services in 34 categories covering traded risk, front-office regulation, pricing and trading, buy-side technology, back office, data and other specialist areas. Candidates were required to answer a set of questions within a maximum word count about how their technology met industry needs, its differentiating factors and recent developments. More than 140 entries were received.
A panel of nine industry experts and Risk.net editorial staff reviewed the shortlisted entries, with judges recusing themselves from categories or entries where they had a conflict of interest or no direct experience. The judges individually scored and commented on the shortlisted entrants, before meeting in November to review the scores and, after discussion, make final decisions on the winners.
In all, 25 awards were granted this year. Awards were not granted if a category had not attracted enough entrants or if the judging panel was not convinced by any of the pitches.
This year’s judging panel consisted of:
Sid Dash, chief researcher, Chartis Research Services
Sudipto De, head of investment risk, Principal Asset Management
Jenny Knott, founder, Fintech Strategic Advisors
Ray O’Brien, advisory board, Quantexa
Becky Pritchard, contributor, Risk.net
Peter Quell, head of portfolio analytics for market and credit risk, DZ Bank
Navin Sharma, chief risk officer, Hartford Investment Management Company
Edward Wicks, head of trading, Legal & General Investment Management
Duncan Wood, global editorial director, Risk.net