One of the factors that resulted in the financial crisis of 2007–8 was that some financial firms took on too much risk and made risky investments. Their approach to risk management had, it seems, been far from normal.
And their understanding of financial risk hadn’t been normal in a statistical sense, either. The ‘normal distribution’ is what appears when we plot graphs of people’s weights, or stock price movements, or anything else where most measurements are clustered around the middle, with fewer at the extremes. It’s a bell-shaped curve with ‘tails’ at either side, and models the data – fits its general pattern, enabling predictions – so well and so often that it’s well known to statisticians, scientists and bankers alike.
“For years, financial risk analysts always assumed that the distribution of financial returns was the normal distribution or a symmetric distribution,” explained Brunel statistician Prof Keming Yu. “But when they were looking at what had gone wrong in the financial crisis, they realised they hadn’t been modelling the tails well enough.”
The data in these tails represented the biggest stock price movements: the tempting region of greatest risk, where people could make a lot of money – and sometimes suffer big losses. But, it turns out, the normal distribution didn’t fit the data well enough in these outer regions. The tails were a different shape to normal – a little fatter – and this meant the analysts’ decisions weren’t as well informed as they believed.
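To see how much difference a fatter tail makes, here is an illustrative sketch comparing the chance of an extreme ‘4-sigma’ move under a normal distribution with that under a fat-tailed Student-t distribution – a common stand-in for fat-tailed returns in textbooks, not the model the analysts actually used:

```python
# Tail probability of a 4-standard-deviation move: normal vs Student-t
# with 3 degrees of freedom (an illustrative fat-tailed distribution).
import math

x = 4.0  # a four-standard-deviation move

# Normal tail probability P(X > 4) via the complementary error function
p_normal = 0.5 * math.erfc(x / math.sqrt(2))

# Student-t (3 df) tail probability, using its closed-form CDF
p_t3 = 0.5 - (x / (math.sqrt(3) * (1 + x**2 / 3))
              + math.atan(x / math.sqrt(3))) / math.pi

print(f"P(move > 4 sigma), normal: {p_normal:.2e}")
print(f"P(move > 4 sigma), t(3):   {p_t3:.2e}")
print(f"the fat tail makes the extreme move ~{p_t3 / p_normal:.0f}x more likely")
```

Under the normal curve a 4-sigma move is roughly a one-in-30,000 event; under the fat-tailed t-distribution it is hundreds of times more likely – exactly the kind of gap that left pre-crisis risk models underestimating extreme losses.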
“So that’s the reason why they knew they needed a new technology to identify the risk,” said Prof Yu. Yet a year before the crisis hit, he had organised an international workshop and given a presentation* on what ended up being a big part of the solution: Bayesian quantile regression, which Prof Yu and his co-authors had been the first to explore.
This is a way to model relationships between data by focusing on its quantiles: the cut-off points that divide a distribution into portions of equal probability, such as percentiles. Giving an illustrative example, Prof Yu said: “Regulators have become more concerned about protecting financial institutions against catastrophic market risks, under which the value of an investment can fall sharply due to movements in market factors. The difficult task of modelling these rare but extreme events is now carried out using extreme-value statistical methods.
“In finance risk management, the value at risk, or VaR, is a standard measure of the riskiness of financial entities or portfolios of assets. VaR summarises the distribution of possible losses by a quantile, a point with a specified probability of greater losses. For example, if the 95% one-month VaR is $1 million, this means that there is 95% confidence that over the next month the portfolio will not lose more than $1 million.”
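As an illustration of the idea, a VaR figure like the one in Prof Yu’s example can be estimated as an empirical quantile of simulated profit-and-loss outcomes. The portfolio size and return distribution below are entirely invented:

```python
# Estimating a 95% one-month VaR as an empirical quantile of simulated
# monthly profit-and-loss (P&L). All numbers are made up; the normal
# distribution is used here purely for simplicity.
import random
import statistics

random.seed(42)
portfolio_value = 20_000_000  # hypothetical $20m portfolio

# Simulate 10,000 monthly returns and the corresponding P&L
returns = [random.gauss(mu=0.01, sigma=0.04) for _ in range(10_000)]
pnl = [r * portfolio_value for r in returns]

# The 95% VaR is the loss at the 5th percentile of the P&L distribution
var_95 = -statistics.quantiles(pnl, n=100)[4]  # 5th percentile cut point
print(f"95% one-month VaR: ${var_95:,.0f}")
```

With 95% confidence, the simulated portfolio should not lose more than the printed amount over the next month – the quantile itself is the risk measure, which is why quantile regression is such a natural fit for VaR modelling.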
The Bayesian part of Prof Yu’s method means it factors in what we know from previous data, such as the top 1% of stock price movements over recent weeks. We can learn from recent history, statistically speaking.
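As a toy illustration of this Bayesian ingredient – a simple Beta-Binomial model with invented numbers, not Prof Yu’s regression model – a prior belief about how often returns land in the extreme tail can be updated with new observations:

```python
# Updating a belief about the frequency of extreme 'tail days'.
# Recent history suggests tail days occur about 1% of the time;
# encode that as a Beta prior and update it with fresh data.

# Beta(2, 198) prior: mean 1%, reflecting recent history
alpha, beta = 2.0, 198.0

# New data: 3 tail days observed in the last 100 trading days
tail_days, total_days = 3, 100
alpha += tail_days
beta += total_days - tail_days

# Posterior mean blends the prior with the new evidence
posterior_mean = alpha / (alpha + beta)
print(f"posterior probability of a tail day: {posterior_mean:.3f}")
```

The posterior sits between the 1% prior and the 3% observed rate, weighted by how much data each side carries – learning from recent history, statistically speaking.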
“In the tails of distributions, there isn’t much data for analysts to analyse, which makes their decisions even riskier,” Prof Yu added. “That’s why Bayesian inference is important. It’s particularly powerful for making better decisions from smaller quantities of data.” And his method is useful for distributions other than normal ones – differently shaped graphs that also have tails where it becomes difficult to make predictions or draw conclusions.
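A minimal sketch of Bayesian quantile regression itself, assuming the asymmetric Laplace working likelihood used in the literature and a basic random-walk Metropolis sampler – this is an illustration, not the SAS implementation, and the data and tuning values are invented:

```python
# Bayesian quantile regression for one quantile tau: the asymmetric
# Laplace log-likelihood is (up to constants) minus the pinball loss,
# sampled here with random-walk Metropolis under flat priors.
import math
import random

random.seed(0)
tau = 0.95  # target quantile, e.g. a high-loss quantile

# Toy data: y depends linearly on x, with noise
xs = [i / 50 for i in range(100)]
ys = [1 + 2 * x + random.gauss(0, 0.5) for x in xs]

def check_loss(u):
    """Pinball (check) loss: rho_tau(u) = u * (tau - I(u < 0))."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def log_lik(a, b):
    # Asymmetric Laplace log-likelihood (scale fixed at 1):
    # minus the summed pinball loss of the residuals
    return -sum(check_loss(y - (a + b * x)) for x, y in zip(xs, ys))

a, b = 0.0, 0.0
curr = log_lik(a, b)
draws = []
for step in range(10_000):
    a_new, b_new = a + random.gauss(0, 0.1), b + random.gauss(0, 0.1)
    new = log_lik(a_new, b_new)
    if math.log(random.random()) < new - curr:  # Metropolis accept step
        a, b, curr = a_new, b_new, new
    if step >= 5_000:  # discard burn-in draws
        draws.append((a, b))

a_hat = sum(d[0] for d in draws) / len(draws)
b_hat = sum(d[1] for d in draws) / len(draws)
print(f"posterior mean of the {tau:.0%} quantile line: "
      f"intercept {a_hat:.2f}, slope {b_hat:.2f}")
```

The posterior draws give not just a point estimate of the high quantile but a full distribution over it – useful precisely where data is scarce, as in the tails.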
In the audience at Prof Yu’s presentation was a senior research statistician from SAS, the organisation behind the statistical software of the same name, nowadays used by millions of people at more than 80,000 business, government and university sites in 150 countries. Shortly after the financial crisis, the software incorporated a Bayesian quantile regression package based on Prof Yu’s regression models. And in 2014 it was updated to include his more advanced Markov chain Monte Carlo (MCMC) algorithm, a new development in this research area that improved risk-based decision-making for the software’s worldwide users and enabled high-dimensional data analysis.
The People’s Bank of China, the country’s central bank, has certainly benefited from the SAS Bayesian quantile regression package. It has allowed the bank to improve how it determines the relationship between financial risk and return, and, thanks to the model’s accuracy, Prof Yu’s approach to risk assessment now underpins its profit-related corporate decisions. In 2015 the bank used Prof Yu’s algorithm to make corporate decisions on 20,000 business cases, worth £110 million. Better decision-making contributed to an increase in the bank’s profits of around 12% year on year between 2014 and 2018.
But the SAS Bayesian quantile regression package isn’t just for the financial industry. Commenting on the impact of Prof Yu’s research, staff from SAS said that “the Bayesian inference methods have been attracting increasing interest within the pharmaceutical industry at various stages of the research, development, manufacturing, and health economic evaluation of new healthcare interventions.” The benefits include robust and powerful tools for dealing with issues in clinical trials, they added, such as those involving progressive diseases, in which the distribution tails are often a different shape to normal. Other applications include the analysis of climate and environmental extremes, because climate studies that focus only on averages cannot model climate change well enough.
Prof Yu’s models have also been used in engineering through his collaboration with TWI, the research and technology experts in materials joining and engineering processes. He helped them predict where buried pipes, such as water pipes, would rust and fail.
“Pipeline corrosion occurs on both the inside and outside of pipes, reacting to the substances carried by them as well as external conditions. It’s an expensive problem to put right if left untreated,” Prof Yu said. “Once the corrosion area of a buried pipe reaches a threshold, the pipe needs to be dug out and replaced, but it’s difficult to predict if and when the threshold value is reached. So I worked with TWI to treat the threshold value as a special quantile to develop regression methods, making predictions a little easier.”
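The idea can be sketched as follows, with everything – the data, the linear model, the 4 mm threshold and the crude grid search used in place of a proper fitting routine – invented for illustration rather than taken from TWI’s actual models:

```python
# Treating a corrosion threshold as a high conditional quantile:
# fit a quantile regression line for corrosion depth vs pipe age by
# minimising the pinball loss, then solve for the age at which the
# fitted quantile crosses the replacement threshold.
import random

random.seed(1)
tau = 0.9  # plan around the 90th percentile of corrosion depth

# Toy data: corrosion depth (mm) tends to grow with pipe age (years)
ages = [random.uniform(0, 40) for _ in range(200)]
depths = [0.1 * age + random.uniform(0.0, 1.5) for age in ages]

def pinball(a, b):
    """Total pinball loss of the line depth = a + b * age at quantile tau."""
    total = 0.0
    for age, depth in zip(ages, depths):
        u = depth - (a + b * age)
        total += u * (tau - (1.0 if u < 0 else 0.0))
    return total

# Crude quantile regression: grid-search (a, b) to minimise pinball loss
a_hat, b_hat = min(
    ((i / 20, j / 200) for i in range(61) for j in range(41)),
    key=lambda ab: pinball(*ab),
)

# Age at which the fitted 90th-percentile depth hits the threshold
threshold = 4.0  # mm of corrosion at which the pipe must be replaced
age_at_threshold = (threshold - a_hat) / b_hat
print(f"fitted quantile line: depth = {a_hat:.2f} + {b_hat:.3f} * age")
print(f"90% of pipes stay below {threshold} mm until ~{age_at_threshold:.0f} years")
```

Because the fit targets the 90th percentile rather than the average, the predicted replacement age is deliberately cautious: most pipes will still be under the threshold at that point.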
TWI has used Prof Yu’s modelling approaches to solve engineering problems for clients in Japan and the United Arab Emirates – particularly problems that call for data analytics expertise in developing algorithms for use in the clients’ own software. This allows these clients to reduce their risk of catastrophic incidents and manage their infrastructure more cost-effectively.
TWI’s Chief Executive, Prof Aamir Khalid
*Prof Yu presented at and organised the International Centre for Mathematical Sciences workshop on ‘Quantile Regression, LMS and Robust methods in the 21st Century’, held from 19–23 June 2006 in Edinburgh and funded by the UK’s Engineering and Physical Sciences Research Council and the London Mathematical Society.
Meet the Principal Investigator(s) for the project
Professor Keming Yu – Chair in Statistics
REF impact champion for the Mathematical Sciences Unit of Assessment (UOA)
Keming joined Brunel University London in 2005. Before that he held posts at various institutions, including the University of Plymouth, Lancaster University and the Open University. He gained his first degree in Mathematics and his MSc in Statistics from universities in China, and his PhD in Statistics from The Open University, Milton Keynes.
Related Research Group(s)
Mathematical and Statistical Modelling - Established in 2021, the Centre spotlights the transformative role of mathematics and statistics in innovative projects, their impact in real-world applications, and their value to society and the economy.
Partnering with confidence
Organisations interested in our research can partner with us with confidence, backed by an external and independent benchmark: the Knowledge Exchange Framework.
Project last modified 11/05/2022