Artificial Intelligence And The Stability Of Markets – Analysis

Artificial intelligence is increasingly used to tackle all sorts of problems facing people and societies. This column considers the potential benefits and risks of employing AI in financial markets. While it may well revolutionise risk management and financial supervision, it also threatens to destabilise markets and increase systemic risk.

By Jon Danielsson*

Artificial intelligence (AI) is useful for optimally controlling an existing system, one with clearly understood risks. It excels at pattern matching and control mechanisms. Given enough observations and a strong signal, it can identify deep dynamic structures much more robustly than any human can and is far superior in areas that require the statistical evaluation of large quantities of data. It can do so without human intervention.

We can leave an AI engine in day-to-day charge of such a system; it will automatically self-correct, learn from its mistakes, and meet the objectives of its human masters.

This means that risk management and micro-prudential supervision are well suited for AI. The underlying technical issues are clearly defined, as are both the high- and low-level objectives.

However, the very same qualities that make AI so useful for the micro-prudential authorities are also why it could destabilise the financial system and increase systemic risk, as discussed in Danielsson et al. (2017).

Risk management and micro-prudential supervision

In successful large-scale applications, an AI engine exercises control over small parts of an overall problem, where the global solution is simply aggregated sub-solutions. Controlling all of the small parts of a system separately is equivalent to controlling the system in its entirety. Risk management and micro-prudential regulations are examples of such a problem.

The first step in risk management is the modelling of risk, and that is straightforward for AI. This involves the processing of market prices with relatively simple statistical techniques, work that is already well under way. The next step is to combine detailed knowledge of all the positions held by a bank with information on the individuals who decide on those positions, creating a risk management AI engine with knowledge of risk, positions, and human capital.
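To make that first step concrete, here is a minimal sketch, in Python, of the sort of 'relatively simple statistical technique' involved: it estimates a one-day Value-at-Risk from a series of market returns using an exponentially weighted volatility filter. The parameters, the simulated return series, and the normal approximation are illustrative assumptions, not a description of any actual bank's engine.

```python
import numpy as np
from scipy.stats import norm

def ewma_var(returns, lam=0.94, alpha=0.99):
    """One-day parametric Value-at-Risk from an EWMA volatility estimate.
    Illustrative only: a real risk engine would cover many assets and models."""
    variance = np.var(returns[:20])            # seed the variance with the first month
    for r in returns[20:]:
        variance = lam * variance + (1 - lam) * r ** 2
    sigma = np.sqrt(variance)
    return -norm.ppf(1 - alpha) * sigma        # normal-approximation VaR

# Hypothetical daily returns of a single position
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 500)
print(f"99% one-day VaR: {ewma_var(returns):.2%}")
```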

While we still have some way to go toward that end, most of the necessary information is already inside banks’ IT infrastructure and there are no insurmountable technological hurdles along the way.

All that is left is to inform the engine of a bank’s high-level objectives. The machine can then automatically run standard risk management and asset allocation functions, set position limits, recommend who gets fired and who gets bonuses, and advise on which asset classes to invest in.

The same applies to most micro-prudential supervision. Indeed, AI has already spawned a new field called regulation technology, or ‘regtech’.

It is not all that hard to translate the rulebook of a supervisory agency, which is now for the most part in plain English, into a formal computerised logic engine. This allows the authority to validate its rules for consistency and gives banks an application programming interface to validate practices against regulations.

Meanwhile, the supervisory AI and the banks’ risk management AI can automatically query each other to ensure compliance. This also means that all the data generated by banks becomes optimally structured and labelled and automatically processable by the authority for compliance and risk identification.
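A minimal sketch of how a rulebook might be translated into such a logic engine is shown below. The rule identifiers, thresholds, and report fields are invented for illustration and do not correspond to any actual regulation; the point is only that rules expressed as machine-checkable predicates can be validated for consistency and queried through an application programming interface.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    rule_id: str
    description: str
    check: Callable[[Dict], bool]     # takes a bank's reported data, returns pass/fail

# A toy 'rulebook' translated into machine-checkable logic (invented thresholds)
RULEBOOK = [
    Rule("LEV-01", "Leverage must not exceed 25x equity",
         lambda bank: bank["assets"] / bank["equity"] <= 25),
    Rule("LIQ-01", "Liquid assets must cover projected 30-day outflows",
         lambda bank: bank["liquid_assets"] >= bank["outflows_30d"]),
]

def validate(bank_report: Dict) -> List[str]:
    """The API a bank's risk management engine could query: returns violated rules."""
    return [r.rule_id for r in RULEBOOK if not r.check(bank_report)]

report = {"assets": 2_600, "equity": 100, "liquid_assets": 80, "outflows_30d": 70}
print(validate(report))               # -> ['LEV-01']
```

In this framing, the supervisory engine and a bank's risk management engine 'query each other' simply by exchanging such reports and validation results.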

There is still some way to go before the supervisory/risk management AI becomes a practical reality, but what is outlined above is eminently conceivable given the trajectory of technological advancement. The main hindrance is likely to be legal, political, and social rather than technological.

Risk management and micro-prudential supervision are the ideal use cases for AI – they enforce compliance with clearly defined rules and involve processes that generate vast amounts of structured data. They feature closely monitored human behaviour, precise high-level objectives, and directly observed outcomes.

Financial stability is different. There the focus is on systemic risk (Danielsson and Zigrand 2015), and unlike in risk management and micro-prudential supervision, it is necessary to consider the risk of the financial system as a whole. This is much harder because the financial system is for all practical purposes infinitely complex, and any entity – human or AI – can only hope to capture a small part of that complexity.

The widespread use of AI in risk management and financial supervision may increase systemic risk. There are four reasons for this.

1. Looking for risk in all the wrong places

Risk management and regulatory AI can focus on the wrong risk – the risk that can be measured rather than the risk that matters.

The economist Frank Knight (1921) established the distinction between risk and uncertainty. Risk is measurable and quantifiable, and results in statistical distributions that we can then use to exercise control. Uncertainty is none of these things: we know it is relevant, but we cannot quantify it, so it is harder to make decisions.

AI cannot cope well with uncertainty because it is not possible to train an AI engine on data it has never seen. The machine is very good at processing information about things it has observed. It can handle counterfactuals when these arise in systems with clearly stated rules, as with Google's AlphaGo Zero (Silver et al. 2017), but it cannot reason about the future when that involves outcomes it has not seen.

The focus of risk management and supervision is mostly risk, not uncertainty. An example is the stock market, where we are well placed to manage the risk: if the market falls by $200 billion today, the impact will be minimal because it is a known risk.

Uncertainty captures the danger we don’t know is out there until it is too late. Potential, unrealised losses of less than $200 billion on subprime mortgages in 2008 brought the financial system to its knees. If there are no observations on the consequences of subprime mortgages put into CDOs with liquidity guarantees, there is nothing to train on. The resulting uncertainty will be ignored by AI.

While human risk managers and supervisors can also miss uncertainty, they are less likely to. They can evaluate current and historical knowledge with experience and theoretical frameworks, something AI can’t do.

2. Optimisation against the system

A large number of well-resourced economic agents have strong incentives to take very large risks that have the potential to deliver them large profits at the expense of significant danger to their financial institutions and the system at large. That is exactly the type of activity that risk management and supervision aim to contain.

These agents are optimising against the system, aiming to undermine control mechanisms in order to profit, identifying areas where the controllers are not sufficiently vigilant.

These hostile agents have an inherent advantage over those tasked with keeping them in check: each only has to solve a small local problem, so their computational burden is much lower than that of the authority. Many agents may be doing this simultaneously, and only a few, perhaps just one, need to succeed for a crisis to ensue. Meanwhile, in an AI arms race, the authorities would probably lose out to private sector computing power.

While this problem has always been inherent in risk management and supervision, it is likely to become worse the more AI takes over core functions. If we believe the AI is doing its job, yet cannot verify how it reasons (which is impossible with AI) and can only monitor its outputs, we have to trust it. If it then appears to manage without big losses, it will earn that trust.

If we do not understand how an AI supervisory/risk management engine reasons, we had better make sure to specify its objective function correctly and exhaustively.

Paradoxically, the more we trust AI to do its job properly, the easier it can be to manipulate and optimise against the system. A hostile agent can learn how the AI engine operates, take risk where it is not looking, and game the algorithms, undermining the machine by behaving in a way that avoids triggering its alarms or, even worse, nudges it to look away.

3. Endogenous complexity

Even then, the AI engine working at the behest of the macroprudential authority might have a fighting chance if the structure of the financial system remained constant, so that the problem were simply one of sufficient computational resources.

But it isn't. The financial system constantly changes its dynamic structure simply because of the interaction of the agents that make up the system, many of whom are optimising against the system and deliberately creating hidden complexities. This is the root of what we call endogenous risk (Danielsson et al. 2009).

The complexity of the financial system is endogenous, and that is why AI, even conceptually, can’t efficiently replace the macro-prudential authority in the way it can supersede the micro-prudential authority.

4. Artificial intelligence is procyclical

Systemic risk is increased by homogeneity. The more similar our perceptions and objectives are, the more systemic risk we create. Diverse views and objectives dampen the impact of shocks and act as a countercyclical, stabilising, systemic-risk-minimising force.

Financial regulations and standard risk management practices inevitably push towards homogeneity, and AI even more so. It favours best practices and standardised best-of-breed models that closely resemble each other, all of which, no matter how well intentioned and otherwise positive, also increase pro-cyclicality and hence systemic risk.
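A toy simulation, with entirely made-up numbers, illustrates the mechanism: if every institution runs the same model with the same loss limit, a single shock pushes them all past their limit at once and their joint selling amplifies the fall, whereas diverse limits spread the response out and the cascade is smaller.

```python
import numpy as np

def forced_sales(limits, shock, impact=0.0005, max_rounds=20):
    """Agents whose loss exceeds their limit sell; their selling deepens the loss."""
    price_drop, sold = shock, np.zeros(len(limits), dtype=bool)
    for _ in range(max_rounds):
        breach = (price_drop > limits) & ~sold
        if not breach.any():
            break
        sold |= breach
        price_drop += impact * breach.sum()    # price impact of the new sellers
    return price_drop

rng = np.random.default_rng(1)
n = 100
homogeneous = np.full(n, 0.05)                 # every agent uses the same 5% loss limit
heterogeneous = rng.uniform(0.02, 0.15, n)     # diverse limits across agents

shock = 0.06                                   # initial 6% market fall
print(f"final drop, identical models: {forced_sales(homogeneous, shock):.1%}")
print(f"final drop, diverse models:   {forced_sales(heterogeneous, shock):.1%}")
```

Under these arbitrary assumptions, the identical-model economy ends with a markedly larger fall, because all one hundred agents are forced to sell in the same round.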

Conclusion

Artificial intelligence is useful in preventing historical failures from repeating and will increasingly take over financial supervision and risk management functions. We get more coherent rules and automatic compliance, all with much lower costs than current arrangements. The main obstacle is political and social, not technological.

From the point of view of financial stability, the opposite conclusion holds.

We may miss out on the most dangerous type of risk-taking. Even worse, AI can make it easier to game the system. There may be no solutions to this, whatever the future trajectory of technology: the computational burden facing an AI engine will always be much greater than that facing those who seek to undermine it, not least because of endogenous complexity.

Meanwhile, the very formality and efficiency of the risk management/supervisory machine also increases homogeneity in belief and response, further amplifying pro-cyclicality and systemic risk.

The end result of the use of AI for managing financial risk and supervision is likely to be lower volatility but fatter tails; that is, lower day-to-day risk but more systemic risk.

About the author:
* Jon Danielsson, Director of the ESRC-funded Systemic Risk Centre, London School of Economics

References:
Danielsson, J and J-P Zigrand (2015), “A proposed research and policy agenda for systemic risk”, VoxEU.org, 7 August.

Danielsson, J, H S Shin and J-P Zigrand (2009), “Modelling financial turmoil through endogenous risk”, VoxEU.org, 11 March.

Danielsson, J, R Macrae and A Uthemann (2017), "Artificial intelligence, financial risk management and systemic risk", LSE Systemic Risk Centre Special Paper 13.

Knight, F H (1921), Risk, Uncertainty and Profit, Houghton Mifflin.

Silver, D et al. (2017), "Mastering the game of Go without human knowledge", Nature 550: 354-359.

VoxEU.org

VoxEU.org was a policy portal set up by the Centre for Economic Policy Research (www.CEPR.org) in conjunction with a consortium of national sites. Vox aims to promote research-based policy analysis and commentary by leading scholars. New content can be found at CEPR.org
