AI And Regtech – OpEd


AI is a technology with potential for enormous societal and economic impact, bringing new opportunities and benefits. Recent advances in computing power, data storage, big data, and the digital economy are facilitating rapid AI deployment in a wide range of sectors, including finance. In a recently published paper, “Powering the Digital Economy: Opportunities and Risks of Artificial Intelligence in Finance,” we discuss the new opportunities and benefits AI brings to the financial sector, the new concerns it creates, and the actions needed to mitigate potential risks.

Competitive pressures are fueling wide adoption of AI in the financial sector and rapidly changing its landscape. AI is facilitating gains in efficiency and cost savings, reshaping client interfaces, enhancing forecasting accuracy, and improving risk management and compliance. AI systems also offer the potential to strengthen prudential oversight, and to equip central banks with new tools to pursue their monetary and macroprudential mandates.

The COVID-19 pandemic has further increased the appetite for AI adoption in the financial sector. Banks have relied increasingly on AI systems to handle the high volume of loan applications during the pandemic, to improve their underwriting and fraud detection, and to comply with mandated COVID-19 relief requirements.

It is conceivable that, not far in the future, closed-loop AI-driven financial ecosystems could emerge, in which the extension of credit and other financial services, risk management and compliance through regtech, oversight through suptech, and remedial actions would all be handled by AI applications communicating directly with each other.

These advances, however, are creating new concerns arising from risks inherent in the technology and its application in the financial sector. Concerns include embedded bias in AI algorithms, the opaqueness of their outcomes, and their robustness, particularly with respect to cyber threats and privacy. Furthermore, the technology is creating new sources and transmission channels of systemic risk, including greater homogeneity in risk assessments and rising interconnectedness that could quickly amplify shocks.

However, in step with the Bali Fintech Agenda’s call on national authorities to embrace the fintech revolution, regulators should broadly welcome the advancements of AI in finance and undertake the preparations needed to capture its potential benefits and mitigate its risks. Cooperation and knowledge-sharing at the national, regional, and international levels are becoming increasingly important. This would allow the coordination of legal and regulatory actions to support the safe deployment of AI systems in the financial sector.

With this in mind, we see significant potential for AI to improve the soundness and integrity of the financial sector. Advances in AI over the past few years are reshaping risk and compliance management by leveraging broad sets of data, often in real time, and automating compliance decisions. This has improved compliance quality and reduced costs.

Increased adoption of AI in regtech has significantly expanded its use cases, cutting across banking, securities, insurance, and other financial services, and covering a wide variety of activities. These include identity verification, AML/CFT compliance, fraud detection, risk management, stress testing, micro- and macro-prudential reporting, and compliance with COVID-19 relief requirements.

In particular, AI-powered technologies, including the analysis of unstructured data and consumer behavior, are used extensively in fraud detection and AML/CFT compliance. In the latter, AI is being deployed effectively to reduce false positives, which constitute the vast majority of AML/CFT alerts, allowing financial institutions to devote more resources to cases that are genuinely likely to be suspicious.
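
To make the false-positive triage idea concrete, here is a minimal sketch, not any institution’s actual system, of a supervised model that scores rule-generated alerts so investigators can work the highest-risk cases first. The features, synthetic data, and thresholds are illustrative assumptions only.

```python
# Minimal sketch of AML alert triage: score rule-generated alerts so that
# investigators can prioritize the small fraction likely to be true hits.
# Features, data, and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Synthetic alert features: transaction amount, count of transfers in 24h,
# share of counterparties in high-risk jurisdictions, account age in days.
X = np.column_stack([
    rng.lognormal(8, 1, n),        # amount
    rng.poisson(3, n),             # transfer count
    rng.beta(1, 9, n),             # high-risk counterparty share
    rng.integers(1, 3650, n),      # account age
])
# Synthetic labels: only ~5% of alerts are truly suspicious,
# loosely tied to the features above.
risk = 0.02 * np.log(X[:, 0]) + 0.1 * X[:, 1] + 2.0 * X[:, 2] - 0.0002 * X[:, 3]
y = (risk + rng.normal(0, 0.5, n)) > np.quantile(risk, 0.95)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Rank held-out alerts by model score; investigate the top decile first.
scores = model.predict_proba(X_te)[:, 1]
top_decile = scores >= np.quantile(scores, 0.9)
print(f"True-positive rate in top decile: {y_te[top_decile].mean():.2%}")
print(f"Base rate across all alerts:      {y_te.mean():.2%}")
```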

Many global banks are using AI-enabled data analytics to improve the analysis of complex balance sheets and to refine the models they use to meet stress-testing regulatory requirements.

Emerging and promising regtech applications include mapping and updating regulatory obligations, which reduces costs and improves regulatory compliance, and conduct-risk management, which monitors sales calls by financial institutions’ employees to ensure that the features and risks of offered financial products are accurately disclosed, as regulations require.
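
As a simplified illustration of the conduct-risk use case, the sketch below checks call transcripts for required disclosure language. Real systems pair speech-to-text with much richer NLP; the disclosure phrases and transcript here are hypothetical.

```python
# Simplified sketch of conduct-risk monitoring: check a sales-call
# transcript for required product disclosures. Real regtech systems pair
# speech-to-text with far richer NLP; these phrases are hypothetical.
REQUIRED_DISCLOSURES = {
    "capital at risk": ["capital is at risk", "you may lose money"],
    "fees disclosed": ["management fee", "annual fee"],
    "past performance caveat": ["past performance is not a guarantee",
                                "past performance does not guarantee"],
}

def flag_missing_disclosures(transcript: str) -> list[str]:
    """Return the disclosure items not found in the (lowercased) transcript."""
    text = transcript.lower()
    return [item for item, phrases in REQUIRED_DISCLOSURES.items()
            if not any(p in text for p in phrases)]

call = ("This fund charges a 1% management fee. "
        "Please note that your capital is at risk.")
print(flag_missing_disclosures(call))  # -> ['past performance caveat']
```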

Regulators have generally been supportive of the adoption of regtech by regulated financial entities. Various regulators, such as Hong Kong SAR’s, have developed strategies to promote regtech adoption that include boosting awareness, promoting innovation, and enhancing regulatory engagement within the regtech ecosystem. Even where there is no explicit strategy, many authorities, as in Malaysia, have supported regtech adoption.

Many supervisory authorities are actively exploring the use of AI in their risk-based supervision process (suptech). AI in microprudential supervision could improve the assessment of risks, including credit and liquidity risks, as well as of governance and risk culture in financial institutions, allowing supervisors to focus on risk analysis and forward-looking assessments.

AI systems are also applied for market surveillance purposes to detect collusive behavior and price manipulation in securities markets, potential misconduct that can be especially hard to detect using traditional methods.
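
One way such surveillance can work is unsupervised anomaly detection over per-account trading features. The sketch below, with synthetic data and illustrative features, uses scikit-learn’s IsolationForest to flag outlier accounts for human review; real surveillance systems work on much richer order-book data.

```python
# Illustrative sketch of AI market surveillance: flag accounts whose
# trading pattern is anomalous relative to peers, as a screen for possible
# manipulation. Features and data are synthetic assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
n = 1000
# Per-account features: order-to-trade ratio, cancellation rate,
# share of trades near the close, single-counterparty concentration.
normal = np.column_stack([
    rng.gamma(2, 2, n),         # order-to-trade ratio
    rng.beta(2, 8, n),          # cancellation rate
    rng.beta(2, 10, n),         # end-of-day trade share
    rng.beta(1, 20, n),         # single-counterparty concentration
])
# A handful of synthetic "layering-like" accounts: many cancellations,
# heavy end-of-day activity, concentrated counterparties.
suspicious = np.column_stack([
    rng.gamma(8, 3, 10), rng.beta(8, 2, 10),
    rng.beta(8, 2, 10), rng.beta(10, 2, 10),
])
X = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.02, random_state=1).fit(X)
flags = detector.predict(X)  # -1 = anomalous, 1 = normal
print(f"Accounts flagged for review: {(flags == -1).sum()} of {len(X)}")
```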

Examples of AI in suptech include:

a. The ECB is using AI-enabled machine reading of the “fit and proper” questionnaire to flag problems, as well as to search for information in supervisory review decisions, facilitating the identification of emerging trends and clusters of risks. Similarly, the Bank of Thailand is using AI to analyze financial institutions’ board meeting minutes, which supervisors use to assess the boards’ regulatory compliance.

b. De Nederlandsche Bank uses AI-driven data analytics to detect networks of related entities and assess financial institutions’ exposure to networks of suspicious transactions. Banca d’Italia is exploring the use of AI in loan default forecasting, and the Monetary Authority of Singapore is working on a project that uses AI algorithms for credit risk assessments by supervisors (a rough sketch of a default-forecasting model follows this list).
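
As a rough illustration of the default-forecasting work mentioned in (b), the following sketch fits a simple probability-of-default model on synthetic borrower data. The features, coefficients, and data are assumptions for demonstration only, not any authority’s actual model.

```python
# Minimal sketch of loan-default forecasting of the kind supervisors could
# use to cross-check banks' credit-risk estimates. Features, coefficients,
# and data are synthetic illustrations only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 10_000
# Synthetic borrower features: debt-to-income, loan-to-value,
# loan age in months, payment delinquencies in the past year.
X = np.column_stack([
    rng.uniform(0.05, 0.6, n),   # debt-to-income
    rng.uniform(0.3, 1.1, n),    # loan-to-value
    rng.integers(1, 120, n),     # loan age (months)
    rng.poisson(0.3, n),         # recent delinquencies
])
# Synthetic default indicator driven by an assumed logistic relationship.
logit = -5.0 + 4.0 * X[:, 0] + 2.0 * X[:, 1] + 1.2 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pd_hat = model.predict_proba(X_te)[:, 1]  # predicted probability of default
print(f"Out-of-sample AUC: {roc_auc_score(y_te, pd_hat):.3f}")
```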

The use of AI in suptech is relatively new and comes with challenges and risks that need to be carefully considered.

a. The effectiveness of AI-driven supervision will depend on data standardization, quality, and completeness, which could be challenging for authorities and regulated institutions alike, particularly when leveraging non-traditional sources of data.

b. Resource and skills gaps could pose a challenge to the effective and safe adoption of AI by supervisory authorities. The personnel pool may need to be expanded to include AI specialists and data scientists.

c. Finally, deploying AI systems presents risks to supervisors similar to those faced by the supervised financial institutions, including privacy, cyber risk, outcome explainability, and embedded bias.

Notwithstanding the advances in regtech and suptech and the benefits they bring to the financial sector’s soundness and integrity, a cloud of issues still hangs over AI systems. In the sensitive and highly regulated financial sector, industry and regulators will need to address several challenges to ensure the robustness of AI algorithms, which also affect regtech and suptech applications. Maintaining public trust in an AI-driven financial system is foundational to financial stability, and that trust rests on confidence in the robustness of AI algorithms.

In particular, AI adoption brings new and unique cyber risks and privacy issues.

a. In addition to traditional cyber threats from human or software failures, AI systems are vulnerable to novel threats that manipulate data at some stage of the AI lifecycle, allowing attackers to evade detection, prompt the AI to make wrong decisions, or illicitly extract information. [1]

b. Privacy concerns regarding big data are well known and predate the emergence of AI into the mainstream. AI, however, raises new privacy concerns about leakage of data from the training dataset. For example, AI may unmask anonymized data, “remember” information about individuals in the training set after the data is used, or “leak” sensitive data directly or by inference through its outputs.
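
One widely studied mitigation for such leakage is differential privacy, which bounds how much any single record can influence a published output. The toy sketch below applies the Laplace mechanism to an aggregate query over synthetic data; the epsilon value and clipping bounds are illustrative assumptions.

```python
# Toy sketch of one common mitigation for data leakage: differential
# privacy via the Laplace mechanism, applied here to an aggregate query.
# Epsilon, clipping bounds, and the data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
incomes = rng.lognormal(10.5, 0.5, 1000)  # synthetic sensitive records

def dp_mean(values: np.ndarray, lower: float, upper: float,
            epsilon: float) -> float:
    """Differentially private mean: clip records to [lower, upper],
    then add Laplace noise scaled to the query's sensitivity."""
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)  # max effect of one record
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return clipped.mean() + noise

print(f"True mean:       {incomes.mean():,.0f}")
print(f"DP mean (eps=1): {dp_mean(incomes, 0, 200_000, epsilon=1.0):,.0f}")
```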

There are also concerns with respect to the robustness of AI performance. These relate to minimizing AI algorithms’ false signals during periods of structural shift, and to putting in place appropriate governance over the AI development process to strengthen prudential oversight and avoid unintended consequences.

Addressing these challenges will require broad regulatory and collaborative efforts that go well beyond the deployment of regtech and suptech applications. An adequate policy response requires developing clear minimum standards and guidelines for the sector, coupled with a stronger focus on securing the necessary technical skills. Given the inherent interconnectivity of issues related to the deployment of AI systems in the financial sector, collaboration among financial institutions, central banks, financial supervisors, and other stakeholders is important to counter potential risks.

Let me conclude by noting that the evolving nature of AI technology and its applications in finance means that neither users, nor technology providers and developers, nor regulators understand, at present, the full extent of the technology’s strengths and weaknesses. Hence there may be many unexpected pitfalls yet to materialize, and countries will need to strengthen their monitoring and prudential oversight.


[1] Data manipulations include (a) data poisoning attacks, which influence the AI algorithm during the training stage and cause it to incorrectly “learn” to classify or recognize information, or which create trojan models that hide malicious actions waiting for special inputs to activate them; and (b) input attacks, which introduce perturbations to data inputs and mislead AI systems during operation.
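
To make the input attack in (b) concrete, the toy sketch below applies a perturbation in the spirit of the fast gradient sign method to a simple linear classifier. The model and data are purely illustrative and far smaller than any production system.

```python
# Toy sketch of an "input attack" as in (b): a small perturbation in the
# spirit of the fast gradient sign method (FGSM) flips a linear
# classifier's decision. Model and data are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Two overlapping synthetic classes in 20 dimensions.
X = np.vstack([rng.normal(-0.5, 1, (500, 20)), rng.normal(0.5, 1, (500, 20))])
y = np.array([0] * 500 + [1] * 500)
model = LogisticRegression().fit(X, y)

# Pick a class-0 point the model classifies correctly.
idx = np.where(model.predict(X[:500]) == 0)[0][0]
x = X[idx]
w = model.coef_[0]                    # for a linear model, the loss
                                      # gradient w.r.t. the input is ~ w
d = model.decision_function([x])[0]   # negative => predicted class 0
eps = 1.1 * abs(d) / np.abs(w).sum()  # just enough to cross the boundary
x_adv = x + eps * np.sign(w)          # FGSM-style step

print("per-feature perturbation:", round(eps, 3))
print("original prediction: ", model.predict([x])[0])
print("perturbed prediction:", model.predict([x_adv])[0])
```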
