Regulators must clarify regulatory expectation for AI: AIPPF


The AI Public-Private Forum (AIPPF), created by the Bank of England and the Financial Conduct Authority, has published its final report on the use of artificial intelligence within financial services. The report highlights the need for tighter governance and calls for clarity from global regulators.

Artificial intelligence is slowly but surely becoming a foundational element of financial services. From mortgage applications to regulatory technology, AI is changing the way that financial services operate. In many cases, this is for the better, but of course with new innovation come new challenges and risks. This is a point that the AI Public-Private Forum is quick to highlight in its report, noting that “innovation can change the trade-offs between risk and return. The speed, scale and complexity of AI systems can amplify existing risks and pose new challenges.” That is not to say, however, that AI is a bad thing – far from it – as the report makes clear.

The final report represents more than a year’s worth of work from the AIPPF. It is the result of meetings, workshops and discussions between forum members, who looked to deepen their collective understanding of the technology that underpins AI and “explore how [it] can support the safe adoption of AI within financial services”.

The 47-page report covers the full gamut of artificial intelligence in finance – from data and model risk to governance. One of the key questions it poses is how existing regulation and legislation may be applied to AI, and whether AI can be “managed through extensions of the existing regulatory framework, or whether a new approach is needed”.

The report highlights that, while it chiefly focuses on the technical challenges of AI, all of these fall under the umbrella of “domestic and international regulatory and legislative frameworks”. It goes on to conclude that:

“Clarity of regulatory expectations on the adoption and use of AI is a key component of fostering innovation. Regulators should provide greater clarity on existing regulation and policy. Such clarification and any new guidance should not be overly prescriptive and should provide illustrative case studies. Alongside that, regulators should identify the most important and/or high-risk AI use-cases in financial services with the aim of developing mitigation strategies and/or policy initiatives”.

Moreover, it says that “regulators can support innovation and adoption of AI” and should start by “providing clarity on how existing regulations and policies apply to AI”.

Looking ahead, the AIPPF aims to publish a Discussion Paper on AI later in the year, which will build on its existing work and extend engagement across a range of stakeholders. The report also highlights the emerging need for an industry body for practitioners, which would aim to build trust in the use of AI, though it does not set out a time frame for when such a body could be established.

CUBE comment

The AIPPF report highlights that the safe adoption of AI has only just begun and calls for regulators to provide clarity. We have long heard rallying cries from compliance and surveillance teams for regulatory clarity about what regulators expect from AI. Hopefully this report, endorsed by both the FCA and the Bank of England, will mark the start of greater regulatory clarity for the future.

Artificial intelligence within financial services is a double-edged sword – or a Swiss Army knife. It can be used across the remit of operations to streamline systems, reduce manual processes, increase accuracy…the list goes on. It can process and understand customer data to provide better-suited products, it can predict changes and patterns, and it can be used in regulatory compliance to overhaul change management.

This report, in the main, looks to establish the challenges of adopting AI: how to ensure accountability for its use, how to ensure it is ethical, and how to ensure it is not built on bad data. What it does not do is look at the implementation and benefits of using AI within compliance – its focus is more front of house.

What both of these things have in common – and a pivotal thread across AI more generally – is trust. This is especially pertinent within financial services for myriad reasons. In particular, financial services deals with money and data – two things that are hugely valuable and must be protected. This is the first barrier. The second is more philosophical and pertains to a general fear of the unknown within financial services. ‘We’ve done it this way since inception,’ some might say, or ‘why fix what isn’t broken?’

While understandable, both of these standpoints deny an inescapable truth: manual processes often expose firms to more risks, more errors and more regulatory scrutiny (and potentially more regulatory penalties) than those that use AI. On top of this, firms that rely on manual processes spend more money and expend more resources to reach the same – and sometimes less accurate – results.

Regulatory clarity and a greater understanding of regulatory expectations will lay the groundwork for AI. Only then will firms know what they should – and should not – be doing. With regulatory clarity comes regulatory endorsement, and with regulatory endorsement comes trust. AI is advancing fast, but adoption within financial services remains comparatively slow.


Find out how CUBE uses AI to transform regulatory change management for financial services.
