
Machine-learning applications are becoming ‘more advanced and increasingly embedded’ in the day-to-day operations of financial services companies in the UK, according to a survey by regulators.
The ‘Machine Learning in UK Financial Services’ report, based on a survey that elicited responses from 71 of 168 firms contacted, is published by the Bank of England (BoE) and Financial Conduct Authority (FCA) three years after they undertook their first joint survey on the same topic.
Machine learning (ML) is defined in the report as a methodology whereby computer programmes build a model to fit a set of data that can be used to make predictions, recommendations or decisions without being explicitly programmed to do so, instead learning from sample data or experience.
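The definition above can be illustrated with a minimal sketch: a program that is never told the rule explicitly, but instead estimates it from sample data and then makes predictions. The data and the least-squares line fit below are made-up illustrations, not anything taken from the report.

```python
def fit_line(xs, ys):
    """Learn a slope and intercept from sample data via least squares.

    The 'rule' (the line) is not hard-coded; it is estimated
    from the examples the program is shown.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy sample data (hypothetical): an input feature and an observed outcome.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

slope, intercept = fit_line(xs, ys)

def predict(x):
    """Make a prediction for an unseen input using the learned model."""
    return slope * x + intercept
```

Real ML applications in financial services are vastly more complex, but the pattern is the same: fit a model to data, then use it to predict, recommend or decide.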
The number of financial services companies in the UK that use ML continues to increase, the 41-page report notes, stating that 72 per cent of firms that responded to the survey reported using or developing ML applications. These applications are also becoming increasingly widespread across more business areas.
‘This trend looks set to continue and firms expect the overall median number of ML applications to increase by 3.5 times over the next three years,’ the report states. The largest expected increase in absolute terms is in the insurance sector, followed by banking.
Clever computers on the rise
Seventy-nine per cent of ML applications are in the latter stages of development – explained in the report as meaning ‘either deployed across a considerable share of business areas and/or critical to some business areas’ – according to the findings. In contrast, in 2019, 44 per cent of applications were still in the pre-deployment phase (proofs of concept) and only 32 per cent were deployed across a considerable share or all of a business area.
Financial services firms are thinking about ML strategically, with 79 per cent of respondents that use ML having a strategy for the ‘development, deployment, monitoring and use of the technology’.
The most commonly identified benefits of investing in ML are enhanced data and analytics capabilities, increased operational efficiency, and improved detection of fraud and money laundering. Respondents do not see ML, as currently used, as high risk. The top risks identified for consumers relate to data bias and representativeness, while the top risks for companies are considered to be the ‘lack of explainability and interpretability’ of ML applications.
The greatest constraint to ML adoption and deployment, meanwhile, is legacy (old) IT systems. The difficulty integrating ML into business processes is the next highest-ranked constraint.
The survey’s results are published as jurisdictions worldwide, including the UK, work out how to regulate AI. Almost half of the firms that responded said there are regulations (for which the PRA and/or FCA are the competent authorities) that constrain ML deployment, with 11 per cent of those saying that these are a ‘large’ constraint. Of those who thought that regulation was a constraint, more than half attributed this to a lack of clarity in existing regulation.
Discussion paper published
The BoE and FCA’s 2019 joint survey on ML’s use in the UK financial services sector produced some similar findings and sentiments.
One finding of the earlier research was the need for better dialogue between the public and private sectors to ensure safe and responsible ML adoption. The two authorities established the Artificial Intelligence Public-Private Forum (AIPPF) in October 2020, which explored barriers to adoption and challenges related to the use of artificial intelligence (AI)/ML, as well as ways to address such barriers and mitigate risks.
The AIPPF’s final report was published earlier this year. This 47-page publication contained a 21-point summary structured around three areas – data, model risk and governance – as well as setting out next steps. ‘The discussion on the safe adoption of AI has only just begun,’ the report noted, stating that regulators ‘could start’ to support innovation and AI adoption by providing clarity on how existing regulation and policies apply to AI. ‘Regulatory alignment will catalyse progress,’ the report said.
The BoE and FCA last week (11 October) published a 47-page ‘Artificial Intelligence and Machine Learning’ discussion paper in response to the AIPPF’s final report.
*** The European Securities and Markets Authority (ESMA), the European Union (EU)’s financial markets regulator and supervisor, has published its 2023-2028 strategy. The 31-page document includes a commitment to ‘further strengthen [ESMA’s] role as data and information hub in the EU and contribute to extending the effective use of data in financial market supervision.’
FURTHER READING
‘AI-powered government finances: making the most of data and machines’ – write-up of our international webinar on 4 October 2022 titled ‘How can AI help public authorities save money and deliver better outcomes?’
‘UK publishes plans to regulate artificial intelligence’ – our news story (21 July 2022) on a Department for Digital, Culture, Media & Sport (DCMS) paper titled ‘Establishing a pro-innovation approach to regulating AI – an overview of the UK’s emerging approach’
‘Regulatory alignment will catalyse progress’: UK financial authorities on AI’ – our news story (28 February 2022) on the final report from the AI Public-Private Forum
‘UK government presents artificial intelligence strategy’ – our news story (24 September 2021) on the UK’s first national AI strategy
‘UK regulators: machine learning deployments set to double in financial services’ – our news article (24 October 2019) on the first BoE/FCA ‘Machine learning in UK financial services’ joint report