Research by the UK’s Bank of England (BoE) and Financial Conduct Authority (FCA) has found that the country’s financial services businesses are fast deploying machine learning (ML) technology to tackle money laundering and fraud.
The survey found that ML – defined as “the development of models for prediction and pattern recognition, with limited human intervention” – is increasingly being deployed, with use expected to more than double in the next three years. As well as addressing crime, businesses are developing ML tech for customer-facing applications such as customer services and marketing.
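To illustrate what the survey means by "models for prediction and pattern recognition", a minimal, purely hypothetical sketch of such a model is shown below; it is a toy fraud classifier built with scikit-learn, and the features, data and library choice are assumptions made for illustration, not details taken from the report.

```python
# Illustrative only: a toy 'prediction and pattern recognition' model of the kind
# the survey describes. Features and data are invented for this sketch.
from sklearn.linear_model import LogisticRegression

# Each row: [transaction amount (GBP), foreign transaction (0/1), hour of day]
X_train = [
    [25.0, 0, 14],
    [40.0, 0, 10],
    [900.0, 1, 3],
    [1200.0, 1, 2],
]
y_train = [0, 0, 1, 1]  # 0 = legitimate, 1 = fraudulent

# Fit a simple classifier that learns the pattern separating the two classes.
model = LogisticRegression().fit(X_train, y_train)

# Predict the fraud probability of a new, unseen transaction.
print(model.predict_proba([[1000.0, 1, 4]])[0][1])
```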
The central bank and regulator combined forces to run the survey, having pinpointed ML as a ‘principal driver’ of how innovative technology is transforming global finance. The survey was sent to organisations such as e-money institutions, banks, financial market infrastructure firms and investment managers.
The 36-page report is published as governments worldwide explore the potential of ML, and of the broader related discipline of artificial intelligence (AI), in delivering government services, and as concern grows about the technologies' impact on public services and the wider economy.
Risks and rewards
Fast-developing technology can help governments to, for example, personalise services around individuals’ needs, process transactions more quickly and accurately, and develop better predictive models and simulations. But both ML and AI raise profound questions around the use of data and the automation of processes, systems and decision-making.
In both the private and public sectors, ML and AI technologies can, for example, reduce transparency in decision-making, or result in systems that 'learn' from skewed datasets and discriminate against particular groups of people.
The BoE and FCA say they hope the survey will help “to identify where regulation may help support the safe, beneficial, ethical and resilient development and deployment of ML both domestically and internationally”. They also plan to establish a public-private group to explore issues raised.
In the UK, Ollie Buckley, executive director of the country's new Centre for Data Ethics and Innovation (CDEI), warned earlier this year that a loss of public trust could undermine work to develop and implement AI systems. Just last month the CDEI published its own report series on what it describes as "issues of public concern" in AI ethics, covering topics such as "smart speakers and voice assistants", and AI and personal insurance.
Also during 2019, organisations including the OECD, the European Commission, New Zealand's Law Foundation and the UK's National Audit Office have all published reports on the subject, while investigations and consultations have been launched by the UK's Committee on Standards in Public Life and the Australian government. The Canadian and French governments, meanwhile, have teamed up to launch an International Panel on Artificial Intelligence.
AI in action
The BoE and FCA received 106 responses to their survey to produce the ‘Machine learning in UK financial services’ report, with two-thirds of respondents reporting that they already use ML in some form. The median firm uses live ML applications in two business areas, and this is expected to more than double within three years.
In many cases, ML has moved beyond the development phase and is entering more advanced stages of deployment. Beyond anti-money laundering (AML) and fraud detection, customer services and marketing, firms use ML in areas such as credit risk management, trade pricing and execution, and general insurance pricing and underwriting.
Regulation is not seen as a barrier, but some firms said that additional guidance on how to interpret current regulation could serve as an enabler for ML deployment. The biggest reported constraints are internal to firms, such as legacy IT systems and data limitations.
Firms use a variety of safeguards to manage the risks associated with ML. The most common safeguards are alert systems and so-called 'human-in-the-loop' mechanisms, which can be useful for raising an alert when a model does not work as intended.
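As a purely illustrative sketch of how such safeguards might sit around a scoring model, the following shows one way a 'human-in-the-loop' escalation and a simple drift alert could be wired up; the thresholds, names and logic are assumptions made for illustration and are not drawn from the report.

```python
# Minimal sketch of a 'human-in-the-loop' safeguard and a basic alert around a
# fraud-scoring model. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Transaction:
    transaction_id: str
    amount: float
    fraud_score: float  # output of an upstream ML model, in [0, 1]

AUTO_BLOCK_THRESHOLD = 0.95    # scores above this are blocked automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # scores in the grey zone go to an analyst

def route_transaction(tx: Transaction) -> str:
    """Decide whether a scored transaction is approved, escalated, or blocked."""
    if tx.fraud_score >= AUTO_BLOCK_THRESHOLD:
        return "blocked"
    if tx.fraud_score >= HUMAN_REVIEW_THRESHOLD:
        # Human-in-the-loop: an analyst makes the final call on borderline cases.
        return "escalated_to_analyst"
    return "approved"

def score_drift_alert(recent_scores: list[float], expected_mean: float,
                      tolerance: float = 0.1) -> bool:
    """Simple alert: flag if the model's average score drifts from what is expected."""
    if not recent_scores:
        return False
    observed_mean = sum(recent_scores) / len(recent_scores)
    return abs(observed_mean - expected_mean) > tolerance

if __name__ == "__main__":
    tx = Transaction("tx-001", amount=2500.0, fraud_score=0.72)
    print(route_transaction(tx))                      # escalated_to_analyst
    print(score_drift_alert([0.9, 0.85, 0.88], 0.4))  # True: raise an alert
```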