Plans to regulate the use of the fast-growing field of artificial intelligence (AI) have been published by the UK government.
AI, which refers to machines that learn from data how to perform tasks undertaken by humans, has a growing number of applications within the fintech sector and financial services more broadly. It is used, for example, in fraud prevention and in deciding whether someone is creditworthy. But risks include algorithmic bias, discrimination, and misuse of personal and financial data.
Proposals for a UK AI rulebook ‘to unleash innovation and boost public trust in the technology’ were announced as the data protection and digital information bill was introduced to Parliament on 18 July.
The government laid out its first national strategy for AI last September and the ‘Establishing a pro-innovation approach to regulating AI – an overview of the UK’s emerging approach’ paper now sets out its regulatory aspirations.
Under the proposals, individual regulators – for example the Financial Conduct Authority (FCA), Competition & Markets Authority (CMA), Information Commissioner’s Office and Medicines & Healthcare products Regulatory Agency – will be asked to interpret and implement the principles. They will be encouraged to consider what are described as ‘lighter-touch options’, which could include guidance and voluntary measures, or creating sandboxes – trial environments where businesses can run the rule over AI tech before introducing it to market.
Six core principles to apply ‘with flexibility’
The proposed approach is based on six core principles that individual regulators should apply ‘with flexibility to implement in ways that best meet the use of AI in their sectors’. The principles require developers and users of AI to: ensure that it is used safely; ensure that it is technically secure and functions as designed; make sure that it is appropriately transparent and explainable; consider fairness; identify a legal person to be responsible; and clarify routes to redress or contestability.
The extent to which existing laws apply to AI can be ‘hard for organisations and smaller businesses to navigate’, the government acknowledges. It states that ‘overlaps, inconsistencies and gaps in the current approaches by regulators can also confuse the rules’ and warns that if rules ‘fail to keep up with fast-moving technology, innovation could be stifled and it will become harder for regulators to protect the public’.
The UK’s Alan Turing Institute this week published a report entitled ‘Common Regulatory Capacity for AI’. It described an ‘urgent’ need for ‘increased and sustainable forms of co-ordination on AI-related questions across the regulatory landscape’. The report was commissioned by the Office for AI (which is part of both the Department for Digital, Culture, Media & Sport and the Department for Business, Energy & Industrial Strategy).
The government said in its announcement that it intends to ‘consider ways to encourage co-ordination between regulators as well as looking at their capabilities to ensure that they are equipped to deliver a world-leading AI regulatory framework’.
Businesses, academics and civil society organisations have been invited to comment on the plans by 26 September. A white paper on AI regulation is also due ‘towards the end of the year’.
The government also this week published what it describes as an ‘AI Action Plan’. This comprises an update on AI activity across government since the national strategy was published, as well as priorities for the year ahead. The action plan includes a government commitment to developing ‘appropriate metrics to track delivery against our vision’.
Other UK AI initiatives include an ‘AI Standards Hub’ platform – an attempt to influence global AI standards – that is led by the Alan Turing Institute, supported by the British Standards Institution and National Physical Laboratory. It is due to launch this autumn.
Financial services and markets bill introduced
This week has also seen the financial services and markets bill introduced to Parliament.
The bill implements the outcomes of the Future Regulatory Framework Review, which was established to consider how the country’s financial services regulatory framework should evolve to be ‘fit for the future’, in particular reflecting the UK’s exit from the European Union (EU) in 2020. It follows the Financial Services Act, which HM Treasury hailed as ‘an important first step in shaping [the UK’s] own financial services regulation outside the EU’ and which received Royal Assent in April 2021.
The wide-ranging bill will enable certain types of stablecoin to be regulated as a form of payment in the UK, and will allow the establishment of financial market infrastructure (FMI) sandboxes – environments in which firms can test the use of new technologies and practices in financial markets.
The bill also intends to give financial regulators, for the first time, a secondary objective to promote the growth and competitiveness of the UK economy, including the financial services sector.
*** A 30-page report, ‘State of the Sector: Annual Review of UK Financial Services 2022’, was released this week by the City of London Corporation in partnership with HM Treasury. The aim of the report, which will be repeated annually, is to ‘help improve competitiveness year-on-year’.
‘Data and financial services reform among UK legislative priorities’ – our news story (10 May 2022) on this year’s Queen’s Speech (the overall package comprised a programme of 38 different bills)
‘“Regulatory alignment will catalyse progress”: UK financial authorities on AI’ – our news story (28 February 2022) on the final report from the AI Public-Private Forum (set up by the Bank of England and FCA in October 2020)
‘UK government presents artificial intelligence strategy’ – our news story (24 September 2021) on the UK’s first national AI strategy
‘UK regulators: machine learning deployments set to double in financial services’ – our news story (24 October 2019) on a report, ‘Machine learning in UK financial services’, produced by the Bank of England and FCA