The UK government has laid out its first strategy for becoming a world leader in artificial intelligence (AI) in order to meet increasing global competition in the emerging field.
The National AI Strategy, published during ‘London Tech Week’, sets out a 10-year plan to strengthen investment in innovation and secure public trust in how AI is used to protect money and data as well as provide healthcare.
Its ambitions include ensuring that all regions of the country enjoy the benefits of AI – which it states is ‘thriving in the UK, backed by our world-leading financial services industry’ – and positioning the UK as “the best place to live and work with AI, with clear rules, applied ethical principles and a pro-innovation regulatory environment”.
Speaking at the launch, Department for Digital, Culture, Media & Sport (DCMS) minister Chris Philp said: “We’re laying the foundations for the next 10 years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”
AI has a growing number of applications within the fintech sector and financial services more broadly – for example, in fraud prevention or in deciding whether someone is creditworthy.
‘What counts as AI is constantly changing’
Regulation of AI will be a significant challenge for the government, as it will for countries around the world, as they grapple with how to mitigate the harms posed by new technologies. The risks include algorithmic bias, discrimination, and misuse of personal and financial data.
“The boundaries of AI risks and harms are grey, because the harms raised by these technologies are often non-AI, or extensions of non-AI, issues, and also because AI is rapidly developing and therefore what counts as the AI part of a system is constantly changing,” the 35-page strategy states.
A white paper on AI regulation is set to be published in early 2022. This is expected to build on the government’s Plan for Digital Regulation, published in July, by determining the requirements for a level playing field on which regulators can operate effectively across different sectors.
The strategy report notes that where AI is used internally by government, such as within the Ministry of Defence (MoD), ‘rigorous codes of conduct and regulation which uphold responsible AI use’ would apply. It added that the MoD is “working closely with the wider government on approaches to ensure clear alignment with the values and norms of the society we represent”.
‘Transformative’ technology’s benefits and risks
Seán Ó hÉigeartaigh, co-director of the Centre for the Study of Existential Risk at the University of Cambridge, said it was encouraging to see a strategy from government that prioritised the long view of the UK’s future with AI.
“Any transformative technology may pose unprecedented risks as well as benefits; the government is taking a leadership position by recognising its responsibility to anticipate and manage these risks.”
He added, however, that the government would need to prioritise investment in research if the UK is to compete alongside other global powerhouses, most notably China and the US.
“The government has made important commitments around horizon-scanning, progress monitoring, AI safety research and boosting compute capacity for academics. It must now back up these commitments with required funding in the upcoming Spending Review.”
In the private sector, UK firms adopting or creating AI-based technologies received £1.78bn in funding in 2020, compared with £525m raised by French companies and £386m raised in Germany, according to data included in the strategy document.
Part of ‘levelling-up’ vision
The National AI Strategy echoes the government’s wider pledge to ‘level-up’ neglected regions of the UK by launching a joint Office for AI (OAI) and UK Research & Innovation (UKRI) programme. The aim will be to extend research and development beyond London and the South-East.
The OAI is a joint unit of DCMS and the Department for Business, Energy & Industrial Strategy (BEIS).
Professor Adrian Hilton, director of the Centre for Vision, Speech and Signal Processing at the University of Surrey, said that world leadership in AI from the UK would require it to invest in technology centred on “the needs of individuals and communities [and] which are ethical, responsible and inclusive”.
“The country’s first National AI Strategy lays the groundwork to this brighter future, but it is up to academia, government, and industry to work together to make AI the force for good that it should be,” Hilton added.
Felicity Burch, recently appointed as executive director of the UK’s Centre for Data Ethics & Innovation (CDEI), welcomed the strategy’s publication, saying in a tweet that her organisation, which is part of the DCMS, ‘looked forward to supporting it’.
The Bank of England (BoE) and Financial Conduct Authority (FCA) set up an ‘Artificial Intelligence Public Private Forum’ last October to discuss the use and impact of AI in financial services.
The forum’s most recent meeting was in June, during which members ‘acknowledged that the topic of regulatory responses to AI is highly complex’, according to the meeting’s minutes. ‘One example is the European Commission proposal for AI regulation [in April 2021]. Members also highlighted that some jurisdictions already have explicit guidance that applies to AI models, like Singapore. Therefore, international regulatory fragmentation could pose a significant challenge,’ the minutes state.
This is an extended version of an article first published by our sister title Global Government Forum
‘UK regulators: machine learning deployments set to double in financial services’ – our news story (24 Oct 2019) on research by the BoE and FCA, which found that use of machine-learning technology (defined as ‘the development of models for prediction and pattern recognition, with limited human intervention’) was expected to more than double over the following three years