Buckley: 'Public trust in these technologies will be vital'. (Image courtesy: Ian Hall).

The head of the UK’s new technology ethics unit has warned that a loss of public trust could undermine work to develop and implement artificial intelligence (AI) systems. “Public trust in these technologies will be vital,” he said, adding that if people “suspect that these technologies are working against them, instead of for them, then we will have a problem.”

Ollie Buckley, executive director of the new Centre for Data Ethics and Innovation (CDEI), was speaking in the City of London on 2 July at an event attended by Global Government Forum. Addressing an audience of finance professionals, he discussed some of the ethical questions raised by the capabilities of emerging technologies.

“What new trade-offs do we make in a world where AI makes things possible that simply weren’t possible before?” he asked. “For example, if banks are able to use AI to identify vulnerable customers from their transaction data, to identify gambling addicts from their pattern of spend online, should they do that, and take steps to protect those people? Do they have a responsibility to do that, or is that a gross infringement of personal privacy? If better predictive power can lower insurance premiums for some but raise the cost for others to prohibitive levels, is that okay and in what circumstances?”

“Trust is the lifeblood of financial services, and it’s a fragile thing,” he added. “For the UK to reap the full benefits of AI, public trust in these technologies will be vital.”

Ethical robots

The CDEI is one of three new organisations created by the UK government to guide policy and harness the potential of AI. The other two are the Office for Artificial Intelligence – run by the culture and business departments – and the AI Council. Buckley said that the CDEI is “grounded in the belief that good governance is something that we in the UK can excel at: we are a global financial services centre, in no small part thanks to our regulation.” There’s an opportunity for the UK to extend that strong, effective governance to cover the use of AI in financial services, he added, creating an opportunity for the country to “set the rules of the road that can set a standard for the rest of the world”.

Among the activities the CDEI is undertaking is the development of what it has dubbed an “AI barometer” to assess the opportunities and risks of deploying AI across the economy. It is also focusing on what Buckley described as “hot topic issues”, such as facial-recognition technology and the role of AI in insurance. Its final recommendations should be out “before spring next year”, Buckley said.

Buckley was speaking at the launch of ‘AI in Financial Services: Impact on the Customer’, a report published by law firm Pinsent Masons and Innovate Finance, an industry body. “There are huge opportunities to create better and fairer services through AI,” he said. “That’s true in financial services but also true across all sectors, from health to transport, from retail to recruitment. Success is ultimately going to be about impact on citizens, and if they suspect that these technologies are working against them, instead of for them, then we will have a problem.”

The 28-page report outlines common uses of AI in the financial services sector, including fraud detection and prevention; chatbots allowing for more immediate responses to customer inquiries; assessing mortgage applications; customer identity verification; and providing insurance quotes.

Advice overload

The CDEI’s work on ethics in AI will add to a growing body of evidence and analysis being produced by organisations around the world. Just last month, the UK’s Government Digital Service (GDS) and the Office for AI published joint guidance on ‘How to build and use artificial intelligence (AI) in the public sector’.

This came less than a month after the Organisation for Economic Co-operation and Development (OECD) published its ‘Principles on AI’, which comprise five values-based principles for the responsible deployment of trustworthy AI, and five recommendations for governments and international institutions. The principles have been adopted by 36 OECD member countries including the UK, US, Canada, Australia and Germany.

Other government-backed initiatives include the European Commission High Level Expert Group on AI’s ‘Ethics Guidelines for Trustworthy AI’ in April 2019; the creation of an International Panel on Artificial Intelligence by the French and Canadian governments in May 2019; and Singapore’s Personal Data Protection Commission’s Proposed Model AI Governance Framework in January 2019. A recent report called for the creation of a dedicated regulatory body in New Zealand, while the UK’s Committee on Standards in Public Life is examining the use of AI in public services.

A version of this article first appeared on our sister publication, Global Government Forum.
