
AI-powered government finances: making the most of data and machines

Panel (clockwise from top left): Dr Joseph Castle, Sam Cannicott, Peter Kerstens, Steve Keller, Siobhan Benita (moderator) and Stela Solar

Artificial intelligence (AI) is being used by a growing number of governments and public authorities across a range of areas. Ian Hall reports on a discussion that examined the opportunities and challenges that arise when applying AI to public finances

Governments are paying growing attention to the potential of artificial intelligence (AI) – the simulation of human intelligence processes by machines – to enhance what they do.

To explore how public authorities are approaching the use of AI for tasks related to public finances, Global Government Fintech convened an international webinar on 4 October 2022 titled ‘How can AI help public authorities save money and deliver better outcomes?’.

The online audience heard from speakers representing the European Commission’s Directorate-General for Financial Stability, Financial Services and Capital Markets Union (DG Fisma); the US Department of the Treasury’s Bureau of the Fiscal Service; the UK’s Centre for Data Ethics & Innovation (CDEI); US-headquartered analytics company SAS; and Australia’s federal government-funded National Artificial Intelligence Centre.

The discussion, organised in partnership with SAS and US-headquartered technology company Intel, highlighted how AI is already helping departments to deliver results. It also showed that AI remains very much an emerging and, to many, rather nebulous field, with many hurdles to clear before widespread use. “Discussions of artificial intelligence often bring up connotations of an Orwellian nature, dystopian futures, Frankenstein…” said the European Commission’s Peter Kerstens, setting the scene. “That is really a challenge for positive adoption and fair use of artificial intelligence because people are apprehensive about it.”

Like most technology-based areas, it’s a field that is also moving very quickly. “If the last class you took in data science was three years ago, it’s already dated,” cautioned the Bureau of the Fiscal Service’s Steve Keller in his own opening remarks.

Alternative ways to consider AI

Peter Kerstens, who is adviser for technological innovation and cyber-security at DG Fisma, began by describing the very name ‘artificial intelligence’ as a “big problem”, asserting that AI is “neither artificial nor is it particularly intelligent – at least not in a way that humans are intelligent.”

“A better way to think about artificial intelligence and machine learning is self-learning high-capacity data processing and data analytics, and the application of mathematical and statistical methodologies to data,” he explained. “That is, of course, not a very appealing name, but that is what it is. But the self-learning or self-empowering element is very important in AI because you have to look at it in comparison to traditional data processing.”

Continuing this theme of caution he further explained: “Like all technology, AI enhances human and organisational capability for the better, but potentially also for the worse. So, it really depends on what use you make of that tool. You can make very positive use of it. But you can also make very negative uses of it. And that’s why governance of your artificial intelligence and machine learning, and potentially rules and ethics, are important.”

For financial regulators, AI is proving useful to help process the vast amounts of data and reports that companies must submit. “It goes beyond human capability, or you have to put lots and lots of people onto it to process just the incoming information,” he said.

Kerstens then mentioned AI’s potential for law enforcement. Monitoring the vast volumes of money moving through the financial system for fraud, sanctions and money laundering “requires very powerful systems”. “But this is also risky because it comes very close to mass surveillance,” he said. “So, if you apply artificial intelligence or machine learning engines onto all of these flows, you really get into this dystopian future of ‘Big Brother’.”

Kerstens also touched on AI’s use in understanding macroeconomic developments. “Typically, macroeconomic policy assessment is very politically driven, and this blurs the objectivity of the assessment. AI assessment is much more independent, because it just looks at the data without any preconceived notions and draws conclusions, including conclusions that may not necessarily be very desirable,” he said.

Connecting the dots

Keller, who is acting director of data strategy in the Fiscal Service, described the ultimate aim of AI as being to “improve decision accuracy, forecasting and speed… trying to use data to make scientific decisions”. This includes, he continued, “testing and verifying our assumptions with data to help make sure that we don’t break things, but also help us ask important questions.”

He outlined four areas in which the Bureau of the Fiscal Service is using AI: Treasury warrants (authorisations that a payment be made); fraud detection; monitoring; and entity resolution.

In the first area, he said the focus was “turning bills into literally a dataset” – the bureau has experimented with using natural language processing to turn written legislation into coherent, machine-readable data that has account codes and budgeted dollars for those account codes; in the second area, he said the focus was checking people are who they say they are (“and how we detect that at scale”); in the third area, uses include monitoring whether “people are using services correctly”.
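
The bureau has not published the tooling behind this work, so the following is a minimal, hypothetical sketch of the ‘bills into a dataset’ idea: pulling account codes and appropriated dollar amounts out of legislative text so they can be stored as structured records. The sample text, account-code pattern and field names are all invented for illustration; a production pipeline would rely on far more robust natural language processing than a single regular expression.

```python
import re

# Hypothetical fragment of appropriations language: both the wording and the
# account-code format are invented for illustration.
BILL_TEXT = """
For necessary expenses of the Widget Safety Administration, account 012-3456,
$4,500,000, to remain available until September 30, 2024.
For operations of the Office of Examples, account 098-7654, $12,250,000.
"""

# Match an account code followed (eventually) by a dollar amount.
LINE_ITEM = re.compile(
    r"account\s+(?P<account>\d{3}-\d{4}).*?\$(?P<amount>[\d,]+)",
    re.DOTALL,
)

def extract_line_items(text: str) -> list[dict]:
    """Turn free-text appropriations language into structured records."""
    return [
        {"account_code": m["account"], "amount_usd": int(m["amount"].replace(",", ""))}
        for m in LINE_ITEM.finditer(text)
    ]

for item in extract_line_items(BILL_TEXT):
    print(item)
# {'account_code': '012-3456', 'amount_usd': 4500000}
# {'account_code': '098-7654', 'amount_usd': 12250000}
```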

“We’re collecting data from so many elements, and often in large public-sector areas, the left hand doesn’t talk to the right hand,” he said, in the context of entity resolution. “We often need to find a way to connect these two up in such a way that we are looking at the same entity so that we can share data in the long run. So, data can be brought together and utilised by data scientists or eventually to create AI that would help these other three things to happen”.
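
The entity-resolution problem Keller describes – deciding whether records held by the ‘left hand’ and the ‘right hand’ refer to the same entity – can be sketched with nothing more than the standard library. The records, field names and similarity threshold below are hypothetical; real programmes use dedicated matching engines, but the shape is the same: compare on a strong identifier first, then fall back to fuzzy matching of normalised names.

```python
from difflib import SequenceMatcher

# Two records for what may be the same vendor, held by systems that don't
# talk to each other. Names and fields are invented; the grants system is
# assumed to lack the strong identifier.
payments_record = {"name": "ACME Corp.", "tax_id": "12-3456789"}
grants_record = {"name": "Acme Corporation", "tax_id": None}

def normalise(name: str) -> str:
    """Lower-case, strip punctuation and drop common corporate suffixes."""
    name = name.lower().replace(".", "").replace(",", "")
    for suffix in (" corporation", " corp", " inc", " llc"):
        name = name.removesuffix(suffix)
    return name.strip()

def same_entity(a: dict, b: dict, threshold: float = 0.85) -> bool:
    """Match on an exact strong identifier first; otherwise compare names."""
    if a.get("tax_id") and a.get("tax_id") == b.get("tax_id"):
        return True
    score = SequenceMatcher(None, normalise(a["name"]), normalise(b["name"])).ratio()
    return score >= threshold

print(same_entity(payments_record, grants_record))  # True: both normalise to 'acme'
```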

Keller also raised ethical, upskilling and cultural considerations. “If people start buying IT products that are going to have AI organically within them, or they’re building them, [questions should arise such as]: are we doing it ethically? Do we have analytics standards? How are we testing? Are we actually getting value from the product? Or is it a total risk?”

He concluded his opening remarks by outlining how the bureau was building an internal ‘data ecosystem’, including a data governance council, data analytics lab, ‘high-value use case compendium’ and ‘data university’.

RELATED ARTICLE: ‘Biden sets out AI Bill of Rights to protect citizens from threats from automated systems’ – a news article (11 October 2022) from our sister title Global Government Forum on a blueprint intended to guide the development of artificial intelligence across the US

‘Driving responsible innovation’

The CDEI, which is part of the Department for Digital, Culture, Media and Sport, was established three years ago to “drive responsible innovation” across the UK public sector.

“A huge focus is around supporting teams to think about governance approaches,” the centre’s deputy director, Sam Cannicott, explained. “How do they develop and deploy technology in a responsible way? How do they have mechanisms for identifying and then addressing some of the ethical questions that these technologies raise?”

The CDEI has worked with a varied cross-section of the public sector including the Ministry of Defence (to explore ‘responsible’ AI use in defence); police forces; and the Department for Education and local authorities to explore the use of data analytics in children’s social care. These are all “really sensitive – often controversial – areas, but also where data can help inform decision-making,” he said.

The CDEI does not prescribe what should be done. Instead it helps different teams to “think through these questions themselves”.

“Ultimately, the questions are complex,” Cannicott said. “While lots of teams might seek an easy answer, [to] be told ‘what you’re doing is fine’, it’s often more complicated, particularly when we look at how you develop a system, then deploy it, and continue to monitor and evaluate. So, we support teams to think about the whole lifecycle process.”

The CDEI’s current work programme is focused on three areas: building an “effective” AI assurance ecosystem (including exploring standards and impact assessments, as well as risk assessments that might be undertaken before a technology is deployed); ‘responsible data access’, including a focus on privacy-enhancing technologies; and transparency (the CDEI has been working with the Central Digital and Data Office to develop the UK’s first public sector algorithmic transparency standard).

This is “underpinned” by a ‘public attitudes function’ to ensure citizens’ views inform the CDEI’s work – important when it comes to the critical challenge of trust.

Italy and Belgium examples

Dr Joseph Castle, adviser on strategic relationships and open source technologies at SAS, described how public authorities around the globe are using AI across a diverse set of fields, ranging from infrastructure and transport through to healthcare.

In government finance, he said, authorities are using analytics and AI to assess policy, risk, fraud and improper payments.

Castle, who previously worked for more than 20 years in various US federal government roles, provided two examples of SAS work in the public sector: with Italy’s Ministry of Economy and Finance (MEF), and with Belgium’s Federal Public Service – Finance.

In the Italian example, he said MEF used analytics to calculate risk on financial guarantees, providing up-to-date reporting for “improved systematic liquidity and risk management” during Covid-19; work with the Belgian ministry, meanwhile, has been on using analytics and AI to predict the impact of new tax rules.

“The most recent focus for public entities has been on AI research and governance, leading to a better understanding of AI technology itself and responsible innovation,” he said. “Public sector AI maturation allows for improved service, reduced costs and trusted outcomes.”

AI’s ability to ‘scale’

Australia’s National Artificial Intelligence Centre launched in December 2021. It aims to accelerate ‘positive’ AI adoption and innovation to benefit businesses and communities.

Stela Solar, who is the centre’s director, described AI’s ability to ‘scale’ as “incredibly powerful”. But, she said, it is “incredibly important” that organisations exploring and using AI tools do so responsibly and inclusively.

In opening remarks reflecting the centre’s focus, she proposed three factors that would be important to help maximise AI’s impact beyond government.

The first, she said, is that more should be done to connect businesses with research- and innovation-based organisations. A ‘national listening tour’ organised by the centre had found, she said, low awareness of AI’s capabilities. “Unless we empower every business to be connected to those opportunities, we won’t really succeed,” she warned.

Her second point focused on small- and medium-sized businesses. “Much of the guidance that exists is really targeted at large enterprises to experience, create and adopt AI,” she said. “But small and medium business is really struggling in this area, which is ironic as AI really presents as a great ‘equaliser’ opportunity because it can deal with scale and take action at scale. It can really uplift the impact that small and medium businesses can have.”

Her third point focused on “community understanding”, which she described as a “critical” factor in accelerating the uptake of AI technologies. This includes achieving engagement from “diverse perspectives in how AI is shaped, created [and] implemented.”

Risks of bias and data deserts

Topics including trust in AI systems, the risk of bias and overcoming scepticism were addressed further during the webinar’s Q&A.

In terms of trust, what goes ‘in’ to any AI tool affects what comes ‘out’. “How reliable they are [AI systems] depends on how good and how unbiased the dataset was,” Kerstens said. “Does it have known biases or something that is a proxy for biases? For example, sometimes people use addresses. People’s addresses, especially in countries where you have very diverse populations, and where different population groups and different racial or religious groups live in particular areas, can be a proxy for religious affiliation, or for race. If you’re not careful, your artificial intelligence engine is going to build in these biases, and therefore it’s going to be biased.”
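
Kerstens’ warning can be made concrete with a toy ‘proxy check’: if a protected attribute can be guessed from an address-like feature much more accurately than the overall base rate allows, that feature is a likely proxy and deserves scrutiny before any model is trained on it. The records below are invented; a real audit would run this kind of test against actual training data, with more rigorous association measures.

```python
from collections import Counter, defaultdict

# Toy records pairing a postcode with a protected attribute. Values are
# invented purely to illustrate the proxy effect Kerstens describes.
records = [
    ("AB1", "group_x"), ("AB1", "group_x"), ("AB1", "group_y"),
    ("CD2", "group_y"), ("CD2", "group_y"), ("CD2", "group_y"),
]

def proxy_strength(rows):
    """How often the protected group could be guessed from the postcode alone.

    A score far above the base rate flags the feature as a likely proxy."""
    by_postcode = defaultdict(Counter)
    for postcode, group in rows:
        by_postcode[postcode][group] += 1
    # Best possible guess per postcode: always pick its majority group.
    correct = sum(counts.most_common(1)[0][1] for counts in by_postcode.values())
    base_rate = Counter(g for _, g in rows).most_common(1)[0][1] / len(rows)
    return correct / len(rows), base_rate

accuracy, base = proxy_strength(records)
print(f"guess-from-postcode accuracy: {accuracy:.2f} vs base rate {base:.2f}")
# guess-from-postcode accuracy: 0.83 vs base rate 0.67
```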

“It’s not just about bias within AI, it’s bias in the data,” said Castle, emphasising the importance of ‘responsible innovation’ across the ‘analytics lifecycle’.

Solar provided a further dimension, adding that organisations can often find themselves working with “substantial” gaps in data (which she referred to as “data deserts”). “It’s actually been impressive to see some of the grassroots efforts across communities to gather datasets to increase representation and diversity in data,” she said, giving examples from Queensland and New South Wales where, respectively, communities had provided data to “help shape and steer investments” and fill gaps in ‘elderly health data’.

On this theme she said that co-design of AI systems with the communities that the technology serves or affects “will go a long way to address some of the biases – and also will go a long way into the question of what should be done and what shouldn’t be done.”

‘Trust gap’ over data use

Scepticism about the use of AI from policymakers, particularly those who are not technologists, was discussed as a common challenge.

“Sometimes there’s a push to use these technologies because they can be seen as a way to save money,” observed Cannicott. “There is also nervousness because some have seen where things have gone wrong, and they don’t want to be to blame.”

He emphasised the importance of experimentation, governance (“having really clear accountability and decision-making frameworks to walk through the ethical challenges that might come up and how you might address them”) and public engagement.

“Some polling we did fairly recently suggested that around half of people don’t think the data that government collects from them is used for their benefit,” he said. “There’s quite a bit of a ‘trust gap’ there [so] decision makers [have] to start demonstrating that they are able to use data in a way that benefits people’s lives.”

Keller emphasised the importance of incorporating ‘recourse’ into AI systems. “If I build a system that detects fraud, and it flags somebody as a villain and they’re not, we need to give them an easy route to appeal that process,” he said.
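
One hypothetical way to honour that principle is to build the appeal route into the decision record itself, so that no automated flag becomes final without a named human reviewer. The class, field names and workflow below are invented for illustration; they simply show the idea of recourse as a first-class part of the system rather than an afterthought.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FraudFlag:
    """An automated fraud flag that is appealable by design (illustrative only)."""
    subject_id: str
    model_score: float
    flagged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appeal_status: str = "open"   # every flag starts out appealable
    reviewer: str | None = None

    def appeal(self, reviewer: str, upheld: bool) -> None:
        """Route the flag to a named human reviewer; the model never has the last word."""
        self.reviewer = reviewer
        self.appeal_status = "upheld" if upheld else "overturned"

flag = FraudFlag(subject_id="case-0042", model_score=0.91)
flag.appeal(reviewer="analyst-7", upheld=False)
print(flag.appeal_status)  # overturned
```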

‘Testing and experimentation important’

Discussion of AI is often purely technical. But when it comes to government use of AI, policy and politics inevitably become entwined.

“To develop artificial intelligence, you need vast amounts of data. Europeans tend to look at personal data protection in a different way than people in the US do,” pointed out Kerstens.

Organisational leaders driven by “doctrines” could struggle to accept a role for AI. “If you run an organisation or a governmental entity based on politics, artificial intelligence isn’t something you’re going to like very much because it is the data speaking to you,” he continued. “They do like artificial intelligence and data when the data confirms a doctrinal or political view. But if the data does not support [their] view, they’ll dismiss it.”

Public sector agencies also need to be savvy about the AI solutions they are buying. “Increasingly, public-sector organisations are being sold off-the-shelf tools. And actually, that’s quite a dangerous space to be in,” said Cannicott. “Because, for example, if you [look at] children’s social care – different geographies, different populations – there’s all sorts of different factors in that data. If you’re not clear on where the data is coming from to build those tools initially, then you probably shouldn’t be using that technology. That’s also where testing and experimentation is very important.”

‘It’s very early days’

There is clearly momentum building behind AI. But an overriding theme from the webinar was the extent to which many remain in the dark or deeply sceptical.

“Often I’ve seen AI be implemented by someone who’s very passionate, and it stays as this hobby experiment and project,” said Solar, emphasising the importance of developing a base-level understanding of AI “across all levels” of an organisation. “For it really to get the momentum across the organisation and to be rolled out into full production, with all the benefits that it can bring, you really need to bring along the policy decision-makers, the leaders – the entire organisational chain,” she said.

Kerstens concluded by emphasising that the story of AI’s growing deployment across the public sector (and beyond) remains in the early chapters. “AI is very powerful. It’s just very early days,” he said. “But what people are most afraid of is… that they don’t understand how the artificial intelligence engine thinks. We should focus on productive, useful applications and not the nefarious ones.”

AI’s advocates will be hoping that, over time, fewer people come to compare it to the tale of Frankenstein.

WATCH: the webinar, held on 4 October 2022, can be viewed in full (1hr 19min 48sec) on the Global Government Forum YouTube page.