Financial regulators and central banks are investing in technology to modernise how they manage and extract maximum value from rapidly growing data volumes. Global Government Fintech, in partnership with AWS, brought together representatives from leading authorities to discuss their data transformation journeys

Public sector financial authorities worldwide are looking to develop and implement comprehensive modern data strategies, and identify investments that will prove critical to achieving their missions and growing their impact.

Possibilities include using machine learning (ML)/artificial intelligence (AI) to drive efficiencies and allocate greater resource to proactive supervisory tasks, and harnessing the power of data analytics to become more effective regulators.

Challenges include the variable quality of data available to regulators and ‘analysis paralysis’, as well as hiring and retaining people with the right skills – a perennial challenge across the public sector.

To explore these issues Global Government Fintech, in partnership with AWS, organised a roundtable event to ask: ‘What is the role for managing and controlling data to drive efficient and effective regulatory supervision?’

The event, held on 13 September 2022, brought together (in ‘virtual’ format) senior representatives from financial authorities including the Bank of England, Reserve Bank of India, Bank of Japan and New Zealand’s Financial Markets Authority to discuss common ground and aspirations.

Unlocking new value from data

John Cann, AWS’s Head of Market Development APJ – Public Sector Financial Institutions, opened the discussion and listed five ingredients for data modernisation success.

These ingredients are: databases to store and process data; data lakes to unite data stored in different places; analytics tools to develop insights and understand trends; machine learning and AI to build models, make predictions and add intelligence to applications; and appropriate security and governance to control “where data is, who has access to it, and what can be done with it at every step of the data journey”.

AWS’s Head of Finance and Regulation for Australia, New Zealand and Oceania, Saket Narayan, described what a “data-driven organisation” should look like.

“Data-driven organisations seek the truth by treating data like an organisational asset – no longer the property of individual departments. They set up systems to collect, store, organise and process valuable data and make it available in a secure way to the people and applications that need it,” Narayan explained. “They also use technologies such as machine learning to unlock new value from the data, improving operational efficiency, optimising processes, developing new products and revenue streams, and building better customer experiences.”

Exponential growth in the volume of data being generated, as well as the growing variety of data types (for example, log files, clickstream data, voice and video), creates possibilities but also challenges, Narayan added. Further roadblocks to success include many AI proofs-of-concept failing to progress into production and organisational struggles to “define, monitor and manage” who can access specific sets of data.

‘Converting tech into insight’

But what, ultimately, is the big picture? The scope and ambition of one country’s regulators’ data modernisation journeys were summarised by one central bank participant as “trying to get the data we need into the building at the right frequency and at the right quality so our supervisors can turn it into fantastic and beautiful insights – and, ultimately, better and safer regulation”.

Investment in technology is fundamental, helping with data collection, storage and analytics, for example. “The challenge for us is to convert all that fantastic tech into genuine insight that’s useful for us as an organisation,” the participant said, adding that moving more machine-learning tool proofs-of-concept into ongoing use was among his team’s priorities.

Defining objectives of any investment is essential. “It’s so crucial to understand why the data is being used and what the problem is you’re trying to solve, because that’s going to inform you about what quality you’re going to be happy with,” one participant advised.

“Often for central banking use cases, we can live with quite a lot of ‘noise’ in our data, because by the very nature of the kinds of things we’re doing, we’re looking at big questions, big problems where there’s a lot of uncertainty anyway.”

The challenge here for financial regulators is, as one participant put it, not “suppressing the noise” but “spotting the signal”.

Data quality challenges

What goes into the sausage machine affects what comes out. “Perhaps the biggest challenge we have is the data itself. The data we collect currently is very inconsistent,” said one participant, picking up on this theme.

“Ultimately, the sources of the data are from firms and their business processes,” he pointed out. “Perhaps the biggest challenge of all for us as central banks and regulators is to get the industry to change and to improve the way that they manage data, the quality of their data, the accessibility of their data, to make it easier for us in turn to do our analysis.”

Data quality is a shared challenge. “I think there is no such thing as clean data. Everybody’s chasing it, but it doesn’t exist,” one participant observed. “In the public sector, you have such diverse data, that you could spend your whole budget plus some to create a data-cleansing methodology that will never work.”

The importance of communicating internally on the importance of data quality was emphasised. “I think that a lot of people, even for the most crucial data, don’t understand why it’s important for them to manage data properly,” said one participant.

It boils down to understanding how data collection and analysis tally with the authority’s overall mission.

“It’s really ‘working backwards’ from the key questions that the business needs to get answers for,” commented one participant. “Even if it’s three questions per business theme, it just helps identify the datasets that are really required, and then helps to answer the granularity, the quality of data. And you then address what’s absolutely essential in terms of data quality.”

Flipping the pyramid

One important aim of investing in technology for data modernisation is to allow team members to spend less time on data collection and more time providing analysis and adding value.

A specific example was shared of one central bank where automation has saved staff members from reading 13,000-sentence documents to assess their tone.

“When the process is manual, a lot of time is spent doing transactional processes, and less time on compliance and control and even less time on insight,” said one participant. “Organisations are trying to flip that pyramid to use supervisory technology and data analytics to make the process faster and then allocate more time to compliance, control and developing insights.”

The importance of looping back to the authority’s overall mission was again raised.

“As an organisation, you can create a fantastic dashboard that’s got fantastic data that generates great insights. But that’s also got to feed into the way that your supervisory model works and how, based on those insights, the supervisor is going to change what they do day-to-day,” said one participant. “That’s a really, really massive challenge because without that, you’re never really going to drive value [and] its business value is going to be limited.”

Transformative change from ‘busy’ to ‘smart’

The theme of how technology investment as part of a data modernisation programme must progress in tandem with an evolution in the organisation’s culture kept bubbling to the surface.

“Transformative change across a whole organisation is incredibly difficult because it implies a great deal of coordination and change in a large number of different areas. In the public sector, I don’t think we’re typically very good at change,” said one participant.

Internal communications are crucial to help generate enthusiasm to embrace change – and to do so quickly.

One participant described data modernisation transformation as being “two per cent about technology and 98 per cent about people”, observing a need for many workers to evolve from being “busy” to being “smart”.

“I keep telling my teams that we have to do it exactly as if we were in private sector, as if we were in a competition, because we want to catch the bad guys, we are competing with them – and they definitely are investing in the latest tech. So, we have to keep pace with them,” he said.

‘Data stewards’ point the way

One participant spoke of his organisation – which is moving to the cloud and building a “modern data platform” – having begun a process of breaking through departmental silos and “merging knowledge assets”, with collaboration visibly increasing and team members referring to its data store via an internal acronym (“the first sign that we are getting somewhere”).

Another example was provided of an organisation establishing ‘data stewards’ in order to make clear – and demonstrate – who is responsible for the quality of the data products they own and for related questions.

“We made visible what bad data does, how it makes the organisation less efficient,” said the participant. “Once people started to feel that they ‘own’ the space, that there is actually an impact of work they are doing with data in their daily work, that started to change the process, and it also started to change how people understood data.” Enablers for this that were described included the importance of training programmes, cultural changes and communication both internally and externally.

Participants also discussed the importance of ‘user experience’ – how positively team members feel about new tools and techniques. It is key to make data easily accessible and understandable, for example through providing ‘data marts’ and a semantic layer that translate underlying data into a more comprehensible form.

“I used to say that the best way to sell data is not to mention the word ‘data’, because then people will just step back,” said one participant, stating a preference for the phrase ‘data-informed decision intelligence’ ahead of ‘data-driven decision-making’. This, he explained, communicates a message that “I’m not taking authority away from you, I’m just providing intelligence for you to make better decisions”. Making this message “more flattering” helps to “hand back the authority to the people who actually want to use this data”.

Dangers in the data deluge

As was highlighted by AWS’s Narayan at the start of the discussion, the volume of data available to regulators is growing rapidly.

The digital finance revolution is gaining momentum, and central bank digital currencies (CBDCs) were mentioned as one example of new financial products and asset classes bringing data-generation possibilities.

One participant also highlighted the opportunities of alternative data sources, for example card-payments data, to supplement traditional statistics, saying that his authority was using such data to help it monitor and analyse economic conditions.

Amid the ‘data deluge’ there are also dangers associated with gathering a “vast reservoir” of data, one participant highlighted, warning of two specific potential risks.

“One is that [authorities] have become targets for hackers and nefarious actors,” he cautioned. “The other is that they have to ensure that the information is validated because if it isn’t, then the risk is that their decisions or interpretations are going to be incorrect. And this could lead to the wrong supervisory action as well.”

‘This is a big, long-term project’

In contrast with the data explosion, resourcing for public-sector authorities – which is already constrained – rarely grows proportionally to increases in their regulatory remits.

Data modernisation has the potential to – and indeed should – lead to a reallocation of resource into the areas where impact proves most significant. This is fundamentally important but not necessarily easy to execute.

“It’s much easier to build a dashboard and to generate some insights than it is to actually then change how your supervisory organisation model works, and to use that dashboard to change how you prioritise your resource,” said one participant.

“Sometimes we see that – despite large investments in technology and data transformation programmes – often decisions are still being made based on gut instincts, and not driven by data points,” another observed. “I really see the need for doubling down on how organisations think about, particularly in the public sector, how the executives are incentivised to tap into those data points to drive decision-making.”

The sense was that the journey is only just beginning. “This is a big, long-term project,” said one central bank participant. “For us we think it’s going to take about 10 years.”

ROUNDTABLE ATTENDEES

• Angus Moir, Head of Data Collection Transformation, Bank of England
• Masaki Bessho, Head of Fintech Centre, Bank of Japan
• Mark MacKenzie, Senior Financial Sector Specialist, South East Asian Central Banks (SEACEN) Research and Training Centre
• Endre Fáklya, Intelligence and Analytics Manager, Financial Markets Authority – New Zealand
• Makoto Seta, Reserve Bank of New Zealand
• JC Somers, Senior Policy Advisor, Reserve Bank of New Zealand
• Virendra Charan, CGM Supervision Department, Reserve Bank of India
• Mukhil Jayasankar, CGM Supervision Department, Reserve Bank of India
• Manaswini Panda, DGM Supervision Department, Reserve Bank of India
• John Cann, Head – Market Development APJ – Public Sector Financial Institutions, AWS
• Saket Narayan, Head of Finance and Regulation, Australia, New Zealand and Oceania, AWS
• David MacKeith, Principal Technology Advisor, AWS
• Varun Karulkar, Account Manager, AWS
• Ian Hall, Editor, Global Government Fintech
• Siobhan Benita, Event Moderator, Global Government Fintech

For further information on data modernisation, please feel free to contact: