US Treasury consults on ‘rapidly evolving’ AI use across financial sector

US Treasury: Janet Yellen announced the RFI’s issuance at the ‘2024 Conference on Artificial Intelligence and Financial Stability’ in Washington DC | Credit: Karolina Kaboompics (Pexels)

The Department of the Treasury has become the latest US government authority to seek public input on the ‘rapidly evolving’ use of artificial intelligence (AI) across the financial services industry. 

A ‘request-for-information’ (RFI) has been issued with a 60-day deadline for responses to 19 questions on the ‘uses, opportunities and risks’ of AI for companies providing financial products and services.

The department is interested in perspectives on topics including: potential obstacles to facilitating ‘responsible’ use of AI within financial institutions; the extent of AI’s impact on consumers, investors, financial institutions, businesses, regulators, end-users and any other entity affected by financial institutions’ use of AI; and recommendations for enhancements to the legislative, regulatory and supervisory frameworks applicable to AI in financial services. 

The RFI’s issuance comes just over four months after the Commodity Futures Trading Commission (CFTC) issued a request-for-comment on current and potential uses and risks of AI in derivatives markets – a move that itself came a couple of months after the White House issued an executive order on ‘safe, secure and trustworthy’ AI.

“Treasury is proud to be playing a key role in spurring responsible innovation, especially in relation to AI and financial institutions. Our ongoing stakeholder engagement allows us to improve our understanding of AI in financial services,” under-secretary for domestic finance Nellie Liang said in a Treasury press release. 

RELATED ARTICLE US regulator explores views on AI use and risks in derivatives markets – our news story (7 February) on the CFTC request-for-comment

FSOC has AI in its sights

The RFI was announced by Treasury secretary Janet Yellen on the first day of a two-day conference organised in Washington DC by the Financial Stability Oversight Council (FSOC) – a body established following the 2008 global financial crisis – in partnership with the Brookings Institution.

“If we define AI broadly, the financial services sector has already been capitalising on these opportunities,” Yellen, who also chairs the FSOC, told the audience on 6 June. “For many years, the predictive capabilities of AI have supported forecasting and portfolio management. AI’s ability to detect anomalies has contributed to efforts to combat fraud and illicit finance. Many customer support services have been automated. Across these and many other use-cases, we’ve seen that AI, when used appropriately, can improve efficiency, accuracy and access to financial products.” 

AI’s “rapid evolution” could also generate additional use-cases, she continued. “Advances in natural-language processing, image recognition and generative AI, for example, create new opportunities to make financial services less costly and easier to access.”

But she pointed out that the FSOC’s annual report for 2023 identified the broader adoption of AI in financial services as a vulnerability for the first time; and that an ‘analytic framework’, published by the FSOC in November 2023, provided “insights into the range of potential risks that AI can pose to the financial system”.

“Specific vulnerabilities may arise from the complexity and opacity of AI models; inadequate risk management frameworks to account for AI risks; and interconnections that emerge as many market participants rely on the same data and models,” Yellen said. “Concentration among vendors developing models, providing data and providing cloud services may also introduce risks, which could amplify existing third-party service provider risks. And insufficient or faulty data could also perpetuate or introduce new biases in financial decision-making.” 

RELATED ARTICLE US government taps agencies for AI project ideas – a news article (13 February 2024) in our sister title Global Government Forum on the US government’s Technology Modernization Fund (TMF) calling for AI project proposals from federal agencies as part of the 2023 executive order

Yellen: ‘we have our work cut out’

Yellen emphasised that the Treasury was “not starting from scratch or seeking to reinvent the wheel” as it seeks to address AI-related risks but that authorities “have our work cut out” as they strive to keep up with the pace of change.

The Treasury itself in March published a 52-page report on ‘Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector’.

“We are also in regular communication with federal financial regulators on their AI-related efforts,” she continued, pointing out that one of the priorities in the Treasury’s ‘2024 National Illicit Finance Strategy’ was harnessing technology to mitigate illicit finance risks.

“We’ve engaged with the public and private sectors on using AI to detect some of the greatest risks we face, from money laundering, to terrorist financing to sanctions evasion,” she said. “At Treasury we are building our capacity to keep up with new technologies and leverage them in our own operations, such as the Internal Revenue Service’s use of AI for enhanced fraud detection.”

She added that engagement was also ongoing internationally, including through bodies such as the Financial Stability Board, “to consider AI’s impacts on the international financial system and global economy.” 

RELATED ARTICLE UK government directs funding to ‘agile’ AI regulation – an article (6 February 2024) in Global Government Forum on the UK government publishing its response to a consultation on an AI regulation white paper, allocating more than £100m ($125.5m) in funding to support regulators and advance research and innovation in AI

Hsu: ‘AI holds promise and peril’

Acting comptroller of the currency Michael Hsu also spoke on the first day of the ‘2024 Conference on Artificial Intelligence and Financial Stability’.

In his speech – titled ‘AI Tools, Weapons and Accountability: A Financial Stability Perspective’ – Hsu said that “what starts off as responsible innovation can quickly snowball into a hyper-competitive race to grow revenues and market share, with a ‘we’ll deal with it later’ attitude toward risk management and controls”; and that, “in time, risks grow undetected or unaddressed until there is an eventual reckoning.”

He argued for a “shared responsibility” model for fraud, scams and ransomware attacks in the banking and finance arena.

“From a financial stability perspective, AI holds promise and peril from its use as a tool and as a weapon,” Hsu said in his speech. “The controls and defences needed to mitigate those risks vary depending on how AI is being used. At a high level, though, I believe having clear gates and a shared responsibility model for AI safety can help. Agencies like the OCC [Office of the Comptroller of the Currency] and bodies like the FSOC and the US AI Safety Institute can play a positive role in facilitating the discussions and engagement needed to build trust in order to maintain US leadership on AI.”

Conference participants included staff from FSOC member agencies, as well as representatives from technology, finance and civil society organisations.

RELATED ARTICLE Luxembourg government co-finances R&D project on AI in banking – a news story (8 February 2024) on the EU member state’s government co-funding a research and development (R&D) project focused on AI involving one of Europe’s biggest banks and the University of Luxembourg

EU’s ESMA provides ‘initial guidance’

Across the Atlantic, the European Securities and Markets Authority (ESMA) recently (30 May) issued a statement providing ‘initial guidance’ to firms using AI technologies when they provide investment services to retail clients.

ESMA, which is the financial markets regulator and supervisor for the 27-member European Union (EU), set out in the seven-page ‘public statement on AI and investment services’ how firms can use AI – whose ‘diffusion’, it states, is ‘still in its initial phase’ – without contravening the EU’s MiFID II (Markets in Financial Instruments Directive 2014) requirements.

The Paris-headquartered authority states that although AI technologies offer potential benefits to firms and clients, they also pose inherent risks. It references: algorithmic biases and data-quality issues; ‘opaque decision-making by a firm’s staff members’; ‘over-reliance on AI by both firms and clients for decision-making’; and privacy and security concerns linked to the collection, storage and processing of the large amount of data needed by AI systems.

ESMA and national competent authorities (NCAs) ‘will keep monitoring the use of AI in investment services and the relevant EU legal framework to determine if further action is needed in this area’, the authority states.

The EU AI Act was adopted by MEPs in March and will be fully applicable in 2026.