US regulator explores views on AI use and risks in derivatives markets

Artificial intelligence: CFTC commissioner Caroline Pham (inset) discussed the US regulator’s request-for-comment during a conference held last week | Credit: Screenshot by Global Government Fintech from Afore Consulting (virtual) event, overlaid on photo by Gerd Altmann (Pixabay)

A request-for-comment on current and potential uses and risks of artificial intelligence (AI) in derivatives markets has been issued by the US Commodity Futures Trading Commission (CFTC).

The regulator is seeking public views on the definition of AI and its applications, including its use in trading, risk management, compliance, cybersecurity, recordkeeping, data processing and analytics, as well as customer interactions. The request also seeks perspectives on the risks of AI, including risks related to market manipulation and fraud, governance, explainability, data quality, concentration, bias, privacy and confidentiality, and customer protection. Staff will consider responses in analysing possible future actions by the CFTC, such as new or amended guidance, interpretations, policy statements or regulations.

The move – described in the request-for-comment document’s introduction as ‘part of a broader staff effort to monitor the adoption of AI, including machine learning, and other uses of automation in CFTC-regulated markets’ – comes a couple of months after the White House issued an executive order on ‘safe, secure and trustworthy’ AI.

Rostin Behnam, chairman of the Washington DC-headquartered regulator, described the call for views as “complementing the directives the Biden Administration established for the safe, secure and trustworthy development of artificial intelligence”, as well as “embodying good government”.

“It prioritises promoting responsible innovation and ensuring we understand current and potential AI use cases and the associated potential risks to our jurisdictional markets and the larger financial system. This allows us to better align our supervisory oversight and evaluate the need for future regulation, guidance, or other Commission action,” Behnam said. He added that responses would “further support the CFTC as we strategically identify the highest priorities and return-on-investment projects with AI use cases internally to optimise our data-driven approach to policy, surveillance, and enforcement.”

RELATED ARTICLE UK government directs funding to ‘agile’ AI regulation – an article (6 February 2024) in our sister title Global Government Forum on the UK government publishing its response to a consultation on an AI regulation white paper, allocating more than £100m ($125.5m) in funding to support regulators and advance research and innovation in AI

Twenty topics to tackle

‘This request [for comment] uses the term “AI” in a manner consistent with the [White House] executive order to broadly refer to “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments”’, the introduction to the 12-page document states, adding that ‘while this request-for-comment seeks responses relating to use cases consistent with this definition, the staff is also interested in better understanding concerns raised by the broader set of similar technologies within and beyond the scope of this definition of AI.’

Twenty overarching questions (or topics) for respondents to consider are provided, most of which break down into more specific sub-questions. The deadline for responses is 24 April.

The first eight questions address current and potential uses of AI by CFTC-regulated entities, with the second question – which is focused on ‘general uses’ of AI – specifying that activities of interest include ‘but are not limited to’: trading; data processing and analytics; risk management; compliance; books and records; systems development; cybersecurity and resilience; and customer interactions.

Questions nine to 18 address concerns regarding the use of AI in CFTC-regulated markets and by CFTC-regulated entities. These include explainability and transparency (question 11); data quality (question 12); and bias (question 15). The final two questions (19 and 20) focus, respectively, on risks to competition and ‘other risks’.

A statement by Kristin Johnson, one of the CFTC’s five commissioners, was released simultaneously with the request-for-comment document, in which she states that the CFTC Market Risk Advisory Committee (MRAC) had recently discussed how its ‘Future of Finance’ sub-committee ‘might address AI in 2024’. She writes that MRAC (of which she is the ‘sponsor’) ‘anticipates offering formal recommendations to the Commission on a number of related topics including the integration in our markets of generative AI, the relationship between AI and blockchain technology, and the risks (including systemic risks) presented by each of these new technologies’ and that the request-for-comment is ‘an important step towards that goal’.

RELATED ARTICLE CFTC commissioner proposes US digital assets regulatory sandbox – a news story (23 September 2023) on Caroline Pham calling for a ‘time-limited CFTC pilot programme to support the development of compliant digital asset markets and tokenisation’

‘Humans have used tools since we have been in caves’

Another CFTC commissioner, Caroline Pham, discussed the request-for-comment – which is issued by the CFTC’s divisions for market oversight, clearing and risk, market participants and data, as well as its Office of Technology Innovation – at the ‘8th Annual FinTech and Regulation Conference’ organised by Belgium-based company Afore Consulting last week (30 January 2024).

“I think one of the things in my recent conferences and engagements, particularly in Europe, that I have found meaningful is hearing technology leaders say that AI is really a tool that can be used to enhance human productivity,” she observed. “I think if we remember that no matter what the technology is, that these are tools that humans are using, that humans have used tools since we have been in caves, that it’s not something to be scared of, but something that we should understand better so that we can harness its abilities in a responsible and controlled manner.”

Asked about the separate topic of tokenisation in securities markets, she said that her focus was on whether the introduction of new technology required amendments to regulation rather than the relative merits of different technologies. “I certainly don’t know how to build a mainframe or how the systems are actually working, although I’m very happy to review them for compliance or surveillance or regulatory purposes,” she said. “So I think it’s not so much what is the underlying technology, but are you able to deploy it in a controlled manner, and able to mitigate your risks. That’s fundamentally how I approach the concept of tokenisation.”

“With that in mind, I actually feel that the concept of regulating tokenisation of financial products and services is actually a rather boring conversation because we should use our existing laws and regulations, and work through these technical implementation issues – such as the application of record-keeping requirements to blockchain technology, or how you maintain an audit trail, or some of these other nuances – in just a very pragmatic way,” she added.

She acknowledged that the “advantages and efficiencies that tokenisation can bring to settlement and clearing, for example, and other back-office processes, can propel perhaps some dramatic shifts in market structure”, and praised “great work” on this topic led by the Switzerland-headquartered Bank for International Settlements (BIS).

RELATED ARTICLE UK regulators: machine learning becoming ‘more advanced and increasingly embedded’ – a news story (17 October 2022) on a ‘Machine Learning in UK Financial Services’ report, based on a survey that elicited responses from 71 of 168 firms contacted, published by the Bank of England and (UK) Financial Conduct Authority

‘Capacity-building is critical’

Pham’s appearance at the conference (which was held ‘virtually’) closed with a brief discussion about how regulators can ensure they have the right people and skills to handle advances in technology.

“When I was in the private sector, I developed and deployed training programmes for over 17,000 staff all around the world, and over a dozen countries and in different languages,” Pham reflected (she moved to the CFTC just under two years ago from the international bank Citi, where she spent more than seven years). “So, the capacity-building part of the equation is absolutely critical. In my work with developing economies and frontier markets, as well as, as the leader of an organisation, capacity-building is something that must be intentional and it must be something that is part of the overall plan.”

She again referenced the BIS as having undertaken “excellent initiatives” in this regard, as well as (in the US) the Federal Reserve and Office of the Comptroller of the Currency (OCC). The two US agencies have “been engaged for quite a period of time in developing capabilities and capacity within their examination teams, and then their engagements with the private sector,” she said.

At the CFTC she mentioned an ‘expert speaker series’ on new and emerging areas of technology, as well as practical workshops focused on “foundational skills around compliance and risk management”.

She also referenced a skills-related challenge shared by many regulators, and the public sector more broadly, across the globe. “It’s always going to be a bit of a disadvantage when you’re in the public sector and you cannot offer a competitive salary and benefits,” she said. “So that’s why we have to be creative and we have to do what we can in order to ensure that we, too, are engaging [with technology firms].”

*** The CFTC’s Office of Customer Education and Outreach – which seeks to combat fraud and to give people the tools to watch out for fraudulent activity – has also (on 25 January) issued a customer advisory, warning the public about AI scams: ‘AI Won’t Turn Trading Bots into Money Machines’ seeks to help investors to identify and avoid potential scams, urging people to ‘be wary of the hype around AI especially when promoted by social media influencers and strangers you meet online’.