As Prepared for Delivery
Good afternoon. I would like to join Sandra in welcoming you to the Treasury Department for the first day of the Financial Stability Oversight Council’s Conference on Artificial Intelligence and Financial Stability.
This is a topic that I—and my colleagues across Treasury, FSOC member agencies, and the Biden Administration—take very seriously. And it will only become more important in the years ahead. This Conference provides a key opportunity for the public and private sectors to discuss the opportunities and risks arising from AI and for us to share the Council’s work to date and our agenda for the future.
I. Opportunities and Risks
Let me start by briefly addressing the opportunities and risks of artificial intelligence.
As I know many of you here today recognize, AI offers tremendous opportunities for the financial system. And if we define AI broadly, the financial services sector has already been capitalizing on these opportunities. For many years, the predictive capabilities of AI have supported forecasting and portfolio management. AI’s ability to detect anomalies has contributed to efforts to combat fraud and illicit finance. Many customer support services have been automated. Across these and many other use cases, we’ve seen that AI, when used appropriately, can improve efficiency, accuracy, and access to financial products.
More recently, AI’s rapid evolution has created the potential for additional use cases. Advances in natural language processing, image recognition, and generative AI, for example, open new opportunities to make financial services less costly and easier to access.
But I know all of you here also recognize that the use of AI by financial institutions comes with risks alongside these opportunities. Last year, in its 2023 annual report,[1] FSOC identified the broader adoption of AI in financial services as a vulnerability for the first time. And the Council and member agencies have been working to deepen our collective understanding of financial stability risks associated with AI, while also recognizing that AI can improve financial services.
The Council’s new Analytic Framework, published last November, provides helpful insights into the range of potential risks that AI can pose to the financial system. Specific vulnerabilities may arise from the complexity and opacity of AI models; inadequate risk management frameworks to account for AI risks; and interconnections that emerge as many market participants rely on the same data and models. Concentration among the vendors that develop models, supply data, and provide cloud services may also introduce risks and could amplify existing third-party service provider risks. And insufficient or faulty data could perpetuate existing biases, or introduce new ones, in financial decision making.
II. Treasury Actions
The Biden Administration has been focused on harnessing AI’s potential to fuel innovation while mitigating risks, as reflected in President Biden’s landmark Executive Order on AI last year. Treasury is proud to be playing a key role in spurring responsible innovation, especially in relation to AI and financial institutions.
As we look to address AI-related risks, we are not starting from scratch or seeking to reinvent the wheel. Treasury, the Council, and member agencies have frameworks and tools that can help mitigate risks related to the use of AI, such as guidance on model risk management and third-party risk management. That said, there are also new issues to confront, and this is a rapidly evolving field. We have our work cut out for us and are pursuing a variety of initiatives to identify and address emerging risks.
We have carried out in-depth research and analysis, including on AI’s potential financial and economic impacts. Under the President’s Executive Order on AI, in March, Treasury released a detailed report providing an extensive overview of current use cases and best practices related to AI for cybersecurity and fraud prevention in the financial sector. The report also highlights key steps to address immediate AI-related operational risk, cybersecurity, and fraud challenges.
We are also in regular communication with federal financial regulators on their AI-related efforts. One of the key priorities in Treasury’s 2024 National Illicit Finance Strategy is harnessing technology to mitigate illicit finance risks, and we’ve engaged with the public and private sectors on using AI to detect some of the greatest risks we face, from money laundering, to terrorist financing, to sanctions evasion. At Treasury, we are building our capacity to keep up with new technologies and to leverage them in our own operations, such as the Internal Revenue Service’s use of AI for enhanced fraud detection.
AI is of course not just a domestic issue, and our work has not been confined to the United States. We’ve been engaging internationally with our allies and partners, including financial regulators, bilaterally and through bodies like the Financial Stability Board, to consider AI’s impacts on the international financial system and global economy.
III. Work Ahead
Our work must continue to expand and evolve. I’ll highlight a few of our key initiatives as we look ahead.
First, we are continuing our stakeholder engagement to improve our understanding of AI in financial services. I am pleased to announce today that Treasury is launching a formal public request for information to seek comments from financial institutions, consumers, advocates, academics, and other stakeholders on the current uses, opportunities, and risks of AI in the financial services sector.
I am also pleased to announce that Treasury’s Federal Insurance Office will convene a roundtable on AI and insurance to discuss the benefits and challenges associated with the use of AI by insurers, best practices, and potential consumer protections to prevent discrimination. Together, these two initiatives will contribute to Treasury’s improved understanding of how AI impacts different types of financial institutions.
Second, FSOC will continue its efforts to monitor AI’s impact on financial stability, facilitate the exchange of information, and promote dialogue among financial regulators. The Council will also continue to support efforts to build supervisory capacity to better understand associated risks. Scenario analysis, often used by firms and governments to understand opportunities and risks in the context of uncertainty, could also be beneficial. Given how quickly AI technology is developing, with fast-evolving potential use cases for financial firms and market participants, scenario analysis could help regulators and firms identify potential future vulnerabilities and inform what we can do to enhance resilience.
Let me end here for now. The tremendous opportunities and significant risks associated with the use of AI by financial companies have moved this issue toward the top of Treasury’s and the Financial Stability Oversight Council’s agendas. And this Conference is a valuable opportunity to hear your perspectives on how AI could make our system more resilient and on the risks you see, as well as on how the Council can enhance how it identifies, assesses, and mitigates potential risks in this area. Thank you for joining us here.
###
[1] https://home.treasury.gov/system/files/261/FSOC2023AnnualReport.pdf.
Official news published at https://home.treasury.gov/news/press-releases/jy2395