- Pursuant to President Biden's October 2023 AI Executive Order, the U.S. Department of the Treasury (Treasury) released a report in March 2024 on managing AI-specific cybersecurity risks in the financial services sector.
- While the report recognizes the benefits that AI-based cybersecurity tools offer, it also highlights the particular vulnerabilities of those tools and how AI can be used by threat actors (individuals or groups who intentionally cause harm to a digital device or system) attempting to carry out targeted cyberattacks against financial institutions. Financial institutions are cautioned to be aware of the new capabilities that AI provides to both defenders and attackers.
- To address these risks, Treasury recommends that financial institutions implement risk management procedures consistent with the principles contained in “existing laws, regulations, and supervisory guidance.”
- Treasury will also help industry and regulators create a common AI lexicon, expand the National Institute of Standards and Technology's AI Risk Management Framework (“NIST AI RMF”) to more specifically address the financial sector, support further research on AI explainability, and address the human capital gap.
On March 27, 2024, the U.S. Department of the Treasury (Treasury) released a report titled Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, published pursuant to President Joe Biden's October 2023 AI Executive Order.[1] The report provides an overview of “the current state of artificial intelligence (AI)-related cybersecurity and fraud risks in financial services, including an overview of current AI use cases, threat and risk trends, best practice recommendations, challenges and opportunities.”
In preparing the report, Treasury conducted dozens of interviews with financial institutions of various sizes and market positions. The report examines both financial institutions' use of AI to detect fraud and attackers' deployment of AI to commit fraud.
Utilization of AI in fraud detection by financial institutions
As the report acknowledges, financial institutions have been using AI-powered fraud detection tools for “more than a decade.” However, recent advances in AI technology have led many financial institutions to incorporate AI into their existing threat detection tools or fully adopt new AI-based systems. “AI-driven tools are replacing or augmenting many financial institutions' traditional signature-based threat detection cybersecurity approaches,” the report notes.
According to financial institutions interviewed by Treasury, these AI-based cybersecurity tools have the “potential to significantly improve the quality and cost efficiency of cybersecurity and anti-fraud controls” and can “help financial institutions better adopt a proactive cybersecurity and fraud posture.”
Despite these potential benefits, the Treasury report raises the concern that smaller financial institutions, with their relative lack of relevant expertise and data, may become unduly dependent on third-party AI fraud-detection tools and could be placed at a disadvantage. “While smaller institutions may have access to these tools through vendors, internal development offers advantages in access to sufficient data for model development and testing, transparency, governance oversight and controls, and evaluation for model risk management purposes.”
The report predicts that “the resource requirements of AI systems will generally result in institutions increasing their direct and indirect dependence on third-party IT infrastructure and data.” As a result, financial institutions will need to consider how to appropriately assess and manage the risks of their extended supply chains, including the potentially heightened risks associated with data and data processing across a wide range of vendors, data brokers, and infrastructure providers.
Additionally, the report notes that financial companies expose themselves to a distinct set of cybersecurity challenges when implementing AI fraud-detection tools. Compared to traditional fraud-detection solutions, AI tools present new vulnerabilities “due to the reliance of AI systems on data used for training and testing.” The report identifies four such vulnerabilities that financial institutions should consider when implementing AI-based cybersecurity tools (a brief illustrative sketch of the first follows the list).
- Data poisoning: Corrupting the training data of an AI model to “compromise the training process or obtain the desired output of the model.”
- Data leakage during inference: Extracting sensitive information that a model was exposed to during the training process by querying the model.
- Evasion: Obtaining a desired output from a model through strategically crafted queries.
- Model extraction: Stealing an AI model at scale by “repeatedly querying the model.”
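To make the first of these vulnerabilities concrete, the short Python sketch below is purely illustrative and is not drawn from the Treasury report; it assumes scikit-learn and an entirely synthetic dataset. It shows how “poisoning” a portion of a toy fraud-detection model's training labels can reduce the share of fraud the model later catches.

```python
# Illustrative sketch only (not from the Treasury report): a toy demonstration
# of the "data poisoning" vulnerability, using scikit-learn and synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "transaction" data: class 1 = fraudulent, class 0 = legitimate.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

def fraud_recall(train_labels):
    """Train a simple detector and report how much test-set fraud it catches."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return recall_score(y_test, model.predict(X_test))

# Baseline: detector trained on clean labels.
print("clean training data:    fraud recall =", round(fraud_recall(y_train), 3))

# Poisoned training set: an attacker relabels half of the fraudulent training
# examples as legitimate, so the model learns to let similar fraud through.
poisoned = y_train.copy()
fraud_idx = np.flatnonzero(poisoned == 1)
flipped = rng.choice(fraud_idx, size=len(fraud_idx) // 2, replace=False)
poisoned[flipped] = 0
print("poisoned training data: fraud recall =", round(fraud_recall(poisoned), 3))
```

Label flipping is only the simplest form of poisoning, but the same dependence on training and testing data underlies the other vulnerabilities the report lists.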
Use of AI for fraud by threat actors
The second topic addressed in the Treasury report is the use of AI by threat actors to carry out targeted cyberattacks against financial institutions. In interviews with various financial institutions, Treasury found that market participants are concerned that increasing access to AI, particularly generative AI tools, could make it easier for bad actors to commit financial fraud.
“Most of the concerns identified by financial institutions are related to lowering barriers to entry for attackers, increasing sophistication and automation of attacks, and reducing time to exploitation,” the report states. “Generative AI can help existing threat actors develop and test more sophisticated malware, providing them with complex attack capabilities previously available only to the most resourceful attackers.”
This report details the four main ways cyber threat actors are using AI against institutions that handle financial and other sensitive data.
- Social engineering: Using generative AI to facilitate “targeted phishing, business email compromise, and other fraud.” Generative AI systems allow threat actors to disguise themselves more realistically, reflecting different backgrounds, languages, statuses, and genders.
- Malware/code generation: Threat actors could use generative AI to quickly develop malicious code and assets such as “a fake copy of a financial institution's entire website to harvest customer credentials.”
- Finding vulnerabilities: Using AI-based tools typically deployed for cyber defense, attackers can discover vulnerabilities in financial institutions' IT networks.
- Disinformation: Attackers may combine targeted cyberattacks against financial institutions' IT networks with AI-generated disinformation campaigns to increase the effectiveness of their attacks.
AI risk management for financial institutions
Through interviews with financial institutions, Treasury found that “existing risk management frameworks may be insufficient to cover emerging AI technologies” and that, as a result, “financial institutions appear to be slow to adopt widespread use of emerging AI technologies.” To address this situation, the Treasury report provides recommendations and guidance to financial institutions seeking to deploy AI-based systems responsibly.
In Treasury's view, advances in technology do not mean that “existing risk management and compliance requirements and expectations are no longer applicable…Existing laws, regulations, and supervisory guidance may not explicitly address AI. However, the principles contained therein can help promote safe, sound, and fair implementation of AI.” Treasury therefore recommends that financial institutions “identify, monitor, and control risks arising from the use of AI as they would the use of other technologies.”
One of the documents the report recommends financial institutions consult when developing an AI risk management strategy is the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF), which addresses the many risks posed by AI and promotes the development and use of trustworthy and responsible AI systems.
In line with the recommendations of the NIST AI RMF, the Treasury report suggests that financial institutions deploy AI tools in accordance with their risk tolerance. “Use cases for AI systems need to consider risk tolerance related to the shortcomings of current generative AI,” the report asserts. “If a higher level of explainability is appropriate for a use case, generative AI may not be a viable option at this time. If a use case calls for assurances against bias, it may be appropriate to train an AI model only on data prepared in accordance with anti-bias standards.”
Conclusion
The report concludes by identifying 10 “next steps the Treasury Department, in collaboration with other agencies, regulators, and the private sector, can take to address the immediate AI-related cybersecurity and fraud risks to financial institutions.” These next steps include creating a common AI lexicon, expanding the NIST AI RMF to more specifically address the financial sector, supporting research on algorithmic explainability, and addressing the human capital gap.
The report and its next steps suggest that, while Treasury does not oppose the adoption of AI-based tools by the financial sector, it expects market participants to be aware of the risks associated with such adoption and to put in place risk management procedures that minimize those risks.
This goal is consistent with those of other financial services regulators, such as the U.S. Securities and Exchange Commission (SEC), which recently proposed rules governing the use of AI and other predictive analytics technologies by broker-dealers and investment advisers. The SEC's proposal faced considerable criticism because it would require broker-dealers and advisers to fully audit for and eliminate conflicts of interest related to their use of AI technology. Meeting this requirement could significantly limit the use of “black box” technology in trading and other applications, including technologies based on large language models, which are often not fully explainable.
Interestingly, the Treasury report stops short of requiring perfect explainability in all circumstances. Instead, it recommends that financial institutions “establish best practices for using generative AI without explainability . . . [which] may include practices such as ensuring good data hygiene of the data used to train models and using the system only when explainability is not required.”
[1] The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence states that “[n]ot later than 150 days after the date of this order, the Secretary of the Treasury shall issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.” For more information on the specific provisions of the AI EO, see “Timeline of Biden's AI Executive Order.”
Raj Gambhir contributed to this article.