It's no surprise that the private sector is collectively investing untold billions of dollars in AI research and adoption this year. By comparison, the president's fiscal year 2025 budget request calls for a far more modest sum, likely in the low billions, for AI-related activities spread across hundreds of large and small non-DoD agencies in the U.S. government.
Given this disparity in effort levels, a logical question to ask is whether the federal government can make a difference in an area as large and fast-moving as AI.
Led by executive orders and frameworks
Last October, the Biden administration released EO 14110 on the safe, secure, and trustworthy development and use of artificial intelligence. The EO holds that while AI has great potential, irresponsible use can "exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security."
Among other things, the order outlines a set of principles and requires developers of the largest generative AI (GenAI) foundation models to share safety test results and other critical information with the U.S. government before those models are made available to the public.
The National Institute of Standards and Technology (NIST) has been directed to set rigorous standards for extensive red-team testing to ensure safety, and the Department of Homeland Security has been directed to apply these NIST standards to GenAI used in critical infrastructure. The order was followed this February by EO 14117, which focuses on data security and notes that malicious actors could use access to large data sets to facilitate the creation and refinement of AI in ways that threaten national security; it establishes safeguards against adversaries using large volumes of data about Americans to build GenAI large language models.
NIST is a veritable treasure trove of guidance, frameworks, and case studies for government agencies and companies that lack the time, money, or expertise to develop their own strategies, especially in a dynamic field like AI. NIST usually has answers, and they are typically written in a practical, relatively easy-to-understand way. A prime example is the recently released Artificial Intelligence Risk Management Framework (AI RMF), NIST AI 100-1.
The goal of the AI RMF is to provide resources to organizations that are designing, developing, deploying, or using AI systems, helping them "manage the many risks of AI and promote trustworthy and responsible development and use of AI systems." NIST frameworks tend to be widely adopted in the private sector because they are vendor- and solution-agnostic and easily adaptable to organizations of all sizes and sectors, and I expect this to be the case for the AI RMF as well.
Federal AI spending
AI-related spending by federal civilian agencies is primarily focused on two areas: promoting broader participation in research, and managing risk and fraud in the marketplace.
A look at some highlights of federal AI research funding: the fiscal year 2025 budget request provides $2 billion to the National Science Foundation (NSF) to support research and development in key emerging technology areas that align with CHIPS and Science Act priorities for increasing U.S. competitiveness in science and technology; a portion of this funding applies to AI. The budget request also includes $455 million for the Department of Energy to work on AI-related projects that improve the safety, security, and resiliency of AI.
The request includes $30 million for NSF to support the second and final year of the National AI Research Resource pilot, which "aims to democratize AI research and innovation by providing access to the computing, data, software, and educational resources needed" to conduct trustworthy AI research and foster the next generation of researchers. NIST, part of the Department of Commerce, would receive $65 million to protect, regulate, and advance AI, including standing up the U.S. AI Safety Institute, which will be tasked with operationalizing NIST's AI RMF.
Other budget priorities relate to specific sector and agency activities that help manage AI-related risks. We are all well aware of the ongoing debate about the safety of self-driving cars, and this budget request funds the Department of Transportation's automation safety office and the National Highway Traffic Safety Administration to address cybersecurity and AI-related risks.
The Department of Health and Human Services would receive $141 million to improve cybersecurity and information systems, "promoting the use of artificial intelligence in medicine and public health while protecting against its risks."
These are just a few examples of the many AI initiatives and activities the federal government has underway or is about to launch (pending funding authorization), but taken together, they reflect priorities that are broad in scope yet individually focused. These are sound priorities for government action in the AI field, even if success may be difficult to measure in the short term.
Jim Richberg is Fortinet's Head of Cyber Policy and Global Field CISO and a member of the Fortinet Federal board of directors.