AI remains a key focus for both the federal government and industry, and recent weeks have seen multiple initiatives to address the governance of AI development and use in the United States. On February 26, 2024, a U.S. Department of State-commissioned report, Defense in Depth: An Action Plan to Improve the Safety and Security of Advanced AI (Action Plan), proposed multiple initiatives for the U.S. government and partner nations to address the growing national security risks posed by rapidly expanding AI capabilities, including the prospect of artificial general intelligence (AGI). Days later, on March 5, 2024, Chairman Comer and Ranking Member Raskin introduced the Federal AI Governance and Transparency Act, a bipartisan bill that would focus government resources on increasing transparency, oversight, and responsible use of federal AI systems and centrally codifying federal governance of agency AI systems. Additionally, on March 28, 2024, the Office of Management and Budget issued final guidance in Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, as discussed here.
The Action Plan broadly warns that the federal government must act “swiftly and decisively” to avert significant national security risks posed by AI. It organizes its recommendations into five lines of effort (LOEs):
- LOE1, entitled “Establishing Interim Safeguards to Stabilize Advanced AI Development,” focuses on short-term (1-3 year) actions the executive branch can take to reduce AI risks. Examples include creating an AI oversight body, mandating an interim set of Responsible AI Development and Adoption (RADA) safeguards, and establishing an interagency AI Safety Task Force (ASTF) to coordinate implementation and oversight of the RADA safeguards.
- LOE2, entitled “Enhancing Advanced AI Capability and Capacity,” outlines specific steps the federal government can take to strengthen its preparedness to respond quickly to incidents related to the development and deployment of advanced AI. For example, the Action Plan recommends coordinating the development of an advanced AI and AGI incident indicators and warning (I&W) framework.
- LOE3, entitled “Increase National Investment in Technical AI Safety Research and Standards Development,” provides recommendations the federal government can adopt to strengthen national technical capacity in advanced AI safety and security, AGI alignment, and other technical AI safeguards. These activities include directly funding advanced AI safety and security research and promulgating safety and security standards for responsible AI development and deployment.
- LOE4, entitled “Formalize Safeguards for Responsible AI Development and Deployment by Establishing an AI Regulatory Agency and Liability Framework,” focuses on concrete long-term (4+ years) actions the legislature can take to establish a domestic AI safety and liability framework, including the creation of the Frontier AI Systems Administration (FAISA), a regulatory body with rulemaking and licensing powers to oversee the development and deployment of advanced AI.
- Finally, LOE5, titled “Establishing AI Safeguards Under International Law and Securing the AI Supply Chain,” suggests short-term diplomatic actions and long-term measures the federal government can take to establish an effective AI safeguards regime under international law while securing the AI supply chain. Recommendations include building domestic and international consensus on catastrophic AI risks and safeguards, and establishing an International AI Agency (IAIA) to monitor and verify compliance with those safeguards.
The Action Plan also recommends establishing civil and criminal liability for “dangerous conduct” by individuals and entities involved in AI supply chains. For example, the report suggests that failing to accurately report high-performance AI hardware to FAISA, or responding to a FAISA request for information with misleading data, could constitute a misdemeanor, while continuing AI development activities in defiance of an emergency cease-operations order, or in violation of licensing terms, could constitute a felony.
Similarly, the proposed Federal AI Governance and Transparency Act focuses on creating federal standards, integrating other existing laws affecting AI, and establishing transparency and accountability for federal AI use. Specifically, the bill pursues the following key objectives:
- Define federal standards for the responsible use of AI by codifying in law key safeguards for the development, acquisition, use, management, and monitoring of AI used by federal agencies.
- Consolidate and strengthen government-wide federal AI use policy authorities and requirements by recodifying and clarifying the Office of Management and Budget's role in issuing government-wide policy guidance in conjunction with existing federal IT and data policy requirements.
- Establish agency AI governance charters by requiring the issuance of a governance charter for high-risk AI systems and other AI systems used by federal agencies that handle sensitive personal records covered by privacy laws.
- Create additional public accountability mechanisms by establishing a notification process for individuals or entities materially and meaningfully affected by agency decisions influenced by AI.
- Streamline and consolidate existing laws regarding government use of AI and repeal duplicative provisions in the AI in Government Act of 2020 and the Advancing American AI Act of 2022; and
- Update existing Privacy Act notification requirements for records containing personally identifiable information (PII) and Federal Acquisition Regulation (FAR) procurement rules.
The House Oversight and Accountability Committee considered the bill and voted to advance it. Both the Action Plan and the Federal AI Governance and Transparency Act appear to build on Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
Government contractors involved in the development or use of AI should be aware of the Action Plan and recent legislative proposals. Taken together, they signal far-reaching changes to the federal government's and industry's current approach to AI. Additionally, the bill specifically aims to increase the participation of stakeholders, such as government contractors, in critical U.S. AI-related activities.