On March 28, 2024, the Office of Management and Budget (OMB) released Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (the Memo), which updates and finalizes OMB's November 2023 draft memorandum of the same name. The Memo directs agencies to "advance AI governance and innovation while managing the risks of AI use in the federal government." It focuses on three key areas: strengthening AI governance, advancing responsible AI innovation, and managing risks from the use of AI.
Scope
The Memo addresses only a subset of AI risks: those directly tied to agencies' reliance on AI outputs to inform their decisions and actions, which could threaten the safety and rights of the public. The Memo does not supersede other policies that govern automated and software-based systems generally, whether or not AI is involved (such as those covering enterprise risk management, information resource management, privacy, accessibility, federal statistical activities, IT, and cybersecurity), and those more general policies may also apply to AI. Additionally, the Memo does not apply to AI used as a component of national security systems.
Strengthening governance
The Memo outlines how agencies will be held accountable for managing their use of AI. Every agency must designate a Chief AI Officer (CAIO) and convene relevant senior officials to coordinate and govern issues arising from the use of AI. The CAIO is responsible for coordinating the agency's use of AI, promoting AI innovation, managing risks from the use of AI, and carrying out the agency's AI responsibilities. Agencies must also submit a compliance plan and an inventory of their AI use cases.
Driving responsible AI innovation
The Memo encourages the responsible advancement of AI innovation within federal agencies. Each agency is responsible for identifying and removing barriers to the responsible use of AI and for maturing AI adoption across the agency. This includes improving IT infrastructure to handle AI training and inference; developing adequate infrastructure and capacity to share, curate, and govern agency data for use in AI models; updating cybersecurity processes; and assessing "the potential beneficial uses of generative AI in their missions."
Agencies are also directed to prioritize recruiting, hiring, developing, and retaining talent for AI and AI-enabling roles. This includes designating an AI talent lead and providing employees with resources and training to develop AI skills.
The Memo highlights the importance of AI sharing and collaboration in advancing innovation. Agencies should share custom-developed code and models in public repositories as open source software where practicable, or share portions of code and models where full sharing is not possible. They are also directed to release the data used to develop and test AI, where possible. When procuring custom AI code, training data, and enrichments to existing data, agencies are encouraged to obtain the rights necessary to share and publicly release the procured products and services.
Finally, agencies are directed to harmonize AI management requirements across agencies to create efficiencies and opportunities to share resources. At a minimum, this includes sharing templates and formats, best practices, and technical resources, and highlighting successful uses of AI within agencies.
Managing risks from the use of AI
The third focus of the Memo is improving AI risk management within agencies, centered on so-called "safety-impacting" and "rights-impacting" uses of AI. The Memo requires all agencies that use safety-impacting or rights-impacting AI to implement required risk management practices and to cease using non-compliant AI by December 1, 2024. The Memo provides limited waivers and extensions for agencies that cannot meet the December deadline.
Practices required for all safety-impacting and rights-impacting AI
The Memo's risk management requirements direct federal agencies to complete an AI impact assessment before using safety-impacting or rights-impacting AI. The AI impact assessment must:
- State the intended purpose of the AI and its expected benefits;
- Identify the potential risks of using the AI and any mitigations beyond the minimum practices outlined in the Memo; and
- Assess the quality of the data used in the AI's design and development.
Agencies must also test the AI's performance in a real-world context to ensure it works for its intended purpose. OMB makes clear that agencies should not use AI when its expected benefits do not outweigh its risks, even after attempts to mitigate those risks.
After an agency begins using a safety-impacting or rights-impacting AI product, it must conduct ongoing monitoring (including human review) to regularly assess risks and mitigate newly identified ones. The Memo requires agencies to ensure that staff are adequately trained to assess and oversee AI, and to provide additional human oversight and accountability where the AI is not permitted to act on its own. Agencies must also provide timely public notice and plain-language documentation about safety-impacting or rights-impacting AI in use, ideally before the AI takes actions that affect individuals.
Additional practices for rights-impacting AI
The use of rights-impacting AI triggers additional safeguards. Before deploying rights-impacting AI, agencies must first identify and assess the AI's impact on equity and fairness and mitigate algorithmic discrimination where it is present. Specifically, the Memo requires agencies to:
- Identify in their AI impact assessments whether the AI uses data containing information about federally protected classes (e.g., race, age, sex);
- Assess whether the AI, in its real-world context, results in significant disparities in the program's performance across demographic groups;
- Mitigate disparities that perpetuate discrimination; and
- Discontinue use of the AI in agency decision-making if the agency cannot adequately mitigate the risk of discrimination against protected classes.
The Memo also requires agencies to consult affected communities and the public and to incorporate their feedback on the use of AI. OMB makes clear that if, in evaluating that feedback, an agency determines that its use of AI causes more harm than good, the agency should cease using the AI.
The Memo directs agencies to conduct ongoing monitoring for, and mitigation of, discrimination after deploying rights-impacting AI. If mitigation is not possible, agencies should safely discontinue the AI capability. Agencies must also notify individuals when the use of AI results in an adverse decision against them. In such cases, affected individuals who wish to contest or challenge the AI's adverse effect must be provided timely human review and, where appropriate, a remedy for the use of the AI. Agencies must also offer an opt-out from AI-based decision-making for individuals who prefer human review, as well as an appeal process for individuals adversely affected by the use of AI.
Managing risk in federal AI procurement
The Memo includes additional guidance on responsibly procuring AI. First, agencies must ensure that the AI they procure complies with all applicable laws and regulations, including those addressing privacy, intellectual property, cybersecurity, and civil rights and civil liberties. Agencies are also expected to obtain transparent and adequate AI performance from their vendors. To support this requirement, the Memo recommends that agencies obtain adequate documentation to assess an AI's capabilities and known limitations; obtain information about the data used to train, fine-tune, and operate the AI; regularly evaluate federal contractors' claims about the effectiveness of their AI offerings; consider contractual provisions that incentivize continuous improvement of the AI; and monitor the AI after contract award.
The Memo encourages agencies to promote competition in federal AI procurement. Agencies are encouraged to obtain adequate data rights to enable the continued design, development, testing, and operation of AI systems. The Memo also asks agencies to ensure that AI developers and vendors do not rely on test data to train their AI systems.
The Memo recommends that agencies include risk management requirements when procuring generative AI. Beyond what is commonly required when procuring goods and services, the Memo encourages agencies to consider the environmental impact of AI systems, including the carbon emissions and resource consumption of supporting data centers.
Definitions and examples of safety-impacting and rights-impacting AI
The Memo defines safety-impacting AI as AI whose output could significantly affect the safety of (1) human life or well-being, (2) the climate or environment, (3) critical infrastructure, or (4) strategic assets or resources. Appendix I of the Memo further describes purposes for which AI is presumed to be safety-impacting, including:
- Carrying out safety-critical functions of dams, electrical grids, traffic control, fire-suppression systems, and nuclear reactors
- Controlling the physical movements of robots and robotic systems
- Moving vehicles autonomously or semi-autonomously
- Controlling industrial emissions and environmental impacts
- Carrying out the medically relevant functions of medical devices
- Controlling access to, or the security of, government facilities
The Memo defines rights-impacting AI as AI whose output serves as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual's or entity's (1) civil rights, civil liberties, or privacy; (2) equal opportunities; or (3) access to critical government resources or services. Appendix I further describes purposes for which AI is presumed to be rights-impacting, including:
- Blocking, removing, hiding, or limiting the reach of protected speech
- Law enforcement contexts, including risk assessments about individuals, identification, tracking, and surveillance
- Education contexts, including plagiarism detection, admissions, and disciplinary decisions
- Replicating a person's likeness or voice without their express consent
- Screening tenants for housing and making appraisal, mortgage underwriting, or insurance determinations
- Determining the terms or conditions of employment, including selection, hiring, promotion, performance management, and termination
Key takeaways
OMB's final Memo continues the trend toward greater AI accountability and a risk-based framework for AI evaluation and governance. The Memo is a significant step forward that adds sophistication to the government's approach to governing its use of AI systems. It can also be expected to influence new regulation of AI development, procurement, and use more broadly, at both the state and federal levels.