WASHINGTON — The Biden administration announced three new policies Thursday to guide the federal government's use of artificial intelligence, touting the standards as a model for global action on the rapidly evolving technology.
The policies stem from an executive order signed by President Joe Biden in October and come amid growing concerns about the risks AI poses to the U.S. workforce, privacy and national security, and the potential for discrimination in decision-making. Among the new requirements:
- The White House Office of Management and Budget will require federal agencies to ensure that the use of AI does not endanger the “rights and safety” of Americans.
- To increase transparency, federal agencies will be required to publish online a list of the AI systems they use, as well as their assessments of the risks those systems may pose and how those risks are managed.
- The White House is also directing all federal agencies to appoint a chief AI officer with a technology background to oversee the use of AI technology within the agency.
Vice President Kamala Harris announced the rules on a call with reporters, saying the policies were shaped by input from the public and private sectors, computer scientists, civil rights leaders, legal scholars and business leaders.
“President Biden and I intend for these domestic policies to be a model for global action,” said Harris, who is leading the administration's efforts on AI and who outlined U.S. efforts on the technology at a global summit in London last November.
“All leaders in government, civil society and the private sector have a moral, ethical and societal obligation to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm and ensures everyone is able to enjoy its full benefit,” Harris said.
The federal government has published more than 700 current and planned AI use cases across government agencies. According to the nonpartisan Congressional Research Service, there are more than 685 unclassified AI projects in the Department of Defense alone.
Disclosures from other agencies show AI being used to document suspected war crimes in Ukraine, test whether coughing into a smartphone can detect COVID-19 in asymptomatic people, stop fentanyl smugglers from crossing the southern border, rescue children from sexual abuse, and detect illegal rhino horns in airplane luggage, among other uses.
To address AI's safety risks, federal agencies will be required by December to put in place safeguards to “reliably assess, test, and monitor” the technology's impact on the public, mitigate the risks of algorithmic discrimination, and publicly disclose how the government is using it.
Harris offered an example: if the Veterans Administration wants to use artificial intelligence in VA hospitals to help doctors diagnose patients, it would first need to show that the AI system does not produce “racially biased diagnoses,” she said.
Biden's AI executive order invokes the Defense Production Act to require companies developing cutting-edge AI platforms to notify the government and share safety test results. These tests are conducted through a risk assessment process called “red teaming.”
Under the order, the National Institute of Standards and Technology is developing standards for red-team testing aimed at ensuring AI systems are safe before they are released to the public.
Contributing: Maureen Groppe