According to an announcement released by the White House, Vice President Kamala Harris told reporters last Thursday that U.S. federal agencies must demonstrate that their artificial intelligence tools are not harming the public, or stop using them.
“When government agencies use AI tools, we will require them to verify that those tools do not endanger the rights and safety of Americans,” Harris said.
The announcement promises that by December, government agencies will have put in place a set of concrete safeguards guiding everything from facial recognition screening at airports to AI tools that help control the power grid and inform mortgage and home insurance decisions.
The move was applauded by many civil rights groups, including those that have spent years pushing federal and local law enforcement to curb the use of facial recognition technology linked to wrongful arrests of Black men.
A September report from the U.S. Government Accountability Office, which examined several federal law enforcement agencies including the FBI, found that officials had run more than 60,000 searches using face-scanning technology without any training in how the technology works or how to interpret its results.
This new policy directive issued to agency heads also aligns with the comprehensive AI Executive Order announced by President Joe Biden in October 2023.
This recent directive serves as a wake-up call for government agencies that have been using outdated AI tools to support decisions about immigration, housing, child welfare, and a variety of other services.
As an example, Harris said, “If the Veterans Administration wants to use AI in VA hospitals to help doctors diagnose patients, they would first have to demonstrate that the AI does not produce racially biased diagnoses.” According to the White House announcement, agencies that cannot apply the safeguards “must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.”
Additionally, under the new policy, federal agencies must hire a chief AI officer with “experience, expertise, and authority” to oversee all AI technologies used by the agency. The policy also requires government agencies to publish an annual inventory of their AI systems, including an assessment of the risks they may pose.
New agency rules also require establishing an AI governance committee and submitting an annual report, published online, that gives an overview of the AI systems in use, identifies their risks, and details plans to address those risks.
“Leaders in government, civil society, and the private sector have a moral, ethical, and societal obligation to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its benefits,” Harris said. She also said the Biden administration intends for its policies to serve as a global model.
In another notable move, the Department of Homeland Security announced last week that it would expand the use of AI to train immigration officials, protect critical infrastructure, and investigate drug and child exploitation.
These announcements are very positive: they signal stronger guardrails around the use of AI and improved privacy and security protections for the general public.
However positive these developments are, the U.S. government must still accelerate the passage of legislation that sets ground rules for the AI industry. That is the most important measure, and such a law is not expected to take effect until 2025.
While these recent White House announcements are broadly consistent with the European Union's AI Act, the clear difference is that the European Union has already given final approval to this kind of artificial intelligence law; when it comes to AI regulation, it has surpassed the United States.