This article is part of a series of papers inspired by discussions at the R Street Institute's Cybersecurity – Artificial Intelligence Working Group sessions. For additional insights and perspectives from this series, click here.
Rapid advances in artificial intelligence (AI) highlight the need for nuanced governance frameworks that actively engage stakeholders in defining, assessing, and managing AI risks. A comprehensive understanding of risk tolerance is essential: defining which risks are considered acceptable in order to leverage the benefits of AI, identifying the actors responsible for making those determinations, and being clear about the processes for assessing and then managing which risks can be tolerated and which must be mitigated.
The practice of assessing risk tolerance is more consequential than less restrictive alternative and complementary solutions, such as issuing recommendations, sharing best-practice guidance, and launching awareness campaigns; it also creates the necessary space for stakeholders to question and assess the extent to which those measures are needed. The clarity gained through this exercise also prepares policymakers to evaluate three risk-based approaches to AI in cybersecurity: implementing a risk-based AI framework, building safeguards into the design, development, and deployment of AI, and strengthening AI accountability through updated legal standards.
1. Implement a risk-based AI framework
A risk-based cybersecurity framework provides a structured, systematic approach for organizations to identify, assess, and manage the evolving risks associated with AI systems, models, and data. One notable example is the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (NIST AI RMF), which builds on established cybersecurity and privacy frameworks to assist organizations in the responsible design, development, deployment, and use of AI systems. The NIST AI RMF outlines how AI risks differ from traditional software risks, including in the scale and complexity of AI systems, helping organizations prepare for and respond to their evolving cybersecurity environments with greater confidence, alignment, and precision. The voluntary nature of the NIST AI RMF also gives organizations the flexibility to tailor the framework to their specific needs and risk profiles. Congress has already taken steps to integrate the NIST AI RMF into federal agencies and AI technology procurement through the bipartisan, bicameral introduction of the Federal Artificial Intelligence Risk Management Act.
The NIST AI RMF is designed for agility, which is essential to ensuring that safety and security protocols evolve to keep pace with technological innovation and the expanding role of AI. Complementing these efforts, the Biden administration's Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence emphasizes continuous improvement and adaptation in AI governance by expanding the framework's coverage and robustness. Initiatives like the newly formed U.S. AI Safety Institute and the AI Safety Institute Consortium build on the core focus of the NIST AI RMF by expanding the framework's ability to address safety and security challenges within the AI domain. By fostering collaboration and innovation, these initiatives exemplify proactive steps to ensure the NIST AI RMF remains responsive to the dynamic nature and impact of AI.
2. Build safeguards into AI development and deployment
Safeguards ensure that AI systems operate within defined ethical, safety, and security boundaries. Some AI companies have taken the initiative to put safeguards in place, such as rigorous internal and external security testing procedures, before releasing their systems to the public. This strategy is essential to maintaining user trust and ensuring the responsible deployment and use of AI technology.
However, some organizations may find it difficult to secure the resources necessary to implement these safeguards, and creating and implementing safeguards throughout AI development and deployment can delay key innovation milestones. Furthermore, the risk of safeguards being circumvented or removed highlights significant challenges in ensuring they are effective and durable. These challenges require leveraging a variety of protection strategies and continuously evaluating and adapting them to the evolving AI technology landscape. Traditional cybersecurity principles such as secure by design and secure by default can also be incorporated into AI systems to increase the effectiveness of protection strategies.
3. Promote AI accountability through updated legal standards
The ongoing debate around AI accountability reflects the desire of some stakeholders to establish legal standards that can address the complexity of the risks AI poses and encourage stakeholders to proactively mitigate cybersecurity and safety risks. Most recently, the National Telecommunications and Information Administration released its AI Accountability Policy Report, which calls for, among other things, greater transparency and independent evaluation of AI systems. But some skeptics have expressed concerns, citing the need for balance and the potential harms that could result if such efforts produce a broad, top-down regulatory regime with high compliance and innovation costs.
The three proposed policy measures are:
- Licensing regime. Introduce a licensing regime that requires organizations to obtain a license or certification demonstrating compliance with specified standards before working on an AI system or model. For “high-risk” AI applications like facial recognition, companies would have to obtain a government license ensuring that they rigorously test AI models for potential risks before deployment, disclose harmful practices, and allow independent third parties to audit their AI models. For example, the Food and Drug Administration's review process for approving AI-based medical devices requires rigorous premarket evaluation and ongoing oversight. This approach has the potential to strengthen AI accountability by increasing transparency and oversight and by requiring AI systems to meet stringent security standards before deployment. Nevertheless, licensing regimes can stifle innovation by introducing bureaucratic delays and compliance costs, making it more difficult for small businesses and new entrants in the United States to succeed.
- Corporate liability regime. This approach holds AI companies liable if their systems or models cause harm or can be misused to cause harm. For example, Congress could hold AI companies accountable through enforcement actions or private rights of action if their models or systems violate privacy. Increased corporate liability may lead companies to prioritize AI safety, responsible AI, and cybersecurity considerations up front, and it would help guarantee compensation for damage caused by AI systems. Critics argue that rushing to introduce a corporate liability framework could create regulatory hurdles that impede AI innovation and development and could be exploited for financial gain. Members of Congress have also proposed preemptively removing Section 230 immunity protections for generative AI technologies. While proponents of this approach argue that it would give consumers the tools to protect themselves from harmful content created by generative AI technology, critics counter that it would interfere with free speech, hinder algorithmic innovation, and have a devastating economic impact on the United States.
- Tiered responsibility and accountability structure. Drawing on ideas proposed in the existing National Cybersecurity Strategy, this proposal involves establishing a legal framework that recognizes the varying degrees of risk and liability associated with different AI applications. Under such a regime, companies would face varying levels of responsibility and liability depending on the nature and severity of the harm caused by their AI systems. For example, because of the potential for life-threatening misdiagnoses, a company developing an AI-powered medical diagnostic system could face higher accountability standards and reporting requirements than a company deploying AI for personalized advertising. Although a tiered responsibility and accountability regime provides flexibility and proportionality in assigning accountability, it can also lead to reduced transparency, ambiguity, or inconsistency in enforcement. Additionally, large companies may gain an unfair advantage over new entrants and small businesses.
These proposed legal updates to promote AI accountability aim to force companies to prioritize cybersecurity and AI safety considerations, but each has its drawbacks. These complexities highlight the need for continued discussion and informed decision-making among policymakers.
Conclusion
It is essential to ensure that new policy measures proposed to mitigate potential AI risks do not inadvertently stifle innovation or undermine U.S. leadership in AI. AI systems do not operate in isolation from the parameters of the real world, and when they are misused or malfunction, the effects are multifaceted. To reduce the potential for AI to pose amplified or novel cybersecurity threats, policymakers must consider AI systems holistically, as technologies closely linked to and integrated with both disparate and overlapping ethical and legal frameworks. Incorporating risk tolerance principles into AI regulation and governance solutions is essential to striking a balance between the significant benefits AI brings and its potential risks.