On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act (AI Act), establishing the world's first comprehensive legal framework dedicated to artificial intelligence. The Act imposes EU-wide requirements that emphasize data quality, transparency, human oversight, and accountability. Fines can reach €35 million or 7% of global annual turnover, whichever is higher, and the law affects a wide range of companies operating within the EU.
The AI Act classifies AI systems according to the risks they pose, with higher-risk categories subject to stricter compliance requirements. The framework prohibits certain AI practices deemed unacceptable and sets out obligations for entities involved at every stage of an AI system's lifecycle, including providers, importers, distributors, and users.
For cybersecurity teams and organizational leaders, the AI Act represents a critical transition stage that requires immediate and strategic action to align with new compliance standards. Here are some key focus areas for your organization.
1. Conducting a thorough audit of AI systems
The EU AI Act requires regular audits, meaning organizations must routinely verify that both the AI software provider and the organization itself maintain a robust quality management system. This includes performing detailed audits to map and classify AI systems according to the risk categories specified by the law.
These external audits scrutinize the technical elements of AI implementations and examine the contexts in which these technologies are used, including data management practices, to ensure compliance with high-risk category standards. The audit process involves reporting to AI software providers and may include further testing of certified AI systems based on an assessment of their technical documentation. The precise scope of these audits is not yet clear.
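To make this concrete, the sketch below shows one way an audit team might record an AI system inventory and flag the entries that fall into the high-risk tier. The risk tiers mirror the Act's broad categories, but the system names, fields, and mapping are hypothetical and purely illustrative.

# Illustrative sketch only: the risk tiers mirror the AI Act's broad categories,
# but the system names, mapping, and fields are hypothetical.
from dataclasses import dataclass

RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_tier: str  # one of RISK_TIERS, assigned during the audit

inventory = [
    AISystem("cv-screening-model", "recruitment shortlisting", "high"),
    AISystem("support-chatbot", "customer service", "limited"),
    AISystem("spam-filter", "email filtering", "minimal"),
]

def systems_requiring_strict_compliance(systems):
    """Return systems in the high-risk tier, which carry the heaviest obligations."""
    return [s for s in systems if s.risk_tier == "high"]

for system in systems_requiring_strict_compliance(inventory):
    print(f"{system.name}: document quality management and conformity evidence")

In practice, the classification itself would come from a legal and technical assessment of each system's intended purpose, not from a static field.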
It is important to recognize that generative AI embedded in supply chains shares many of the same security vulnerabilities as other web applications. For these AI security risks, organizations can rely on established open source resources. OWASP CycloneDX provides a comprehensive bill of materials (BOM) standard that strengthens the ability to manage AI-related cyber risks within the supply chain.
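As a rough illustration, CycloneDX (from specification version 1.5) can describe machine-learning models as BOM components. The Python sketch below assembles a minimal ML-BOM document as plain JSON; the model name, version, and description are placeholders, and real BOMs would normally be produced by build or registry tooling.

# Minimal, illustrative ML-BOM in the CycloneDX JSON format.
# The component values are placeholders; real BOMs would be generated by tooling.
import json
import uuid

ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "serialNumber": f"urn:uuid:{uuid.uuid4()}",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",  # component type introduced for ML-BOMs
            "name": "sentiment-classifier",    # hypothetical in-house model
            "version": "2.3.0",
            "description": "Fine-tuned transformer used in the support pipeline",
        }
    ],
}

print(json.dumps(ml_bom, indent=2))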
Existing frameworks such as OVAL, STIX, CVE, and CWE, designed to classify vulnerabilities and disseminate threat information, are increasingly being extended to cover emerging technologies such as large language models (LLMs) and predictive models.
As these enhancements continue, organizations are expected to apply these established, well-known systems to their AI models as well. Specifically, CVE can be used to identify vulnerabilities, while STIX plays a key role in disseminating cyber threat intelligence, helping to manage the risks surfaced by AI/ML security audits.
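As a hedged illustration of what that might look like, the sketch below constructs a STIX 2.1 vulnerability object by hand and links it to a CVE identifier. The finding, description, and CVE ID are hypothetical placeholders; in practice the object would come from a threat intelligence feed or a library such as stix2.

# Illustrative STIX 2.1 vulnerability object for an LLM-related flaw.
# The CVE identifier, name, and description are hypothetical placeholders.
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

vulnerability = {
    "type": "vulnerability",
    "spec_version": "2.1",
    "id": f"vulnerability--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Prompt injection in internal LLM gateway",  # hypothetical finding
    "description": "Untrusted input can override system instructions.",
    "external_references": [
        {"source_name": "cve", "external_id": "CVE-0000-00000"}  # placeholder ID
    ],
}

print(json.dumps(vulnerability, indent=2))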
2. Investing in AI literacy and ethical AI practices
Understanding the capabilities and ethical implications of AI is important for all levels of an organization, including the users of these software solutions.
Organizations need to promote ethical AI practices that guide the development and use of AI in ways that uphold societal values and legal standards. As Tania Duarte and Ismael Kerubi García of the Joseph Rowntree Foundation note, "the lack of a concerted effort to improve AI literacy in the UK means that public conversations about AI often do not start with a practical, fact-based assessment of these technologies and their capabilities".
3. Establishing a strong governance system
Organizations need to develop a robust governance framework to proactively manage AI risks. These frameworks must include policies and procedures that ensure ongoing compliance and adapt to the evolving regulatory landscape. Governance mechanisms must not only facilitate risk assessment and management, but also incorporate transparency and accountability, which are essential to maintaining public and regulatory trust.
OWASP's Software Component Verification Standard (SCVS) is a community-driven effort to define a framework that identifies the activities, controls, and best practices needed to reduce risk in AI software supply chains. It can serve as a starting point for anyone looking to develop or enhance an AI governance framework.
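For illustration only, the sketch below encodes a handful of SCVS-inspired checks as a simple checklist and evaluates a single inventory entry against them. The control names and the inventory fields are paraphrased, hypothetical examples, not the standard's actual controls.

# Illustrative only: the control names below are paraphrased examples inspired by
# SCVS-style checks, not the standard's actual control identifiers or wording.
controls = {
    "bom_present": "A bill of materials exists for the AI component",
    "provenance_known": "The origin of the model and its training data is recorded",
    "risk_tier_assigned": "An AI Act risk category has been assigned",
}

component_record = {  # hypothetical inventory entry
    "name": "sentiment-classifier",
    "bom_present": True,
    "provenance_known": True,
    "risk_tier_assigned": False,
}

failed = [desc for key, desc in controls.items() if not component_record.get(key)]
for finding in failed:
    print(f"Governance gap: {finding}")

A real governance framework would track evidence for each control and feed the results into the audit process described earlier.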
4. Adopting best practices for AI security and ethics
Cybersecurity teams must be at the forefront of adopting AI security and ethics best practices. This includes protecting AI systems from potential threats and ensuring ethical considerations are integrated throughout the AI lifecycle. Best practices should be informed by industry standards and regulatory guidelines tailored to an organization's specific circumstances.
The OWASP Top 10 for LLM Applications is designed to educate developers, designers, architects, managers, and organizations about the potential security risks of deploying and managing large language models (AI workloads). The project provides a list of the 10 most critical vulnerabilities commonly found in LLM applications, highlighting their potential impact, ease of exploitation, and prevalence in real-world applications.
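One recurring theme in that list is treating model output as untrusted input before it reaches downstream systems. The sketch below shows a minimal version of that idea: the application only acts on LLM responses that parse as JSON and name an allow-listed action. The helper name and allow-list are hypothetical, and real applications would apply stricter, context-specific validation.

# Sketch of treating LLM output as untrusted input before it reaches other systems.
# The helper name and allow-list are hypothetical.
import json

ALLOWED_ACTIONS = {"create_ticket", "close_ticket", "escalate"}

def parse_llm_action(raw_output: str) -> dict | None:
    """Accept the model's response only if it is valid JSON with an allow-listed action."""
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError:
        return None
    if data.get("action") not in ALLOWED_ACTIONS:
        return None
    return data

# Example: a response that tries to trigger an unapproved operation is rejected.
print(parse_llm_action('{"action": "delete_all_records"}'))  # -> None
print(parse_llm_action('{"action": "escalate", "ticket": 42}'))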
5. Engaging in dialogue with regulators
To foster understanding and effective implementation of AI laws, organizations must engage in ongoing dialogue with regulators. Participating in industry consortia and regulatory discussions can help organizations stay abreast of interpretive guidance and evolving expectations, while also contributing to shaping a pragmatic regulatory approach.
If you are still unsure how the incoming regulation will affect your organization, the official EU AI Act website offers a compliance checker to determine whether your AI systems are subject to its requirements.
The EU AI Act is a landmark piece of legislation that sets a global benchmark for AI regulation. For cybersecurity teams and organizational leaders, it presents both challenges and opportunities to lead in AI security and compliance. By embracing a culture of transparency, accountability, and proactive risk management, organizations can not only comply with the law but also set an example in the responsible use of AI technology and help foster a trusted AI ecosystem.
Image credits: Tanahonte / Dreamstime.com
Nigel Douglas, Senior Developer Advocate, Open Source Strategy, Sysdig.