Guidance is the first step towards global standards, says AI minister
Akshaya Asokan (asokan_akshaya) • May 16, 2024
The UK government has published voluntary guidance aimed at helping artificial intelligence developers and vendors protect their models from hacking and potential sabotage.
The UK government's AI Code of Practice, published on Wednesday, lists recommendations such as monitoring the behavior of AI systems and testing models.
Jonathan Camrose, the minister for AI and intellectual property, said: “UK organizations face a complex cybersecurity landscape, and we want to enable them to deploy AI into their infrastructure with confidence.”
The UK government said businesses need to strengthen the security of their AI supply chains to reduce the risks posed by insecure AI systems, including data loss. The guidance recommends measures such as sourcing software components, including models, frameworks and external APIs, only from verified third-party developers, and verifying the integrity of training data obtained from publicly available sources.
“Particular attention should be paid to the use of open source models, which complicate model maintenance and security responsibilities,” the guidance states.
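Neither the guidance nor the article prescribes specific tooling, but one common way to act on the sourcing recommendation is to pin a cryptographic digest for each third-party artifact, open source models included, and refuse to load anything that does not match. A minimal Python sketch, in which the file name and digest are hypothetical placeholders:

```python
# Minimal sketch: verify a pinned SHA-256 digest before loading a third-party
# model artifact. The file name and digest below are hypothetical placeholders.
import hashlib
from pathlib import Path

# Digest published by the verified developer (placeholder value).
EXPECTED_SHA256 = "0123456789abcdef" * 4

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

model_file = Path("model.safetensors")  # hypothetical artifact name
if sha256_of(model_file) != EXPECTED_SHA256:
    raise RuntimeError(f"{model_file} failed its integrity check; refusing to load.")
```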
Other recommended measures include training AI developers in secure coding, implementing security guardrails for different AI models, and ensuring that developers can interpret and explain the behavior of their models.
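The guidance does not spell out what such guardrails look like. As one illustration only, a deployer might screen user prompts against known prompt-injection phrasings before they reach a model; the patterns and function below are assumptions for the sketch, and pattern matching alone is a weak control that would sit alongside other layers in practice.

```python
# Illustrative sketch of a simple input guardrail: reject prompts that match
# known prompt-injection phrasings before forwarding them to a model.
# The patterns here are illustrative, not from the UK guidance.
import re

INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?system prompt", re.IGNORECASE),
]

def passes_guardrail(user_input: str) -> bool:
    """Return False if the input matches any known injection pattern."""
    return not any(p.search(user_input) for p in INJECTION_PATTERNS)

print(passes_guardrail("What's the capital of France?"))         # True
print(passes_guardrail("Ignore previous instructions and ..."))  # False
```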
The UK government intends to develop the guidance into a global standard to promote security by design in AI systems. As part of that plan, it has opened a consultation, running until July 10, seeking feedback on the guidance.
The Conservative government pledged at its November AI Safety Summit to promote a common global approach to AI safety (see: UK AI Safety Summit Focuses on Risk and Governance).
The guidance comes days after the UK AI Safety Institute released an AI model evaluation platform called Inspect. The platform allows startups, academics and AI developers to assess specific capabilities of individual models and generate scores based on the results.
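Inspect is distributed as an open-source Python framework. A minimal sketch of a task definition, modeled on the project's published examples; the sample content and model name are placeholders, and exact API details may differ between Inspect versions:

```python
# Minimal sketch of an Inspect evaluation task, modeled on the project's
# published examples. Sample content and model name are placeholders;
# exact API details may differ between Inspect versions.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def capability_check():
    return Task(
        dataset=[Sample(input="Reply with exactly one word: hello", target="hello")],
        solver=generate(),  # ask the model for a completion
        scorer=match(),     # score by matching the target string
    )

if __name__ == "__main__":
    # Run the task against a model of your choice (placeholder name).
    eval(capability_check(), model="openai/gpt-4")
```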
The US and UK AI safety institutes announced in April that they would collaborate to develop safety assessment mechanisms and guidance for emerging risks (see: US and UK Team Up to Collaborate and Share Resources on AI Safety).