The National Security Agency (NSA) is sounding the alarm on the cybersecurity risks posed by artificial intelligence (AI) systems and is issuing new guidance to help companies protect their AI from hackers.
As AI becomes increasingly integrated into business operations, experts warn that these systems are particularly vulnerable to cyberattacks. The NSA's cybersecurity information sheet provides insight into AI-specific security challenges and outlines steps businesses can take to strengthen their defenses.
“AI presents unprecedented opportunities, but it can also present opportunities for malicious activity. NSA is uniquely positioned to provide cybersecurity guidance, AI expertise, and advanced threat analysis,” said NSA Cybersecurity Director Dave Luber in a Monday (April 15) news release.
Strengthening against attacks
The report advises organizations deploying AI systems to take strong security measures to protect sensitive data and prevent misuse. Key countermeasures include conducting continuous compromise assessments, hardening IT deployments, implementing strict access controls, using robust logging and monitoring, and restricting access to model weights.
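Two of the listed countermeasures, restricting access to model weights and supporting later compromise assessments, can be illustrated in a few lines. The sketch below is not from the NSA guidance itself; the file name and hash-based tamper check are illustrative assumptions:

```python
import hashlib
import os
import stat

WEIGHTS_PATH = "model.safetensors"  # hypothetical model weights file


def restrict_weights(path):
    """Limit access to model weights: owner read-only (mode 0o400),
    no group or world access."""
    os.chmod(path, stat.S_IRUSR)


def fingerprint(path):
    """Record a SHA-256 hash of the weights file so a later compromise
    assessment can detect tampering by comparing against this baseline."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

In practice, the recorded hash would be stored separately from the weights (for example, in a write-once audit log) so an attacker who modifies the file cannot also update the baseline.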
“AI is vulnerable to hackers because of its complexity and the sheer amount of data it can process,” Jon Clay, vice president of threat intelligence at cybersecurity company Trend Micro, told PYMNTS. “Because AI is software, it can have vulnerabilities that can be exploited by adversaries.”
As PYMNTS has reported, AI is revolutionizing the way security teams deal with cyber threats by speeding up and streamlining processes. With its ability to analyze large data sets and identify complex patterns, AI can automate the early stages of incident analysis, allowing security professionals to start with a clear understanding of the situation and respond faster.
Cybercrime continues to rise as the world increasingly embraces a connected global economy. According to an FBI report, losses from cyberattacks in the United States alone exceeded $10.3 billion in 2022.
Why is AI vulnerable to attack?
Clay said AI systems are particularly vulnerable to attacks because they rely on data to train their models.
“AI and machine learning rely on data to build and train models, so compromising that data is an obvious way for bad actors to contaminate AI/ML systems,” Clay said.
He highlighted the risks of these hacks, explaining that they can lead to the theft of sensitive data, the injection of harmful commands, and biased outputs. Such issues can erode user trust and even create legal problems.
Clay also pointed to challenges in detecting vulnerabilities in AI systems.
“It can be difficult to determine how an AI system processes inputs and makes decisions, making it difficult to detect vulnerabilities,” he said.
He noted that hackers are looking for ways to bypass AI security controls and manipulate results, and that these methods are being discussed in underground online forums.
When asked about steps companies can take to strengthen AI security, Clay emphasized the need for a proactive approach.
“While it would be unrealistic to ban AI completely, organizations need to be able to manage and regulate it,” he said.
Clay recommends adopting a zero-trust security model and using AI itself to strengthen defenses; for example, AI can analyze sentiment and tone in communications and inspect web pages to deter fraudulent activity. He also said strict access controls and multi-factor authentication help protect AI systems from unauthorized access.
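Multi-factor authentication of the kind Clay describes is commonly built on time-based one-time passwords (TOTP, RFC 6238). The sketch below is a minimal illustration of that algorithm, not a recommendation from the article; a production system would use a vetted authentication library rather than rolling its own:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1).

    secret_b32: shared secret, base32-encoded (as in authenticator apps).
    t: Unix time to compute the code for (defaults to now).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server and the user's authenticator app share the base32 secret; because both derive the code from the current time window, a stolen password alone is not enough to log in.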
As companies deploy AI to increase efficiency and innovation, they also expose themselves to new vulnerabilities, Malcolm Harkins, chief security and trust officer at cybersecurity company HiddenLayer, told PYMNTS.
“AI is the most vulnerable technology deployed in production systems, because it is vulnerable at multiple levels,” Harkins added.
Harkins advised businesses to take proactive measures, including implementing purpose-built security solutions, regularly assessing the robustness of AI models, continuously monitoring systems, and developing comprehensive incident response plans.
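The continuous-monitoring measure Harkins mentions can take many forms; one simple approach is to watch a model's output statistics for sudden shifts, which can indicate tampering or drift. The sketch below is an illustrative assumption, not HiddenLayer's product or method; the window size and threshold are arbitrary examples:

```python
from collections import deque


class OutputMonitor:
    """Minimal sketch of real-time AI output monitoring: compare a rolling
    mean of model confidence scores against a calibrated baseline and flag
    deviations that exceed a tolerance threshold."""

    def __init__(self, window=100, threshold=0.2):
        self.baseline = None              # mean score during normal operation
        self.recent = deque(maxlen=window)
        self.threshold = threshold        # max tolerated deviation

    def calibrate(self, scores):
        """Record the baseline mean from known-good traffic."""
        self.baseline = sum(scores) / len(scores)

    def observe(self, score):
        """Ingest one score; return True if the rolling mean has drifted
        beyond the threshold, signaling a possible compromise."""
        self.recent.append(score)
        rolling = sum(self.recent) / len(self.recent)
        return abs(rolling - self.baseline) > self.threshold
```

A real deployment would track richer signals (input distributions, per-class rates, latency) and feed alerts into the incident response plan rather than a boolean flag.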
“Without real-time monitoring and protection in place, AI systems are certain to be compromised, and that breach is likely to go unnoticed for long periods of time, potentially resulting in broader harm,” Harkins said.