As organizations become more dependent on networks, online platforms, data, and technology, the risks associated with data leaks and privacy breaches have never been more serious. Combine this with the increasing frequency and sophistication of cyber threats, and strengthening cybersecurity defenses becomes an urgent priority. Cybersecurity analysts are on the front lines of this battle, working around the clock in Security Operations Centers (SOCs), the units that protect organizations from cyber threats, sifting through vast amounts of data as they monitor for potential security incidents.
They're faced with a massive stream of information coming from a variety of sources, from network logs to threat intelligence feeds, while trying to prevent the next attack. In short, they're overwhelmed. But for artificial intelligence, too much data is no problem, which is why many experts are turning to AI to bolster cybersecurity strategies and reduce the burden on analysts.
Stephen Schwab, director of strategy for the networking and cybersecurity division at the USC Information Sciences Institute (ISI), envisions symbiotic human-AI teams working together to improve security, with AI assisting analysts to improve overall performance in these high-risk environments. Schwab and his team have developed testbeds and models to study AI-assisted cybersecurity strategies in smaller systems, such as protecting social networks. “We want to be able to mitigate these threats through machine learning processes and reduce the workload of human analysts,” he said.
David Balenson, associate director of the networking and cybersecurity division at ISI, emphasizes the important role automation plays in reducing the burden on cybersecurity analysts. “SOCs are bombarded with a flood of alerts that analysts have to analyze quickly, in real time, to determine which ones are indicators of actual incidents. This is where AI and automation can help, spotting trends and patterns in the alerts that may point to potential incidents,” says Balenson.
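As a rough illustration of the kind of automated triage Balenson describes (a minimal sketch, not ISI's system; the alert features, the contamination setting, and the use of scikit-learn's IsolationForest are assumptions made for this example), the snippet below ranks a batch of alerts by how anomalous they look, so an analyst sees the most unusual ones first.

# Minimal sketch: ranking a flood of SOC alerts so analysts see the most
# unusual ones first. Features and values are invented for illustration.
from sklearn.ensemble import IsolationForest
import numpy as np

# Hypothetical numeric features per alert: events per minute, distinct
# destination IPs, and bytes transferred (log-scaled).
alerts = np.array([
    [12,  3,  8.1],   # routine scanner noise
    [15,  4,  8.3],
    [11,  2,  7.9],
    [14,  3,  8.0],
    [95, 48, 14.7],   # burst toward many destinations -- worth a look
])

model = IsolationForest(contamination=0.2, random_state=0).fit(alerts)
scores = model.score_samples(alerts)          # lower = more anomalous

# Present alerts to the analyst in priority order instead of arrival order.
for rank, idx in enumerate(np.argsort(scores), start=1):
    print(f"priority {rank}: alert #{idx}, anomaly score {scores[idx]:.3f}")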
Seeking transparency and explainability
But integrating AI into cybersecurity operations is not without challenges. One major concern is the lack of transparency and explainability inherent in many AI-driven systems. “Machine learning (ML) can help monitor networks and end systems where human analysts are exhausted,” Schwab explains. “But they're black boxes and can fire unexplained alerts.
“This is where explainability comes in, because human analysts need to trust that ML systems are behaving rationally.” Schwab's proposed solution is to build explainers that describe the behavior of ML systems in a stylized, plain English resembling natural language that analysts can understand. Marjorie Freedman, a principal scientist at ISI, is researching this. “We've been thinking about what it means to generate explanations and what we look for in explanations, and also how explanations can help validate model generation,” she said.
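As a simplified sketch of what such an explainer might produce (not the ISI team's system; the feature names, training data, and linear model are invented for illustration), the snippet below trains a small classifier on login-alert features and turns its largest contributions into a one-line, plain-English reason an analyst can read.

# Minimal sketch of an "explainer": a linear alert classifier whose top
# feature contributions are rendered as a short English sentence.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["failed_logins", "off_hours_access", "new_device", "geo_distance_km"]

# Tiny invented training set: rows are login sessions, label 1 = past incident.
X = np.array([[0, 0, 0, 5], [1, 0, 0, 10], [6, 1, 1, 4800],
              [0, 1, 0, 20], [8, 1, 1, 3900], [1, 0, 1, 30]])
y = np.array([0, 0, 1, 0, 1, 0])

clf = LogisticRegression(max_iter=1000).fit(X, y)

def explain(x):
    """Return an alert decision plus a readable reason built from the
    largest positive contributions to the model's score."""
    contributions = clf.coef_[0] * x
    top = np.argsort(contributions)[::-1][:2]          # two biggest drivers
    reasons = ", ".join(f"{feature_names[i]} = {x[i]}" for i in top)
    label = "FLAG" if clf.predict([x])[0] == 1 else "OK"
    return f"{label}: driven mainly by {reasons}"

print(explain(np.array([7, 1, 1, 4200])))
# e.g. "FLAG: driven mainly by geo_distance_km = 4200, failed_logins = 7"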
The art of flagging
One example of an AI decision in cybersecurity is online authentication: when authenticating to a system, a user enters a password or PIN code. Different people have different patterns of entering that data, and an unusual entry pattern can cause the AI to flag the attempt even if the code itself is entered correctly.
These “potentially suspicious” patterns may not actually be security breaches, but the AI takes them into account. When an explanation accompanies the flag, citing the input pattern as one of the reasons, the analyst can better understand the reasoning behind the AI's decision. With that additional information, the analyst can make a more informed choice and take appropriate action (e.g., confirm or override the AI's decision). Freedman believes that cybersecurity operations need to run the best ML models in tandem with an approach that effectively explains their decisions to experts, in order to predict, identify, and address threats.
“If someone is trying to shut down a system and that is going to cause significant damage to the company, that's a critical situation, and they have to make sure it's the right decision,” Freedman said. “The explanation might not be the exact same thing that led the AI to its decision, but it might be information that a human analyst needs to know to determine whether it's right.”
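A toy version of the authentication example above might look like the following (an illustrative sketch, not an ISI system; the timing data, threshold, and review flow are invented): a correct login whose entry pattern deviates sharply from the user's baseline is flagged, and the reason travels with the flag so the analyst can confirm or override it.

# Minimal sketch: compare a login's entry timing against the user's
# historical baseline, flag large deviations, and attach the reason.
from statistics import mean, stdev

history_ms = [180, 190, 175, 185, 200, 195]   # user's usual PIN-entry times
baseline, spread = mean(history_ms), stdev(history_ms)

def review_login(entry_ms: float, correct_pin: bool, threshold: float = 3.0):
    z = (entry_ms - baseline) / spread
    flagged = correct_pin and abs(z) > threshold
    reason = (f"entry time {entry_ms} ms is {z:+.1f} standard deviations "
              f"from this user's baseline of {baseline:.0f} ms")
    return flagged, reason

flagged, reason = review_login(entry_ms=620, correct_pin=True)
if flagged:
    # The explanation travels with the flag; the analyst decides what to do.
    print("Flagged for review:", reason)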
Keeping your data safe and private
Another challenge with AI in cybersecurity is trust: trust between the human analyst and the machine, but also trust that confidential or proprietary information used to train the AI remains private. For example, organizations may feed operational details or known security vulnerabilities into machine learning models trained to keep their data safe and protect their systems.
The potential for such sensitive information about an organization's cyber posture to be leaked is a concern when integrating AI into cybersecurity operations. “Once you put information into a system, such as a large language model, there's no guarantee that you've prevented that information from being disclosed, even if you try to remove it. We need to explore ways to make shared spaces safe for everyone,” Schwab said.
Schwab, Freedman and the ISI team hope that their research will lead to new ways to leverage the strengths of both humans and AI to bolster cyber defenses, stay ahead of advanced attackers, and ease the strain on SOCs.
Published May 29, 2024