While AI is still far from mature, there are offensive and defensive uses of the technology that cybersecurity professionals should pay attention to, according to a presentation today at the Gartner Security & Risk Management Summit in National Harbor, Maryland.
Jeremy D'Hoinne, research vice president for AI and cybersecurity at Gartner, told conference attendees that the large language models (LLMs) that have been getting so much attention are "not intelligent." As an example, he noted that when ChatGPT was recently asked what the most severe CVEs (Common Vulnerabilities and Exposures) of 2023 would be, the chatbot's response was essentially gibberish.
Deepfakes are the biggest AI threat
While LLM tools aren't yet sophisticated, D'Hoinne pointed out that there is one area where AI threats need to be taken seriously: deepfakes.
"Deepfakes should be security leaders' immediate focus, because the attacks are real and reliable detection technology doesn't yet exist," D'Hoinne said.
Deepfakes cannot be defended against as easily as traditional phishing attacks, which can be countered with user training; stronger business controls, such as spending and financial approvals, are essential, he said.
He recommended stronger business workflows, security behavior and culture programs, biometric controls and modern IT processes.
AI speeds up security patching
One potential use case for AI security that D'Hoinne pointed to is patch management, citing data that shows that AI could help cut patching time in half by prioritizing patches based on likely threats and exploits, and performing tasks like checking and updating code.
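The risk-based prioritization idea D'Hoinne described can be illustrated with a minimal sketch. The `Patch` fields, scores, and CVE identifiers below are hypothetical placeholders, not anything from the presentation; the `exploit_prob` value stands in for an EPSS-style estimate of exploitation likelihood.

```python
from dataclasses import dataclass

@dataclass
class Patch:
    cve_id: str
    cvss: float          # severity score, 0-10
    exploit_prob: float  # estimated likelihood of exploitation, 0-1

def prioritize(patches):
    # Rank by expected risk (severity weighted by exploit likelihood),
    # so a medium-severity bug under active exploitation can outrank
    # a critical one with no known exploit path.
    return sorted(patches, key=lambda p: p.cvss * p.exploit_prob, reverse=True)

# Hypothetical backlog for illustration only.
backlog = [
    Patch("CVE-2023-0001", cvss=9.8, exploit_prob=0.02),
    Patch("CVE-2023-0002", cvss=7.5, exploit_prob=0.90),
    Patch("CVE-2023-0003", cvss=5.3, exploit_prob=0.40),
]

for p in prioritize(backlog):
    print(p.cve_id, round(p.cvss * p.exploit_prob, 2))
```

Here the highest-CVSS item lands last because its exploit likelihood is negligible, which is the kind of reordering that lets teams patch the riskiest items first.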
Other areas where GenAI security tools can be useful include alert enrichment and summarization, interactive threat intelligence, attack surface and risk overviews, security engineering automation, and mitigation assistance and documentation.
AI Security Recommendations
"Generative AI will neither save nor destroy cybersecurity," D'Hoinne concluded. "How cybersecurity programs adapt to it will determine its impact."
Among his recommendations to attendees: "focus on deepfakes and social engineering as problems to be solved urgently," "experiment with AI assistants to augment rather than replace staff," and measure outcomes against defined metrics for use cases "rather than ad-hoc AI or productivity metrics."
For more coverage from this week's Gartner Security Summit, check out Cyber Express.