GPT-4o, the latest release of OpenAI's famous generative AI (GenAI) platform, is smarter than ever. But while we express our awe at GPT-4o, hackers are probably busy finding ways to use it for nefarious purposes. In fact, researchers working with an earlier release, GPT-4, found that it could exploit 87% of one-day vulnerabilities.
One-day vulnerabilities are flaws for which a fix is available but which the system administrator has not yet applied, leaving the machine exposed. Not surprisingly, exploiting such vulnerabilities is one of the most common ways hackers break into computers.
Alarmingly, this study shows that GPT-4 is not only capable of exploiting such systems, but is able to do so autonomously. Although no such use of GenAI as an attack vector has yet been reported in the real world, GenAI is already causing headaches for cybersecurity professionals.
Cyberattacks using GenAI
Sharef Hlal, Head of the Digital Risk Protection Analysis Team for the Middle East and Africa at Group-IB, says GenAI is already being weaponized by cybercriminals. “Generative AI is a great tool, but it has a dual nature in the cybersecurity field,” he says.
Read: Generative AI tools like ChatGPT are writing papers, raising integrity concerns
Mike Isbitski, Director of Cybersecurity Strategy at Sysdig, agrees. “From a security perspective, [GenAI] is definitely a nuisance. A threat actor with access to an environment can find one vulnerability and quickly move laterally with the help of GenAI,” Isbitski says.
He explains that many cloud environments are homogeneous, built on similar public images and infrastructure. It is this homogeneity, Isbitski argues, that allows attackers to automate much of the process, from reconnaissance to executing the attack itself.
Meanwhile, fraudsters are taking advantage of advances in AI to refine their fraud techniques, Hlal says. This, he says, is evidenced by the number of compromised ChatGPT credentials flooding the dark web. “The staggering increase in compromised hosts accessing ChatGPT indicates a worrying trend,” he says.
Social engineering is one area where attackers use GenAI. Isbitski says the technology can help attackers improve email phishing campaigns, as well as create deepfakes used to convince victims to give up something of value.
“Consider the recent high-profile use of AI in New Hampshire in fake Joe Biden robocalls aimed at disrupting and suppressing the vote. There is no shortage of publicly available, easy-to-use AI tools that allow even less-skilled attackers to fool unsuspecting people into handing over the keys to their castle,” Isbitski says.
Unfortunately, Hlal believes the use of AI in cyberattacks will only grow. He expects cybercriminals will refine their tactics, making their current schemes more effective or introducing further innovations.
It's time to turn the tables
But it's not all doom and gloom. “To the extent that threat actors can automate processes, security professionals can leverage GenAI to thwart attackers,” Isbitski says.
Read: Why businesses should approach generative AI with caution
He says there are several key use cases where GenAI can be useful for security professionals.
One example is system hardening, which modern architectures can achieve through an infrastructure-as-code approach, Isbitski says. “And GenAI is good at processing all kinds of code faster than humans.”
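As a rough illustration of that idea, the sketch below asks a GenAI model to review an infrastructure-as-code snippet for hardening gaps. It is a minimal sketch, assuming the OpenAI Python SDK; the model name, prompt, and Terraform snippet are illustrative assumptions, not a workflow Isbitski or Sysdig prescribe.

```python
# Minimal sketch: asking a GenAI model to flag hardening gaps in an
# infrastructure-as-code snippet. Model name, prompt, and the Terraform
# resource below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A deliberately weak, hypothetical Terraform resource to review.
iac_snippet = """
resource "aws_security_group" "web" {
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]  # SSH open to the whole internet
  }
}
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are a cloud security reviewer. List hardening "
                       "issues in the following infrastructure-as-code and "
                       "suggest fixes.",
        },
        {"role": "user", "content": iac_snippet},
    ],
)

# Print the model's review, e.g. a suggestion to restrict the SSH CIDR range.
print(response.choices[0].message.content)
```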
Similarly, GenAI can help you understand the risk landscape. Isbitski explains that security vulnerabilities typically accumulate faster than most security teams can address them. “GenAI is also a great fit here, contextualizing the actual risk based on other factors such as what is being used, what is exposed, and what its importance is compared to other issues in the environment,” he says.
Hlal also believes that AI represents an important turning point in cybersecurity. While it's not a panacea, it could revolutionize defense mechanisms by enhancing human expertise, he says. But ultimately, Hlal believes, “the successful use of AI will depend on companies' skillful navigation.”
All things considered, he argues, the debate over AI's security implications goes beyond the technology itself. It requires a holistic approach that emphasizes responsible use and ethical implementation, Hlal says.
“AI algorithms require human oversight for responsible innovation, but they also mandate rigorous protections against malicious exploitation,” Hlal says. “Therefore, we should not focus only on the technology's potential, but on how to use it for the betterment of society and ensure that it does not become a tool for nefarious activities.”