A few weeks ago, I was reading LinkedIn posts from several chief information security officers, and one post by Jeff Brown, the CISO for the state of Connecticut, caught my eye. Linking to an article in The Wall Street Journal, Jeff wrote in his post:
“Welcome to the dark side of AI and the rise of BadGPT and FraudGPT. These are not the AI chatbots you use every day. They create convincing phishing emails and develop powerful malware with surprising efficiency. A groundbreaking study by researchers at Indiana University uncovered more than 200 dark web services offering large language model hacking tools. It's a sobering reminder of the evolving cyber landscape, where some specialized hacking tools are priced as low as $5 per month.
“The emergence of ChatGPT coincided with a staggering 1,265% spike in phishing attacks, a problem further exacerbated by the emergence of deepfake audio and video technology. In one case, an employee was tricked into transferring $25.5 million during a deepfake conference call. Incidents like these have CIOs and CISOs on high alert for a wave of advanced phishing scams and deepfakes.
“These 'good model gone bad' stories highlight an important point: While public models like ChatGPT are fortified with security controls, the same underlying capabilities are being honed for darker purposes. As we continue to take advantage of advances in AI, we must remain vigilant, recognizing that regulation alone does not eliminate all AI risks.”
I've worked with Jeff Brown, who leads cybersecurity efforts for the Connecticut state government, for over four years. He is a well-respected leader among state CISOs, and I asked him if he would be willing to be interviewed on this subject for my blog. He agreed, and the interview is recorded below.
Dan Lohrmann (DL): What concerns you most about BadGPT, FraudGPT, and other similar tools?
Jeff Brown (JB): My biggest concern is that while good people are putting AI guardrails in place, attackers are removing them. These purpose-built AI tools democratize knowledge that was once available only to highly skilled attackers. The misuse of these tools by malicious actors for harmful purposes, such as creating deepfakes and spreading misinformation, is a real and growing threat. For skilled attackers, these tools enable larger-scale attacks and more sophisticated phishing and spear-phishing campaigns. In other words, they lower the bar for attackers and raise the bar for what we need to defend.
DL: Has Connecticut seen an increase in phishing, spear phishing, and other advanced cyberattacks over the past year?
JB: We have implemented a number of new security controls that increase our visibility and allow us to respond and recover more quickly when an issue occurs. Email continues to be the most popular attack vector due to its ubiquity and the ease with which attackers can exploit it. Phishing attempts are steadily increasing, and these attacks are also becoming more sophisticated. Even as we continue to improve our ability to detect and respond to phishing-based attacks, we expect the problem to be exacerbated by generative AI. Of course, we're also using AI tools to defend employee inboxes, and they have shown great promise so far, so AI isn't all bad news from a defender's perspective.
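Jeff doesn't name the specific products Connecticut uses, but to make the idea of AI-assisted inbox defense concrete, here is a minimal sketch of what an LLM-in-the-loop triage step can look like, written in Python with the openai package. The model name, prompt, and labels are illustrative assumptions, not the state's actual tooling.

```python
# Illustrative sketch only: a simple LLM-based triage step for suspicious
# emails. The model name, prompt, and labels are assumptions for
# illustration, not any specific product's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def triage_email(subject: str, body: str) -> str:
    """Ask the model to label an email as phishing, suspicious, or benign."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an email security assistant. Classify the email "
                    "as exactly one of: phishing, suspicious, benign. "
                    "Reply with the label only."
                ),
            },
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
    )
    return response.choices[0].message.content.strip().lower()


if __name__ == "__main__":
    label = triage_email(
        "Urgent: verify your payroll account",
        "Your direct deposit is on hold. Click http://example.com/verify now.",
    )
    print(label)  # e.g. "phishing"
```

In practice, commercial email security products layer this kind of classification with sender reputation, URL analysis, and attachment sandboxing; the sketch only shows the shape of the LLM step.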
DL: Have you seen any cyberattacks using BadGPT and FraudGPT (or similar) tools?
JB: Due to the nature of these attacks, it can be difficult to pinpoint the exact tools being used, but we can safely say that the frequency of email-based attacks has increased significantly. Both the volume and the sophistication of these attacks have grown, indicating that attackers are constantly evolving and refining their methods.
DL: Where do you think this trend is heading? Will the new GenAI make things worse, or will it help overall cybersecurity?
JB: While there are concerns about the misuse of GenAI, AI tools also offer new methods for building stronger cybersecurity defenses. As the technology evolves, we can expect to see AI adopted for improved threat detection and response capabilities, and ultimately for further automation. I believe the arms race between attackers and defenders will continue, but tools like Microsoft's Security Copilot are promising, making defenders' lives easier and saving busy security analysts time, which could also help address skills shortages.
DL: What can governments do to prepare for what happens next?
JB: Governments need to invest in advanced cybersecurity tools as well as training and awareness programs for their staff. The important thing is not to be complacent. Threats never stop evolving, so defenses must evolve with them. As states continue their digital government journey, cybersecurity must have a seat at the table, and agencies must ensure they have the resources to build reasonable defenses against the growing number of cyber threats.
DL: How are GenAI tools helping the state of Connecticut defend against new forms of cyberattacks?
JB: The speed and scope of attacks is increasing every day, and defenders must adapt to the changing environment. GenAI tools are already helping by enhancing threat detection capabilities and response times. These tools make it possible to quickly and efficiently analyze vast amounts of data and identify potential threats that would be difficult or impossible to detect manually, and they are much faster than manually examining log files or running simple searches. In the future, AI capabilities will be a key element in most security products.
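To illustrate what "faster than manually examining log files" can look like, here is a hedged sketch that pairs a conventional Python pre-filter with an LLM summarization step. The log format, field positions, failure threshold, and model name are all assumptions made for illustration, not a description of Connecticut's actual pipeline.

```python
# Illustrative sketch only: pre-filter authentication logs with ordinary
# Python, then hand the suspicious slice to an LLM for a plain-English
# summary an analyst can act on. Log format and threshold are assumptions.
from collections import Counter

from openai import OpenAI

client = OpenAI()


def failed_login_bursts(log_lines: list[str], threshold: int = 10) -> list[str]:
    """Return source IPs with more than `threshold` failed logins."""
    fails = Counter(
        line.split()[-1]  # assumes the source IP is the last field
        for line in log_lines
        if "FAILED LOGIN" in line
    )
    return [ip for ip, count in fails.items() if count > threshold]


def summarize_for_analyst(log_lines: list[str]) -> str:
    """Summarize suspicious log activity for a SOC analyst."""
    suspects = failed_login_bursts(log_lines)
    if not suspects:
        return "No failed-login bursts above threshold."
    excerpt = "\n".join(l for l in log_lines if any(ip in l for ip in suspects))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize these auth-log excerpts for a SOC analyst: "
                    "likely activity, affected accounts, recommended next steps."
                ),
            },
            {"role": "user", "content": excerpt[:8000]},  # keep the prompt bounded
        ],
    )
    return response.choices[0].message.content
```

The design point is that the deterministic filter keeps the LLM's input small and cheap, while the LLM turns raw log lines into a narrative a busy analyst can read in seconds.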
DL: Where can CISOs, security professionals, and other government officials go to learn more about cyberattack trends involving GenAI tools? What's the best way to stay current on this rapidly changing topic?
JB: This field is changing very rapidly, so I recommend following trusted cybersecurity news sources, attending relevant webinars and conferences, and joining professional cybersecurity forums and discussion groups. The most important thing is to not bury your head in the sand and to embrace the possibility that AI can help on the defensive side of the equation. Ignoring or banning AI tools will not be a winning strategy in the future.
DL: Is there anything else you would like to add?
JB: Strengthening collaboration and information sharing between government agencies and the private sector will be key to our long-term success. Simply by discussing tools, processes, and best practices, we can refine existing strategies and respond quickly to evolving threats. Making a difference will require a combination of better tools, information sharing, and stronger defensive tactics.