Cybersecurity has always been a complex field. Its adversarial nature means the margin between failure and success is far narrower than in other sectors. As technology evolves, those margins become finer still, with attackers and defenders racing to exploit them for an advantage. This is especially true of AI.
In February, the World Economic Forum (WEF) published an article titled “AI and Cybersecurity: How to navigate the risks and opportunities,” highlighting the existing and potential impact of AI on cybersecurity. The bottom line? AI benefits the good guys and the bad guys alike, so it's essential that the good guys do everything they can to embrace it.
In this article, we'll consider and expand on some of the WEF's key points.
Advantages and opportunities for attackers
Before we dive into how AI can enhance cybersecurity, it's worth considering some of the opportunities AI presents to cybercriminals. After all, it's hard to fight a threat if you don't really understand what it is.
Of all the issues raised, deepfakes are perhaps the most concerning. As the WEF notes, over 4 billion people are eligible to go to the ballot box this year, and deepfakes are sure to play a role. In the UK alone, both the Prime Minister and the Leader of the Opposition have already fallen victim to AI-generated fake content. It may be tempting to assume that modern voters can spot digitally manipulated videos, but one need only look at the example cited by the WEF, in which fraudsters used deepfakes to trick a finance worker in Hong Kong into paying out $25 million, to see that this is not necessarily the case.
In keeping with the theme of social engineering, AI has made phishing scams easier to create and harder to detect. Until the launch of ChatGPT in November 2022, it felt as though we were finally turning a corner on phishing. The scams were never going to disappear, but awareness was improving by the day, and people were increasingly able to identify them: spelling mistakes, poor grammar, and awkward English were all telltale signs. Today, however, fraudsters have easy access to large language models (LLMs) that allow them to create and distribute convincing phishing campaigns at scale, without the mistakes that would once have given them away.
Advantages and opportunities for defenders
But it's not all doom and gloom. AI also offers significant benefits to cybersecurity professionals. The WEF provides a broad overview of how security departments can leverage AI, but it's worth looking a little deeper at some of those use cases.
AI frees up time for security teams. By automating mundane, repetitive tasks, it allows teams to spend more time and energy on innovation, improving the enterprise environment, and defending against more advanced threats.
AI is also an invaluable resource for improving detection and response times. AI tools continuously monitor network traffic, user behavior, and system logs for anomalies and flag problems to your security team. This means security teams can proactively prevent attacks, rather than simply reacting after an incident occurs.
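To make this concrete, here is a minimal sketch of the kind of anomaly detection described above, using scikit-learn's IsolationForest. It assumes raw logs and traffic have already been reduced to numeric features per time window; the feature names, values, and data here are purely illustrative, not drawn from the WEF article or any particular product.

```python
# Minimal sketch: flagging anomalous activity windows with an Isolation Forest.
# Assumes logs are already aggregated into numeric features per time window
# (requests/min, bytes out, failed logins) -- all values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Baseline "normal" behavior: 1,000 historical windows of benign activity.
baseline = rng.normal(loc=[100.0, 5_000.0, 1.0],
                      scale=[10.0, 500.0, 1.0],
                      size=(1_000, 3))

# Fit the detector on the baseline so it learns what normal looks like.
detector = IsolationForest(contamination=0.01, random_state=7)
detector.fit(baseline)

# Two new windows: one ordinary, one with a traffic spike and a burst of
# failed logins (e.g., possible brute-force activity).
new_windows = np.array([
    [104.0, 5_100.0, 2.0],
    [420.0, 95_000.0, 60.0],
])

# predict() returns 1 for inliers and -1 for outliers worth escalating.
for features, label in zip(new_windows, detector.predict(new_windows)):
    verdict = "ANOMALY -> alert security team" if label == -1 else "normal"
    print(features, verdict)
```

In practice the hard work lies in feature extraction and tuning, but the workflow here, learn a baseline, score new activity, surface outliers to a human, is broadly the pattern commercial detection tools follow at scale.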
According to the ISC2 Cybersecurity Workforce Study, the sector is currently short some 4 million workers. This is an alarming number, but AI could help bring it down. The WEF argues that AI can be used to educate people about cybersecurity and to train the next generation of professionals. Both are valid points, but they overlook the fact that AI can also help existing cybersecurity workers do their jobs: by automating much of what they need to do, it could reduce the number of workers needed in the first place.
AI regulation and collaboration
According to the WEF, AI regulation is undoubtedly important to “develop, use, and deploy AI technologies in ways that benefit society while limiting the harm they may cause.” But perhaps more important is that government, industry, academia, and civil society sing from the same hymn sheet; conflicting motivations and priorities could have disastrous consequences.
That’s why the WEF’s AI Governance Alliance, launched in April 2023, brings these groups together around the common goal of championing the responsible global design and release of transparent and inclusive AI systems. In a world where competitive pressure is paramount, efforts like this are essential to ensuring that safety stays front of mind as AI systems are developed.
Recent examples of AI regulation include:
- The EU AI Act
- The United Nations' AI Advisory Body
- The UK's AI White Paper
- The US Executive Order on AI safety
However, while many of these efforts are well-intentioned, they have been met with backlash. Most notably, the EU AI Act, adopted by the European Parliament in March, has been heavily criticized by industry figures who argue it will stifle innovation. This brings us to perhaps the most important lesson from the WEF's article: collaboration is essential to developing AI safely. All groups with a stake in AI, not least cybersecurity professionals, must be involved in the regulatory process, as the WEF is attempting through the AI Governance Alliance. AI is uncharted territory, and we will all be safer if we navigate it together.
Editor's note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect the opinions of Tripwire.