In this Help Net Security interview, Caleb Sima, Chair of the CSA AI Safety Initiative, discusses how AI is empowering security professionals, and how it can improve skills and productivity rather than replace staff.
AI is seen as empowering rather than replacing security experts. How do you expect the role of AI to change in the future?
While the future of AI replacing jobs remains uncertain, I don't believe it is imminent. AI is a tool that can empower rather than replace security professionals. In fact, CSA's recent study with Google, The State of AI and Security Survey Report, found that the majority of organizations plan to use AI to enhance their teams: respondents most often expect it to augment their skills and knowledge base (36%) or to improve detection times (26%) and productivity (26%), not to replace staff outright.
In the near future, AI will automate many repetitive tasks, such as reporting, across teams. This frees up significant time currently spent on work like creating management reports, allowing those teams to focus on higher-priority efforts. This aligns with the survey results, where 58% of respondents said they believe AI will enhance their skill set or generally support their current role, and another 24% believe AI will replace parts of their job, freeing them to focus on other activities.
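To make the reporting example concrete, here is a minimal sketch of how a team might hand raw alert data to a language model and get a management-ready summary back. It assumes the OpenAI Python client; the model name, prompt, and shape of the alert records are illustrative placeholders, not a workflow prescribed in the interview.

```python
# Illustrative sketch: summarizing raw alert data into a management report
# with an LLM. Assumes the OpenAI Python client (openai >= 1.0); the model
# name and the alert record format are placeholder assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_weekly_report(alerts: list[dict]) -> str:
    """Turn a week's worth of alert records into a short executive summary."""
    prompt = (
        "Summarize the following security alerts for a non-technical "
        "leadership audience. Highlight trends, counts by severity, and "
        "any items that need a decision:\n" + json.dumps(alerts, indent=2)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage with toy data:
# print(draft_weekly_report([
#     {"severity": "high", "rule": "impossible travel", "count": 3},
# ]))
```

The point of the sketch is the time saved: the analyst reviews and edits a draft instead of assembling the report from scratch.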
For example, security teams can leverage AI to identify and remediate threats significantly faster and more effectively than through human effort alone. Similarly, they can feed historical data into AI models to predict potential threats and plan mitigation strategies before those threats escalate. Either way, security professionals need to learn how to make the most of AI, both in their organizational roles and personally.
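As a toy illustration of the "learn from historical data" idea, the sketch below trains an anomaly detector on past telemetry and flags outliers in new events. It uses scikit-learn's IsolationForest; the feature choices and contamination rate are placeholder assumptions, not tuned values from any real deployment.

```python
# Illustrative sketch: flagging anomalous events against historical telemetry
# with scikit-learn's IsolationForest. Features and parameters are toy
# assumptions for demonstration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy features per event: [bytes_out, failed_logins, off_hours_flag]
history = np.array([
    [1200, 0, 0], [900, 1, 0], [1500, 0, 0], [1100, 0, 1],
    [980, 0, 0], [1300, 2, 0], [1050, 0, 0], [875, 1, 0],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(history)

new_events = np.array([[1000, 0, 0], [250000, 14, 1]])
labels = model.predict(new_events)  # 1 = normal, -1 = anomalous

for event, label in zip(new_events, labels):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"{event.tolist()} -> {status}")
```

A detector like this surfaces the unusual event (huge data egress plus repeated failed logins, off hours) for a human analyst, which is the augmentation pattern the survey respondents describe.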
How do security professionals perceive their organization's cybersecurity maturity regarding AI integration?
Integrating AI securely primarily involves applying standard security measures; only a small part of the process addresses genuinely new AI risks. Security professionals often fear this seemingly uncharted territory until they examine it more closely within their own organizations.
Either way, this year will be transformative for companies implementing AI. The research I mentioned earlier found that more than half of organizations plan to implement a GenAI solution this year, with executives driving adoption. It also revealed that more than 80% of respondents consider their organizations moderately to highly mature. What this alone doesn't tell us, however, is whether those perceptions reflect reality.
But AI is here, ready or not. That's why I encourage companies, wherever they are in their AI journey, to integrate this technology into their current processes and ensure their staff is properly trained to use it. I also caution them to expect challenges, prepared or not; that's to be expected. As a cloud security community, we will all learn together how to best leverage this technology to improve cybersecurity.
There is considerable awareness about the potential abuses of AI. How should organizations prepare to mitigate these risks?
First, companies need to focus on best practices and treat AI the same way they would treat a human in the same role. They also need to determine what their AI is actually capable of. If it only surfaces support data in a customer chat, the risk is minimal. But if it can access internal and customer data and trigger operations, strict access controls and separation of roles are essential. Most risks can already be mitigated with existing controls; the real challenge stems from unfamiliarity with AI, which creates the perception that safeguards don't exist.
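One way to picture "treat AI like a human in the same role" is to put the access check outside the model, exactly as you would for a human operator. The sketch below is hypothetical: the role names, data scopes, and AIAgent class are invented for illustration, not drawn from the interview or any specific product.

```python
# Illustrative sketch: enforcing role separation before an AI assistant can
# read a data source. Roles, scopes, and the AIAgent class are hypothetical;
# the point is that the permission check lives outside the model, under the
# same controls you would apply to a human in that role.
from dataclasses import dataclass, field

ROLE_SCOPES = {
    "support_bot": {"public_docs"},                         # minimal risk
    "ops_assistant": {"public_docs", "internal_runbooks"},  # tightly controlled
}

@dataclass
class AIAgent:
    name: str
    role: str
    scopes: set[str] = field(init=False)

    def __post_init__(self):
        # An unknown role gets no access by default (deny by default).
        self.scopes = ROLE_SCOPES.get(self.role, set())

    def fetch(self, data_source: str) -> str:
        if data_source not in self.scopes:
            raise PermissionError(
                f"{self.name} ({self.role}) may not read {data_source}")
        return f"contents of {data_source}"  # stand-in for a real lookup

bot = AIAgent("helpdesk", "support_bot")
print(bot.fetch("public_docs"))      # allowed
# bot.fetch("customer_records")      # raises PermissionError, by design
```

Because the check runs before any model call, a prompt-injected or misbehaving assistant still cannot reach data outside its role, which is exactly the kind of existing control that already mitigates most of the risk.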
What do you think about the current state of AI-related cybersecurity training, and what improvements are needed to better prepare the workforce?
We've been talking about the skills gap in the security industry for years, and AI will only deepen that gap in the near term. We are in the early stages of learning this technology, so naturally training hasn't caught up yet. Because AI evolves rapidly, training materials can quickly become outdated. As organizations increasingly train their employees to make the most of AI, they should emphasize stable concepts and ensure that AI security aligns with established best practices, much of which already applies to current applications and infrastructure.
74% of organizations plan to create dedicated AI governance teams. How do you think these teams will shape the future of cybersecurity?
Given AI's still-uncertain impact, oversight is critical today. Over time, as AI literacy increases and AI is integrated into every technology, the risks will become better understood and AI governance will move from specialized teams into broader technology management.
In the short term, creating a governance team signals that a company is serious about integrating and managing AI. These teams will likely be tasked with everything from corporate policy development and ethical considerations to risk management and regulatory compliance. We're already seeing transparency issues in the news around AI-generated images and copy, and that will continue. As a society, we expect a certain level of trust from the companies and media we interact with, so it is essential that this trust is not broken.