CAMBRIDGE, Mass. — As AI tools and systems proliferate across the enterprise, organizations are beginning to weigh the value of these tools against the security risks they can pose.
At the 2024 MIT Sloan CIO Symposium held this week, industry leaders discussed the challenges of balancing the benefits of AI with security risks.
Generative AI has become a particular concern since the introduction of ChatGPT in 2022. These tools have many use cases in business environments, from virtual help desk assistance to code generation.
“I think [AI] has moved from the theoretical to the practical, which has raised its visibility,” Jeffrey Wheatman, cyber risk evangelist at Black Kite, said in an interview.
Jan Shelley Brown, a partner at McKinsey & Company, helps companies in the financial sector and other highly regulated industries assess the risk profile of new technologies. That work increasingly involves integrating AI, which can bring both business value and unforeseen risks.
“Cybersecurity challenges have become extremely important as technology is embedded in every corner of business,” Brown said in an interview.
A balancing act
Introducing AI into your enterprise brings cybersecurity benefits as well as drawbacks.
On the security front, Wheatman said AI tools can quickly analyze and detect potential risks. Incorporating AI can enhance existing security methods such as incident detection, automated penetration testing, and rapid attack simulation.
“AI runs millions of iterations and starts to get very good at determining which are actually real risks and which are not,” Wheatman said.
Generative AI is increasingly used across the enterprise, but its security applications are still in their infancy.
Fahim Siddiqui, executive vice president and CIO of Home Depot, said during the panel “AI Barbarians at the Gate: The New Battleground of Cybersecurity” that “I think it’s still too early to say that GenAI is at the core of cyber defense and threat intelligence.”
But despite these concerns, especially around generative AI, Siddiqui pointed out that many of the cybersecurity tools in use today already incorporate some type of machine learning.
Andrew Stanley, chief information security officer and vice president of global digital operations at Mars Inc., described the benefits that generative AI can bring to enterprises in his presentation, “The Goldilocks Path: Balancing GenAI and Cybersecurity.” One of those benefits is bridging gaps in technical knowledge.
“The really powerful thing that generative AI brings to security is the ability to allow non-technical people to participate in technical analysis,” Stanley said in his presentation.
Because of these benefits, companies are increasingly using AI, including generative AI, in their workflows, often in the form of third-party and open-source tools. Brown said she has seen widespread adoption of third-party tools within organizations. But organizations often don't know exactly how these tools use AI or manage their data; instead, they must rely on the reputation of, and their trust in, external vendors.
“This presents an entirely different risk profile to the organization,” Brown said.
The alternative, building custom LLMs and other generative AI tools, is currently less widely adopted among enterprises. Brown pointed out that while organizations are interested in custom generative AI, identifying valuable use cases, acquiring the right skill sets and investing in the necessary infrastructure is far more complicated than adopting off-the-shelf tools.
Whether an organization chooses a custom or third-party option, AI tools introduce new risk profiles and potential attack vectors such as data poisoning, prompt injection, and insider threats.
“Data is starting to show that in many cases, threats may not be external to the organization, but rather internal,” Brown said. “Your own employees can be a threat vector.”
These risks include shadow AI, in which employees use unapproved AI tools, making it difficult for security teams to accurately identify threats and develop mitigation strategies. An outright security breach could also occur if a malicious employee exploits inadequate governance or privacy controls to gain access to AI tools.
The widespread availability of AI tools also means that external bad actors can use AI in unforeseen and harmful ways. “Defenders have to be perfect, or close to perfect,” Wheatman said. “All the attacker really needs is one way in, one attack vector.”
Threats from bad actors are even more concerning if an organization's cybersecurity team is not AI-savvy, a skills gap that is one of many AI-related risks organizations are beginning to address. “The percentage of cybersecurity professionals who have a really relevant AI background is very low,” Wheatman said.
Transition to cyber resilience
Brown said it is impossible to completely eliminate risk when using AI in business settings.
As AI becomes integral to business operations, the key is to deploy it in a way that balances the benefits with an acceptable level of risk. Planning for AI cyber resilience in your enterprise requires a comprehensive risk assessment, collaboration across teams, an internal policy framework, and responsible AI training.
Risk level assessment
First, organizations need to determine their risk appetite, or the level of risk they are comfortable introducing into their workflows, Brown said. Organizations need to evaluate the value that a new AI tool or system can provide to their business and compare that value against the potential risks. With proper controls in place, organizations can determine whether they are satisfied with the risk-value trade-off.
Wheatman proposed a similar approach, suggesting that organizations consider factors such as revenue impact, customer impact, reputational risk, and regulatory concerns. In particular, prioritizing concrete risks over more theoretical threats can help companies efficiently assess their situation and move forward.
Collaboration between teams
Almost everyone in a company has a role in using AI safely. “Organizationally, this is not an issue that one team can assess or address,” Wheatman said.
Data scientists, application developers, IT, security and legal personnel are all exposed to potential risks from AI, but “they're all having very different conversations right now,” he said.
Brown made a similar point, explaining that teams from a wide range of departments, from cybersecurity to risk management, finance and human resources, need to be involved in risk assessments.
This level of cross-collaboration may be new to some organizations, but it's gaining traction. Wheatman said data science and security teams are starting to work more closely together, which hasn't been the norm in the past. Integrating these different aspects of the AI workflow strengthens an organization's defenses and ensures that everyone knows which AI tools and systems are deployed.
Internal policy framework
After making those initial connections, teams must find a way to get on the same page. “If an organization doesn’t have that [framework] to fit into, these conversations become very difficult,” Brown said.
“[In] many organizations, most people don’t even have a policy,” Wheatman said. That can make it very difficult to answer questions such as what AI tools are used for, what data they touch, who uses them and why.
While the details of an AI security framework will vary from organization to organization, a comprehensive policy will typically include permission levels, regulatory standards for AI use, security breach mitigation steps, and employee training plans.
Responsible AI training
Brown said that with all the use cases and hype surrounding AI in the enterprise, especially generative AI, there is a real concern about creating overreliance on and false trust in AI systems. Even with proper collaboration and policies, users still need to be trained in the responsible use of AI.
“Generative AI, in particular, can very aggressively undermine what we all agree is right … and it does so through natural means of trust,” Stanley said during his presentation. He encouraged business leaders to tell users that “it’s okay to be skeptical” about AI and to reframe internal conversations around trust.
Generative AI has been responsible for misleading outputs, including deepfakes, biased algorithms and hallucinations. Companies must plan rigorous training programs that educate employees and other users on how to use AI responsibly, with healthy skepticism and an understanding of the ethical issues AI tools can raise.
For example, the data on which LLMs are trained is often implicitly biased, Brown said. Models can propagate those biases, leading to harmful consequences for marginalized communities and adding a new dimension to the risk profile of AI tools. “This is not something that can be mitigated by cyber management,” she said.
Therefore, instead of relying solely on AI systems, organizations should continually check tools’ output and train their employees and other technology users to approach AI with skepticism. Investing in the changes needed to safely incorporate AI technology into an organization can be even more expensive than investing in the AI products themselves, Brown said.
These changes can be wide-ranging, from responsible AI training and policy frameworks to cross-team collaboration. But if companies invest the time, effort and budget required to protect against AI cybersecurity risks, they will be better placed to reap the benefits of the technology.
Olivia Wisbey is an associate site editor at TechTarget Enterprise AI. She graduated from Colgate University with a BA in English Literature and Political Science and served as a peer writing consultant at the university's Writing and Speaking Center.