Artificial intelligence is dramatically transforming the business landscape. It streamlines operations, provides critical insights, and enables companies to make data-driven decisions more efficiently. Through machine learning, predictive analytics, and automation, AI helps identify trends, forecast sales, and streamline supply chains, leading to increased productivity and improved business outcomes. Unfortunately, it's not without its challenges.
We spoke with Matt Hillary, VP of Security and CISO at Drata, about key security and compliance issues surrounding AI.
BN: How is AI increasing the threat of ransomware and how is it dramatically changing the cybersecurity landscape?
MH: The primary strategy for spreading ransomware continues to rely on social engineering tactics like phishing, and on exploiting weaknesses in externally accessible systems such as Virtual Private Network (VPN) endpoints, Remote Desktop Protocol (RDP)-exposed endpoints, and application zero-days. AI is enabling cyber attackers to create highly sophisticated deceptive messages, reducing the typical telltale signs of a phishing attack and making them more convincing to unwary users.
Cybercriminals can also use AI to sharpen other aspects of their operations, such as reconnaissance and coding, strengthening their exploit vectors. Leveraging AI, threat actors can efficiently analyze extensive data sets, identify weaknesses in an organization's external systems, and craft customized exploits, whether they are targeting known vulnerabilities or discovering new ones.
BN: Conversely, how is AI helping to improve defensive and preventative solutions?
MH: AI-powered systems can analyze vast amounts of data to detect patterns indicative of cyber threats such as malware, phishing attacks, and anomalous network activity. These large language models (LLMs) can identify indicators of compromise and other threats more quickly and accurately than traditional or manual review methods, enabling faster response and mitigation.
AI models can also review activity to learn the normal behavior of users and systems in your network and detect deviations that could indicate a security incident. This approach is particularly effective at identifying insider threats and advanced attacks that evade traditional signature-based detection methods.
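To make the behavioral-baseline idea concrete, here is a minimal sketch using scikit-learn's IsolationForest. The features, data, and thresholds are illustrative assumptions for this article, not any vendor's implementation; a production system would draw on far richer telemetry.

```python
# Minimal sketch: learn "normal" login behavior, flag deviations.
# Feature choices are illustrative; real systems use richer telemetry
# (device fingerprints, geo-velocity, resource access patterns).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline: a user who logs in during business hours from a few hosts.
# Columns: [login_hour, distinct_hosts_touched, MB_downloaded]
normal = np.column_stack([
    rng.normal(10, 2, 500),    # logins cluster around 10:00
    rng.poisson(2, 500),       # touches ~2 hosts per session
    rng.normal(50, 15, 500),   # ~50 MB downloaded
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# New sessions: one routine, one that looks like data staging at 3 AM.
sessions = np.array([
    [11, 2, 55],     # ordinary
    [3, 40, 900],    # off-hours, many hosts, large transfer
])
for session, verdict in zip(sessions, model.predict(sessions)):
    label = "ANOMALY" if verdict == -1 else "ok"
    print(session, "->", label)
```

Because the model scores deviation from learned behavior rather than matching known signatures, the off-hours session is flagged even though no specific malware indicator is present.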
BN: What benefits does AI offer in automating governance and compliance with evolving regulations and industry standards?
MH: AI tools can be fed log data to continuously monitor systems, detect anomalies, and respond to signs of security incidents, misconfigurations, or process activity that could lead to compliance violations. These tools also help organizations remain compliant by tracking evolving governance regulations in real time.
AI algorithms can analyze vast amounts of regulatory data, reducing the human error associated with manual processes and yielding a more accurate assessment of compliance status, which lowers the likelihood of regulatory non-compliance.
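As a simple illustration of continuous monitoring over configuration events, here is a toy rule-driven sketch. The control IDs, event format, and thresholds are assumptions for illustration only, and the anomaly-scoring or LLM component described above is replaced here by static rules for brevity.

```python
# Toy sketch: rule-driven compliance checks over configuration events.
# Control IDs and the event shape are illustrative; real programs map
# checks to specific framework controls (e.g., SOC 2, ISO 27001).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    control: str
    description: str
    violated: Callable[[dict], bool]

RULES = [
    Rule("CC6.1", "Storage buckets must not be public",
         lambda e: e.get("resource") == "bucket" and e.get("public") is True),
    Rule("CC6.6", "MFA must be enabled for console users",
         lambda e: e.get("resource") == "user" and not e.get("mfa_enabled", False)),
]

def scan(events: list[dict]) -> list[str]:
    """Return a finding for every event that violates a rule."""
    findings = []
    for event in events:
        for rule in RULES:
            if rule.violated(event):
                findings.append(f"[{rule.control}] {rule.description}: {event}")
    return findings

# Example feed of (simplified) infrastructure events.
events = [
    {"resource": "bucket", "name": "backups", "public": True},
    {"resource": "user", "name": "jdoe", "mfa_enabled": True},
]
print("\n".join(scan(events)))
```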
BN: What practical steps or best practices should leaders implement now to protect their companies from evolving AI threats?
MH: I would suggest the following:
- Provide comprehensive education to your cybersecurity teams on the AI your employees use and how to effectively secure the AI that is built into or already operational in your platforms and systems. Even the most tech-savvy teams will need to explore not only the applications themselves but also the underlying technologies that power their AI capabilities.
- Implement phishing-resistant authentication methods (FIDO2/WebAuthn, for example) to protect your organization from phishing attacks that target the authentication tokens used to access your environment; a minimal policy sketch follows this list.
- Establish policies, training, and automation mechanisms to equip team members with the knowledge to defend against social engineering attacks.
- To mitigate the impact of such attacks, continually harden your organization's internet-facing perimeter and internal network.
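Here is a minimal sketch of what enforcing the phishing-resistant authentication point could look like at policy-evaluation time. The factor names and event shape are assumptions for illustration, not any specific identity provider's API.

```python
# Sketch: enforce a phishing-resistant MFA policy at login evaluation.
# Factor names mirror common method types; the policy and shape of the
# check are illustrative assumptions, not a specific IdP's API.
PHISHING_RESISTANT = {"webauthn", "fido2_security_key", "platform_passkey"}
LEGACY_FACTORS = {"sms_otp", "totp", "push_approval"}  # phishable or fatigue-prone

def evaluate_login(user: str, factor: str) -> str:
    if factor in PHISHING_RESISTANT:
        return f"ALLOW {user}: {factor} is phishing-resistant"
    if factor in LEGACY_FACTORS:
        # Deny rather than step-up: codes and approvals from these flows
        # can be relayed by a proxy-in-the-middle phishing kit.
        return f"DENY {user}: {factor} can be phished or replayed"
    return f"DENY {user}: unknown factor '{factor}'"

print(evaluate_login("jdoe", "platform_passkey"))
print(evaluate_login("jdoe", "sms_otp"))
```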
BN: What are the ethical considerations when it comes to AI? What are the practical safeguards leaders can take to ensure AI is used ethically across their organizations?
MH: Companies should establish governance structures and processes for overseeing the development, deployment, and use of AI. This includes appointing individuals or committees responsible for overseeing ethical compliance and ensuring alignment with the organization's values. These governance structures should be extensively documented and understood throughout the organization.
Organizations should also promote transparency by documenting AI algorithms, data sources, and decision-making processes, allowing stakeholders to understand how AI systems make decisions and their potential impacts on individuals and society.
At Drata, we have developed Responsible AI Principles, applied across our systems and processes, designed to foster robust, trustworthy, and ethical governance while maintaining a strong security posture.
- Privacy by Design: We protect privacy with strict access controls and encryption protocols, use anonymized data sets, and simulate compliance scenarios with synthetic data generation.
- Fairness and inclusion: We remove inherent bias through detailed curation, continuously monitor models to guard against unfair outcomes, and provide an intuitive interface that is accessible to all users.
- Secure and reliable: Rigorous testing combined with 360-degree human oversight provides full visibility, giving users confidence that their AI solutions will work as expected.
BN: What do you think are the future threats from AI?
MH: As AI becomes more accessible and effective, it is inevitable that malicious actors will abuse it to launch sophisticated, highly targeted, automated cyberattacks across multiple domains that evolve in real time and evade traditional detection methods.
At the same time, the rise of AI-generated deepfakes and misinformation threatens individuals, organizations, and democratic processes, as fake video, audio, and text make it nearly impossible to distinguish fact from fiction.
BN: What does the future hold for advanced AI-driven security solutions to strengthen cyber defense capabilities and manage third-party vendor risk?
MH: AI strengthens cybersecurity resilience by employing proactive threat intelligence, predictive analytics, and adaptive security controls. Using AI to predict and adapt to emerging threats allows organizations to stay ahead of cybercriminals and reduce the impact of attacks. Ongoing research and collaboration will be essential to ensure AI continues to serve as a positive force in the fight against cyber threats.
Third-party risk is a critical component of a strong governance, risk, and compliance (GRC) program, especially when addressing AI-induced vulnerabilities. Security teams need comprehensive tools to identify, assess, and continuously monitor third-party risks, and to integrate them with their internal risk profile. This holistic approach provides a unified, clear view of potential risks across the organization so that teams can effectively and efficiently manage third-party risks related to AI.
Image credits: Light Studio / Dreamstime.com