As the industry searches for practical uses of generative AI beyond creating fake photos, Google plans to point the technology at cybersecurity and make threat reports easier to read.
Google wrote in a blog post that its new cybersecurity product, Google Threat Intelligence, will pull together the work of its Mandiant cybersecurity unit and VirusTotal's threat intelligence with the Gemini family of AI models.
The new product uses the Gemini 1.5 Pro large language model, which Google says reduces the time needed to reverse engineer malware attacks. The company says Gemini 1.5 Pro, released in February, analyzed the entire code of the WannaCry virus (the 2017 ransomware attack that disrupted hospitals, businesses, and other organizations around the world) and identified a kill switch in just 34 seconds. That's impressive, but not surprising given LLMs' knack for reading and writing code.
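As a rough illustration of that kind of workflow, here is a minimal sketch using Google's public google-generativeai Python SDK to ask Gemini 1.5 Pro to hunt for kill-switch logic in a code sample. The prompt wording, API key, and code snippet are placeholder assumptions; Google has not published the actual pipeline behind Threat Intelligence.

```python
# A minimal sketch, NOT Google's actual Threat Intelligence pipeline:
# asking Gemini 1.5 Pro to flag kill-switch logic via the public
# google-generativeai SDK. Snippet and prompt are hypothetical.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio (assumption)
model = genai.GenerativeModel("gemini-1.5-pro")

# Placeholder stand-in for decompiled malware source, loosely echoing
# WannaCry's domain-check kill switch (domain truncated on purpose).
snippet = """
if (InternetOpenUrlA(h, "http://iuqerfsodp9if...example.test", 0, 0, 0, 0))
    return 0;  /* domain resolves -> stop spreading */
run_payload();
"""

response = model.generate_content(
    "You are a malware analyst. Review this decompiled code and identify "
    "any kill-switch logic, such as a domain check that halts execution:\n"
    + snippet
)
print(response.text)
```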
Another use for Gemini in the threat space is summarizing threat reports into natural language within Threat Intelligence, letting businesses assess how potential attacks could affect them, so that companies neither overreact nor underreact to threats.
Google says Threat Intelligence also draws on a vast network of information to monitor potential threats before an attack occurs, giving users a wider view of the threat landscape and helping them prioritize what to focus on. Mandiant contributes human experts who track potentially malicious groups and consultants who work with companies to block attacks, while VirusTotal's community regularly posts threat indicators.
The company also plans to use Mandiant's experts to assess security vulnerabilities around its own AI projects. Through Google's Secure AI Framework, Mandiant will test the defenses of AI models and assist in red-teaming efforts. While AI models can help summarize threats and reverse engineer malware, the models themselves can fall prey to malicious actors. One such threat is “data poisoning,” in which attackers slip bad data into the material an AI model trains on so that the model misbehaves on certain prompts.
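To make the idea concrete, here is a toy, fully hypothetical sketch using scikit-learn and synthetic data: an attacker flips the labels of training samples in one region of the input space, and the resulting model misclassifies inputs that land there. A real attack on an LLM's training corpus is far more involved; this only illustrates the principle.

```python
# Toy illustration of targeted data poisoning, purely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for "benign" vs. "malicious" samples.
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: flip the label of every training sample in a chosen region,
# mimicking an attacker corrupting the data a model is trained on.
mask = X_train[:, 0] > 0.5
poisoned_y = y_train.copy()
poisoned_y[mask] = 1 - poisoned_y[mask]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

# The poisoned model degrades most on inputs in the targeted region.
target = X_test[:, 0] > 0.5
print("clean accuracy on targeted region:   ", clean.score(X_test[target], y_test[target]))
print("poisoned accuracy on targeted region:", poisoned.score(X_test[target], y_test[target]))
```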
Of course, Google isn't the only company combining AI and cybersecurity. Microsoft has launched Copilot for Security, powered by GPT-4 and Microsoft's cybersecurity-specific AI models, which lets cybersecurity professionals ask questions about threats. Whether either is truly a good use case for generative AI remains to be seen, but it's nice to see the technology used for something other than photos of a swaggy Pope.