Salt Security this week revealed that it has incorporated a generative artificial intelligence (AI) assistant called Pepper into its application programming interface (API) security platform.
Pepper provides a natural language interface through which cybersecurity teams can run queries to discover, for example, how to configure the platform without having to read hundreds of pages of documentation. The tool is based on a large language model (LLM) that the company trained on its own product documentation. Through an interface accessible from the existing Salt Security dashboard, Pepper summarizes information in natural language, reducing the time cybersecurity teams spend searching documentation.
Eric Schwake, director of cybersecurity strategy at Salt Security, said Pepper is the first in a series of AI investments the company will add to its platform in the coming months.
In general, advances in generative AI in the field of cybersecurity are coming in two phases. Typically, the first phase uses generative AI to streamline support functions and enable people to query a corpus of data using natural language. Salt Security claims that Pepper can reduce the time it takes to surface actionable information by up to 91%.
In the long term, as LLMs' reasoning capabilities continue to evolve, the types of tasks that can be automated will become more complex. The current challenge is that the more sophisticated the reasoning required, the more parameters the underlying LLM needs. And as the size of the LLM increases, so does the cost of querying it.
It's unclear to what extent cybersecurity teams are implementing AI. However, the ability to use generative AI to invoke queries through natural language interfaces is rapidly becoming important. The race continues to extend these capabilities so that cybersecurity teams can automate tasks while ensuring that sensitive data is not accidentally used to train the next iteration of universally accessible general-purpose LLMs.
In the meantime, cybersecurity teams should start developing a generative AI strategy. Many of today's manual cybersecurity tasks create a level of tedium that drives professionals away from the work. Eliminating those boring tasks does not reduce the need for cybersecurity professionals. Rather, existing teams may be able to investigate more security incidents and respond better to cyberattacks. And we know that both the volume and sophistication of attacks will only increase as cybercriminals also adopt AI technology.
The debate right now is probably less about whether cybersecurity teams will leverage AI, and more about how quickly and to what extent.