Today at Google Cloud Next, the company announced several new generative AI enhancements to its security product line in an effort to make it easier to find information from a massive amount of security data by simply asking questions in plain language.
Steph Hay, head of UX for cloud security at Google, says that these new capabilities are designed to do more with less. “We’re really trying to supercharge security with generative AI to mitigate threats, and in particular prevent downstream impacts that our practitioners face today, to reduce the toil that the security teams deal with having to manage a growing attack surface, and really bridge the cyber talent gap,” Hay said at a press event last week.
“AI is enabling security teams to improve their security posture by generating AI summaries to describe threats, by searching for patterns in security data to identify if teams have been targeted or companies have been targeted, and finally, by recommending actions to take both in response to active threats and also to proactively improve security posture,” she said.
For starters, Google acquired the security intelligence firm Mandiant last year for $5.4 billion. It was a hefty price to pay, but Mandiant provides customers with valuable data about security threats, which they can put to work to defend against possible attacks. It's typically a lot of data, though, and even for a highly skilled professional, it's hard to find the nuggets that matter most to your organization.
To help with that, the company is introducing Duet AI in Mandiant Threat Intelligence, which helps security teams make sense of the mass of information they are seeing by providing relevant summaries that let them quickly grasp the nature of a particular threat. Whether this is useful, however, will hinge on the depth and quality of the summaries, and on how well less skilled analysts can understand the information they are getting.
Duet AI for Chronicle Security Operations helps teams ask deeper questions about whether a particular threat is a danger to their company and, more importantly, how to respond to it, without requiring knowledge of the specific syntax of the query language the tool uses. The usefulness of these answers could depend on whether the practitioner is asking good questions, and on the quality of the summary and recommendations the model gives back.
Finally, Duet AI in Security Command Center enables less experienced security analysts to ask questions about the nature of a threat to the company's operations, providing analysis of security findings, potential attack paths and possible proactive actions they could take.
All of these features take advantage of generative AI to help teams, especially those with less experience, better understand the nature of security threats. Depending on the quality of the answers, they have the potential to make every analyst a little better.
Of course, the hallucination problem, where large language models make things up when they don't have a clear answer, could be a huge issue when it comes to security. But Nenshad Bardoliwalla, AI/ML product leader for Vertex AI at Google Cloud, says that grounding responses in a more limited dataset, based on the information in these security tools, could help mitigate the problem, at least to some extent.
“We believe that a comprehensive set of grounding capabilities on authoritative sources is one way that we can provide a means of controlling the hallucination problem and making it more trustworthy to use these systems,” Bardoliwalla said.
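The grounding approach Bardoliwalla describes can be illustrated in miniature: retrieve the most relevant entries from an authoritative dataset, then instruct the model to answer only from those entries. The sketch below is purely illustrative and does not reflect Google's actual implementation or APIs; the corpus, function names and prompt wording are all assumptions for demonstration.

```python
# Illustrative sketch of grounding a model's answer in authoritative sources
# to limit hallucination. Not Google's actual API; all names are hypothetical.

def retrieve(query, corpus):
    """Return the corpus entries sharing the most keywords with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(reverse=True)
    return [doc for score, doc in scored if score > 0][:2]

def grounded_prompt(question, sources):
    """Build a prompt that constrains the model to the retrieved sources."""
    context = "\n".join(f"- {s}" for s in sources)
    return (
        "Answer using ONLY the sources below. "
        "If they do not contain the answer, say you don't know.\n"
        f"Sources:\n{context}\n"
        f"Question: {question}"
    )

# Hypothetical security findings standing in for a threat-intelligence store.
corpus = [
    "CVE-2023-1234 affects the login service and is patched in v2.1.",
    "The finance team reported phishing emails on Tuesday.",
]
question = "Which service does CVE-2023-1234 affect?"
print(grounded_prompt(question, retrieve(question, corpus)))
```

Because the prompt forbids answering beyond the retrieved context, a model that can't find support in the sources is steered toward admitting ignorance rather than inventing an answer, which is the essence of the approach described above.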
The three security-related Duet AI products are available in preview now and will be generally released later this year.