3 key metrics for cybersecurity product managers

The conventional product management wisdom holds that one of the responsibilities of a product leader is to track and optimize metrics: quantitative measurements that reflect how people benefit from a specific solution. Anyone who has read product management books, attended workshops or simply gone through an interview knows that what is not measured cannot be managed.

The practice of product management is, however, much more nuanced. Context matters a lot, and the realities of different organizations, geographies, cultures and market segments heavily influence what can be measured and what actions can be taken based on those measurements. In this article, I look at cybersecurity product management and why the metrics product leaders are tempted to track and report on may not be what they seem.

Detection accuracy

Although not all cybersecurity products are designed to generate detections, many are. Detection accuracy is a metric that applies to security tooling that triggers alerts notifying users that a specific behavior has been detected.

Two types of metrics are useful to track in the context of detection accuracy:

  • False positives (a false alarm, when the tool triggers a detection on normal behavior).
  • False negatives (a missed attack, when the tool misidentifies an attack as normal behavior and does not trigger a detection).
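
These two error rates can be computed directly from alert-triage counts. Here is a minimal sketch, assuming hypothetical numbers of true/false positives and negatives (all values below are illustrative, not from any real product):

```python
# Minimal sketch: false positive rate and false negative rate from
# hypothetical alert-triage counts. All numbers are illustrative.

def detection_rates(tp, fp, tn, fn):
    """Return (false_positive_rate, false_negative_rate)."""
    fpr = fp / (fp + tn)  # share of benign events that triggered an alert
    fnr = fn / (fn + tp)  # share of real attacks that went undetected
    return fpr, fnr

fpr, fnr = detection_rates(tp=90, fp=40, tn=960, fn=10)
print(f"FPR: {fpr:.1%}, FNR: {fnr:.1%}")  # FPR: 4.0%, FNR: 10.0%
```

Note that the two rates have different denominators: a low false positive rate can still mean hundreds of false alarms per day when the volume of benign events is large.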

Security vendors face a serious and, I dare say, impossible-to-win challenge: reducing the number of false positives and false negatives and bringing both as close to zero as possible.

The reason it is impossible to accomplish this is that every customer’s environment is unique and applying generic detection logic across all organizations will inevitably lead to gaps in security coverage.

Product leaders need to keep in mind that false positives breed alert fatigue, making it more likely that a real, critical detection will be missed, while false negatives mean the product is not doing the job it was bought to do.

Conversion rate

Conversion rate is one of the most important metrics that companies, and by extension product teams, obsess over. It tracks the percentage of all users or visitors who take a desired action.

Who owns conversion in the organization depends on who can influence the outcome. For example:

  • If the product is fully sales-led and whether the deal gets closed is in the hands of sales, then conversion is owned by sales.
  • If the product is fully product-led and whether a free user becomes a paying customer is in the hands of product, then conversion is owned by marketing and product teams (marketing owns the sign-up on the website, product owns in-app conversion).

Conversion rate is an incredibly important metric, as it reflects how efficiently the company captures customer interest and translates it into revenue. Although cybersecurity product managers should monitor conversion closely, they must understand some nuances that surround it.

In cybersecurity, the buying process is incredibly complex: Not only does it rely heavily on trust but it also takes a long time and involves a large number of people — security practitioners, CISOs, legal, compliance, purchasing and so on (most of the cybersecurity market is enterprise sales).

Even so-called product-led companies are not an exception: The in-product experience is just one of many factors that influence the buying process. Moreover, there is typically no one-to-one relationship between users and customers: Ten anonymous users who signed up at different times with gmail.com addresses could all be part of the same security team evaluating the tool from different perspectives. Tracking conversion is therefore best done at the customer level rather than the user level.
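
The difference between the two levels is easy to see in code. The sketch below groups trial sign-ups by a hypothetical customer account (the user IDs, account names and conversion flags are all made up for illustration):

```python
# Sketch: conversion measured at the customer (account) level vs. the
# user level. Sign-ups and account mappings below are hypothetical.

from collections import defaultdict

# (user_id, customer_account, converted) - illustrative trial data
signups = [
    ("u1", "acme", False), ("u2", "acme", False), ("u3", "acme", True),
    ("u4", "globex", False), ("u5", "globex", False),
    ("u6", "initech", True),
]

user_rate = sum(c for _, _, c in signups) / len(signups)

accounts = defaultdict(bool)
for _, account, converted in signups:
    accounts[account] |= converted  # an account converts if any of its users do

customer_rate = sum(accounts.values()) / len(accounts)

print(f"user-level: {user_rate:.0%}, customer-level: {customer_rate:.0%}")
# user-level looks weak (2 of 6 users), yet 2 of 3 accounts converted
```

In practice, mapping anonymous users to accounts is the hard part (email domains, IP ranges and sales-provided account lists are common heuristics), but the aggregation itself is straightforward.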

Usage and engagement

Traditionally, product teams are tasked with growing engagement and ensuring that users not only come back but also spend a lot of time in the product. This is driven by the understanding that the more time users spend in the product, the more likely they are to experience real value, develop a “habit” and continue using it.

Low usage is often linked to a higher probability of churn while high user engagement helps companies validate that they are solving an important problem, raise capital (investors are looking for revenue and engagement), and identify new expansion opportunities. However, this approach doesn’t work the same way in cybersecurity.

Let’s start by looking at a real example. Say a company bought a security orchestration, automation and response (SOAR) platform to eliminate unnecessary manual tasks and simplify the communication between many of its security products. At first, the security team will be very active in configuring workflows, setting up automation and creating playbooks to make the security operations center (SOC) run smoothly.

After the initial setup is done, the team will stop engaging with the product, and the vendor may see only one or two users log in twice a month to tweak some configuration here and there. While conventional wisdom may ring the alarm that the customer is disengaged and at high risk of churn, the reality might be quite the opposite: The product is saving the security team a lot of time, and the team is happy that everything works smoothly — so smoothly, in fact, that it doesn’t even need to check on the product often.

This example highlights a simple yet often forgotten truth: Most cybersecurity products that add a lot of value to the security stack are invisible. Having to navigate 20-70 tools, security teams will only actively engage with a limited few, while leaving the rest to do their job in the background.

That’s why product teams must adopt a different angle when seeking to understand if the product is being used: Instead of seeking engagement, they can track the flow of data, alerts and other core tasks that show the product is alive.
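A health check along these lines can compare login recency against signals of core work. The sketch below is one possible shape for such a check; the field names, thresholds and activity numbers are all assumptions for illustration:

```python
# Sketch: gauging product "liveness" from the flow of core work (playbook
# runs, events processed) instead of user logins. Data is illustrative.

from datetime import datetime, timedelta, timezone

now = datetime(2024, 6, 1, tzinfo=timezone.utc)

# Hypothetical per-customer activity signals
activity = {
    "last_login": now - timedelta(days=21),         # looks "disengaged"
    "last_playbook_run": now - timedelta(hours=2),  # but automation is busy
    "events_processed_7d": 48_211,
}

def is_healthy(activity, now, max_idle=timedelta(days=1)):
    """Healthy if core work keeps flowing, regardless of login recency."""
    return (now - activity["last_playbook_run"] <= max_idle
            and activity["events_processed_7d"] > 0)

print(is_healthy(activity, now))  # True: quiet users, busy product
```

A customer whose playbooks stop firing or whose event volume drops to zero is a far stronger churn signal than a login counter ever will be.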

After learning that an average security team uses tens of product dashboards to do its job, many vendors became excited about the idea of a “single pane of glass”: one screen to rule them all. Every vendor wants their product to be the one dashboard left standing, and with that comes a constant fight for users’ attention.

Product leaders need to understand this reality and embrace the fact that being a well-loved tool that works great in the background may be a better idea than being a shiny dashboard everyone complains about.

Focusing on what matters

Building high-performing products that solve hard security problems is hard, and taking them to market is even harder. It is no wonder product leaders are tempted to look for shortcuts and easy ways to measure how they are tracking toward their goals. Unfortunately, although quantitative metrics have value, they can easily become a distraction.

The vast majority of cybersecurity companies are early-stage startups operating primarily in the B2B space. Security teams themselves are quite small, too. In practical terms, this means product managers see a low volume of users and in-product activity, and metrics such as daily active users (DAU) and monthly active users (MAU) don’t provide a lot of useful information. In this environment, relying on techniques like A/B testing or making decisions based on quantitative data alone is not feasible.

To identify user needs and focus on what matters, product managers need to look for qualitative insights. I have found the following to be quite useful:

  • Talking to sales and sales engineering teams to get visibility into the sales process — what questions people ask, what problems they are looking to solve, what gaps they have experienced in other solutions, etc. Sometimes, being a “fly on the wall” on a sales call can provide more insights than several customer surveys combined.
  • Working with sales on conducting win-loss analysis. In particular, it can be very useful to connect with prospects who decided to go with competitive offerings and learn about their evaluation process, opportunities for improvement and gaps they’ve identified.
  • Analyzing the questions coming into support can surface which areas of the product are hard to use, which critical features are missing and which are hard to discover.

Most importantly, practicing continuous discovery habits and talking to users, prospects and non-users will help PMs understand the needs, pain points and challenges cybersecurity teams are facing, and then design products that solve them.

When in doubt, it’s always better to ask a question and understand the actual motivators of users instead of going with what the data, devoid of a broader context, seems to be suggesting.