Building a PLG motion on top of usage-based pricing

I spent several years as a general manager at Amazon Web Services, and my teams launched two Tier 1 services: Amazon CloudSearch and Amazon OpenSearch.

Like every product at AWS, these services were scaled with a product-led growth (PLG) go-to-market motion. There were no gated features or subscription tiers to choose from. Instead, a usage-based pricing (UBP) model charged customers based on consumption, directly correlated to the value being delivered to the user.

AWS pioneered the product-led movement by offering its entire suite of services on a fully pay-as-you-go basis when it came to market in 2006. Developers could immediately begin using the services in the free tier to realize value, and there was no mandatory bundling and no account managers gatekeeping access. AWS was far ahead of its time; it is only now that we are seeing companies of all sizes pivot to a product-led motion.

Usage-based pricing is an essential component of any PLG strategy. In fact, it’s my belief that you cannot have true PLG without UBP. To be truly product-led, there should be no friction in adopting the product and realizing value. The doors should be wide open, and new users should be able to come in and use the tools.

To succeed with any product-led strategy, it’s essential to have real-time awareness and granular visibility into what your users are actually doing with the product, which features are being used and how value is being realized. The following steps are informed by the process we followed when launching and scaling services at AWS from day one. This process lets you remove emotion and gut feel from pricing and product decisions, allowing customer usage and consumption to lead your decision-making and help your business scale.

Step 1: Invest in usage instrumentation

Usage data is the foundational building block of any product-led motion. Usage provides the intelligence that drives all other functions, from pricing and packaging to sales, support engagements and even product roadmap development. This data shows which features are driving traffic and adoption and where you need to scale your efforts to continue meeting user needs.

Most organizations cannot easily implement usage instrumentation at scale, and the existing tool sets for monitoring and observability do not deliver on requirements for total accuracy and auditability. Metering solutions were born out of this need for a new category of technology that could accurately track usage for any resource, at any scale, in real time and make this data available for analytics and reporting.
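To make this concrete, here is a minimal sketch of what a metered usage event and its instrumentation call might look like. The schema and the record_usage() helper are hypothetical, not any particular metering product’s API; the point is that every unit of consumption becomes an immutable, timestamped record tied to a customer and a meter.

```python
# Minimal sketch of a metered usage event (field names are hypothetical).
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass(frozen=True)
class UsageEvent:
    customer_id: str          # who consumed the resource
    meter: str                # what was consumed, e.g. "api_calls", "gb_stored"
    value: float              # how much was consumed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    dimensions: dict = field(default_factory=dict)  # region, feature, plan, ...


def record_usage(event: UsageEvent) -> None:
    """Append the event to the metering pipeline (stdout here for illustration)."""
    print(json.dumps(asdict(event)))


# Instrumenting a feature is then a one-liner at the point where value is delivered.
record_usage(UsageEvent("cust_42", "documents_indexed", 1250.0,
                        dimensions={"region": "us-east-1", "feature": "search"}))
```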

Continue to instrument new features and products as they are developed to curate an exhaustive set of usage data that you can analyze and base business decisions on.

Step 2: Make usage data available throughout your business

Consistently assess and ensure that each department has access to the insights they need to do their work. In a product-led motion, the product is the primary vehicle through which you engage with customers; other go-to-market actions are designed to provide support. Roles across the organization can effectively leverage usage data with the correct tools and strategies.

Ensure that your metering pipeline is the single source of truth for data on usage and consumption. Take care to identify what product and usage data each role across your organization needs to be successful. The data should be indexed and accessible to permissioned users.

Some common examples include:

  • Integrating the usage pipeline with CRM solutions so customer-facing teams like sales and support have detailed, real-time visibility into exactly how customers are using the product. This can help inform personalized, proactive outreach for cross-selling and upselling, and for setting up positive support experiences that build trust and goodwill.
  • Recognizing revenue in real time as usage occurs. Revenue recognition in usage-based pricing is more complicated; the entire contract value cannot be fully booked at the moment of payment (in the common case where customers prepay for bulk usage) but instead needs to be recognized in real time as the corresponding usage takes place (see the sketch after this list).
  • Identifying features with high or low adoption to inform marketing efforts. Develop case studies and collateral to demonstrate value for high-adoption features to maintain momentum, while creating content and campaigns to increase awareness of low-adoption areas.
  • Informing product management and engineering. A granular, real-time view into product usage and consumption is the holy grail for product and engineering teams in a product-led business. Use this data to inform roadmap development and feature prioritization based on what customers are using and where gaps emerge.
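As an illustration of the revenue-recognition point above, here is a minimal sketch, with made-up figures, of how prepaid contract value might be recognized only as metered usage is consumed rather than booked at the moment of payment.

```python
# Minimal sketch of usage-based revenue recognition (hypothetical figures).
# A customer prepays for a block of usage; revenue is recognized only as
# the corresponding usage is actually consumed, not when the cash arrives.

PREPAID_CONTRACT_VALUE = 12_000.00   # paid up front
CONTRACTED_UNITS = 1_000_000         # units of usage covered by the prepayment
PRICE_PER_UNIT = PREPAID_CONTRACT_VALUE / CONTRACTED_UNITS


def recognized_revenue(units_consumed_to_date: int) -> float:
    """Revenue that can be recognized so far, capped at the contract value."""
    consumed = min(units_consumed_to_date, CONTRACTED_UNITS)
    return round(consumed * PRICE_PER_UNIT, 2)


# After 250,000 units of metered usage, only a quarter of the contract value
# is recognizable; the remainder stays on the books as deferred revenue.
consumed = 250_000
earned = recognized_revenue(consumed)
deferred = PREPAID_CONTRACT_VALUE - earned
print(f"recognized: ${earned:,.2f}, deferred: ${deferred:,.2f}")
```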

Step 3: Analyze usage data to understand value drivers and usage patterns

Having this data aggregated and organized allows you to eliminate decisions based on “gut feel” or “instinct.” It is critical to know precisely how customers are using your product. While a savvy operator may have an intuitive sense of usage and adoption, digging into the data will always yield fresh and surprising results. Don’t become complacent; always use customer usage as the north star metric to guide your business.

That said, take care not to let analytics become a substitute for interacting with customers. When you have robust instrumentation in place, you should be able to clearly see which features or areas of the product are driving adoption and delivering value. But real-world interactions add nuance and color to the overall user story that are invaluable for fully understanding customer needs and pain points.

Over time, you should begin to uncover more sophisticated insights from the usage pipeline, such as checkpoints on the typical customer onboarding journey from signup onward, or the key levers for activating, growing and retaining users on the platform.
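As one illustration of this kind of analysis, the sketch below derives simple onboarding checkpoints from raw usage events; the meter names and thresholds are hypothetical, but the underlying idea is that the same metering pipeline that drives billing can answer lifecycle questions.

```python
# Sketch of deriving onboarding checkpoints from raw usage events.
from collections import defaultdict

# (customer_id, meter, value) tuples pulled from the metering pipeline
events = [
    ("cust_1", "signup", 1), ("cust_1", "documents_indexed", 500),
    ("cust_1", "queries", 2_000),
    ("cust_2", "signup", 1), ("cust_2", "documents_indexed", 10),
]

totals: dict[tuple[str, str], float] = defaultdict(float)
for customer, meter, value in events:
    totals[(customer, meter)] += value


def checkpoint(customer: str) -> str:
    """Classify where a customer sits on a (hypothetical) onboarding journey."""
    if totals[(customer, "queries")] >= 1_000:
        return "activated"      # running real query workloads
    if totals[(customer, "documents_indexed")] > 0:
        return "evaluating"     # loaded data but not yet querying at volume
    return "signed_up"


for customer in {c for c, _, _ in events}:
    print(customer, checkpoint(customer))
```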

Complement these usage analytics with proactive customer outreach and interactions to build deeper relationships and complete the full picture of users’ goals, pain points and typical use cases. The insights gathered from the usage pipeline and customer feedback should be fed into a continuous feedback loop used by all functional areas of the business to operate and scale successfully.

Step 4: Leverage usage and adoption data to inform product pricing

When the time comes to make decisions about product packaging and pricing, the first place you turn to should be the metering pipeline for historical usage data. Meet with the associated product management team and lean on the usage data to answer the following questions:

  1. Which elements of product usage are most tightly aligned with customer use and value realization?
  2. What is an appropriate scaling function for this metric?
  3. How do the usage patterns vary across time, industry, company size and so on? Look to understand the factors that are correlated with usage.
  4. What is the goal of this product launch? Is it a land grab where maximum adoption is prioritized, or is the goal to increase profitability?
  5. Which set of metrics scales correctly and aligns with your business goals? For example, if the goal is maximum adoption, charging based on the number of users may not be the best choice because it disincentivizes new users from signing up.

Following this process, you should be able to remove the guesswork and identify the appropriate vectors for fees. Once you’ve done that, you can set the pricing. This is where it becomes critical to have a wealth of granular historical data representative of your user base so you can back-test pricing models. Without ample instrumentation, you will be guessing and estimating to arrive at product pricing.

With historical usage data, you can apply the pricing logic from your candidate pricing plans to see the revenue that usage would have generated. It is important to iterate on this process over time, and as usage profiles change, to ensure your pricing model remains optimal and aligned with your business goals.
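Here is a minimal sketch of that back-testing loop, with illustrative usage records and candidate plans standing in for the data that would come from the metering pipeline.

```python
# Sketch of back-testing candidate pricing plans against historical usage.

# Historical monthly usage per customer (illustrative records).
historical_usage = [
    {"gb_stored": 50,  "queries": 200_000,   "active_users": 4},
    {"gb_stored": 900, "queries": 15_000,    "active_users": 12},
    {"gb_stored": 10,  "queries": 1_200_000, "active_users": 2},
]

# Candidate plans expressed as per-unit rates over metered dimensions.
candidate_plans = {
    "storage_plus_query": {"gb_stored": 0.10, "queries": 0.00005},
    "per_seat":           {"active_users": 49.0},
}


def backtest(plan: dict[str, float], usage: list[dict[str, float]]) -> float:
    """Total revenue this plan would have generated over the historical usage."""
    return sum(
        sum(rate * record.get(metric, 0.0) for metric, rate in plan.items())
        for record in usage
    )


for name, plan in candidate_plans.items():
    print(f"{name}: ${backtest(plan, historical_usage):,.2f}")
```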

An example from AWS

I can tie this together with a real-world example from my time with Amazon OpenSearch. We were preparing to release the service to market and had to identify an optimal pricing model.

We dove into the usage data from the metering pipeline to understand the patterns and usage profiles. We found that there are two main categories of users for most search use cases:

  1. High storage with low query, where there is a lot of data stored and indexed but that data is queried less frequently.
  2. Low storage with high query, where there is less data to manage but queries are run more frequently on the data.

From this exercise, we identified that the amount of data stored and the query volume make suitable vectors for billing. After we built pricing around this and presented it to leadership, it emerged that the model was over-engineered and didn’t adequately address the full spectrum of variance in use cases.

So we reconvened and considered the data again. The solution was to simplify the pricing model so it was based only on data egress (as with all AWS services), data storage, query volume and a reindexing charge applied to protect against usage variance. If data needs to be reindexed for a new use case (changing requirements), an additional charge applies to the customer account.
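To illustrate the shape of that simplified model, here is a sketch of a pricing function over those four vectors. The rates are made-up placeholders, not actual AWS OpenSearch pricing.

```python
# Sketch of the simplified pricing shape described above: storage, query
# volume, data egress and a reindexing charge. All rates are illustrative.

RATES = {
    "gb_stored":    0.08,     # per GB-month stored and indexed
    "queries":      0.00004,  # per query
    "gb_egress":    0.09,     # per GB transferred out
    "gb_reindexed": 0.05,     # per GB reindexed when requirements change
}


def monthly_charge(usage: dict[str, float]) -> float:
    """Apply the per-unit rates to one customer's metered monthly usage."""
    return round(sum(RATES[m] * usage.get(m, 0.0) for m in RATES), 2)


# A high-storage / low-query profile vs. a low-storage / high-query profile.
print(monthly_charge({"gb_stored": 2_000, "queries": 50_000, "gb_egress": 10}))
print(monthly_charge({"gb_stored": 40, "queries": 3_000_000, "gb_egress": 120,
                      "gb_reindexed": 40}))
```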