The Value Metric Decision: How to Choose the Number Your Pricing Hangs On
Every usage-based pricing model has a value metric at its center — the unit that customers buy and consume, the number that grows as they get more value. Choosing it is the most important pricing decision you'll make. Get it wrong and you're charging customers for the wrong thing, creating misaligned incentives, and building a pricing model that fights your product's growth mechanics instead of amplifying them.
The value metric is what separates great pricing from pricing that merely functions. Stripe charges per transaction. Twilio charges per message sent. Databricks charges per DBU (Databricks Unit of compute). None of these is arbitrary — each is the result of deliberate analysis of what correlates most directly with customer value.
The 5 Criteria for a Great Value Metric
1. It scales with customer success
The best value metrics grow when customers are getting more value from your product. Stripe's transaction-based pricing grows when merchants process more payments — i.e., when their business grows. Twilio's per-message pricing grows when developers build products that send more messages — i.e., when their applications scale. The metric should go up when things are going well for the customer, not when they use an arbitrary product feature more often.
2. It's easy for customers to predict and understand
Customers need to be able to budget for your product. If the value metric is opaque or requires a statistics PhD to forecast, you'll get friction at procurement, surprises at billing time, and churn conversations that start with "I didn't understand how you charge." DBUs are complex but Databricks provides estimators and cost controls. Per-message is simple. Per-"credit" is usually fine. Per-"processing unit" for an undefined unit is not.
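Predictability in practice usually means the customer can forecast spend from a single number they already track. A minimal sketch of such an estimator, using an invented tiered per-message rate card (the tier boundaries and rates are illustrative assumptions, not any vendor's real pricing):

```python
# Hypothetical rate card for a per-message value metric.
# Tiers and prices are made up for illustration only.
TIERS = [
    (100_000, 0.0075),       # first 100k messages at $0.0075 each
    (900_000, 0.0050),       # next 900k at $0.0050 each
    (float("inf"), 0.0025),  # everything beyond 1M at $0.0025 each
]

def estimate_monthly_cost(messages: int) -> float:
    """Forecast monthly spend from one input the customer already knows."""
    cost, remaining = 0.0, messages
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(cost, 2)
```

If a customer can run something like this on a napkin, procurement conversations get easier; if forecasting requires a workload simulator, you owe customers estimators and cost controls, as Databricks does for DBUs.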
3. It aligns your cost structure with revenue
A value metric that drives more revenue when customers use more of your expensive infrastructure is good. A value metric that drives more revenue when customers do something that costs you nothing is better. Stripe's infrastructure cost scales with transaction volume — that works. Twilio's infrastructure cost scales with messages — that works. A metric that generates revenue when customers view dashboards (low cost to you) is asymmetrically good. A metric that generates revenue when customers run ML inference (high cost to you) needs careful margin analysis.
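The asymmetry above can be made concrete with a per-unit margin check. A minimal sketch, using invented unit economics (the prices and costs below are assumptions, not real vendor numbers):

```python
# Illustrative margin check for candidate value metrics.
# All prices and costs are made-up numbers for demonstration.
def unit_margin(price_per_unit: float, cost_per_unit: float) -> float:
    """Gross margin fraction for one billable unit of the metric."""
    return (price_per_unit - cost_per_unit) / price_per_unit

candidates = {
    # cheap-to-serve action: almost pure margin per unit
    "dashboard_view": unit_margin(price_per_unit=0.01, cost_per_unit=0.0001),
    # expensive-to-serve action: margin needs careful analysis
    "ml_inference": unit_margin(price_per_unit=0.01, cost_per_unit=0.006),
}
```

A metric whose unit margin collapses at scale forces repricing later; running this check per candidate metric before launch is cheap insurance.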
4. It's measurable without customer effort
Your metering needs to be automatic. If customers have to self-report usage, you have a trust problem and an operational nightmare. The value metric should be derived from events that your system generates automatically as a byproduct of normal product usage. If you're asking customers to submit reports of how much value they got, you've already lost.
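The "byproduct of normal product usage" point can be sketched as an aggregation over events the system already emits. The event shape and field names here are assumptions for illustration:

```python
# Minimal sketch of automatic metering: billable units are derived
# from events the product already generates, with no self-reporting.
from collections import defaultdict

def aggregate_usage(events):
    """Roll raw product events up into per-customer billable units."""
    usage = defaultdict(int)
    for event in events:
        if event["type"] == "message.sent":  # the metered event type
            usage[event["customer_id"]] += 1
    return dict(usage)

events = [
    {"customer_id": "acme", "type": "message.sent"},
    {"customer_id": "acme", "type": "dashboard.viewed"},  # not billable
    {"customer_id": "zeno", "type": "message.sent"},
]
```

The key property is that the customer never touches this pipeline: the bill is a pure function of events the system observed.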
5. It doesn't create adversarial optimization
The worst value metrics are the ones customers actively optimize against. Per-token AI pricing drives customers to aggressively compress prompts and reduce generation length. Per-seat pricing drives companies to share accounts. Per-query database pricing drives users to cache aggressively. Some optimization is fine — it shows customers are engaged. But if optimizing the metric means customers get less value from your product, the metric is wrong.
The Anti-Patterns
The most common value metric failures in the wild:
- Per-seat for a product where value is individual — An analytics dashboard where each user has private data has no collaboration value. Charging per seat there is just a tax on the organization, not a measurement of value.
- Storage-based pricing for a product where storage is not the primary value — Charging for data storage when the product is a CRM creates perverse incentives to keep bad data.
- Feature-gated tiers with no usage dimension — If all customers use roughly the same amount regardless of tier, feature tiers are arbitrary barriers, not value metrics. Usage grows with value; feature access doesn't.
- Per-outcome without defining the outcome — Outcome-based pricing sounds compelling until you realize "outcome" is undefined in the contract. This is where AI pricing is right now: a lot of "per conversation" or "per resolution" metrics with meaningfully different interpretations of what counts.
How to Test Before Committing
Before locking in a value metric, test two things. First, run a correlation analysis: take your current customer base and correlate every available usage metric against NRR, LTV, and expansion revenue. Which metric shows the strongest positive correlation with customers who grow? That's your value metric candidate. Orb's blog documents this as "value metric discovery" — a data exercise that often reveals the right metric is different from the intuitive one.
Second, a16z's UBP research recommends a price sensitivity analysis across different usage levels: take 10 customers at each decile of your candidate metric's distribution and analyze their price sensitivity at their current usage level. If customers at 10x usage perceive roughly 10x the value (and retain and expand accordingly), the metric scales well. If customers at 10x usage are price-sensitive at 3x the price, the metric has a ceiling problem.
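The decile mechanics of that test can be sketched as a bucketing exercise: sort customers by the candidate metric, split into usage buckets, and check whether retention holds up at the high end. The data shape is an assumption for illustration:

```python
# Sketch of the decile test: bucket customers by candidate-metric
# usage and compare mean NRR across buckets, low usage first.
def decile_nrr(customers, n_buckets=10):
    """Mean NRR per usage bucket; a sagging top bucket signals a ceiling."""
    ordered = sorted(customers, key=lambda c: c["usage"])
    size = max(1, len(ordered) // n_buckets)
    buckets = [ordered[i:i + size] for i in range(0, len(ordered), size)]
    return [sum(c["nrr"] for c in b) / len(b) for b in buckets]
```

If the top-decile mean NRR is at or above the lower deciles, heavy users are still happy at their price; if it sags, the metric is hitting the ceiling problem described above.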
You get to change your value metric once. Changing it twice signals to customers that you don't understand your own pricing, which is a negotiation gift they will use against you at every renewal.
Sources
- a16z — The Usage-Based Pricing Playbook — value metric selection criteria, price sensitivity analysis methodology
- Orb — Choosing Your Value Metric — value metric discovery process, correlation analysis approach
- Databricks Pricing — DBU Architecture — example of compute unit value metric design
- Stripe Pricing — per-transaction value metric as canonical example