Business
January 20, 2026
7 min read

Why We Stopped Tracking NPS: The Vanity Metric Trap

Our NPS was 72. We celebrated. Then we had our worst churn quarter ever. NPS measures willingness to recommend, not willingness to renew. We switched to tracking leading indicators of churn.


Net Promoter Score was supposed to be the ultimate metric. One number that predicted growth. One question that captured customer loyalty. Bain & Company's research showed NPS leaders outperformed competitors. Every SaaS company tracked it. We tracked it religiously.

Our NPS was 72. Industry average was 35. We were doing great! Customers loved us! We celebrated in all-hands meetings. We put NPS in our investor updates. We gave bonuses tied to NPS improvements.

Then Q3 hit. Our worst churn quarter in company history. 15% of customers didn't renew. Revenue that we'd counted on evaporated.

We went back through the data. Many of the churned customers had given us 9s and 10s on NPS surveys just months before canceling.

"How likely are you to recommend XQA to a colleague?" → 10/10

Two months later: [Cancellation request received]

What happened? NPS measured the wrong thing. Customers might genuinely recommend us—and still cancel because of budget cuts, organizational changes, or competitive options. Willingness to recommend is not willingness to renew.

We stopped tracking NPS as a primary metric. We replaced it with leading indicators of churn that actually predicted retention. Here's why NPS fails and what works instead.

Section 1: What NPS Actually Measures—And What It Doesn't

NPS asks one question: "How likely are you to recommend [product] to a friend or colleague?" Scores of 9-10 are "Promoters," 7-8 are "Passives," and 0-6 are "Detractors." NPS = % Promoters - % Detractors.

This measures sentiment. It does not measure behavior.
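For concreteness, here's that formula as a minimal sketch in Python (the sample scores are hypothetical, not our survey data):

```python
# NPS from raw 0-10 survey responses: % promoters minus % detractors.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 10, 8, 7, 6, 3]))  # 3 promoters, 2 detractors -> ~14.3
```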

The Recommendation ≠ Renewal Gap

A customer might genuinely recommend your product while having no intention of renewing. Consider these scenarios:

Scenario 1: Budget Cuts

"I love XQA. It's the best testing tool I've used. I'd recommend it to anyone."

But: "Our budget got cut 40%. We're eliminating all non-essential tools."

Scenario 2: Organizational Change

"XQA was essential for our QA team. Highly recommend."

But: "We restructured. There is no QA team anymore. We don't need the tool."

Scenario 3: Competitive Switch

"XQA is great for what it does. I'd recommend it."

But: "Our new VP standardized on a competitor. Out of my hands."

In all these cases, NPS is high. Renewal doesn't happen. The metric provided false confidence.

The Responder Bias Problem

Who responds to NPS surveys? Often the most engaged users—the power users, the champions, the people who like the product. The marginal users, the ones at risk of churning, often don't respond.

This creates systematic bias. Your NPS reflects your happiest customers, not your customer base. The signal is skewed toward positivity.

We analyzed response rates by customer health (measured by product usage). High-usage customers responded at 45%. Low-usage customers responded at 12%. Our NPS was being calculated on a biased sample.
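To see how much this can skew the headline number, here's a toy simulation. Only the 45% and 12% response rates come from our data; the customer counts and underlying scores are invented for illustration:

```python
# Toy illustration of responder bias. Counts and "true" scores are made up;
# only the 45% / 12% response rates come from the text above.
def nps(scores):
    return 100 * (sum(s >= 9 for s in scores) - sum(s <= 6 for s in scores)) / len(scores)

happy_responders = [9] * int(500 * 0.45)    # high-usage customers, 45% respond
at_risk_responders = [5] * int(500 * 0.12)  # low-usage customers, 12% respond

survey_sample = happy_responders + at_risk_responders
full_base = [9] * 500 + [5] * 500

print(nps(survey_sample))  # ~58: what the survey reports
print(nps(full_base))      # 0: what the customer base actually looks like
```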

The Point-in-Time Problem

NPS is a snapshot. It captures how someone feels on the day they answer the survey. It doesn't capture trends.

A customer might give you a 10 after a great support interaction, then hit a product bug the next week and start evaluating competitors. By the time NPS catches up, it's too late.

Leading indicators of churn are continuous. NPS is episodic. For predicting behavior, continuous data wins.

Section 2: The Vanity Metric Dynamic—Why High NPS Feels Good

NPS is seductive because it tells a simple story. "72! Above benchmark! Customers love us!"

This simplicity is the problem.

The Board Deck Metric

NPS looks great in investor updates and board decks. It's a single number that can be compared to industry benchmarks. It goes up and to the right (usually). It's universally understood.

But board deck metrics aren't the same as operational metrics. NPS tells you how you're perceived. It doesn't tell you what to do.

When our NPS was 72 and we were celebrating, no one asked: "But are the right customers giving us high scores? Are the high scores converting to renewals? What about the customers who didn't respond?"

The Optimization Trap

When you make NPS a KPI, people optimize for NPS. Customer Success managers time their NPS surveys to follow good experiences. They exclude unhappy customers from survey lists. They coach customers on how to respond.

We discovered that our CSMs were sending NPS surveys right after successful implementations—when sentiment was highest. They avoided surveying customers with open support tickets. The methodology optimized the score, not customer health.

The Denominator Problem

NPS can improve while your business gets worse. Example:

Q1: 100 customers. 60 promoters, 10 detractors. NPS = 50.

Q2: 80 customers (20 churned). 55 promoters, 5 detractors. NPS = 62.5.

NPS went up! But you lost 20 customers. The detractors churned, improving your score. This is mathematically correct and operationally disastrous.
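The same arithmetic in a few lines, if you want to sanity-check it:

```python
# Denominator problem: detractors churning out of the base raises the score.
def nps(promoters, detractors, total):
    return 100 * (promoters - detractors) / total

print(nps(60, 10, 100))  # Q1: 50.0
print(nps(55, 5, 80))    # Q2: 62.5, despite losing 20 customers
```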

Section 3: The Replacement—Leading Indicators of Churn

We replaced NPS with a dashboard of leading indicators that actually predict renewal.

Product Usage Metrics

The strongest predictor of churn: declining usage. If a customer used the product 100 times last month and 50 times this month, they're at risk—regardless of what they say on a survey.

We track:

  • Weekly Active Users (WAU): Trend over 8 weeks
  • Feature adoption: Are they using core features or just dabbling?
  • Login frequency: How often do key stakeholders actually log in?
  • Depth of usage: Are they using 3 features or 30?

When usage declines 20%+ over 4 weeks, we flag for proactive outreach. This catches at-risk customers months before NPS would.
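Here's a minimal sketch of that flag, assuming weekly usage counts per customer. The window-over-window comparison is our interpretation; only the 20%-over-4-weeks threshold is the actual rule:

```python
# Sketch of the usage-decline flag. Data shape (weekly event counts
# per customer) is assumed; the 20% / 4-week threshold is from the text.
def at_risk(weekly_usage, threshold=0.20, window=4):
    """Flag a customer whose average usage over the last `window` weeks
    fell `threshold` or more versus the `window` weeks before that."""
    if len(weekly_usage) < 2 * window:
        return False  # not enough history to judge
    baseline = sum(weekly_usage[-2 * window:-window]) / window
    recent = sum(weekly_usage[-window:]) / window
    return baseline > 0 and (baseline - recent) / baseline >= threshold

print(at_risk([100, 98, 102, 95, 80, 75, 70, 65]))  # True: ~27% decline
```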

Support Ticket Patterns

The nature of support tickets predicts churn:

  • High volume + complex issues: Customer is struggling. At risk.
  • Repeated escalations: Customer is frustrated. At risk.
  • Zero tickets after initial onboarding: Customer may have disengaged. At risk.
  • Moderate ticket volume with quick resolution: Healthy engagement. Likely to renew.

We built a "support sentiment score" that analyzes ticket patterns, not just ticket count.
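We can't share the real model, but a rule-based sketch of the idea looks like this. The thresholds and field names are illustrative, not our production values:

```python
# Illustrative support-signal rules mirroring the patterns listed above.
# Thresholds (2 escalations, 8 tickets, 90 days) are assumptions.
def support_signal(tickets_per_month, escalations, days_since_last_ticket, onboarded):
    if escalations >= 2:
        return "at_risk"  # repeated escalations: frustration
    if tickets_per_month >= 8:
        return "at_risk"  # high volume: struggling
    if onboarded and tickets_per_month == 0 and days_since_last_ticket > 90:
        return "at_risk"  # silence after onboarding: disengaged
    return "healthy"      # moderate volume, quick resolution

print(support_signal(tickets_per_month=3, escalations=0,
                     days_since_last_ticket=12, onboarded=True))  # healthy
```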

Champion Engagement

Many renewals depend on one internal champion. If the champion leaves, the renewal is at risk. We track:

  • Champion login activity (are they still engaged?)
  • Champion job changes (did they leave the company?)
  • Multi-threading (are there backup champions if the primary leaves?)

When a champion goes inactive, CSMs reach out immediately—not to survey them, but to help.
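A sketch of the champion check is below; the 30-day inactivity cutoff and the record fields are hypothetical, since the exact thresholds aren't specified above:

```python
# Champion-inactivity alert. Cutoff and fields are assumptions for illustration.
from datetime import date, timedelta

def champion_alert(last_login: date, still_at_company: bool,
                   backup_champions: int, inactive_days: int = 30) -> bool:
    if not still_at_company:
        return True  # champion left the company: renewal at risk
    stale = date.today() - last_login > timedelta(days=inactive_days)
    return stale and backup_champions == 0  # inactive and no multi-threading

print(champion_alert(date.today() - timedelta(days=45),
                     still_at_company=True, backup_champions=0))  # True
```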

Renewal and Expansion Signals

Direct signals about renewal intent:

  • Did they recently negotiate contract terms?
  • Are they asking about competitors?
  • Did they request a pricing call?
  • Are they adding users (expansion signal) or removing them (contraction signal)?

These behavioral signals are more predictive than "would you recommend us?"

Section 4: Assembling the Customer Health Score

We combined these indicators into a composite "Customer Health Score"—our replacement for NPS.

The Model

Health Score = weighted combination of:

  • Usage trends: 30% weight (strongest predictor)
  • Support patterns: 20% weight
  • Champion engagement: 20% weight
  • Renewal/expansion signals: 15% weight
  • Contract characteristics: 15% weight (tenure, payment history, etc.)

Each customer gets a score from 0-100. Below 40 = red (high churn risk). 40-70 = yellow (monitor). Above 70 = green (healthy).
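As code, the composite is straightforward. The weights and bands below come straight from the model above; the one assumption is that each component has already been normalized to a 0-100 scale:

```python
# Composite Customer Health Score: weights and bands from the text;
# pre-normalized 0-100 component scores are an assumption.
WEIGHTS = {
    "usage_trend": 0.30,
    "support_patterns": 0.20,
    "champion_engagement": 0.20,
    "renewal_signals": 0.15,
    "contract": 0.15,
}

def health_score(components: dict) -> tuple:
    score = sum(WEIGHTS[k] * components[k] for k in WEIGHTS)
    band = "red" if score < 40 else "yellow" if score <= 70 else "green"
    return round(score), band

print(health_score({"usage_trend": 35, "support_patterns": 60,
                    "champion_engagement": 20, "renewal_signals": 50,
                    "contract": 80}))  # (46, 'yellow')
```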

How We Use It

Red customers: Immediate CSM outreach. Understand the problem. Offer solutions. Escalate if needed.

Yellow customers: Proactive check-in. "How's everything going? Anything we can help with?"

Green customers: Expansion opportunities. "You're getting great value. Would more users help?"

The Health Score is operational. It tells you what to do with each customer. NPS just told you how customers felt (maybe).

Validation

We backtested against 2 years of churn data. Health Score predicted churn with 78% accuracy at 60 days out. NPS predicted churn with 31% accuracy—basically random.
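If you want to run this kind of check on your own data, the scoring itself is simple. The predictions and outcomes below are placeholders, not our dataset:

```python
# Scoring a backtest: compare flags made 60 days out against actual renewals.
# Inputs here are fabricated placeholders for illustration.
def accuracy(predicted_churn, actually_churned):
    hits = sum(p == a for p, a in zip(predicted_churn, actually_churned))
    return hits / len(predicted_churn)

pred   = [True, False, True, False, True, False]   # flagged at 60 days out
actual = [True, False, False, False, True, False]  # churned or renewed
print(accuracy(pred, actual))  # ~0.83
```

Since churners are usually a minority of the base, precision and recall at the 60-day horizon are worth reporting alongside raw accuracy.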

Health Score isn't perfect. But it's dramatically better than NPS at predicting actual behavior.

Conclusion: Measure Behavior, Not Sentiment

NPS measures what customers say. Health Score measures what customers do. Words and actions diverge—especially in business relationships.

A customer might say "I love you" while their usage declines, their champion leaves, and their budget gets cut. NPS captures the words. The words don't predict the outcome.

We still collect NPS data. We just don't treat it as a primary metric. It's one input among many, heavily discounted because of its poor predictive value.

If you're using NPS as your primary customer health metric, you're flying blind—feeling good about high scores while churn builds underneath.

Replace vanity metrics with behavioral metrics. Track what customers do, not what they say. Build systems that catch at-risk customers early, when intervention is possible.

Customers don't churn because they stopped recommending you. They churn because they stopped using you. Measure usage, not recommendations.

Tags: Business, Tutorial, Guide

Written by XQA Team

Our team of experts delivers insights on technology, business, and design. We are dedicated to helping you build better products and scale your business.