
How to Build a Customer Health Score Dashboard for Your SaaS

How to build a customer health score dashboard for your SaaS: the 5 signals that predict churn, how to weight them, and the playbook for red and yellow accounts.

Davaughn White·Founder
16 min read

A customer health score is the most overrated and most under-implemented metric in SaaS.

Overrated because most teams build a fancy 0-100 score nobody acts on. It sits in a Looker dashboard, gets glanced at on Monday morning, and dies. Under-implemented because the simple version — five weighted signals, color-coded red/yellow/green, refreshed weekly, with a clear playbook attached — actually works. Teams that ship the simple version see churn drop 15-30% within two quarters. Teams that try to ship the perfect version usually ship nothing.

This guide is the playbook for the simple version. What signals to pick, how to weight them, what the dashboard should look like, where to build it, and what to do when an account turns red. By the end you should have a dashboard you can stand up in a week and a CSM playbook your team will actually run.

What a customer health score should answer

Before you pick signals or weights, get clear on what the score is for. A health score is not a vanity number. It is a triage tool. It should answer three questions, in priority order:

  • Who is about to churn? Which accounts in the next 30-60 days are most likely to cancel, downgrade, or stop paying — so a CSM can intervene before the renewal conversation goes south.
  • Who is about to expand? Which accounts are using the product so deeply that they are ready for an upsell, additional seats, or a higher tier — so account management can run a structured expansion play.
  • Who needs intervention right now? Accounts that are not in immediate churn danger but are showing a pattern (declining logins, unanswered support tickets, ignored features) that compounds into churn 90 days from now.

If your dashboard does not answer those three questions in under thirty seconds, it is decoration. The whole point is to convert a noisy stream of product, support, and billing data into a short list of accounts your team can act on this week.

The 5 signals that actually predict churn in SaaS

There are dozens of signals you could track. Most teams start with too many, drown in data engineering, and never ship. The honest answer is that five signals cover roughly 85% of the predictive power for most B2B SaaS products. Here they are, ordered by how much they typically contribute to a usable score.

Signal 1: Product engagement frequency

How often do users from this account log in and complete a meaningful action? Not page views — meaningful actions. For a CRM, that is creating a contact, sending an email, logging a call. For a project tool, it is creating tasks, completing tasks, commenting. For an analytics tool, it is running a query or building a chart.

The metric to track: weekly active users (WAU) per account, trended over the last 8 weeks. The signal you care about is not the absolute number — it is the slope. A 20-user account averaging 12 WAU that drops to 6 WAU is in trouble. A 5-user account averaging 4 WAU that holds steady is fine.

Quick formula for a 0-100 sub-score: `engagement_score = min(100, (current_WAU / baseline_WAU) * 100)` where baseline is the trailing 4-week average from weeks 5-8 ago. Anything below 60 starts costing the account points.
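As an illustrative sketch (function and parameter names are hypothetical, assuming you can pull eight weeks of per-account WAU counts), the formula looks like this in Python:

```python
def engagement_score(wau_history: list[int]) -> float:
    """Engagement sub-score: current WAU vs the trailing baseline.

    wau_history holds the last 8 weeks of WAU for one account,
    oldest first. Baseline is the average of weeks 5-8 ago.
    """
    if len(wau_history) < 8:
        return 100.0  # not enough history: assume healthy rather than alarm
    baseline = sum(wau_history[:4]) / 4  # weeks 5-8 ago (oldest four)
    current = wau_history[-1]            # most recent week
    if baseline == 0:
        return 0.0 if current == 0 else 100.0
    return min(100.0, current / baseline * 100)
```

The account from the example above (12 WAU baseline dropping to 6) scores 50, well below the 60 threshold where the account starts losing points.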

Signal 2: Feature breadth (depth of adoption)

Accounts that use one feature churn. Accounts that use four or five features rarely do. The mechanism is obvious: every feature an account integrates into their workflow becomes a switching cost.

Define 5-8 "core features" of your product — the ones that, if a customer uses them, they are clearly getting value. For a SaaS CRM that might be: contacts, deals, email sync, automation, reporting, integrations. Track how many of those features each account has used in the last 30 days.

Sub-score: `feature_score = (features_used / total_core_features) * 100`. An account using 6 of 8 core features scores 75. An account using 1 of 8 is at 12 and is one strategic conversation away from churn.
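A minimal sketch of the same arithmetic (hypothetical helper; the article's examples assume 8 core features):

```python
def feature_score(features_used: int, total_core_features: int) -> float:
    """Feature-breadth sub-score: share of core features used
    in the last 30 days, mapped to 0-100."""
    if total_core_features == 0:
        return 0.0
    return features_used / total_core_features * 100
```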

Signal 3: Support ticket volume and sentiment

Support data is a two-way signal. A flood of tickets is bad. Zero tickets is also often bad — it can mean the account is disengaged. The shape that predicts churn most cleanly is rising ticket volume combined with negative sentiment.

Track two things per account over the last 30 days: ticket count and the share of tickets tagged frustrated, escalated, or critical (most helpdesks now expose sentiment via tags or AI classification). Compare to the trailing 90-day average.

Sub-score: start at 100. Subtract 5 points per ticket above the account's 90-day baseline. Subtract 15 points for any escalation. Subtract 30 points for an unresolved escalation older than 7 days. Floor at 0. Cap the deduction at 40 points so a single bad week does not crater the score.
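The deduction rules above can be sketched as one function (names are hypothetical; assumes your helpdesk export gives you ticket counts and escalation flags per account):

```python
def support_score(tickets_30d: int, baseline_30d: float,
                  escalations: int, stale_escalations: int) -> float:
    """Support sub-score: start at 100, deduct per the rules,
    cap the total deduction at 40 so one bad week cannot crater it."""
    deduction = 0.0
    deduction += 5 * max(0.0, tickets_30d - baseline_30d)  # above baseline
    deduction += 15 * escalations                          # any escalation
    deduction += 30 * stale_escalations                    # unresolved 7+ days
    deduction = min(deduction, 40)
    return max(0.0, 100 - deduction)
```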

Signal 4: NPS or CSAT (the relationship signal)

Survey scores are noisier than product data, but they capture something product data cannot: how the customer *feels* about you. A power user who hates their CSM will still churn at renewal. A light user who loves the product will renew and refer.

Track the most recent NPS or CSAT response per primary contact at the account, weighted by how recent it is. A 9 from three weeks ago beats a 7 from six months ago.

Sub-score: map NPS 0-10 to 0-100 with a non-linear curve that punishes detractors and rewards promoters. A simple version: `nps_score = nps_value * 10` floored at 0, then subtract 30 if the latest response is older than 90 days (stale responses are unreliable).
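A sketch of that simple version (hypothetical helper; the third parameter makes the "today" reference explicit so the staleness check is deterministic):

```python
from datetime import date

def nps_score(nps_value: int, response_date: date, today: date) -> float:
    """Relationship sub-score: NPS 0-10 mapped to 0-100,
    minus a 30-point staleness penalty after 90 days."""
    score = max(0, nps_value * 10)
    if (today - response_date).days > 90:
        score -= 30  # stale responses are unreliable
    return max(0.0, score)
```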

Signal 5: Payment status and billing health

The most underrated signal in customer success. A failed payment, a downgrade, or a card on file expiring next month is often the loudest churn signal you have — and it sits in your billing system, frequently disconnected from the CS dashboard.

Track three states per account: payment status (current / past due / failed), recent plan changes (upgrade / no change / downgrade in last 90 days), and card expiration risk (does the card on file expire in the next 30 days).

Sub-score: start at 100. Past due → 60. One failed payment → 40. Two or more failed payments → 0. Recent downgrade → subtract 25. Card expiring in 30 days with no backup → subtract 10. This sub-score is binary in spirit: anything below 60 is a bright red flag that should ping the AE and CSM the same hour it changes.
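The billing rules are a short decision ladder. A sketch (hypothetical helper; assumes these four facts come from your billing webhook events):

```python
def billing_score(payment_status: str, failed_payments: int,
                  recent_downgrade: bool, card_expiring_30d: bool) -> float:
    """Billing sub-score per the rules: failed payments dominate,
    then past-due status, then downgrade and card-expiry deductions."""
    if failed_payments >= 2:
        return 0.0
    if failed_payments == 1:
        score = 40.0
    elif payment_status == "past_due":
        score = 60.0
    else:
        score = 100.0
    if recent_downgrade:
        score -= 25
    if card_expiring_30d:
        score -= 10
    return max(0.0, score)
```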

How to weight the signals

Weights matter more than the signals themselves. Two teams with the same five signals but different weights will get wildly different lists of "at risk" accounts. There is no universally correct weighting — it depends on your product, your contract length, and where churn actually comes from in your data. But here is a sane starting point that works for most B2B SaaS products with monthly or annual contracts.

Recommended starting weights:

| Signal | Default Weight | Why |
| --- | --- | --- |
| Product engagement (WAU trend) | 30% | Strongest leading indicator. Disengagement precedes cancellation by 30-60 days. |
| Feature breadth | 20% | Captures stickiness. Accounts using more features churn dramatically less. |
| Support sentiment | 20% | Frustration is a faster signal than disengagement when it spikes. |
| NPS / CSAT | 15% | Captures relationship health that product data misses. |
| Payment / billing status | 15% | Lower weight only because it is binary — when it fires, override the score. |

Final formula: `health_score = (engagement * 0.30) + (feature_breadth * 0.20) + (support * 0.20) + (nps * 0.15) + (billing * 0.15)`.

Two important rules. One: the billing signal is an override. Two failed payments forces the account to red regardless of the calculated score. A power user with a busted card is still about to churn. Two: revisit weights every quarter. Pull the last 90 days of churned accounts, look at their scores in the 60 days before they churned, and ask whether your model would have flagged them. If most churned accounts were green or yellow, your weights are wrong — usually engagement and billing should be heavier.
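Putting the weights and the override together, a minimal sketch (hypothetical names; assumes the five sub-scores are already computed on a 0-100 scale):

```python
WEIGHTS = {"engagement": 0.30, "feature_breadth": 0.20,
           "support": 0.20, "nps": 0.15, "billing": 0.15}

def health_score(subs: dict[str, float], failed_payments: int = 0) -> float:
    """Weighted health score with the billing override: two failed
    payments force the account into the red band regardless of the math."""
    score = sum(subs[key] * weight for key, weight in WEIGHTS.items())
    if failed_payments >= 2:
        score = min(score, 49.0)  # 49 = top of the red band
    return round(score, 1)
```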

Color bands: red, yellow, green

A 0-100 score is for engineers. A red/yellow/green band is for the CSM who has 80 accounts and 30 minutes a day. Map the score to bands:

  • Red (0-49): churn risk. Requires intervention this week. CSM owns. Trigger: any score under 50, OR any billing override (two failed payments, recent downgrade, or escalated unresolved ticket older than 7 days).
  • Yellow (50-74): warning. Pattern is slipping but not on fire. Triggers an automated check-in (email, in-app message) and CSM review at next monthly account scrub.
  • Green (75-100): healthy. Eligible for expansion outreach if breadth and engagement are both above 80. Otherwise leave them alone — happy customers do not want to be over-managed.
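The band mapping itself is three lines of logic. A sketch (hypothetical helper; `billing_override` stands in for the hard triggers listed under the red band):

```python
def band(score: float, billing_override: bool = False) -> str:
    """Map a 0-100 health score to red/yellow/green,
    honoring the billing and escalation overrides."""
    if billing_override or score < 50:
        return "red"
    if score < 75:
        return "yellow"
    return "green"
```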

How to surface the score: dashboard layout

A good health score dashboard has three views, in this order. If you build only one, build the first.

View 1: Account list, sorted by risk

A flat table of every paying account, sorted by score ascending (worst at top). Columns: account name, MRR, current score, score change vs last week, owning CSM, last touchpoint date, primary risk signal. Filter chips at the top: red only, yellow only, my accounts, $5K+ MRR, missed renewal in next 60 days.

This is the view your CSM team opens every Monday morning. The job: scan the top 20 accounts, decide which ones get a call this week, and assign tasks. If a CSM cannot identify their top three intervention targets in 60 seconds from this view, the dashboard is not doing its job.

View 2: Account drill-down

Click an account, see the score breakdown by signal. A radar chart or stacked bar showing engagement, feature breadth, support, NPS, billing — each as its own sub-score. A timeline of score changes over the last 12 weeks with annotations ("score dropped from 78 to 42 on March 14, triggered by escalated ticket and 3-week login gap"). Last 5 support tickets, last 3 product activity events, billing status, NPS history.

This is the view a CSM opens before a save call. The goal: walk into the conversation knowing exactly what to talk about, not just "the score went down."

View 3: Cohort and trend

An aggregate view across the entire customer base. Distribution of scores (how many red, yellow, green) trended over time. Average score by plan tier, by industry, by CSM, by acquisition source. Churned accounts overlaid with their score 60 days before they churned (the validation loop for your weighting).

This view is for the VP of CS, the head of RevOps, and the founder. The job: spot systemic problems. If your enterprise tier has a 25% lower average score than your mid-market tier, something is broken in onboarding for big customers. The score is your early warning.

Where to build it: CRM vs custom dashboard vs Looker/Metabase

There are three reasonable places to put the dashboard, and your choice depends on data maturity and team makeup.

| Option | Best For | Time to Build | Cost |
| --- | --- | --- | --- |
| In your CRM (Deelo, HubSpot, Salesforce) | Most teams, especially CS-led | 1-2 weeks | Already paid for |
| BI tool (Looker, Metabase, Mode) | Data-heavy teams with a warehouse | 3-4 weeks | Looker $50-100K/yr; Metabase from $0 |
| Custom internal app | Teams with engineers and unique signals | 6-12 weeks | Engineering opportunity cost |

Build it in your CRM if your CS team lives there already, your account record is the natural home for a score, and you want CSMs to see the score in the same place they take notes and log calls. This is what most teams should pick. The score becomes a custom field, the dashboard is a saved view, and intervention tasks are CRM tasks.

Build it in a BI tool if you have a data warehouse with product events, billing, and support data already piped in, and you have an analytics engineer who can build dbt-style models. The advantage: every signal is queryable and you can iterate weights in SQL. The disadvantage: BI tools are not built for action — CSMs cannot easily turn a row into a task.

Build a custom internal app if you have signals nobody else handles (industry-specific usage data, custom integrations) and you have engineering capacity. Most teams should not start here. It is expensive and the V1 is rarely better than the CRM version.

The playbook: what to do when a score changes

A score with no playbook attached is a number. A score with a playbook attached is a system. Define what happens automatically when an account moves between bands. Here is the version most teams should run:

  • Account turns red (score drops below 50): Auto-create a high-priority CSM task with a 48-hour SLA. Slack ping the CSM and AE. Pull the account into the next CS team standup. Goal: a save call within 5 business days, with a documented action plan.
  • Account stays red for 14+ days: Escalate to the VP of CS. Generate an executive summary (signal breakdown, support history, billing status, last 3 touchpoints). The VP either jumps on a call personally or signs off on a structured save plan with timelines.
  • Account turns yellow (drops below 75): Trigger an automated, contextual check-in email referencing the dropped signal ("We noticed your team's logins have dropped this month — anything we can help with?"). Add to the CSM's next monthly account scrub.
  • Account turns green and breadth is high: Trigger an expansion play. Notify the AE. Send the account a usage summary that highlights ROI — usage data in dollars saved, hours back, deals closed. Open the door for a tier upgrade or seat expansion conversation.
  • Score does not change for 60 days: Audit. A score that never moves is usually broken — either the signals are wrong or the data is stale. Spot-check a few accounts and re-validate.
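The band transitions above are a small dispatch table. A sketch of that dispatch (all action names are hypothetical placeholders for whatever your automation tool actually triggers):

```python
def playbook_actions(prev_band: str, new_band: str,
                     days_in_red: int, breadth: float) -> list[str]:
    """Return the playbook actions fired by a band transition."""
    actions = []
    if new_band == "red" and prev_band != "red":
        actions += ["create_csm_task_48h_sla", "slack_ping_csm_and_ae"]
    if new_band == "red" and days_in_red >= 14:
        actions.append("escalate_to_vp_cs")
    if new_band == "yellow" and prev_band == "green":
        actions.append("send_contextual_checkin_email")
    if new_band == "green" and breadth >= 80:
        actions.append("notify_ae_expansion_play")
    return actions
```

Keeping the dispatch this explicit makes the playbook auditable: anyone can read the function and know exactly what fires when.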

Common mistakes that kill health score programs

  • Too many signals. Teams add 12 signals because they can. The score becomes opaque, weights become impossible to reason about, and the CSM stops trusting it. Five signals, transparent weights, beats twelve every time.
  • No playbook. The score updates, nothing happens. Within a quarter the dashboard becomes wallpaper. Always tie band changes to specific actions.
  • No validation loop. You set weights once and never look back. After 90 days, pull every churned account and check whether your model flagged them. Adjust weights based on what you learn.
  • Stale data. A score refreshed monthly is decorative. Weekly is the minimum. Daily is better. Real-time is overkill for most teams and creates alert fatigue.
  • Missing the billing override. Calculated scores are smooth. Failed payments are not. If you do not bypass the score with a hard rule on billing, you will miss the most actionable churn signal you have.
  • One score for every customer type. Self-serve trial users, monthly SMB customers, and annual enterprise customers behave differently. If your weights are the same for all three, the score will be wrong for at least two of them. Segment.

How Deelo's analytics surface health scores

If your CS team already lives in Deelo CRM, the health score lives where they work. Each account has custom fields for engagement_score, feature_breadth, support_signal, nps_score, billing_status, and the calculated overall_score. A saved CRM view sorts every paying account by score ascending and color-codes the row red/yellow/green based on the band.

Deelo Analytics ingests product events from your app, support tickets from Helpdesk, NPS responses from Forms, and billing events from Stripe (or whichever processor you use). A weekly scheduled automation recalculates every account's score, updates the CRM custom field, and triggers the playbook: red accounts auto-create a CSM task with a 48-hour SLA, yellow accounts get an automated check-in email through the email app, green accounts with high breadth surface in an expansion-ready saved view.

The drill-down view lives on each account record — score history sparkline, signal breakdown, recent support tickets, last activity events, billing status. CSMs walk into save calls knowing exactly which signal moved and when. Cost is bundled into the Deelo subscription rather than a separate $50K/year BI seat. For SaaS teams under 50 employees who want a working health score system in a week instead of a quarter, this is the fastest path.

Build your customer health score in Deelo

Free to start. CRM, Analytics, Helpdesk, and Automation in one platform — the four apps you need to wire up a working health score dashboard without stitching together a BI stack.


Implementation checklist

  • Week 1: Define your 5 signals and write down the formula for each sub-score. Pick starting weights. Decide where the dashboard lives (CRM, BI, or custom).
  • Week 2: Wire up data. Product events to engagement, ticket data to support, NPS to relationship, billing webhook events to payment status.
  • Week 3: Build view 1 (account list sorted by risk). Run scores manually for 20 accounts and sanity-check against your gut. Adjust weights if the list looks wrong.
  • Week 4: Build the playbook automations. Red → task. Yellow → email. Green + high breadth → expansion notification. Start running weekly CS reviews from the dashboard.
  • Quarter 2: Validate. Pull churned accounts, check their scores 60 days pre-churn, adjust weights. Add view 2 (drill-down) and view 3 (cohort trends) once the basics are working.

Customer health score dashboard FAQ

How often should the health score refresh?
Weekly is the minimum that creates a useful Monday-morning workflow. Daily is better for high-velocity SMB SaaS where renewal cycles are monthly. Real-time updates create alert fatigue and rarely change CS behavior — by the time a CSM reacts, the noise has averaged out. Refresh weekly, override in real time only for billing failures and escalated tickets.
Should the same score apply to trial users and paying customers?
No. Trial users need a different model focused on activation milestones (signed up, completed onboarding, invited a teammate, hit the aha moment) on a much shorter time horizon. Paying customers need the model in this guide. Run them as two separate scores with two separate dashboards. Mixing them produces a score that is wrong for both populations.
How do I weight the signals if I have less than 90 days of data?
Start with the default weights in this guide (30/20/20/15/15) and revisit them every 30 days for the first quarter. Do not try to learn weights from a regression model on three months of data — the sample is too small and you will overfit. Use the defaults, validate by sanity-checking the worst 20 accounts each week, and adjust based on what your team observes. After 6 months of data and 50+ churn events, you can run a proper logistic regression to learn empirical weights.
What if a customer has a high score but still churns?
It happens, and it is the most useful learning signal you have. Every churned account that was green at 60 days pre-churn means your model missed something. Investigate: was it a champion who left and lost the relationship? Was it a budget cut nobody saw? Was it a competitor displacement nobody flagged? Add the missing signal to your model — champion changes, recent contract end-date proximity, competitor mentions in support tickets. Treat false negatives as model bug reports.
Do I need a data warehouse to build a health score dashboard?
No. A data warehouse helps if you want to slice cohort trends across many dimensions, but the operational dashboard — the one your CSMs use Monday morning — works fine off CRM custom fields populated by scheduled jobs that pull from your product, support, and billing systems. Most teams should ship the CRM-based version first, prove the workflow, then add a warehouse if cohort analysis becomes a bottleneck.
How do I get product engagement data into the dashboard if I do not have a product analytics tool?
Three options. Cheapest: instrument 5-8 core feature events directly in your app and write them to a database table; aggregate weekly per account. Medium: use a product analytics tool with an API (Mixpanel, Amplitude, PostHog) and pull WAU and feature usage on a schedule. Most integrated: if your CRM and analytics live in the same platform like Deelo, the events flow into the account record automatically. Pick based on existing tooling — do not buy a new tool just for this.
