Predicting and Preventing Churn with Customer Health Scoring

ABOUT THE EXPERT

Ed Powers is a Principal Consultant at Service Excellence Partners, where he helps companies take an enterprise-wide approach to addressing churn. He previously led customer success at simPRO, a field service management software company, and InteliSecure, a managed security services provider. In his consulting work, he has also helped over thirty SaaS and managed service provider companies improve their customer experience, reduce churn, and grow installed base revenue. In this guide, he explains how companies can use regression-backed customer health scores to predict and prevent customer churn.

What is a customer health score?

A “red-yellow-green” assessment of the level of risk with the account – it’s a measure of your risk of churn (logo, product and revenue) and potential for expansion.

What is it used for?

Focus – it helps your Customer Success Managers be more proactive about where they spend their time. Turning around red accounts prior to renewal decisions increases the odds of retention, and approaching green accounts with new products, references and referrals helps grow revenue. 

Automation – in tech-touch, product-led, fully automated environments, health scores can trigger actions to support, retain, and up-sell customers. This can include things such as presenting self-help resources inside the product, sending email invitations to webinars, or showing alternative plans when customer usage wanes.

Forecasting – CFOs at many companies assign probabilities to health score color codes and use weighted sums to estimate future revenue. So health scores not only predict cash flow and growth, they can also influence company valuations.
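
To make the weighted-sum idea concrete, here is a minimal Python sketch. The per-color renewal probabilities and account figures are illustrative assumptions; in practice each finance team calibrates its own from historical renewal rates.

```python
# Minimal sketch of a weighted-sum revenue forecast from health scores.
# The per-color renewal probabilities are assumptions for illustration.
renewal_prob = {"green": 0.95, "yellow": 0.70, "red": 0.40}

accounts = [  # hypothetical accounts with annual recurring revenue
    {"name": "Acme", "arr": 120_000, "health": "green"},
    {"name": "Globex", "arr": 80_000, "health": "yellow"},
    {"name": "Initech", "arr": 50_000, "health": "red"},
]

expected_arr = sum(a["arr"] * renewal_prob[a["health"]] for a in accounts)
print(f"Expected renewed ARR: ${expected_arr:,.0f}")  # $190,000
```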

Improvement – health dashboards communicate account status to the organization, and when supported with proper analysis, they can show how making improvements in certain areas improves customer health and installed base revenue. For example, if Customer Support waiting times create more red accounts, then reducing them produces more greens. 

Why is predictive accuracy important?

Bad scores = bad decisions – when accuracy is off, CSMs face the “watermelon problem”–green on the outside but red on the inside–and they spend their time working on the wrong accounts. Automations are similarly ineffective, revenue forecasts suffer, and the organization lacks insights for driving systemic improvement.

How does adopting a more analytical approach help you move from reactive to proactive to preventative when it comes to churn?

When you don’t have a lot of data, it can be subjective – for new companies that are just starting out, health scoring begins with the founder’s or CSM’s read: “what does your gut say about this account?” When the company is small, you have a small number of customers and you’re probably talking to all of your accounts frequently, so subjective assessments can be OK.

Data improves predictive accuracy – there can be a wide variation in human judgment. As your team grows, layering in data and correlating key variables reduces the amount of noise and helps improve your predictive accuracy. 

Data helps you be proactive with a particular customer – vs. subjective and reactive. You start to look at cause-and-effect relationships, and you can begin to predict outcomes based on what happens upstream in the process. You can take action to improve a customer’s situation before things get bad enough that they think about canceling.

Data helps you be preventative by looking at customer patterns – looking across many customers’ outcomes, you can start to see how strategic decisions influence customers and impact the company’s finances. Being preventative means working upstream across the enterprise to address factors that drive customer health (e.g. product functionality, sales process experience), which then drive the decisions customers make later.

What types of data can go into health scores?

Usage data – this can include the total amount of usage, number of logins, and usage of certain features. Usage data shouldn’t be the only thing you look at, but there’s a case to be made that if customers are using a tool regularly, they’re most likely getting some value from it. Three questions to ask (see the sketch after this list):

  • How often are they accessing the tool? 
  • How much are they using it? 
  • What are they using it for?
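
A minimal pandas sketch of turning those three questions into account-level metrics, assuming a raw product event log; the file and column names (product_events.csv, account_id, ts, feature) are hypothetical placeholders for your own telemetry.

```python
import pandas as pd

# Aggregate a raw event log into the usage signals above.
# Column names are hypothetical -- adapt to your telemetry schema.
events = pd.read_csv("product_events.csv", parse_dates=["ts"])

usage = events.groupby("account_id").agg(
    active_days=("ts", lambda s: s.dt.date.nunique()),  # how often
    total_events=("ts", "size"),                        # how much
    distinct_features=("feature", "nunique"),           # what for
)
print(usage.sort_values("total_events", ascending=False).head())
```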

Onboarding adoption – how much are new users using the product, and how quickly are they adopting specific features? It can be helpful to identify and monitor certain key features that new users need to adopt to get value from the product. And adoption is more than just using features–people must change their daily routines. How many people out of the target set of users have embraced the new approach? What change management interventions must be considered to maximize adoption across all users? 

Sentiment and satisfaction – NPS, CSAT, other surveys, and other customer research can be mined for sentiment data. There’s a lot of meaning that’s embedded in language that we don’t necessarily listen to or aren’t even consciously aware of, and there are some new and interesting developments in AI models for analyzing the language for sentiment. 

Note: NPS can tell you something about individual consumers, but it can be unreliable for accounts – generally speaking, a good overall NPS score correlates with lower churn and happier customers, but for predicting what one specific account will do, it tends not to be very predictive. 

Customer goals – look at what’s in the customer success plan and how you’re tracking to deliver what the customer is expecting to get. Many times customers haven’t even stopped to think about what to measure, so it’s incumbent on the software provider to steer them toward certain metrics and describe how other similar customers are utilizing those metrics. You don’t need 50 metrics; you just need a couple that really matter. For example, a project management tool might have a key productivity-savings metric. 

Support tickets (volume and criticality) – for volume, you’re looking at the sheer number of tickets submitted. Keep in mind that some support activity is healthy. For criticality, you’re looking at the severity of issues, for example, if there’s a critical feature that the customer is asking for that’s a blocker for their business. If support doesn’t handle critical tickets the right way or fails to tell the customer that progress is being made on the product side, it can influence a customer to churn.

Payment history – for example, when COVID hit, a lot of companies in certain industries started to go under. When a company begins to fall behind on payments because they’re starting to run out of cash, that tends to be predictive. Unfortunately, this type of churn risk factor is often outside the control of the supplier. 

Fit – assign points to your customers based on how close they are to your ICP. How many points you assign to a customer tends to be very predictive of which customers will end up staying, with “low-point” customers being the most likely to churn. Sales selling just anything to anyone is a huge driver of churn, so incentivize the right behavior so that sales spends its time bringing in more customers that are closer to your target. 
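
For illustration, a minimal fit-scoring sketch; the criteria and point values are invented, and in practice you would derive them from the attributes shared by customers who renewed.

```python
# Illustrative ICP fit scoring -- every criterion and point value here
# is an assumption; calibrate yours against customers who actually stayed.
def icp_fit_points(account: dict) -> int:
    points = 0
    points += 3 if account["industry"] in {"field_service", "construction"} else 0
    points += 2 if 50 <= account["employees"] <= 500 else 0
    points += 2 if account["has_dedicated_admin"] else 0
    points += 1 if account["region"] in {"NA", "EMEA"} else 0
    return points  # 0-8; low scorers are the likeliest to churn

print(icp_fit_points({"industry": "field_service", "employees": 120,
                      "has_dedicated_admin": True, "region": "NA"}))  # 8
```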

Note: the same metrics don’t work for every company – the metrics that work for one company to track and collect data might not work for another company, even if you’re in the same industry. Ultimately, you need to calibrate your own score.

What’s a lightweight way for early-stage companies to get started?

Having your CSMs rate each account Red-Yellow-Green is better than nothing – it’s easy to do, and it’s a start. You don’t need anything more sophisticated than a spreadsheet to collect CSM input. The problem with this method, however, is that it allows for errors in judgment. As humans, we tend to believe that we’re good judges of things. It’s an illusion our brain gives us, but it’s not true.

When do you need to get more sophisticated in your approach?

In sales-led companies, you get more sophisticated as you add more accounts – when you’re starting out and you’re talking to your accounts all the time, you probably have a good grasp of what the issues are. However, when you have 100 accounts and you’re not talking to them all daily, you have to rely more on the data to know how your accounts are feeling. 

In product-led (PLG) companies, you build models very early – there’s little to no touch at all with companies that utilize a free trial model. With PLG models, you’ll get a wide distribution of customer behaviors. You’ll have some who never do anything with it and others who explode with it right away. Compare data from these groups. What’s different? With PLG, there’s a ton of data available, so it comes down to making sure you’re looking at contrasts. 
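
A minimal sketch of that contrast analysis, assuming a trial-accounts export with a converted flag; all column names (logins_wk1, features_used, invited_teammates) are hypothetical.

```python
import pandas as pd

# Compare trial-period behavior between accounts that converted and
# accounts that didn't; large gaps point at candidate health signals.
trials = pd.read_csv("trial_accounts.csv")  # hypothetical export

contrast = trials.groupby("converted")[
    ["logins_wk1", "features_used", "invited_teammates"]
].mean()
print(contrast)
```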

What would the steps be to build a more sophisticated, regression-based model?

Step 1: Talk to customers about why they leave and stay – talk to a small set of customers–they’ll give you a lot of clues. Ask them: why did (or didn’t) you renew?

Step 2: Form hypotheses – now ask yourself “what do I believe to be the case about my broader market, about all my customers, and their behaviors? Are they alike or different from the sample?” Write down your beliefs and then look at how to test the hypotheses.

Why start by asking questions? It avoids a cognitive bias called  “WYSIATI” (what you see is all there is). It happens when people assume that the data they have tells them all that they need to know. However, just because you have a lot of data doesn’t mean that it’s telling you something relevant. You might be missing the data you need–go get it! 

Step 3: Collect and clean the right data (this part is hard):

  • Collect customer churn and renewal feedback at scale – using qualitative factors you identified from a small set of interviews, narrow to a short list of questions and gather feedback broadly and systematically either online or via formal exit interviews. This will allow you to quantify factors and connect them to revenue. You’ll need this to build your business case later! 
  • Gather internal data from different repositories using extract-transform-load or CSV file dumps – pull the data together, load it into a simple Excel spreadsheet and then build a quick data model (see the sketch after this list). This is small-scale and very ad hoc. From there, you’re doing an investigation to understand the relationships between the data. 
  • Collect data you don’t have – examine your hypotheses and get some data, manually if you must, to include in your analysis to support or reject your beliefs.
  • Clean and match the data – yes, chances are you will have a lot of junk to parse through. Data hygiene tends to be an afterthought and (unfortunately) it’s never complete. Keep a list of all the issues you find for now. 
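
A minimal sketch of that ad hoc data model in pandas rather than Excel; the file and column names (crm_accounts.csv, billing.csv, support_tickets.csv, and their fields) are hypothetical stand-ins for your own exports.

```python
import pandas as pd

# Pull CSV dumps from different repositories into one account-level table.
crm = pd.read_csv("crm_accounts.csv")         # account_id, segment, icp_points
billing = pd.read_csv("billing.csv")          # account_id, arr, days_late
tickets = pd.read_csv("support_tickets.csv")  # account_id, severity

ticket_stats = tickets.groupby("account_id").agg(
    ticket_count=("severity", "size"),
    critical_tickets=("severity", lambda s: (s == "critical").sum()),
)

# Match everything on a common account key; keys that don't line up are
# exactly the data-hygiene issues worth keeping a running list of.
model_table = (
    crm.merge(billing, on="account_id", how="left")
       .merge(ticket_stats, on="account_id", how="left")
       .fillna({"ticket_count": 0, "critical_tickets": 0})
)
print(model_table.head())
```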

Step 4: Analyze factors – also called exploratory data analysis. This is looking across all your data and the end result (churn, retention, growth or contraction) and asking “Where is there a relationship? What is the correspondence and covariance between variables?” A majority will show no relationship, so you can pare them down pretty quickly. When you only look at the factors that matter, a lot of the other data problems go away.
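
A quick factor screen might look like the following sketch, which correlates each candidate variable with the outcome; it assumes the hypothetical model_table from the previous sketch plus a 0/1 churned column.

```python
import pandas as pd

# Screen candidate factors by their correlation with the outcome.
model_table = pd.read_csv("model_table.csv")  # from the earlier sketch

candidates = ["icp_points", "days_late", "ticket_count", "critical_tickets"]
corrs = model_table[candidates].corrwith(model_table["churned"])

# Factors with correlations near zero are the first to pare away.
print(corrs.sort_values(key=abs, ascending=False))
```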

Step 5: Regression – you’ll then build a mathematical model to predict account or product churn (a binomial random variable) or revenue churn/expansion (a continuous random variable). You can use multiple regression to analyze continuous variables, and logistic regression to analyze binomials using any set of factors for each (usage, onboarding adoption, sentiment, etc. described above). See below for a set of tools. 
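
A minimal logistic-regression sketch for the binomial case, using statsmodels (one reasonable tool choice among the options listed later); for revenue churn or expansion you would swap in ordinary least squares (sm.OLS) on the continuous target. The factor names continue the hypothetical ones from the earlier sketches.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("model_table.csv")

# Logistic regression: predict a binomial outcome (churned vs. renewed).
X = sm.add_constant(df[["icp_points", "days_late", "critical_tickets"]])
y = df["churned"]  # 1 = churned, 0 = renewed

model = sm.Logit(y, X).fit()
print(model.summary())               # coefficients and p-values per factor
df["churn_prob"] = model.predict(X)  # scored churn probability per account
# Thresholds on churn_prob can then back a red/yellow/green assignment.
```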

Note: More data = more confidence in your model. This is called the Law of Large Numbers, and there’s no getting around it! With PLG, even a small error at large scale can cost you a lot, but with a high-touch enterprise, more error might be acceptable because you’re working with a smaller number of customers. For example, a 1% error on 10 customers affects none, but a 1% error on 10,000 customers affects 100. 

(Optional) Advanced Regression – machine learning and other advanced models look for rare factors when you have tons of data. This is a huge investment, so you have to ask yourself if you’re going to get a return from advanced techniques before pursuing it. It might make sense if you’re dealing with:

  • Rare events, “the broken leg problem” – this theory gets its name from an example scenario: say that you can ordinarily predict whether someone will go to the movie theater on a specific day based on data you’ve collected about them. But if they suddenly break their leg, they won’t go to the movie theater, despite your prediction. Very large data sets and sophisticated algorithms can detect and account for rare events like these. 
  • Non-linear patterns – a fundamental assumption in classical regression is that predictors act independently–there’s no interaction between variables. But in real life, many things do interact. Machine learning models can include these interaction effects and increase the accuracy of predictions (see the sketch below).
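
As one hedged example of an interaction-aware model, a gradient-boosted tree ensemble (a common choice, not necessarily the author’s) picks up non-linear effects and variable interactions that a plain linear model assumes away:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("model_table.csv")  # hypothetical table from earlier
X = df[["icp_points", "days_late", "ticket_count", "critical_tickets"]]
y = df["churned"]

# Tree ensembles capture interactions and non-linearities automatically;
# cross-validated AUC shows whether that complexity actually pays off.
gbm = GradientBoostingClassifier(random_state=0)
print(cross_val_score(gbm, X, y, cv=5, scoring="roc_auc").mean())
```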

Step 6: Test and Deploy the Model – compare the predictive power of the new mathematical model to your current baseline. Did accuracy improve? What are the impacts of making the change? If things will get better for customers, employees, and dollars and cents, it’s easy to make your case. Update the model in your RYG spreadsheet or CS platform, or use it in your automation rules. 
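
A minimal sketch of that baseline comparison, assuming a scored table with the CSM’s color rating, the model’s churn probability, and the actual outcome; the color-to-probability mapping is an assumption you would calibrate yourself.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("scored_accounts.csv")  # csm_color, churn_prob, churned

# Map the subjective colors to rough probabilities to make them comparable.
color_prob = {"green": 0.05, "yellow": 0.30, "red": 0.60}  # assumed
baseline = df["csm_color"].map(color_prob)

print("CSM baseline AUC:", roc_auc_score(df["churned"], baseline))
print("Model AUC:       ", roc_auc_score(df["churned"], df["churn_prob"]))
# Deploy the new model only if the accuracy gain justifies the change.
```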

How many variables should you include?

Aim for a parsimonious model – use the least data to predict the most behaviors. It’s usually a handful of factors that will predict a very high percentage of behaviors. Plus, there’s a cost to collecting the data, maintaining it, and keeping it updated. Less data means less cost. 

You can get very predictive with 5-7 variables – it’s simply a matter of narrowing it down to the right 5-7. You will iterate until you can confidently arrive at the optimal number of variables that will give you a reasonably good outcome. 
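
One way to iterate toward that handful is recursive feature elimination, sketched below; this is just one reasonable approach, and the candidate column names are the hypothetical ones used earlier.

```python
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("model_table.csv")
features = ["icp_points", "days_late", "ticket_count",
            "critical_tickets", "active_days", "nps", "logins_wk1"]

# Drop the weakest predictor one at a time until five remain.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
selector.fit(df[features], df["churned"])
print([f for f, keep in zip(features, selector.support_) if keep])
```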

What tech tools do you need to run the analysis and operationalize the score?

Excel (and a plug-in) – if you’re simply trying to understand these relationships and how you quantify them, you can build a simple model in an Excel spreadsheet. You don’t have to build data pipes and you don’t have to have deep historical data. Instead, simple experiments can tell you a lot. The big “pros” to Excel are that you can get data in and out pretty easily and everyone knows how to use it. There’s a lot of statistical functionality that comes native, and Excel also offers the free Analysis ToolPak add-in for regression and other analyses.

Statistical software packages – these have a lot more to offer than Excel.

  • IBM SPSS 
  • Minitab 
  • Alteryx
  • TIBCO 

Customer success platforms – use CS tools to operationalize your health score and to set up automatic triggers. 

  • Some will let you modify the model to some extent or rely on outside computations (e.g. Gainsight, ChurnZero)
  • Some are coming out with AI models already in them (e.g. involve.ai, SturdyAI, UpdateAI, ZapScale)

Who’s responsible for developing and updating the customer health score?

CS Operations – ideally, they’re the ones running the reports and doing the data analysis.

A consultant (if you don’t have CS Ops) – a consultant can help apply statistics and variation concepts correctly and help a company identify what a meaningful signal is vs. the random noise that comes from a lot of data.

How often should you revisit or adjust your model?

Check after a big external change (e.g. COVID) – COVID changed everything for a lot of companies almost overnight. The majority of models are based on historical evidence, so when a dramatic change happens that affects a sector, you can expect the models to become less and less predictive. 

Test to see if the model is still performing well (and adjust when not) – you want to measure predictive accuracy at periodic intervals and understand if it’s still performing within a margin of error that you want. If it degrades, then it’s time to update it. 
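
A minimal sketch of such a periodic check, assuming the hypothetical scored table from earlier with predictions, outcomes, and renewal dates; the 0.75 AUC floor is an arbitrary example threshold.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("scored_accounts.csv", parse_dates=["renewal_date"])

# Score each quarter's predictions against what actually happened; flag
# drift below a chosen floor. (Each quarter must contain both outcomes.)
for quarter, grp in df.groupby(df["renewal_date"].dt.to_period("Q")):
    auc = roc_auc_score(grp["churned"], grp["churn_prob"])
    flag = "  <- consider retraining" if auc < 0.75 else ""
    print(f"{quarter}: AUC={auc:.2f}{flag}")
```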

How do you identify and address the root causes of churn (especially when responsibilities lie in other departments of the organization)?

Equate churn to loss of revenue or profitability – which translates to valuation. That will get people’s attention to try and fix the underlying issues that are causing customers to leave. It all starts with interviewing customers to identify the factors THEY say are important, quantifying those factors for your business case, and demonstrating the cause-and-effect relationships with your regression model. 

Highlight the churn issues that need to be addressed by other departments – if the CS team can show the top reasons people are churning, the CEO can take that information to the necessary department and make it an action item for them to fix. In that way, CS is in charge of tracking and reporting, while the CEO holds other departments accountable.

What are the most important pieces to get right?

Start outside in (with the customer) – start with the behaviors of customers, because that will drive everything else! 

Connect it to money – you’ll need to build your business case sooner or later.

Apply the scientific method – ask yourself what your hypothesis is, then look at how you can validate it using facts, not speculation.

What are the common pitfalls?

Technology will not solve all of your problems – try not to chase new tools; the technology won’t fix your problems if you feed it irrelevant data.

Beware false positives – humans are hardwired to make Type I errors, which are false positives. We tend to intuitively jump to answers that aren’t supported by data and that often aren’t true. That’s why proper statistical analysis is so important. 
