In this new article for Field Service News, Sam Klaidman, Founder and Principal Adviser at Middlesex Consulting, discusses the service leaders' journey to achieve their desired outcomes.
Here is an interesting conversation from Lewis Carroll’s Alice’s Adventures in Wonderland:
‘Would you tell me, please, which way I ought to go from here?’ [asked Alice.]
‘That depends a good deal on where you want to get to,’ said the [Cheshire] Cat.
‘I don’t much care where—’ said Alice.
‘Then it doesn’t matter which way you go,’ said the Cat.
‘—so long as I get somewhere,’ Alice added as an explanation.
‘Oh, you’re sure to do that,’ said the Cat, ‘if you only walk long enough.’
Fortunately, service leaders know exactly where they want to go. They want to achieve the business objectives they signed up for in the strategic plan or in their individual goals and objectives (which are used to calculate their annual bonus). Unfortunately, many of these leaders are missing a terrific opportunity to win their own version of the Euro Cup because they are not using all the tools available to them.
Service businesses are buried in data. They get operational data from their products in the field, the people in the call centers, service managers, logistics people, and their peers in Finance, Marketing, Sales, Customer Success, and anyone else with an opinion. But what they are missing is insight – actually actionable insight. I call this condition DRIP:
Data Rich Insight Poor
Here is an example:
Most Field Service organizations survey their customers and measure one or more metrics they then use as key performance indicators (KPIs). The three most popular KPIs are:
- Net Promoter Score (NPS)
- Customer Satisfaction (CSAT)
- Customer Effort Score (CES)
You collect data about each customer and lump it all together to arrive at a single KPI number. Unfortunately, none of these KPIs will guide you to the actions you need to take to achieve your desired business outcomes, such as growing revenue, increasing employee satisfaction, and improving productivity. To get down to those actions, you must link each individual’s survey data to the actions that customer actually took, so you can 1) find out what customers really did, not just how they responded to your survey, and 2) go back and pinpoint exactly what you must correct to achieve a better outcome for your business.
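To make the linkage concrete, here is a minimal sketch (with invented customer names and outcomes) of joining individual survey scores to what each customer actually did, instead of lumping everything into one number:

```python
# Hypothetical data: survey score per customer, and whether they
# actually came back and purchased again within 12 months.
survey = {"acme": 9, "globex": 3, "initech": 7}
purchases = {"acme": True, "globex": False, "initech": False}

# Link each individual response to the customer's actual behavior.
linked = [(cust, score, purchases.get(cust, False))
          for cust, score in survey.items()]

# Now you can ask outcome questions a single lumped KPI cannot answer,
# e.g. which low scorers did NOT come back, so you know where to dig.
lost_detractors = [cust for cust, score, bought in linked
                   if score <= 6 and not bought]
print(lost_detractors)  # -> ['globex']
```

The point is not the code; it is that the join happens at the individual-customer level, which is what the aggregated KPI throws away.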
The solution to the DRIP problem is to take your team on this journey: Data → Insight → Action → Outcome.
There are not enough people doing this detail work, what one of my friends calls working in the weeds. So, let’s look at how NPS is generally used to see what you don’t want to continue doing.
Net Promoter Score
Net Promoter Score (NPS) first saw the light of day in 2003, when it was introduced as “The One Number You Need to Grow.” Today it is used in businesses of all sizes across all industries, including many field service organizations. Interestingly, the NPS system has an enormous number of critics who think the whole thing is BS. However, there are also real-world examples that support the validity of the system.
Let’s look at an example where NPS and a high-level analysis yield data that make the analyst and their company feel like they are accomplishing something important, even though they are not improving their desired outcomes.
A Quick Review of NPS
The interested party asks their customers the following question:
“Based on XXX, how likely are you to recommend us to a friend or associate?”
They use an 11-point scale where 10 is extremely likely, 5 is neutral, and 0 is not at all likely. The results are then grouped as follows:
The two green scores (9 and 10) are promoters, the two yellow (7 and 8) are passives, and the seven red (0 through 6) are detractors. The NPS score is the percent promoters minus the percent detractors, so the score can be anywhere from +100 to -100.
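The arithmetic itself is simple. Here is a minimal sketch, with a hypothetical set of responses, using the standard groupings (9-10 promoters, 7-8 passives, 0-6 detractors):

```python
def nps(scores):
    """Percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical batch of survey responses:
sample = [10, 9, 9, 8, 7, 6, 3, 0]
print(nps(sample))  # 3 promoters - 3 detractors out of 8 -> 0.0
```

Note that very different customer populations can produce the same score, which is exactly why the single number tells you so little on its own.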
Here is a chart produced by Bain & Company, the originator of the NPS system.
In this example, the surveyors are not worried about the NPS score: they want to understand how customers’ feelings correlate with their buying intentions. In this case, the promoters appear to be about 90-95% likely to consider their current manufacturer, the passives 75-80% likely to consider the incumbent, and the detractors only 40-45% likely to consider their current supplier.
Since the surveyors know the score each individual submitted, they can create unique programs to follow up with customers in individual segments, or even sub-segments, to identify the reasons behind their feelings. They can then correct any issues, or offer compensation when an issue is beyond the company’s control or cannot be resolved. Of course, in parallel, they must examine their internal procedures and policies to avoid alienating other customers.
But this is about intent. One of my all-time favorite business books is “Five Frogs on a Log” by Mark L. Feldman and Michael F. Spratt. The book is about mergers and acquisitions, and it is scary. The title comes from a children’s riddle:
Five frogs are sitting on a log.
Four decide to jump off.
How many are left?
Five. Because deciding and doing are not the same thing.
This is important because we don’t care what people say they will do; we care about what they do! A customer who says she will be back to you tomorrow with a purchase order is worthless until the P.O. is actually received and booked.
With respect to the Bain & Company data, I think it would be much more useful if the question were reworded to “Based on XXX, how likely are you to lease or purchase your next vehicle from our brand (or maybe from our dealership)?” After all, your business objective is to sell or lease vehicles, not get referrals. Then the surveyor could track each respondent and find out the percent at each response level, e.g., 0, 1, 2… who leased or purchased a car from them. It might take one or two years to understand the value of increasing the percent of promoters by one point, but at least they would be able to move ahead with their CX program based on actual data.
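The tracking step could be sketched like this, with entirely hypothetical (score, purchased) pairs standing in for the data a surveyor would collect over a year or two:

```python
from collections import defaultdict

# Hypothetical tracked outcomes: (survey score, actually purchased?)
responses = [(10, True), (10, True), (9, False), (8, True),
             (7, False), (6, False), (3, False), (2, True)]

totals = defaultdict(int)   # respondents per score level
bought = defaultdict(int)   # purchasers per score level
for score, purchased in responses:
    totals[score] += 1
    bought[score] += purchased

# Percent who actually leased or purchased, at each response level.
for score in sorted(totals, reverse=True):
    rate = 100 * bought[score] / totals[score]
    print(f"score {score}: {rate:.0f}% purchased")
```

With real data accumulated this way, the value of moving one percent of customers up a response level stops being a matter of faith and becomes a measurable number.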
Another Example but About Service Parts Usage, not NPS
Data - Your business is the Field Service arm of a hardware product OEM. And, unfortunately, you consume a large number of parts every month. To find out what is going wrong, you have your parts manager prepare a report of actual total usage by part number and another report breaking out the same data by type of transaction; i.e., installation, warranty, billable, and service contract. You quickly notice that one expensive part is the most-used part during warranty.
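The two reports could be built with a few lines of code. This is a hypothetical sketch with invented part numbers and transaction events:

```python
from collections import Counter, defaultdict

# One (part number, transaction type) record per consumption event -- invented data.
events = [("P-100", "warranty"), ("P-100", "warranty"), ("P-100", "billable"),
          ("P-205", "installation"), ("P-205", "contract"), ("P-330", "warranty")]

# Report 1: total usage by part number.
total_by_part = Counter(part for part, _ in events)

# Report 2: the same usage broken out by transaction type.
by_part_and_type = defaultdict(Counter)
for part, ttype in events:
    by_part_and_type[part][ttype] += 1

# The warranty breakout surfaces the expensive repeat offender.
top_warranty = max(by_part_and_type,
                   key=lambda p: by_part_and_type[p]["warranty"])
print(top_warranty)  # -> P-100
```

The breakout by transaction type is what turns a flat usage list into something you can act on, because warranty consumption points at a product problem rather than a stocking problem.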
Insight - If you are only concerned about minimizing your customer’s downtime, you would increase stock levels. But if your desired outcome is to increase company profit and CSAT levels, you would make sure that each defective part is returned for failure analysis.
Action – The failure analysts would share the FA results and the total cost of each field repair with both Engineering and Manufacturing. Most likely, the result would be either a part redesign or modification, plus a change in the manufacturing process.
Outcome - When this is done, you might find it relatively inexpensive to swap out the old design whenever you have a field engineer on-site with access to the equipment. And obviously you would pull all the old parts from stock and replace them with the new design. Your overall cost savings are your desired outcome.
Without linking your data to your desired outcomes, you are basically looking at a gratification metric. It makes you feel good, but it doesn’t get you any closer to where you need to go.
Note: Net Promoter, Net Promoter Score, and NPS are registered trademarks of Bain & Company, Inc., Fred Reichheld, and Satmetrix Systems, Inc.
- Read more about Leadership and Strategy @ www.fieldservicenews.com/leadership-and-strategy
- Read more exclusive FSN articles by Sam Klaidman @ www.fieldservicenews.com/sam-klaidman
- Find out more about Middlesex Consulting @ www.middlesexconsulting.com
- Read more articles by Sam Klaidman on Middlesex Consulting Blog @ middlesexconsulting.com/blog
- Connect with Sam Klaidman @ www.linkedin.com/samklaidman