Loyalty program managers are often tasked with pulling together results decks and creating compelling charts and graphs to show off the health of their program. When done correctly, these reports can help tell the story of your work and provide insight into how to set your program up for success.
You may already know some of the most telling loyalty program metrics you should be including, but are you aware of the ones that may be leading you astray? Here are three metrics that may be buzzworthy but should be either reconsidered or contextualized before they can generate truly actionable program data.
Look-to-book ratio
Folks in the travel and hospitality industries may be familiar with the look-to-book ratio, which shows the percentage of people who “look” at a purchase compared to the total number of people who actually buy, or “book,” said purchase. This metric is often used to demonstrate marketing ROI and therefore the effectiveness of a specific product positioning with a specific audience.
For instance, let’s say Jane is looking to book her family’s annual vacation. Her goal is to book the best deal without having to talk to any salespeople. Jane may search multiple possibilities on several different sites and dates before finally deciding to book her trip. Her internet perusing increases the look-to-book ratio on every company site she visits.
The problem with using a pure look-to-book ratio? It doesn’t factor in revenue. Jane could be booking four flights, two hotel rooms, plus ancillary purchases like a rental car and concert tickets. Or she could be looking at all her options but ultimately end up booking only a single hotel suite. If you’re required to report look-to-book, by all means do so, but don’t forget to add a layer into your reporting that addresses the actual revenue generated.
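To make the revenue layer concrete, here is a minimal sketch of a look-to-book calculation that also reports revenue per look. The session data and field names are invented for illustration; they are not from any particular analytics platform.

```python
# Hypothetical browsing sessions: whether each "look" converted to a
# booking, and how much revenue it produced (all numbers are made up).
sessions = [
    {"booked": False, "revenue": 0.0},
    {"booked": True,  "revenue": 2450.0},  # e.g. flights + hotel + extras
    {"booked": False, "revenue": 0.0},
    {"booked": True,  "revenue": 180.0},   # a single room night
]

looks = len(sessions)
bookings = sum(s["booked"] for s in sessions)

look_to_book = bookings / looks
revenue_per_look = sum(s["revenue"] for s in sessions) / looks

print(f"Look-to-book: {look_to_book:.0%}")
print(f"Revenue per look: ${revenue_per_look:.2f}")
```

Note how the two booking sessions produce the same look-to-book ratio regardless of whether they generated $180 or $2,450, which is exactly why the revenue layer matters.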
The other factor to consider here is that look-to-book doesn’t look across sessions or interactions. So again, using Jane as our example, let’s say she researches pricing for her trip using the website, but ultimately books her trip by dialing the call center. The look-to-book ratio in this scenario would show a zero for the website, even though Jane eventually made a purchase on a different channel.
While look-to-book can be a helpful metric to demonstrate ROI, it’s important to take a step back and look at the whole picture to make sure you’re getting a holistic, longitudinal and ultimately accurate view of the customer.
Net Promoter Score
Another popular metric that can be problematic for program managers is the Net Promoter Score. NPS asks customers how likely they are, on a 0–10 scale, to recommend your organization; respondents are grouped into promoters (9–10), passives (7–8) and detractors (0–6), and the score is the percentage of promoters minus the percentage of detractors. Organizations will take the outcome and correlate overall customer sentiment to their performance.
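The calculation itself is simple enough to sketch in a few lines. The survey scores below are invented for illustration:

```python
# Minimal NPS computation from 0-10 survey responses (scores are invented).
scores = [10, 9, 9, 8, 7, 6, 4, 10, 3, 9]

promoters = sum(s >= 9 for s in scores)   # ratings of 9 or 10
detractors = sum(s <= 6 for s in scores)  # ratings of 0 through 6

# Passives (7-8) count toward the total but not toward either group.
nps = 100 * (promoters - detractors) / len(scores)
print(f"NPS: {nps:+.0f}")  # the score ranges from -100 to +100
```

Because passives dilute the denominator without moving the numerator, two organizations with very different score distributions can land on the same NPS, which is part of why the score needs context.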
Let’s apply this to Joe. Joe is an occasional customer of Acme, and he recently made a purchase for a friend’s birthday. He receives his purchase two days late, a delay that causes him to show up empty handed at his friend’s party. The next day, he receives an email with the subject line, “Two quick questions.” He opens the email, which thanks him for the purchase and asks how likely he is to recommend Acme to friends. He clicks “4” out of 10 and is taken to a webpage confirming his selection, with a short-form text box to explain his answer. While previous purchases have gone well for him, he’s still upset about his most recent transactional experience.
As with any outbound survey, there’s bound to be bias – in this case, Joe’s current experience is weighing on his overall feelings for Acme and its products. He’s experiencing a recency bias.
Furthermore, customers on either extreme – those who are very upset as well as those who are extremely pleased – are most likely to respond to a customer satisfaction survey. Consequently, these extremes tend to drown out feedback from customers who are either in the middle or neutral.
If your organization uses NPS, consider providing additional analysis or context as to why your score is moving positively or negatively. One way of collecting this analysis is through additional market research surveys. You may also want to layer in transactional data to determine if perception is translating back to customer behavior. An ongoing analysis of retention, purchasing patterns, feedback and referrals can be telling.
Any metric’s “average”
When developing reports, there is a tendency to look at the data from an overarching view, not in segments. Making business decisions based on overall health but ignoring clusters or value segments (e.g. specific age groups or types of purchase) obscures how an individual customer or customer segment actually engages with your brand.
Let’s look at Janet. During her day job, Janet is responsible for compiling a monthly report for senior leadership on the health of her loyalty program. Each month, she pulls data on total travel and gift card redemption by member. She breaks that down by age and sex. Overall, the numbers are consistent with the previous month.
In the example above, Janet is looking at redeemers in isolation, but ignores logical audience segments. She will inevitably miss the fact that Millennial customers increased travel redemptions by 13 percent from the previous month, and that Boomer customers were redeeming gift cards 10 percent less than the previous month. Taking aggregate data and not drilling down deeper may cause you to miss this level of detail. Challenge yourself to drill down to see what kind of stories and insights you can pull from the data, and avoid reporting on blanket figures.
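Janet’s situation can be sketched with a few lines of code. The redemption counts below are invented to match the percentages in her story, and they show how a flat overall number can hide offsetting moves in individual segments:

```python
# Hypothetical redemption counts by segment for last month vs. this month
# (numbers are made up to illustrate offsetting segment-level shifts).
redemptions = {
    "Millennial travel": (1000, 1130),  # up 13% month over month
    "Boomer gift card":  (1000, 900),   # down 10% month over month
    "Other":             (1000, 970),
}

last_total = sum(last for last, _ in redemptions.values())
this_total = sum(this for _, this in redemptions.values())

# The aggregate looks flat...
print(f"Overall change: {(this_total - last_total) / last_total:+.1%}")

# ...but the segments tell very different stories.
for segment, (last, this) in redemptions.items():
    print(f"{segment}: {(this - last) / last:+.1%}")
```

Both totals come to 3,000, so the aggregate view reports no change at all, even though two segments moved sharply in opposite directions.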
As you pull together your next yearly, monthly, weekly, or even daily report, take a step back and ask yourself if the data you’re presenting is really providing sufficient feedback for your leadership team. If it’s not, consider adjusting your strategy.
Do you like reading about loyalty? Be sure to subscribe to our blog and never miss a Good Points blog post.