Are you new to building scorecards or just want a refresher? This guide will take you through everything you need to know about building your first one (or updating your thirtieth!) and how they tie into your quality assurance process & program.
A scorecard is a rubric against which a QA analyst, team lead, or manager grades an agent’s interactions with customers.
This ensures that support agents are interacting in a way that’s in line with company policies, brand voice, regulatory requirements, and more.
Consumers expect more from support teams than ever before, and support has become critical to brand loyalty.
We see this in consumer behavior: they tell others about negative customer service experiences, they switch to competitors when they’ve had a bad experience, and often, they don’t tell the brand about the bad experience—leaving CX leadership in the dark.
The inverse is also true: amazing businesses tend to have amazing support teams. But how do you define “amazing”? And how do you know if your team’s amazing when only 1 in 26 unhappy customers complains?
It’s tough, and even though the industry is obsessed with metrics, the way brands measure support is broken.
When support teams first came onto the scene, they were often viewed as a cost center. Seeking to make the most out of their spend, leadership placed an emphasis on efficiency and speed. This is how several metrics—like Average Handle Time (AHT), First Call Resolution (FCR), or solves/hour—came to be. These statistics prioritized efficient handling of support tickets and quick customer interactions. While quickly handling tickets isn’t necessarily a bad thing, it can be if it lessens the quality of an interaction.
To fill the gap left by productivity metrics, teams began to use satisfaction metrics to understand agent performance. These include measures like Customer Satisfaction (CSAT), Net Promoter Score (NPS), or Customer Effort Score (CES). While understanding customer satisfaction is critical, these metrics miss a lot.
Let’s take CSAT for example.
CSAT can tell you how a customer feels in a given moment, but that feeling could be the result of anything relating to your business. If an agent handled a situation perfectly but the customer was upset with your billing policy, they might give a low score. Not only does the low score reflect poorly on the agent—even though the billing policy is out of their control—but the CSAT score itself doesn’t give any direction on how to make improvements to the customer experience. In a similar vein, NPS can tell you how a customer feels about your company as a whole, but isn’t actionable for support. These metrics also fall prey to selection bias—people will only take the time to leave a good (or bad) score if their experience was exemplary (or horrible!).
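To make the selection-bias point concrete, here’s a minimal sketch of how CSAT is typically computed, assuming the common convention of a 1–5 survey scale where scores of 4 and 5 count as “satisfied.” With response rates this low, a handful of self-selected respondents can swing the number dramatically:

```python
# Illustrative CSAT calculation (hypothetical data).
# Convention assumed: 1-5 scale, "satisfied" = score of 4 or 5.

def csat(scores):
    """Percentage of respondents who scored 4 or 5."""
    satisfied = sum(1 for s in scores if s >= 4)
    return 100 * satisfied / len(scores)

# Only the customers with strong feelings bothered to respond:
responses = [5, 5, 1, 5, 1]
print(f"CSAT: {csat(responses):.1f}%")
```

Five responses out of hundreds of tickets produce a hard-looking number, but it says nothing about the silent majority of customers, and nothing about what the agent actually did well or poorly.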
The main problem? None of these metrics give teams actionable insights into how to improve support operations or the quality of interactions.
There’s a massive Experience Blindspot between what traditional metrics tell CX leadership and what customers are actually experiencing.
This Experience Blindspot exists on three levels for all companies.
Teams are now turning to QA to get the information they need to see through their Experience Blindspot on all three levels—putting their scorecards front-and-center.
With that in mind, we put together this guide to answer all of your scorecard questions.
You’ll find guidance on building your first QA scorecard, how to know when it’s time to modify an existing scorecard, and tips and tricks from our customers who have gone through this process before.