When you Google “improve CSAT scores”, 194,000 results pop up. That’s a lot of listicles and one-off tips to sort through 🤪
But a lot of this content doesn’t get to the root of understanding what drives CSAT—that’s probably why you’re here with us today.
The real key to improving CSAT scores—and providing an amazing customer experience—is understanding what we call your Experience Blindspot.
Your Experience Blindspot is everything happening in your CX that doesn’t get captured by the traditional support metrics teams use to measure success, like CSAT.
That’s because metrics like CSAT were designed to capture how the customer feels about their overall experience, not individual agent performance. Customer satisfaction is critical to track and measure, but to improve the overall customer experience, you have to dig into individual agent interactions and compile that data into team-wide insights.
This analysis leads to improvements in training and coaching, insights into how your internal policies impact customers, and more—which ultimately boosts your CSAT.
There’s no quick fix for the above. It involves thinking critically about the type of experience you want customers to have, aligning your team around the nitty gritty of what that looks like, and then setting up a system to ensure everyone’s aligned.
But we’re confident that any customer support team can put in the work and improve the customer experience ✨
Below, we break down the TL;DR above in further detail. You’ll learn how the Experience Blindspot came to be, what it is, our thoughts on the most effective way to measure agent performance, and how all of this can improve your customer experience and increase customer loyalty.
First things first: let’s define the Experience Blindspot.
Your Experience Blindspot is everything happening in the customer experience that doesn’t get captured by traditional support metrics (such as CSAT, NPS, Average Handle Time (AHT), or First Call Resolution (FCR)).
It seems really simple, but there are wide-reaching implications for not having a good grasp on your Experience Blindspot. So let’s take a step back (and do a bit of a history lesson!).
It’s no secret that the customer support industry has always been dominated by metrics. When call centers came onto the scene, most support teams were viewed as a necessary cost of doing business (rather than a critical piece of the brand experience).
Because of this, leadership optimized for running call centers as efficiently as possible, which is where tons of common productivity metrics come from (things like AHT, FCR, solves per hour, etc).
While measuring productivity has its perks, an emphasis exclusively on productivity results in speedy but sloppy interactions.
So—in tandem with measuring productivity—teams also measured customer satisfaction.
The most common customer satisfaction metrics that teams used (and still use today!) are CSAT (aka Customer SATisfaction) and NPS (Net Promoter Score). These two metrics measure how happy customers are with their overall experience and whether or not they’d recommend your company to others.
The main disconnect:
CSAT and NPS were designed to provide an overall measure of how someone feels about their experience and/or your company—but they started to be used to measure individual performance. Because of that disconnect, these metrics can’t tell managers how or where to start when they want to level up their team’s skills (and the overall customer experience): they offer no visibility into quality beyond the number itself.
Think about CSAT like a basketball game: your team could be up by 30 points and win, even though you missed every three-point shot you took. If your coach only cares about the overall win, they miss the opportunity to have you practice three-pointers next week, so you can make those shots next time, earn your team extra points, and increase your likelihood of winning overall.
This brings up a big question: how should teams measure agent performance if metrics like CSAT aren’t telling you everything you need to know?
The first step: bring it back down to the individual level.
In order to understand individual agent performance, you’ll have to dig into - you guessed it - individual agent interactions. That’s where the QA process comes in 😉
By taking a sample of actual customer interactions and grading them against benchmarks for your brand and company, you’ll be able to understand individual performance on a granular level.
Generating a QA score is simple: all you need to do is establish some guidelines for what a high quality interaction looks like (usually in the form of a quality assurance scorecard), then start reviewing tickets using your scorecard. Once you grade one ticket, you’ve got your first QA score!
When this grading process is done at scale, you’ll end up with both individual and aggregate team-level data that can both pinpoint areas for improvement and give insight into your Experience Blindspot.
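To make the math concrete, here’s a minimal sketch of how a QA score could be computed and rolled up to the team level. The criterion names and weights are hypothetical examples, not a prescribed scorecard—your own scorecard defines what a high-quality interaction means for your brand.

```python
# Hypothetical scorecard: each criterion gets a weight (weights sum to 1).
SCORECARD = {
    "tone": 0.3,        # friendly, on-brand voice
    "accuracy": 0.4,    # correct answer, policy followed
    "resolution": 0.3,  # the customer's issue was actually resolved
}

def qa_score(grades: dict) -> float:
    """Weighted QA score for one graded ticket; each grade is 0-100."""
    return round(sum(SCORECARD[c] * grades[c] for c in SCORECARD), 1)

def team_average(graded_tickets: list) -> float:
    """Aggregate individual ticket scores into a team-level number."""
    scores = [qa_score(t) for t in graded_tickets]
    return round(sum(scores) / len(scores), 1)

# Grade one ticket -> your first QA score
ticket = {"tone": 90, "accuracy": 80, "resolution": 100}
print(qa_score(ticket))  # 89.0

# Grade a sample of tickets -> team-level insight
sample = [ticket, {"tone": 70, "accuracy": 100, "resolution": 60}]
print(team_average(sample))  # 84.0
```

At scale, the per-criterion breakdown (not just the total) is what pinpoints where to coach—e.g. a team averaging high on tone but low on resolution has a very different training need than the reverse.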
We’ve seen time and time again that teams who prioritize understanding their Experience Blindspot—and use QA scores to analyze individual agent performance—end up improving critical team-wide metrics like CSAT in the process.
Take it from MeUndies, who improved their CSAT to 99% using QA insights. Team leads now have the data to give agents actionable and specific feedback, identify content gaps in their knowledge base, and provide the CX team with product feedback that they can relay internally.
Similarly, the team at monday.com reduced AHT by 30% by implementing a robust QA process. Not only did this process let them scale grading volume up by 48% (more data to pull from!), but the increased grading volume surfaced insights into the root cause of their high AHT.
Even though AHT and CSAT are very different metrics, these two companies’ experiences share a common thread: consistently assessing agent performance over a wide sample of tickets creates data that CX leadership can act on.
And that’s where the magic happens ✨