CSAT Scores represent how satisfied a customer is with your business as a whole.
According to Qualtrics, CSAT is measured through direct customer feedback—usually through variations of this question:
“How would you rate your overall satisfaction with the [goods/service] you received today?”
Respondents select a response on a scale of 1 (very dissatisfied) to 5 (very satisfied).
This question usually takes the form of a survey, often via a popup form, email, or SMS.
To calculate your team’s CSAT, take the average of all survey responses. This is usually expressed as a score out of 5, or as a percentage.
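As a concrete sketch, that calculation might look like this in Python (the survey responses below are hypothetical, purely for illustration):

```python
def csat(responses):
    """Average CSAT across a list of 1-5 survey responses."""
    return sum(responses) / len(responses)

# Hypothetical batch of survey results on the 1-5 scale
responses = [5, 4, 5, 3, 4, 5, 2, 5]

score = csat(responses)       # score out of 5
percentage = score / 5 * 100  # the same score expressed as a percentage

print(f"CSAT: {score:.2f} / 5 ({percentage:.1f}%)")
```

Whether you report the raw average or the percentage is a presentation choice; both come from the same arithmetic.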
Because CSAT is so easy to calculate, and the resulting data is so easy to interpret, it has become one of the most widely known and used metrics in the customer support universe.
When to use CSAT:
CSAT is best used for department-wide or company-wide changes, like:
Updating billing policies
Updating appeasement policies
Offering a new service channel (e.g., phone support)
Approximating how well the team is doing with word-of-mouth or repeat purchases
One thing to keep in mind: because CSAT scores represent how satisfied a customer is with your business as a whole, they do not exclusively reflect service quality.
If you are an e-commerce company, they can reflect frustrations with shipping, packaging, or return policies. If you are a software company, they can reflect frustrations with a feature or a bug. In other words, CSAT scores reflect how satisfied a customer is with any element of the brand's customer experience.
The other issue with CSAT: it can fall victim to response bias.
Consider the last time you filled out a CSAT survey.
Chances are, you were either extremely thrilled 😃 or pretty disappointed 😢. You took the time to make your feedback known because the experience was either amazing or, unfortunately, horrible.
But most people don’t have a polarizing experience; they have a relatively normal interaction. Those folks are far less likely to fill in CSAT surveys.
This phenomenon is known as response bias, and it leads to results that skew high or low, generating data that CX leaders cannot rely on to make calls on matters like staffing, training, or quality.
This one-sided nature makes CSAT insufficient for CX leaders who want a holistic understanding of how their CX program, and the many people involved in bringing a product to the masses, are performing.
We’ve come to call this disconnect between traditional support metrics and service quality the Experience Blindspot.
The Experience Blindspot is why MaestroQA exists. Our founders realized (after interviewing hundreds of CX leaders 🤯) that teams relied on traditional metrics like CSAT to make business decisions that directly impacted the customer’s experience.
These metrics were either one-sided (like CSAT), or efficiency-based (like Average Handle Time). On top of that, these metrics didn’t provide any insight into what was actually happening in interactions (and areas where CX leadership could work to make improvements).
Here’s a scenario that they came across time and again:
A customer leaves a bad CSAT review after a lengthy call with an agent. The agent followed the company’s appeasement policy to a T.
To a reviewing manager, that low CSAT score and long AHT could suggest the agent needs more training on tone of voice to improve CSAT, or macros to boost their efficiency.
But a proper QA audit of that ticket would have revealed the truth: the agent had followed procedure perfectly (they didn’t need any additional training!). The real issue: the company’s appeasement policies weren’t leading to the intended outcome—customer satisfaction.
Long story short: when CX leaders use metrics like CSAT and AHT alone to measure performance, they aren’t getting information that helps them understand the root causes of a poor customer experience.
This Experience Blindspot can lead to a wide range of issues, including agents being unfairly penalized for issues out of their control (which can contribute to low morale/turnover) as well as tons of missed opportunities for brands to level up their customer experience (and customer loyalty).
QA scores (and the omnipresent, omnichannel quality assurance scorecards!) are the best way to measure the true quality of an agent’s work. A QA score is the output of reviewing and grading a customer interaction against a scorecard. This review process ensures that agent interactions align with brand standards, internal procedures and rules, and more.
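Mechanically, a QA score reduces to simple arithmetic over a scorecard. Here’s a minimal sketch; the criteria, weights, and names are all invented for illustration, since real scorecards are tailored to each team:

```python
# Hypothetical scorecard: criterion -> maximum points available
SCORECARD = {
    "followed_procedure": 40,
    "tone_and_empathy": 30,
    "accuracy": 30,
}

def qa_score(graded):
    """Compute a QA score (as a percent) from points awarded per criterion."""
    earned = sum(graded[criterion] for criterion in SCORECARD)
    total = sum(SCORECARD.values())
    return earned / total * 100

# A reviewer grades one interaction against the scorecard
review = {"followed_procedure": 40, "tone_and_empathy": 20, "accuracy": 30}
print(f"QA score: {qa_score(review):.0f}%")
```

The weights encode what the team values most; here, following procedure carries the largest share, but that split is an assumption, not a standard.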
In the example above, a QA program would have ensured that the agent was not unfairly penalized or put through unnecessary training.
But that’s just one interaction.
QA’ing thousands of interactions takes things a step further. With a large pool of customer interaction data, CX leaders can identify trends and pinpoint precise improvements to make, both at the individual and team level.
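At its simplest, that trend-spotting is aggregation over many graded interactions. A sketch with invented agents and scores:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-ticket QA results: (agent, qa_score_percent)
reviews = [
    ("alice", 95), ("alice", 88), ("bob", 72),
    ("bob", 68), ("carol", 90), ("carol", 85),
]

# Group scores by agent to surface individual coaching opportunities
by_agent = defaultdict(list)
for agent, score in reviews:
    by_agent[agent].append(score)

for agent, scores in sorted(by_agent.items()):
    print(f"{agent}: {mean(scores):.1f}")

# The team-level average shows the overall trend over time
team_avg = mean(score for _, score in reviews)
print(f"team average: {team_avg:.1f}")
```

The same grouping idea extends to teams, channels, or ticket categories once the pool of reviewed interactions is large enough.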
For companies that have gone beyond measuring “customer support” to caring about the holistic “customer experience”, QA scores are vital.
When to use QA scores:
There are certain situations when you should use QA scores instead of CSAT:
When reviewing agent performance on tickets with low CSAT
Identifying low performers and high performers based on quality of work
Identifying how customer request volume impacts team and individual QA scores to guide staffing and hiring forecasts
Identifying the right balance between productivity and quality (e.g., you don't want to optimize for the shortest Average Handle Time so that agents rush customers off the phone, but you also don't want agents taking too long to resolve the customer's issue)
So which is better: CSAT or QA scores?
In short: CSAT and QA scores measure very different things, and both are important to a great customer experience.
CSAT scores are the best measure of overall customer experience, while QA scores are best at surfacing insights from customer interactions and helping CX leaders eliminate their Experience Blindspots.
We’ve seen time and time again that teams that build out QA programs, and really pay attention to what’s at the root of their QA scores, often end up increasing CSAT in the process. So while it’s easy to view these two metrics as separate, they’re often linked.
Greenhouse, the leading B2B Recruiting SaaS platform, is a great example of this.
Faced with the challenge of providing highly technical support over chat to non-technical users, Greenhouse leveraged MaestroQA to train agents to over-communicate on tough, technical tickets.
Paradoxically, training agents to ask more questions led to lower AHT, as Jess Bertubin, CS Ops Lead, explains:
“If the real issue is jumbled and troubleshooting steps don’t work, agents have to go back into discovery mode to uncover what’s happening, and chat times will be a lot longer,” Bertubin said. “Then you have longer resolution time, lower First Call Resolution rates, and lower CSAT. Worst case, you lose customer trust and maybe lose their business.”
Through the use of QA, Greenhouse has eliminated their Experience Blindspot, empowered their agents to deliver great customer experiences, and experienced a 10% increase in CSAT.
Want to take a deeper dive into the data? We analyzed over 265,000 customer support tickets to see whether CSAT scores correlated with QA scores. The short answer? No. CSAT doesn't tell the whole story about your support team's performance.
The long answer? Download your copy of our eCommerce industry report to find out.
All in all: both CSAT and QA scores are important metrics that help support leaders understand the quality of their customer experience. If increasing CSAT is a major goal of yours, implementing a QA program is a great place to start.