Your Most Important CX Metric Is Your QA Score - Here's Why

Think of CX metrics as helpful navigation tools that guide you to Great Customer Experience Land. They show you shortcuts to get to your destination, as well as lanes that might lead to Upset Customer Alley.

Most traditional CX metrics, though, are like out-of-date maps: they give you incomplete information, they can be unreliable, and, in most cases, they don't take you any closer to your goal of providing better customer experiences.

Yes, we’re looking at you, Net Promoter Score (NPS) and customer satisfaction scores (CSAT).

On the other hand, QA scores are like the modern GPS systems we use today: they're trustworthy, they're actionable, and they help you inch closer toward the Grove of Memorable Customer Experiences.

Your QA score is the only CX metric that gives you a clear view of the quality of customer support you're providing. Improving your QA scores means improving agent performance, and better agent performance leads to customer experiences that exceed expectations.


What Is a QA Score and How Do You Track It?

QA score, or quality assurance score, measures how well your customer support agents live up to your business’s definition of quality customer service. You track your agents’ QA scores through a process called customer service quality assurance: quality assurance managers listen to or read customer service conversations and use a QA scorecard to check whether agents meet your standards of quality customer support.

A QA scorecard includes questions like:

  • Did the agent use a friendly tone?
  • How well did they explain the solution to the customer?
  • Did they use the right processes to solve the customer's problem?

QA managers assign points to agents for each question. An agent's QA score is the percentage of total available points they earn on their scorecard. So, if an agent scores 50 out of 100 total points, their QA score is 50%.

Typically, QA scores fall between 75% and 90%, based on four random conversation reviews per week.
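
To make the math concrete, here’s a minimal Python sketch of the calculation described above. The function name and data shapes are illustrative, not any particular tool’s API.

```python
# Minimal sketch of the QA score calculation: the score is the
# percentage of available scorecard points an agent earns.
# The function name and data shape are illustrative, not a tool's API.

def qa_score(earned_points, available_points):
    """Return a QA score as a percentage of available points."""
    total_available = sum(available_points)
    if total_available == 0:
        raise ValueError("Scorecard has no available points")
    return 100 * sum(earned_points) / total_available

# Example: an agent earns 50 out of 100 total points -> 50% QA score.
earned = [40, 0, 10]       # points earned per scorecard question
available = [50, 30, 20]   # maximum points per scorecard question
print(f"QA score: {qa_score(earned, available):.0f}%")  # QA score: 50%
```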

What are Customer Experience Metrics?

Customer experience metrics, or CX metrics, are a set of data points that measure and track the overall satisfaction of customers with a product or service. These metrics are essential in understanding the voice of the customer and providing insights into areas where improvements can be made. By regularly tracking these metrics, businesses can identify trends and areas for improvement to enhance their overall customer experience.

Here are the most common CX metrics:

  • Net promoter score (NPS®) - NPS® is a metric used to measure customer loyalty and satisfaction. It is determined by asking customers to rate, on a scale of 0 to 10, how likely they are to recommend a company, product, or service to others. The score is calculated by subtracting the percentage of detractors (customers who give a score of 0-6) from the percentage of promoters (customers who give a score of 9-10), so it can range from -100 to 100 (see the sketch after this list).
  • Customer satisfaction (CSAT) - CSAT is a metric used to measure how satisfied customers are with a particular product, service, or experience. It is usually measured through surveys and can help businesses identify areas for improvement and track progress over time.
  • Customer effort score (CES) - CES is a metric used to measure the ease of a customer's experience while interacting with a company. It is often used to identify areas where customer service processes can be simplified or improved to reduce the effort required from the customer.
  • Customer lifetime value (CLV) - CLV is a metric that measures the total revenue a business can expect to generate from a single customer over the course of their relationship. It takes into account the frequency of purchases, the customer's average order value, and the length of the customer's relationship with the business. CLV is a key indicator of a business's long-term success and can help guide marketing and customer retention efforts.
  • Customer churn rate - Customer churn rate refers to the percentage of customers who stop doing business with a company over a given period of time. It is an important metric for businesses to track as it can indicate the overall health of the customer base and provide insight into areas that may need improvement.
  • Average response time - Average response time refers to the average amount of time taken by a business to respond to customer inquiries or requests. It is a key metric for evaluating customer service efficiency and can be measured across various channels such as email, social media, phone, or live chat.
  • Average resolution time (ART) - ART is a customer service metric that measures the average time taken to resolve a customer issue or request. It is an important indicator of a company's customer service efficiency and can help identify areas for improvement in the customer support process.
  • First response time (FRT) - First response time refers to the duration taken by a customer support agent to provide an initial response to a customer query or issue. This metric is important for assessing the efficiency of customer support operations and for improving overall customer experience.
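
As a quick illustration of the NPS math referenced above, here’s a minimal Python sketch; the survey responses are hypothetical.

```python
# Minimal sketch of the NPS calculation: promoters rate 9-10,
# detractors rate 0-6, and NPS is the percentage of promoters minus
# the percentage of detractors. The sample responses are made up.

def net_promoter_score(ratings):
    """Compute NPS from a list of 0-10 likelihood-to-recommend ratings."""
    if not ratings:
        raise ValueError("No survey responses")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives (7-8), and 2 detractors out of 10.
responses = [10, 9, 9, 10, 9, 8, 7, 8, 5, 3]
print(f"NPS: {net_promoter_score(responses):.0f}")  # NPS: 30
```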


QA Scores Are More Actionable Than CX Metrics Like NPS and CSAT  

QA scores are more actionable CX metrics than CSAT and NPS because you can immediately see how to improve QA scores by looking at an agent's scorecard. Let’s say an agent scores 90 out of 100 points. A quick peek at their scorecard will tell you why they lost those 10 points — whether it's poor grammar or a lack of process knowledge. And now you know how to get the agent to a perfect 100% score.

CSAT and NPS scores don’t offer enough clues about how you might improve them.

For instance, an 80% CSAT score tells you 80% of your customers were satisfied with their experience with your brand, but what about the other 20%? Unless most of the dissatisfied customers left comments about why they were unhappy (you’ll be lucky if they do), you don’t really know what soured their experience: was it customer service, an issue with your cancellation policy, or a product glitch? You’ll have to follow up with qualitative research to find out.

NPS has a similar story. Any score above 0 means you have more promoters than detractors, but you don’t know why some customers won't recommend your brand to friends and family unless you follow up with qualitative surveys.

QA scores are also more trustworthy than either CSAT or NPS scores. As long as you define your criteria for quality support clearly, you can be sure an agent’s QA scores paint an accurate picture of their quality adherence and performance. The reliability of CSAT and NPS data, by contrast, depends heavily on when you send the survey, how you word it, and how many people you send it to. Even with measures to reduce bias in your data, these surveys are still subject to common customer biases, such as social desirability bias (where customers choose the more socially acceptable option) and response bias (the tendency to pick extreme or neutral scores).


How to Start Tracking QA Scores in 3 Simple Steps

There are three main moving parts to tracking QA scores: a team to conduct quality assurance reviews, a scorecard to evaluate agents, and a tool to make your QA process hassle-free.

Here’s how to get each of these in place:

Set Up a QA Team

To track and measure QA scores at a consistent clip, you’ll need to set up a dedicated team to conduct support conversation reviews.

If you have a large customer support team, you might need a QA specialist or two to grade conversations and offer feedback to agents. QA specialists are full-time employees who only focus on QA reviews.

On the other hand, if you’re a small business with only a few support agents, your QA team can consist of support managers and senior agents. Make sure your agents and managers allot a specific time each week for QA so it doesn’t get ignored in favor of other support activities.

Create a QA Scorecard

Your QA scorecard helps you evaluate agent performance based on customer support criteria that matter to your business.

To create a QA scorecard, you’ll need to:

  • Pick your most important customer support criteria: Common quality support criteria include good grammar and tone, compliance with customer service processes, and an agent’s overall effectiveness in solving a customer’s problem. To pick the criteria that matter most for your business, think about your brand values and mission statement. If possible, take suggestions from team members across all departments, as different teams may have different insights into what customers want.
  • Create a list of questions: The questions in your scorecard help graders check if agents meet your quality requirements. Sample questions could be, “Did the agent greet the customer in a friendly manner?” or “Did the agent solve the customer’s problem?”
  • Choose a grading scale: Graders assign points to agents for each question based on a grading scale. Grading scales come in different shapes and forms. For instance, a simple Yes/No grading scale means graders assign one point if the agent completes a given action and zero points if they don’t. On a linear 10-point scale, QA managers rate an agent’s performance from 1 to 10 on different skills, like tone, grammar, and empathy (see the sketch below this list).
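
To show how a grading scale turns into a QA score, here’s a minimal sketch of a binary Yes/No scorecard; the questions and point values are hypothetical examples, not a template you must follow.

```python
# Illustrative sketch of a binary (Yes/No) scorecard with weighted
# questions. Questions and point values are hypothetical examples.

scorecard = [
    {"question": "Did the agent greet the customer in a friendly manner?", "points": 20},
    {"question": "Did the agent follow the correct support process?", "points": 30},
    {"question": "Did the agent solve the customer's problem?", "points": 50},
]

def grade_conversation(answers, scorecard):
    """answers: one True (Yes) or False (No) per scorecard question."""
    earned = sum(item["points"] for item, yes in zip(scorecard, answers) if yes)
    available = sum(item["points"] for item in scorecard)
    return 100 * earned / available

# A grader marks "Yes" on the first two questions and "No" on the last.
print(f"{grade_conversation([True, True, False], scorecard):.0f}%")  # 50%
```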

If you’d like to dig deeper into QA scorecard best practices and examples, here’s a complete guide to creating CX QA scorecards.

Choose the Right Tool for QA

While you can use Google Sheets or Excel spreadsheets to manage your QA process, remember that these tools don’t scale. When graders have a large number of conversations to review each week, juggling multiple spreadsheets makes it cumbersome to log scores and feedback and to track progress.

Pick a dedicated tool for QA that allows graders to randomly select conversations for review, log scores and feedback, and track an agent’s performance over a period of time with detailed reports. Modern tools such as MaestroQA also allow you to take a peek at the processes agents use to solve tickets, using the Screen Capture feature, so you can coach them better and also fix broken customer support processes.

Choosing the right tool for QA early on will make it easy to scale your QA process as your customer support team grows.


How to Consistently Improve Your QA Scores: Tips and Best Practices

Improving your QA scores comes down to using QA insights to improve your team’s performance and making your QA process more actionable for agents.

Let’s explore a few of these below.

Coach Agents Using QA Insights

The best way to improve your QA scores is to use insights from quality assurance to coach agents and improve their skills.

QA scorecards are your best source of intel for coaching agents effectively. An agent’s QA scorecard gives you a detailed breakdown of the areas they excel in and the areas where they might need help.

You can also use options like Screen Capture to drill deeper into the actions agents take behind the scenes to solve customer queries. This helps you see if agents are struggling with processes, like using your customer service software or finding answers in your knowledge base.

Now that you know the potential areas for improvement for an agent, focus on those areas during one-on-one coaching sessions.

To make QA-supported coaching even more helpful, Joshua Jenkins, customer success manager at Plangrid, recommends “fearless communication” and “diligent documentation.” Give feedback openly and document each piece of feedback for an agent alongside their QA scores, so you can see how they’ve improved over time.

For quick access during coaching sessions, organize an agent’s QA scores, CSAT scores, and historical feedback in one place. MaestroQA provides this data by default in the Coaching tab.

Get Agent Buy-in on Quality Assurance Programs

If your customer support agents think of quality assurance as a way of policing them, there’s just a slim chance they’ll try to improve their QA scores. But if they’re happy to undergo quality assurance reviews, agents will be more likely to strive for better scores. That’s why getting agent buy-in for quality assurance matters.

To get agent buy-in, involve them in creating your QA scorecard. This means allowing agents to contribute ideas about which standards of quality support your scorecard should track and what your grading scale should look like. It also helps to clarify grading criteria and explain how an agent's final QA score is calculated. Transparency early on will help build agent trust in the QA process and QA scores.

Second, emphasize qualitative feedback as much as QA scores, so agents know the QA process is about improving skills and not just achieving a number. Small tweaks to your grading scale can help your scorecard stay focused on qualitative feedback. For instance, Etsy ditched their five-point grading scale in favor of a binary “meets expectations/doesn't meet expectations" grade. As a result, their agents and managers were able to have more meaningful conversations around improving performance, and agents reported being happier with the QA program.

Finally, let agents see a breakdown of their QA scores and allow them to appeal if they want to. This is another way of making your QA process transparent and trustworthy for agents.

Eliminate Inconsistencies in Grading

Inconsistent grading means your QA scores are not as reliable as you want them to be. If the scores are inaccurate, any effort to improve them will be unhelpful, too.

There are a few reasons why you might find inconsistencies in your grading process. Your graders may be unsure what a particular grading criterion means. For instance, if they’re not familiar with your brand, they may not be able to accurately judge if an agent used a “brand-friendly tone.” Graders may also be biased, favoring one agent over another when grading conversations. Also, different graders might picture a “helpful interaction” or an “empathetic response” differently.

To keep your QA process free from subjectivity, explain any grading criterion that leaves too much room for interpretation (or misinterpretation). Think tone, voice, and effectiveness. For instance, Intercom uses the acronym PREACH (proud, responsible, empathetic, articulate, concise, and human) to help graders correctly evaluate the tone of their agents.

To eliminate grader bias and other inconsistencies, ask three to four senior QA analysts or support managers to grade a selection of tickets and then discuss any differences in grading. This will help you arrive at more consistent grading criteria.

You can also “grade the grader” on a regular basis to find graders who are missing the mark and fix inconsistencies. Ask a senior grader, or benchmark grader, to grade a selection of tickets, then compare each individual grader’s scores with the benchmark grader’s on the same tickets. MaestroQA’s Grader QA tool helps you automate this process by randomly selecting a sample of tickets for the benchmark grader, comparing individual graders’ tickets with the benchmark, and assigning an “alignment score” so you can see how misaligned different graders’ scores might be.
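
As a rough illustration of the idea (not MaestroQA’s actual formula), an alignment score could be computed along these lines; the ticket scores below are made-up examples.

```python
# Rough illustration of "grading the grader": compare each grader's
# scores against a benchmark grader on the same tickets. This is one
# simple way to express alignment as a percentage, not MaestroQA's
# actual alignment score formula.

def alignment_score(grader_scores, benchmark_scores):
    """Return 100 minus the average absolute gap (in QA-score points)
    between a grader and the benchmark grader on the same tickets."""
    if not grader_scores or len(grader_scores) != len(benchmark_scores):
        raise ValueError("Need matching, non-empty score lists")
    gaps = [abs(g - b) for g, b in zip(grader_scores, benchmark_scores)]
    return 100 - sum(gaps) / len(gaps)

benchmark = [90, 75, 80, 95]  # benchmark grader's QA scores per ticket
grader_a = [88, 70, 85, 95]   # an individual grader's scores on the same tickets
print(f"Alignment: {alignment_score(grader_a, benchmark):.0f}%")  # Alignment: 97%
```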

Share QA Insights Teamwide

When QA insights are shared teamwide, your team can learn new ways to improve their skills, as well as QA scores.

Consider sharing QA insights, such as average QA scores, trends in QA, top tips from QA, and instances of great customer service, with your team. Compile them into a PDF, video, or newsletter and share it teamwide on a monthly basis.

Mailchimp's monthly Quality newsletter is a good example of how to share QA insights with your team. The Quality newsletter, delivered to the entire CX department, shares opportunities for improvement, a QA Tip of the Month, and top agents of the month. The Tip of the Month, in particular, has helped Mailchimp see a huge improvement in QA scores.

Update Your QA Scorecard

Your scorecard decides which skills your agents are graded on. At some point, your agents will become experts in those skills, so it makes sense to update your scorecard if you see QA scores stagnating.

The first step in updating your QA scorecard is to consult stakeholders, like CX leaders, agents, and quality assurance managers, about potential changes in your scorecard. They're in the best position to make useful recommendations on scorecard updates.

Next, revisit your brand values and policies and check to see if there's anything your scorecard doesn't capture well. For instance, when MeUndies noticed their scorecard didn't clearly reflect their brand voice (“a very California way of communicating”), they added specific characteristics like charming, confident, and curious to their scorecard to help graders check for brand voice.

Updating your QA scorecard can be a good way to move the needle on stagnant QA scores, but it should be the last thing you try after you’ve done everything else on this list. Updating a scorecard usually helps if your scores are already near-perfect (above 90%) and your agents need new skills and criteria to work toward. If your QA scores are average (around 70%) and your agents are constantly missing the mark in important areas, updating a scorecard may not be the best solution.

Start Tracking Your QA Scores with a Quality Assurance Program

If you're looking for one CX metric to help you track agent performance and customer support effectiveness, as well as improve customer experiences, QA scores are your best bet.

The best way to track QA scores consistently is to create a customer service quality assurance program. Creating a quality assurance program involves defining a customer service vision, creating a QA scorecard, setting up tools and processes for QA, and making QA insights actionable for agents. Here's a complete guide to help you create a QA program and track your QA scores.

If you're looking for an easy way to run a quality assurance program and track and improve your QA scores, sign up for a demo of MaestroQA today.
