How to Update Your QA Scorecard

In an earlier blog post, we covered how to build your first QA scorecard and how to set up a grading cadence for your QA program.

Because scorecards are a snapshot in time of your company's values, CX processes, and communication style, they need to be updated regularly to stay in sync with your ever-evolving brand and CX goals.

In this blog post, we'll cover how often you should re-evaluate your scorecard, why certain events should trigger a scorecard refresh, and an easy five-step process to kick things off.

When you should re-evaluate your QA scorecard

As the title suggests, you should carry out a routine re-evaluation of your scorecards at least every 6 months to ensure consistent customer service quality. But there are a few other key company lifecycle events that should also prompt a re-evaluation.

1. Updated brand values

QA scorecards represent your company values - when these change, your scorecards have to change, too. 

Take the communication section for example (sidenote: see our Omnichannel QA Scorecard guide for more on the 4Cs you should include in your scorecard). 

This section typically has questions that capture the essence of your brand. Some brands do this by regulating the use of certain words - like referring to customers as members - or by checking for a friendly tone of voice (think about every trip you've made to Trader Joe's).

These values change as a brand matures and evolves over time.

As such, a brand refresh or the launch of a new brand values handbook is a good reminder to evaluate your scorecard to see if it still matches up to your team’s new identity.

2. A push for increased efficiency

It could be a new C-suite hire, or a call for the company to do more with the same amount of time and resources.

Whatever the trigger, a push for increased efficiency is something most teams go through at some point.

As the backbone of your QA program, your scorecards are a good place to start if you want to increase both your grading efficiency and your agents' efficiency in handling tickets.

We cover how to do that later in this article.

3. Useful CX insights are drying up

Trustworthy CX insights are the ultimate goal of any strategic QA program. So when these insights start to dry up or are less impactful, it’s a sign that you should re-evaluate your QA scorecard and revamp it if necessary.

The "insightfulness" of a QA program isn't something you can really put a metric to (insights per 100 graded tickets, anyone?), but experienced CX managers know when their QA program is no longer delivering the insights they've come to rely on, and when it's time to switch things up.

For example, the CX leadership team at Handy realized that their QA program was no longer delivering insights that the team could use to improve their program. 

Further investigation revealed that the sheer volume of simple, easy-to-handle tickets was crowding more complex, error-prone tickets out of the grading queue, causing Handy to miss out on opportunities for improvement.

They decided to only grade tickets with a negative customer satisfaction score. As a result, they were able to pinpoint opportunities for improvement on their more complex tickets and drive team performance improvements.
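If you pull ticket data into a spreadsheet or script before assigning it to graders, that kind of filter is simple to sketch. The snippet below is only an illustration of the idea - the field names and the "negative CSAT" threshold are hypothetical assumptions, not Handy's actual setup:

```python
# A minimal sketch of filtering a grading queue down to negative-CSAT tickets.
# Field names and the threshold are hypothetical - adapt them to your helpdesk export.

tickets = [
    {"ticket_id": 101, "csat_score": 5},
    {"ticket_id": 102, "csat_score": 2},
    {"ticket_id": 103, "csat_score": 1},
]

NEGATIVE_CSAT_THRESHOLD = 2  # assume 1-2 on a 5-point scale counts as "negative"

grading_queue = [t for t in tickets if t["csat_score"] <= NEGATIVE_CSAT_THRESHOLD]
print(grading_queue)  # only tickets 102 and 103 reach graders
```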

4. Signals from QA data

We know the last part about relying on gut feel to know when the insights are drying up seems a little wishy-washy, but hey, your team knows your CX program best. 

For the more data-driven amongst us, your QA data itself can be a powerful signal that you should re-evaluate your scorecards.

Here are a few indicators you should keep your eye on:

Stagnant QA scores

Apart from a lack of actionable insights from QA, the team at Handy (our earlier example) also noticed that their agents' scores were consistently in the high 90s.

While some teams might see this as cause for celebration, Handy's team decided to dig deeper into their scorecards, and settled on grading only tickets with negative CSAT scores.

MeUndies noticed a similar trend with their QA scores. As a team that closely tracks the relationship between QA and CSAT, they realized that the stagnant QA scores also led to flat CSAT score trends, and lackluster customer interactions. They decided that a revamp was in order.

One caveat: revamping a scorecard for this reason is essentially shifting the goalposts, meaning that new data can’t be directly compared to the historical, pre-shift data.

If you’re a team that’s constantly scoring high, and wondering how else you can gain more QA insights, this might be something worth exploring. 

But if your team’s average QA score is in the mid-70s and showing no signs of improvement, shifting the goalposts to try and induce an increase in scores really doesn’t benefit the team much. 

Increase in agent appeals

Your agents are the ones hit hardest by an outdated scorecard.

An increase in agents appealing their grades is a good sign that something in the scorecard isn't in line with what your agents have been trained on and are practicing in the queue.

Graders missing their grading targets

If graders are missing their grading targets, there might be an efficiency issue with your scorecards - we cover how you can increase scorecard efficiency in the next segment.

Just be aware: if you don't have dedicated QA specialists on your team and your managers are pulling double duty as graders, that could be the actual reason behind the missed targets.



Revamping your customer service quality assurance scorecards

So you've seen the writing on the wall, and you know it's time to revamp your scorecard. Here are five steps you can use to evaluate and update it.

1. Get insights from stakeholders

The first step is to speak to all relevant stakeholders in the grading process. Depending on your team, this could include your CX managers, QA specialists, and your agents. 

Agents and QA specialists are the two roles that will have the most interaction with your scorecards, and would probably have the best insights on how to improve them.

They'll also be the most affected by a QA scorecard that isn't delivering value, so think of them as the canary in the coal mine - their happiness with the scorecard is a good indicator of how it's performing.

2. Simplify your QA scorecard

Many teams move from spreadsheets to a dedicated QA platform in order to make their QA process more efficient.

Increasing your grading efficiency can benefit your team in several ways - with less time spent on each ticket, you can grade more tickets, and uncover more insights. 

You can simplify your QA scorecard in several ways that still give you the data you need, but take less time to use.

First and foremost, try to reduce the number of questions or steps in your QA scorecard. If a question isn't delivering any insights, or is no longer an area of concern for the team, consider removing it from the scorecard.

Think about it this way: let’s say you grade 500 tickets per week and you spend 10 seconds on every question in your scorecard.

This means that every week, you're spending 5,000 seconds (that's around 83 minutes!) on each individual question. If a question isn't providing the insights you need, cut it from the scorecard.
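If you want to run the same math on your own numbers, here's a quick sketch (using the illustrative figures above; swap in your own ticket volume and per-question time):

```python
# A rough back-of-the-envelope calculator for the example above.
# The inputs (500 tickets/week, 10 seconds/question) are illustrative assumptions.

tickets_per_week = 500
seconds_per_question = 10

seconds_spent_per_question = tickets_per_week * seconds_per_question  # 5,000 seconds
minutes_spent_per_question = seconds_spent_per_question / 60          # ~83 minutes

print(f"{seconds_spent_per_question} seconds (~{minutes_spent_per_question:.0f} minutes) "
      f"spent on a single question every week")
```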

Next, reorganize your scorecard. One efficiency hack that you can employ here is to group similar questions together. 

For example: questions like “Agent greeted the customer” and “Agent asked if there were other issues they could help solve before ending interaction” could be grouped together because they thematically deal with specific procedures agents need to follow.

Alternatively, you could space them out throughout the scorecard to follow the flow of a customer interaction.

The question about greeting would be at the beginning of the scorecard and the question about solving issues before ending the interaction would be at the end.

The first method might be better if your customer interactions are relatively short and quick - but for longer interactions, you might want to try the second approach. 
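To make the two layouts concrete, here's a rough sketch of the same questions arranged both ways (the section names and the third question are hypothetical, added purely for illustration - this is not a MaestroQA scorecard format):

```python
# Two hypothetical layouts for the same questions, to make the trade-off concrete.

# Option 1: group thematically similar questions into one section.
grouped_layout = {
    "Procedures": [
        "Agent greeted the customer",
        "Agent asked if there were other issues before ending the interaction",
    ],
    "Communication": [
        "Agent used a friendly tone of voice",
    ],
}

# Option 2: order the questions to mirror the flow of the interaction.
flow_layout = [
    "Agent greeted the customer",                                            # opening
    "Agent used a friendly tone of voice",                                   # throughout
    "Agent asked if there were other issues before ending the interaction",  # closing
]
```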

Main takeaway - ask whoever will be grading what they prefer! Learn about their workflows and needs, and that will help you to craft a scorecard that works best for them. 

Integrations are another way to decrease grading time. MaestroQA’s integrations with CX platforms like Zendesk or Aircall allow you to view your scorecards and tickets side-by-side, eliminating the need to switch between screens as you evaluate a ticket.

Integrations with learning management systems (LMS) like Lessonly and Guru allow you to quickly assign a knowledge base article to an agent who needs a bit of help in a specific area.

3. Re-examine company values and policies

As we covered earlier, an update to your company values usually means another look at your QA scorecard is necessary. If you’re updating your scorecard to align with new brand values, make sure that these values are unambiguously infused into the scorecard. 

SeatGeek’s scorecard is a good example of this. The value of “Humanity” was added to their brand values, so they created a specific section in their scorecard with clear parameters for getting a perfect 5/5. 

The team at MeUndies also revamped their scorecards to showcase a very specific brand voice that they had in mind, prompted by stagnant QA scores.

While you’re evaluating your scorecard, check whether any policy changes have been made since the last update as well. This could be an update to your security/identity verification policy or the company’s refund policy.

Like brand values, these should be stated on the scorecard and communicated to your agents unambiguously to ensure their adoption.

4. Ensure that your scorecard remains accessible to agents 

Mercari had a QA program with a passing grade of 96% - meaning there were only 4 percentage points between full marks and failure. Their program could be both extremely rewarding and punitive at the same time, and was often tough to understand - imagine getting a 92% QA score and being told you had failed 🤯

They eventually revamped that program to feature a 5-point scale with an accompanying scorecard, making QA results a lot easier to understand.

As a result, agents found QA results more accessible and insightful, and started engaging more with them.

Our main point here is: make QA results easily accessible and easy to interpret. That will ensure maximum engagement with your QA program, and ensure your agents receive the insights they need to improve.

5. A/B test your scorecards

You might have heard the words “A/B test” float over from the marketing team’s desks before (when we were all working in-person, of course). While it might sound foreign to CX, it’s an approach that you can easily apply to CX quality, and to great effect. 

The main idea is: no matter what kind of change you'd like to apply to your scorecard, make sure you test it against an older version. If you grade 100 tickets in a day, try grading 50 with old scorecard A and 50 with new scorecard B, then compare the experience and results.

For example, if you're trying to decrease grading time, time how long it takes you to grade tickets with scorecard A compared to scorecard B.

If grading with your new scorecard is significantly faster, and you're still gleaning the same insights from each ticket, you have a winner!
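If you want to put rough numbers on that comparison, a sketch like the one below works (the timings are made up - in practice you'd log them as you grade):

```python
# A minimal sketch of comparing grading times between two scorecard versions.
from statistics import mean

grading_seconds_a = [210, 195, 230, 205, 220]  # tickets graded with old scorecard A
grading_seconds_b = [150, 160, 145, 170, 155]  # tickets graded with new scorecard B

avg_a, avg_b = mean(grading_seconds_a), mean(grading_seconds_b)
print(f"Scorecard A: {avg_a:.0f}s per ticket | Scorecard B: {avg_b:.0f}s per ticket")
print(f"Average time saved per ticket: {avg_a - avg_b:.0f}s")
```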

Repeat this process for every change you'd like to make - you won't see a sizable improvement every time, but this is the best way to know whether the changes you're proposing are really making their mark.



In summary - your scorecards are like a car that needs a tune-up every 6 months or whenever something big happens. Keeping them up to date ensures that they'll stay relevant and keep delivering trusted QA data that your team can rely on.

Now all that’s missing is for you to start grading and applying your CX insights back to the queue. See you back here in 6 months! 👋

