Creating your first QA scorecard—or even updating an existing one—is a major milestone (congrats, you did it! 🏆🥇). But, once it’s finished, there’s more work to be done.
Top-performing QA programs rely on a formalized grading cadence to make sure they're constantly checking in on support interactions. This helps CX leadership understand their Experience Blindspot on three different levels.
In short: regularly grading tickets keeps a constant pulse on what's happening with your customers.
Setting up a grading cadence seems simple on the surface: you just need to decide who will grade your tickets and how often they'll grade them.
Smaller teams have it easier here: if you’re a team of one and you decide to grade 100 randomly sampled tickets per week, congratulations! You just created your first grading cadence 🎉
But if your program is more complex than that, keep reading for the nuances of how to make your perfect grading cadence 👇
Question #1 you need to answer: who grades tickets?
After building your QA scorecard, you need to decide who’s going to be grading. This decision usually boils down to a few factors: size of your agent team, human resources available, and team maturity.
In our experience, most teams hire dedicated QA specialists to run their QA programs and perform grading. This allows managers to focus on other key tasks, like hiring, coaching, and onboarding.
Dedicated QA specialists give their undivided attention to grading agents, interpreting the data produced, and providing trusted CX insights for improving CSAT, reducing churn, and maintaining a positive brand reputation. And, since they aren’t directly in the queue, QA specialists provide a fresh, third-party perspective on issues that inevitably pop up during grading.
Not every team is large enough to need a dedicated QA specialist. The alternative is to have managers, team leads, or senior agents double up on QA duties. It's a great way to train senior agents on a new skill set and provide an opportunity for career progression. This is where team maturity comes into play: you need some established veterans on the team to help grade.
That said, team leads, managers, and senior agents are busy. Between onboarding, hiring, coaching, reporting, and answering tickets, setting aside time for QA might fall off their radar. When this happens, agents are the ones who ultimately miss out on coaching and training opportunities.
Question #2: how often (and how much) should we grade?
Now you know who’s going to be doing the grading—but how much work do they need to do?
There isn’t one perfect answer to this question, and every company does it differently. So, we asked our customers what their grading cadences looked like.
Some quick stats: roughly 60% of MaestroQA customers grade between 1% and 5% of all interactions. The remaining 40% grade more than 5% of all interactions (but these tend to be smaller companies with a dedicated QA specialist on the team).
Behind all those numbers, there are three main factors that determine how frequently your team should be grading: your CX team's resources, your queue's average ticket difficulty, and your agents' seniority.
CX team resources: if you’re relying on team leads to handle QA, you might want to lower your grading volume since they have other tasks to do. But, if you have dedicated QA specialists, you’ll want them to grade as much as possible (without sacrificing quality).
We like using a percentage of tickets to set a grading goal, because it works for any size team and scales with you as you grow (or shrink, especially if your business has seasonal fluctuations in ticket volume).
For example: let’s say your team consists of 1 grader and 10 agents. You think you want to grade 5% of tickets, and your team receives around 5,000 tickets/week.
Based on that context, you'd need to grade 250 tickets per week to hit your goal, so your grader should average around 50 tickets per day. When you add a second grader, the weekly total stays at 250, but each grader's daily load drops to 25. The percentage goal scales with you.
That said, picking a percentage doesn’t always work out, no matter how much research you do. We encourage an iterative approach to grading—try a goal for two weeks and see if your team can hit it. If they can handle more tickets without compromising grading quality, increase the goal 📈 But if it proves to be too much, consider setting the bar a bit lower (maybe down to 3 or 4%, if we’re thinking about the previous example).
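The arithmetic above is simple enough to script. Here's a minimal sketch of the quota calculation (the function name and a 5-day workweek are assumptions for illustration, not part of any particular QA platform):

```python
def grading_quota(weekly_tickets: int, target_pct: float,
                  graders: int, workdays: int = 5) -> tuple[int, int]:
    """Return (weekly grading goal, tickets per grader per day).

    weekly_tickets: total tickets your team receives per week
    target_pct:     percentage of tickets you aim to grade (e.g. 5)
    graders:        number of people doing QA grading
    """
    weekly_goal = round(weekly_tickets * target_pct / 100)
    per_grader_daily = round(weekly_goal / (graders * workdays))
    return weekly_goal, per_grader_daily

# The example from the article: 5,000 tickets/week, 5% goal, 1 grader
print(grading_quota(5000, 5, graders=1))  # (250, 50)
```

Re-running the function with `graders=2` shows the per-grader load halving to 25/day while the weekly goal stays at 250, which is exactly why a percentage-based goal scales with your team.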
Agent seniority: Most companies that track QA data come to the same conclusion—agent performance increases with time and experience.
If your team is mostly made up of highly experienced agents, you might want to shift your attention to the less tenured members of your team (who could use more help!).
The main way to account for this is to set a minimum threshold score for grading. This lets you focus on agents who consistently have lower scores (often, these are the newest agents on your team).
Not only does this help you build trust with your more senior agents, but it also lets you focus your grading (and ultimately, training & coaching time) on the area where it can make the most impact.
QA software should be able to help you group agents based on their seniority or even select tickets for grading based on their CSAT score.
As an example: let’s say you set 85/100 as your minimum threshold score. Any agent who scores 85+ consistently gets graded less, and agents below the 85 mark will get graded more frequently.
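In code, that threshold rule is just a weighting scheme. This sketch assumes a hypothetical 2x boost for below-threshold agents; the function name, the boost factor, and the sample scores are all made up for illustration:

```python
def grading_weights(avg_scores: dict[str, float],
                    threshold: float = 85,
                    boost: float = 2.0) -> dict[str, float]:
    """Give agents below the minimum threshold score a boosted
    sampling weight so their tickets get graded more often."""
    return {agent: (boost if score < threshold else 1.0)
            for agent, score in avg_scores.items()}

# Hypothetical rolling averages for three agents
scores = {"Avery": 92, "Sam": 78, "Riley": 85}
print(grading_weights(scores))  # Sam is boosted; Riley, at exactly 85, is not
```

These weights could then feed into something like `random.choices(list(scores), weights=...)` when picking whose tickets to pull for review.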
If you want to take it a step further, you could involve senior agents in setting the minimum threshold score—it gives them ownership over part of their coaching program and provides a new opportunity for growth.
Average ticket difficulty level: Support looks different at every company. Some transactions and businesses have inherently simpler communications—while others require a different level of expertise (especially in heavily regulated industries, like healthcare or finance).
If the majority of tickets in your queue are easy and transactional, grading a random sample might not surface the insights you actually need. Put simply: when most of your tickets are "easy", the rarer "difficult" (error-prone) tickets that demand more agent knowledge will be undersampled by a random draw.
Some teams have come up with creative ways to filter tickets so they're always grading the most insightful ones.
The team at Handy only grades DSAT tickets—tickets that either have negative CSAT scores or those flagged by agents because they struggled with an interaction and want a review. This allows the tickets with the most opportunity for learning to get graded, leaving Handy with the best possible insights to improve their training and CX programs.
Similarly, WP Engine runs a hybrid program. They route every DSAT ticket into the grading queue but also maintain a separate QA instance that randomly selects tickets to grade. This gives the company the same benefits as Handy's program while keeping a read on the general health of the CX program.
So! You’ve graded thousands of tickets, collected tons of agent performance data, and now need to distribute a lot of feedback.
While every step of the scorecard building process is critical to get right, they all pale in comparison to giving out graded feedback to agents.
Building a scorecard and grading tickets off that scorecard are just a means to an end. In this case, the end is having consistent, actionable feedback to give to agents and company leadership. With consistent grading and real-time agent performance data, you’re now empowered to supercharge your support operation across all three levels of your Experience Blindspot ⚡️
While we’re talking threes, there are three main ways to relay your hard-earned QA scores to your team. We’ve seen most teams use email notifications within a QA platform, coaching sessions, and team meetings to their advantage (often, it’s a mix of all three).
Email Notifications: Some QA platforms allow agents to receive their scores via email immediately after the grade has been submitted by the grader. This enables real-time feedback that the agent can apply to the queue, while staying engaged with the QA program. Agents shouldn’t be left to interpret their QA scores on their own, however. This method should be paired with regularly scheduled coaching sessions.
Coaching Sessions: Coaching sessions provide managers with an opportunity to analyze each agent’s long-term results and offer qualitative feedback. After all, the numbers don’t care about an agent’s feelings—but managers do. Coaches can help reframe QA results and put them in context. For example, a below-average score could be due to a recent product launch or a one-off event rather than a long-term trend. Most teams aim for one coaching interaction per agent per week. This number can be decreased over time for more senior agents.
Team Meetings: Team meetings are used to dig deeper into the QA data, identify trends, and proactively address localized issues before they become widespread. Mailchimp uses team meetings combined with a QA newsletter for this purpose—and they always experience a positive spike in QA scores for the highlighted section of the rubric.
No matter how you choose to give feedback, a QA tool can help bring your scorecards, grading, and feedback loop together. You can use MaestroQA to automate grading assignments, host your QA scores, report on agent performance trends, and more. Request a demo today to see the platform in action!