
Setting Up a Grading Cadence for Your QA Scorecard

August 27, 2020

Completing your first QA scorecard is a major milestone - but once it’s finished, there’s more work to be done.

As you’ve probably realized by now, the best-performing QA programs are made up of more than just a standalone QA scorecard. Top QA programs also include a formalized grading cadence: deciding who should be grading tickets, agreeing on a fixed grading volume and frequency, determining how to grade (i.e., which scorecard to use), and setting up regular coaching conversations with your agents.

With these supporting factors in place, your team can go from simply having a QA program, to one that provides structured coaching for agents and trusted insights for managers.

The best part is that a grading cadence is really easy to establish. We’ll walk you through exactly how to set one up so that your agents receive timely, structured, and data-driven coaching in no time. 

Deciding if you should have dedicated QA specialists grade your tickets

After you build your quality assurance scorecard, you need to decide who’s going to use it to grade. This decision usually boils down to three factors: team size, human resources, and team maturity.

For most teams, it makes sense to have dedicated QA specialists to run the QA program and be responsible for grading. This allows managers to focus on the myriad other tasks (like hiring, coaching, and onboarding) that make up their average day. Dedicated QA specialists can give their undivided attention to grading agents, interpret the data produced, and provide trusted CX insights that can help improve CSAT, reduce churn, and maintain a positive brand reputation.

And - since they aren’t directly in the queue - they have a fresh, third-party perspective on the issues that pop up in grading (and aren’t afraid to point them out!).

We asked our CX network about their grader:agent ratios - we found that most teams have a ratio of roughly 1:20. We’ve included the exact breakdown below:

  • 22% of teams have a grader:agent ratio of 1:10
  • 51% follow a ratio of 1:20
  • 27% have a ratio of 1:40 or higher

In general, larger teams tend to have more agents per grader. As a team matures and builds a base of senior agents who require less grading, graders shift their energy to newer agents.

But not all teams are large enough to need a QA specialist, and many don't have them. The alternative is to have managers, team leads, or senior agents double up to do QA. It's a great way to train senior agents on a new skillset and provide an opportunity for career progression. This is where team maturity comes into play - you need to have some established veterans on the team to help grade.

The downside to having non-QA specialists step in to grade? Team leads, managers, and senior agents are busy. Between onboarding, hiring, coaching, reporting, and answering tickets, setting aside time for QA might fall off their radar. When this happens, agents are the ones who ultimately lose out in terms of their performance and career growth.

If you do choose to have team leads grade tickets and run the QA program, be sure to pick a grading volume that they can commit to, and that’s integrated into their goals or compensation. We’ll be covering that in the next section.

Deciding on your QA grading volume and frequency

While there are no hard rules around how frequently you should grade, 60% of MaestroQA customers surveyed grade between 1% and 5% of all interactions. The other 40% grade upwards of 5% of all interactions, but these tend to be smaller companies with a dedicated QA specialist on the team. Some companies, like Handy, choose to grade only DSAT tickets - those where the customer indicated unhappiness with the way the ticket was handled.

There are three main factors that determine how much you can grade: your team’s grading resources, the average difficulty of a ticket, and your ratio of junior to senior agents.

CX Team Resources

Do you have dedicated QA specialists, or are your senior agents and team leads pulling double duty to grade? As we discussed in the previous section, there are pros and cons to both. If you’re relying on team leads to handle QA, you might want to lower your grading volume, since they typically have a dozen other tasks to handle on top of QA.

With dedicated QA specialists, it’s easy to work out how much you can realistically QA.

For example: imagine a team of 10 agents with 1 grader. The team handles 5,000 tickets per week, and you’d like to grade 5% of tickets.

5% of 5,000 tickets = 250, meaning your grader would have to grade 250 tickets per week (or 50 per working day) to hit the goal. This approach scales with your team: as ticket volume grows, add graders to maintain that 5% target rate.
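To make the arithmetic concrete, here’s a minimal sketch in Python - the ticket volume, target rate, and per-grader weekly capacity below are illustrative assumptions, not recommendations:

```python
# Illustrative sketch of the grading-volume arithmetic above.
# All numbers are example assumptions, not benchmarks.

def weekly_grading_target(tickets_per_week: int, target_rate: float) -> int:
    """Tickets to grade each week to hit the target sample rate."""
    return round(tickets_per_week * target_rate)

def graders_needed(weekly_target: int, grades_per_grader: int) -> int:
    """Graders required, assuming a fixed weekly capacity per grader."""
    return -(-weekly_target // grades_per_grader)  # ceiling division

weekly = weekly_grading_target(tickets_per_week=5000, target_rate=0.05)
print(weekly)                                         # 250 tickets per week
print(weekly // 5)                                    # 50 per working day
print(graders_needed(weekly, grades_per_grader=250))  # 1 grader
```

Plugging in your own volume, rate, and capacity shows when it’s time to add a second grader.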

Take an iterative approach to grading no matter how you do it. In this example, we’d encourage the team to try to grade 50 tickets in one week and see how it goes. If the team can handle more, tack more onto the goal, but if 50 proves to be too much, consider setting the bar a bit lower. 

Average CX ticket difficulty

If the majority of tickets in the queue are easy to handle and your agents knock them out of the park on a consistent basis, your QA program might not be surfacing the insights you truly need. Put a different way: if you’re spending time grading “easy” tickets, are you actually catching the more difficult, error-prone tickets that require deeper agent knowledge?

If you randomly sample tickets for grading, it’s easy to see how the harder tickets (and the ones you should really be focusing on!) get lost in an avalanche of easy tickets.

Random sampling isn’t the only way to surface tickets for grading, though. Handy and WP Engine have come up with two innovative ways to sieve out and tackle tough tickets.

The team at Handy now grades only DSAT tickets - tickets that either received a negative CSAT score on the customer survey or were flagged by agents who struggled with the interaction and want a review. This lets the tickets with the most learning opportunity float to the top, leaving Handy with the best possible insights to improve their training and CX programs.

WP Engine runs a hybrid program: every DSAT ticket is routed to grading, but a separate QA instance also randomly selects tickets to grade. This gives them the same benefits as Handy’s program, while also keeping a pulse on the general performance of the CX program.
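As a rough sketch of how that hybrid selection could work - assuming a hypothetical ticket structure with a CSAT score and an agent-flagged field, and an assumed 5% sample rate (none of which reflects WP Engine’s actual tooling):

```python
import random

# Hypothetical ticket records - the field names are illustrative
# assumptions, not a real MaestroQA or helpdesk schema.
tickets = [
    {"id": 1, "csat": 2, "agent_flagged": False},
    {"id": 2, "csat": 5, "agent_flagged": False},
    {"id": 3, "csat": 4, "agent_flagged": True},
    {"id": 4, "csat": 5, "agent_flagged": False},
]

def is_dsat(ticket: dict) -> bool:
    """DSAT = negative survey score, or flagged by the agent for review."""
    return ticket["csat"] <= 2 or ticket["agent_flagged"]

# Route every DSAT ticket to grading (Handy's approach)...
to_grade = [t for t in tickets if is_dsat(t)]

# ...then add a small random sample of the rest to keep a pulse on
# overall performance (the hybrid element of WP Engine's program).
remainder = [t for t in tickets if not is_dsat(t)]
sample_size = min(len(remainder), max(1, round(len(remainder) * 0.05)))
to_grade += random.sample(remainder, k=sample_size)

print([t["id"] for t in to_grade])  # e.g. [1, 3, 2]
```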

Agent Seniority

As agents gain experience in their role, their customer interactions get smoother. Most CX teams have QA data that backs this up: QA scores generally rise alongside agent tenure.

Set a threshold score for your team (say, 85/100), and consider lowering the grading volume and frequency for senior agents who have consistently stayed above it. This will allow you to focus more time and effort on newcomers, while building trust with senior agents.

You could even take it one step further and involve your senior agents in setting the threshold score. This gives them ownership over part of their coaching program and provides new opportunities for growth and learning.
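Here’s a minimal sketch of how that rule could be encoded, assuming a hypothetical rolling window of recent per-agent QA scores and illustrative sample rates:

```python
# Illustrative sketch: grade fewer tickets for agents who consistently
# clear the threshold. The threshold and rates below are assumptions.

THRESHOLD = 85      # agreed with the team (or with senior agents themselves)
BASE_RATE = 0.05    # default share of an agent's tickets to grade
SENIOR_RATE = 0.02  # reduced rate for consistently high scorers

def sample_rate(recent_qa_scores: list[float]) -> float:
    """Lower the cadence only when every recent score clears the bar."""
    if recent_qa_scores and all(s >= THRESHOLD for s in recent_qa_scores):
        return SENIOR_RATE
    return BASE_RATE

print(sample_rate([88, 91, 87]))  # 0.02 - trusted senior agent
print(sample_rate([88, 79, 90]))  # 0.05 - keep the standard cadence
```

Requiring every score in the window to clear the bar (rather than the average) keeps one great week from masking an off month.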

Relaying QA results to the CX team

There are three main ways to relay your hard-earned QA scores to your team: coaching sessions, email notifications, and team meetings.

Some QA platforms allow agents to receive their scores through email immediately after the grade has been submitted by the grader. This allows for real-time feedback that the agent can immediately apply to the queue, while keeping the agent engaged with the QA program. 

Agents shouldn’t be left to receive and interpret their QA scores on their own, however. This method should be paired with regularly scheduled coaching sessions.

In these sessions, managers can help analyze an agent’s long term results, and provide qualitative feedback to help them improve. More importantly - the numbers don’t care about an agent’s feelings, but a manager does. Coaches can help reframe QA results and put them in context - a below-average score could be due to a new product launch or a one-off event, and not a long-term trend of poor performance.

Finally, use team meetings to dig into team-level trends in the QA data, and nip problems in the bud before they become more widespread. Mailchimp uses a team meeting combined with a QA newsletter for this purpose - and they always see a positive spike in QA scores for the section of the scorecard being highlighted. 

The Benefits of Setting Up a Grading Cadence

Scorecards are the central pillar of every QA program - but they can’t hold the roof up alone. After finishing your first scorecard, invest time into setting up the processes that will enable consistent grading and structured coaching. The strategies we’ve outlined here will allow you to benchmark performance, make data-driven decisions, and ultimately improve the experience you’re offering customers.

