What is the foundation of a robust quality assurance program? It starts with well-defined CX standards: the characteristics and processes your customer-facing teams should adhere to and excel at in order to deliver a fantastic customer experience.
Quality assurance (QA) is crucial for ensuring that your agents maintain the standards set by the organization and for giving them opportunities to improve their performance. The CX scorecard is the cornerstone of any successful QA program, but an often-overlooked aspect of QA is whether your QA team is aligned on how they interpret the standards in the scorecard. If your rubric is perfect, but your graders interpret it differently and use different scoring strategies, how much can you trust the results of your quality program? What insights can you draw if your graders are not applying the same standards?
This article will provide a framework for two proven workflows that help measure and improve alignment among your grading team!
There are two widely accepted workflows for measuring and improving alignment: Call Calibrations and GraderQA (GQA).
Both of these workflows ensure your QA team or team leads are aligned on how they interpret the CX standards in the scorecard. As we dig deeper, however, you'll see they have stark differences and are intended to drive different results. Let's define each process before deciding which workflow suits your team:
Call calibrations allow your QA team to grade agent performance on the same ticket and then collaborate on the results. Each team member grades (or calibrates) the interaction individually while using the same scorecard.
After each member of the QA team has submitted their scores, the team holds a call calibration session to discuss why they landed on the scores they did. Your team will probably disagree on what the “right” grade was - and that’s okay! The purpose is to have a constructive conversation to collectively decide how to grade.
Once the call calibration process has been completed, Maestro will generate an "alignment score" to show how closely each QA team member's grades match the final calibration score. The higher the alignment score, the more closely that grader is aligned with the agreed-upon CX standards.
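Maestro's actual alignment-score formula is internal to the product, but to make the concept concrete, here is a minimal illustrative sketch in Python. The `alignment_score` function and the rubric question names are hypothetical; it simply treats alignment as the percentage of scorecard questions where a grader's answer matches the final calibrated answer.

```python
def alignment_score(grader_answers: dict, calibrated_answers: dict) -> float:
    """Percentage of scorecard questions where a grader's answer matches
    the final calibration answer. Illustrative only -- not Maestro's
    actual formula."""
    if not calibrated_answers:
        raise ValueError("calibrated_answers must not be empty")
    matches = sum(
        1
        for question, final_answer in calibrated_answers.items()
        if grader_answers.get(question) == final_answer
    )
    return 100.0 * matches / len(calibrated_answers)


# Example: a grader agrees with the calibrated result on 3 of 4 questions.
grader = {"greeting": "yes", "tone": "yes", "accuracy": "no", "closing": "yes"}
final = {"greeting": "yes", "tone": "yes", "accuracy": "yes", "closing": "yes"}
print(alignment_score(grader, final))  # 75.0
```

A simple percentage-agreement metric like this is easy to explain to graders; more sophisticated programs sometimes weight critical rubric questions more heavily, but the underlying idea is the same.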
In addition to improving team alignment, call calibration sessions can help refine training processes and identify issues with both soft skills and technical behaviors in customer interactions. Ensuring agents meet evaluation standards leads to a more consistent customer experience.
If your quality program is new, we see teams go through the call calibration process as frequently as once a week! Once your call calibration process runs smoothly, and alignment is high, consider moving to once a month and focusing on more complex tickets. When you launch a new rubric, it’s a good idea to increase frequency again until all graders are comfortable and aligned with the new criteria.
The key difference with this workflow is that it's designed to provide feedback to an individual grader on their grading performance, rather than engage the whole team in a discussion about the standards. One member of the team is designated as the "source of truth" (in Maestro, we call this a Benchmark Grader) and grades a ticket that another grader has already completed in AgentQA. In your program, the Benchmark Grader should be your most experienced grader or perhaps your QA Manager: whoever is the expert on your grading standards!
The goal is to give the original grader feedback on how they answered the questions and on the feedback they shared with the agent. Ultimately, the Benchmark Grader's work is shared back with the original grader so they can review the feedback and learn how to grade more consistently with the established CX standards.
As with call calibrations, we typically see teams run GQA at least a few times a month to share regular feedback with their graders. Depending on internal bandwidth, it's a good idea to provide consistent feedback for your graders alongside the feedback shared with agents, so everyone can continue performing at the expected organizational standards.
Ultimately, it depends on your ideal end state! Let’s walk through a few scenarios to help demonstrate the use cases of each:
Our Suggestion: MaestroQA GraderQA! This gives you the tools to provide qualitative feedback to individual graders.
Our Suggestion: GraderQA! This gives you the tools to grade these graders’ tickets, uncover which criteria they are not grading correctly, and coach the grader toward success.
Our Suggestion: Definitely Call Calibrations! This will allow you to see what questions graders have, which areas of the rubric are unclear, and perhaps even make changes before launching.
Our Suggestion: Both Call Calibrations and GraderQA! Call Calibrations will be great to teach the new graders how you think about the rubric and work through customer problems. GraderQA will give you a specific alignment score, so you know how quickly your graders are ramping and what parts of the rubric they need coaching on.
GraderQA and Call Calibration workflows are valuable tools in your QA toolkit for unlocking additional insights and better aligning your grading team. Whether you pursue just one of these workflows or implement both, the conversations and learnings they generate are invaluable. They should help create a more robust experience for your customers and agents!
Interested in learning more about MaestroQA GraderQA?
Request a demo.