Grading your agents’ interactions with customers can be an effective strategy for ensuring best-in-class CX. But maintaining consistency gets complicated as you scale your operation and begin grading subjective topics, such as empathy.
That’s why many QA teams calibrate regularly.
What does it mean to “calibrate” your QA team? How do you do it? Continue reading to learn the basics of call calibration.
At a high level, QA teams calibrate to keep graders on the same page. Dan Rorke, Customer Success Manager at MaestroQA, points out, “The whole point is to make sure the QA team is aligned with how they’re interpreting the standards for the CX organization.”
To get started, each grader reviews the same support ticket. The ticket could be related to a phone call, email conversation, or live chat, although it’s essential to provide graders with the entire transcript regardless of the support channel. Graders review the ticket independently and then join a calibration session for a team-wide discussion.
When choosing a ticket for calibration, there are multiple approaches you can take. Calibration sessions themselves can also take many forms, though at MaestroQA we see most customers converge on a couple of common approaches.
Calibration sessions can include leadership, graders, and even agents. We recommend regularly including different stakeholders outside of graders in the calibration process to ensure alignment across all levels of the organization and to tap into different perspectives of the grading standard.
QA teams often rely on spreadsheets to facilitate the call calibration process: graders input their scores into spreadsheets, and benchmark graders use those spreadsheets to run calculations and determine team alignment. This approach requires considerable administrative work for CX leaders and opens the door to unreliable QA data. “There’s so much data all over the place that it’s tough to track how things are progressing and trending,” Rorke said.
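To make the idea of “determining team alignment” concrete, here is a minimal sketch of the kind of calculation a benchmark grader might run over the spreadsheet data. The function name, score scale, and tolerance are illustrative assumptions, not MaestroQA’s actual formula.

```python
def alignment_rate(grader_scores, benchmark_score, tolerance=5):
    """Fraction of graders whose score for a ticket falls within
    `tolerance` points of the benchmark grader's score.

    grader_scores: dict mapping grader name -> score (e.g. 0-100 scale)
    benchmark_score: the benchmark grader's score for the same ticket
    """
    aligned = [name for name, score in grader_scores.items()
               if abs(score - benchmark_score) <= tolerance]
    return len(aligned) / len(grader_scores)

# Three graders reviewed the same support ticket independently;
# the benchmark grader scored it 88.
scores = {"Alice": 85, "Bob": 78, "Carol": 92}
print(alignment_rate(scores, benchmark_score=88))  # 2 of 3 within 5 points
```

A low alignment rate on a given rubric section is the signal that the team needs to discuss that section in the calibration session.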
At MaestroQA, we offer a comprehensive suite of QA features, including several that enhance the calibration workflow. Our team calibration workflow makes it easy to invite graders, collect their responses for a particular rubric, and run successful calibration meetings. Individual scores are organized automatically without copying and pasting from spreadsheets.
MaestroQA also makes it easier for benchmark graders to perform final calibrations and confidentially share results with graders. When logging into MaestroQA, individual calibrators can see their scores along with the final calibration scores—but not the scores of other graders.
Reporting in MaestroQA simplifies analysis and helps QA leaders identify misalignment with graders, rubrics, and other criteria. Interactive heat maps provide a convenient way to drill down into problematic areas and identify opportunities for improvement. Calibration data can be easily exported for offline analysis.
Regularly calibrating can yield numerous benefits that ultimately support the CX team’s ability to deliver healthier customer interactions. Here are a few examples:
Simply asking graders to participate in a quarterly, monthly, or bi-weekly calibration session might be enough to increase accountability and mitigate the potential for grading bias. Well-run calibration sessions help participants better understand the organization’s expectations, thereby increasing the likelihood of improved grading.
Analyzing calibration session data can help CX leaders identify graders who may need additional coaching. Grader QA from MaestroQA is a valuable tool for providing structured, one-on-one feedback during coaching sessions.
Perfect alignment between the benchmark grader and the rest of the team is unlikely. That’s OK. Sometimes misalignment can help the CX organization identify unknown gaps in the customer experience. It’s an opportunity to teach and strengthen the standards.
Spending most (or all) of the day grading support tickets can cause even the most experienced grader to feel disconnected from the larger organization. Calibrating gives everyone a reason to reconnect, leading to increased motivation, productivity, and job satisfaction.
Calibrating produces a new data set that QA leaders can use to measure grader performance. Centralizing calibration data in a system like MaestroQA helps ensure this data's reliability and usefulness. “You have all the data right there, and it makes the process of getting data into one spot easier,” Rorke said.
More often than not, calibrations will surface areas of confusion or misunderstanding regarding the interpretation of rubrics and grading criteria. Because of this, calibrations are a great tool to use when launching new rubrics to ensure grading expectations are as straightforward as possible.
MaestroQA provides modern QA software that makes call calibrations faster, easier, and more productive. Streamline calibration preparation, grading, analysis, and follow-up with MaestroQA.
Request a demo to learn more about our call calibration features.