Inconsistent standards across graders
Grading did not produce actionable insights, and inconsistencies undermined agent promotion decisions
Used GraderQA for quality assurance calibration to ensure alignment
Highly productive calibration sessions, aligned graders, happier agents, and more joyful customers!
Scribd is on a mission to change how the world reads. More than one million paid subscribers across 190 countries rely on Scribd to gain access to one of the largest online collections of ebooks, audiobooks, online magazines, and documents.
To help customers maximize the value of their memberships, Scribd offers free support via its Help Center. Scribd’s customer support agents—known internally as “customer joy creators”—work diligently to resolve a wide variety of issues that range from simple password resets to more advanced technical troubleshooting and copyright issues.
Scribd’s QA team exists to ensure that support agents are creating joy in every interaction with customers. A huge piece of their success: taking a data-driven approach to quality assurance calibrations.
The CX team shared how GraderQA has helped them deliver customer joy at The Future of Quality. Read on to learn more, or watch their session below!
QA calibrations have always been an essential component of Scribd’s QA program. And although the calibration sessions frequently led to useful conversations about Scribd’s QA scorecard, they were failing to produce the actionable insights necessary for ensuring grader consistency. Improving grader consistency became especially important when Scribd began using QA data to make decisions about agent promotions.
“Every time we went through a calibration session, we realized how different our graders were grading,” said Moon Gordy, Quality Assurance Specialist at Scribd. “In addition, calibrations were our only opportunity to meet with graders and get their thoughts about the rubric, which made it difficult to ensure alignment.”
Aligning grader actions with Scribd’s quality scorecard was not the only challenge. Scribd’s calibration sessions required a large investment of time from numerous CX team members to evaluate grader performance, discuss grading best practices, and gain majority agreement on proposed changes to the rubric.
“We take our calibrations very seriously and involve a large number of stakeholders, including members of the Customer Success team,” said Gary Villalobos, Manager, Training and Mentorship Specialist at Scribd. “Even then, the sessions were still not effective at helping us make our team more consistent.”
Realizing the need for a more scalable process to measure grader performance, Scribd turned to MaestroQA.
MaestroQA’s GraderQA feature automated Scribd’s grader-review workflow by selecting a random sample of graded tickets for re-review by one senior QA grader (the benchmark grader).
GraderQA compares individual graders against the benchmark grader and generates an “alignment score,” which identifies the severity of misalignment across graders and pinpoints areas of inconsistency.
“GraderQA enables us to peek in on the tickets that are being graded,” Villalobos said. “As a result, we don’t have to focus so much of our calibration time on reinterpretation or reviewing specific examples in order to bring everyone into alignment.”
Taking a centralized “grade the grader” approach also eliminates potential conflicts of interest, leading to a trustworthy QA metric that informs complex decision-making about agent promotions at Scribd.
Gaining an objective, data-driven understanding of grader performance with GraderQA creates new opportunities to align agent interactions with customer expectations across a variety of criteria, and to understand whether calibration sessions are working.
“Having this feature allowed a greater sense of accountability,” Moon said. “It also helps us identify areas of potential misalignment, so that we can continuously improve.”
Reliable QA data enables highly productive calibrations and one-on-one coaching sessions that lead to new best practices and meaningful change.
“Now we can pull data to understand how all other graders are grading a particular criteria,” Moon said. “This serves as a great conversation starter during calibration sessions and in one-on-ones to ensure alignment across graders.”
GraderQA will also play a pivotal role in ensuring the successful adoption of Scribd’s new QA scorecard.
“I’m expecting that there will be a lot of misalignment initially, and that’s OK,” Moon said. “Data from GraderQA will allow us to understand how graders are interpreting questions and compare how alignment varies between the old and new rubrics.”
By creating a scalable grading workflow, increasing the reliability of its QA data, and aligning graders to an ever-improving QA scorecard, Scribd is laying the groundwork for a quality program that will yield enhanced levels of support—and, ultimately, more joyful customers.
“We’re trying to steer our QA structure toward our goal of creating joy for customers,” Villalobos said. “And the best way we can do that is by establishing a sense of trust and reliability between us and our customers.”
Need a better way to objectively assess QA grader consistency and elevate the impact of your calibration sessions?
GraderQA from MaestroQA enables a highly scalable workflow for assessing grader alignment, which frees up your calibration sessions to focus on the discussions that matter most.