Unlocking the Potential of QA Scorecards for Optimal Agent Performance and Customer Satisfaction
Scorecards are indispensable tools in quality assurance (QA) processes, providing valuable insights into agent performance and driving improvements in customer service. However, to ensure their effectiveness and alignment with organizational goals, QA scorecards need to be regularly revamped and optimized. In this blog post, we'll dive into an enlightening interview with David Gunn, a Customer Success Manager at MaestroQA and an expert in scorecard revamping. David will share valuable insights and practical tips for maximizing the potential of scorecards and taking your QA processes to the next level.
The Process of Building QA Scorecards from Scratch
When working with customers who are new to QA scorecards or need more defined goals, David emphasizes starting with goals and working backward from them. According to him, "If you get the goals and work backward from there, it's more impactful. Some people might not have goals initially, so using a standardized scorecard can be beneficial in the beginning stages." This approach lets teams begin grading and analyzing data right away, surfacing areas that already perform well, such as communication, and no longer require extensive questioning.
From there, a tailored and refined scorecard can be formulated, addressing specific aspects and asking different questions that align with the customer's goals. David advises starting with simplicity and avoiding overly complex scorecards with numerous questions or unclear prompts that might overwhelm newcomers. "If it starts to be like 50 questions, that's too much," he advises. "Go and find areas where there's overlap or you're asking very like-minded questions and see where you can consolidate things into one or remove them."
Building scorecards based on goals and gradually refining them based on data-driven insights allows organizations to focus on relevant aspects and ensure that agents are evaluated on what truly matters.
Streamlining and Enhancing Scorecards
To streamline QA scorecards and optimize efficiency without compromising accuracy, David recommends consolidating questions and addressing duplication. He suggests, "Utilize checkboxes or different levels of severity and specific criteria, so you don't need lengthy responses. Simplify the scoring process while still gathering valuable insights." By using section scores, it becomes possible to identify areas where questions can be removed, helping you optimize the scorecard over time. David explains, "After you've had a rubric for a while, use our reporting to look at the section scores. If three sections are consistently scoring 98 to 100, do I even need to ask those questions anymore? Is it worth my time? If somebody's constantly scoring high, do I still need to ask? Can I ask it in a different way to learn more?"
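As a rough illustration of the analysis David describes, the sketch below assumes section scores have been exported as plain numbers (the section names, scores, and 98-point threshold are illustrative; MaestroQA's reporting surfaces this directly):

```python
# Hypothetical export of section scores per rubric section (0-100).
# Sections that consistently score near-perfect are candidates for
# removal or rewording, per David's 98-to-100 rule of thumb.
section_scores = {
    "Greeting": [99, 100, 98, 100],
    "Tone": [97, 85, 92, 88],
    "Resolution": [80, 75, 90, 82],
}

REMOVAL_THRESHOLD = 98  # consistently at or above -> reconsider the question

def sections_to_reconsider(scores, threshold=REMOVAL_THRESHOLD):
    """Return sections whose every recorded score meets the threshold."""
    return [name for name, vals in scores.items()
            if vals and min(vals) >= threshold]

print(sections_to_reconsider(section_scores))  # ['Greeting']
```

A section flagged this way is not necessarily dead weight; as David notes, it may be worth asking in a different way to learn more.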
MaestroQA Customizable Reporting
By regularly analyzing section scores and eliminating redundant questions, organizations can streamline their scorecards, focusing on the most critical aspects that impact agent performance and customer satisfaction. This not only saves time but also ensures that evaluations are precise and aligned with the goals of the QA program.
MaestroQA Section Scoring
Customizing and Segmenting QA Scorecards for Improved Alignment and Insights
Scorecards should be tailored to different channels, query types, and agent roles to provide deeper insights into specific areas. David suggests breaking down channel-specific aspects and considering the different processes and ways of handling tickets. "You might ask a completely different set of questions because the skill set of somebody handling a technical troubleshooting question is different from someone addressing a billing question," he explains.

Aligning QA scorecards with company and customer experience (CX) goals is crucial to ensure the questions asked are relevant and contribute to desired outcomes. David highlights the importance of splitting scorecards based on different agent roles and categories of tickets. "Asking the same questions isn't always fair because they're doing completely different types of work," he states. "Splitting them out in that way is really helpful."
MaestroQA Root-Cause-Analysis Checkboxes
Segmenting QA scorecards based on channels, query types, or agent roles enhances clarity and efficiency, ensuring relevant evaluations for different scenarios. By aligning the questions in the scorecards with specific goals and objectives, organizations can ensure that the evaluations are meaningful and contribute to the desired outcomes. As David points out, "Sometimes people set up their rubrics, and they're asking questions for things that aren't actually important to what they're trying to accomplish. It's a really simple thing, but if decreasing a metric like First Response Time is really important to you, but the questions you're asking have to do with grammar or punctuation, you're not accomplishing that goal. If you have these goals from both a company perspective and a CX perspective, but then you're asking all these questions that have nothing to do with it, then what are you really accomplishing in your rubric?" The reporting derived from segmented scorecards offers detailed analytics and actionable insights, allowing organizations to make informed decisions and improvements that directly align with their goals.
For example, a telecommunications company may have different scorecards for voice calls and live chats, considering the unique aspects and skills required for each channel. By segmenting the scorecards, the company can gain insights into the strengths and weaknesses of agents in each channel, enabling targeted training and improvement strategies.
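To make the segmentation idea concrete, here is a minimal sketch of channel-keyed question sets (the channels and questions are invented for illustration, not drawn from any real MaestroQA rubric):

```python
# Hypothetical channel-segmented scorecards: each channel gets its own
# question set, so agents are evaluated on skills relevant to that channel.
SCORECARDS = {
    "voice": [
        "Did the agent verify the caller's identity?",
        "Was hold time explained and kept reasonable?",
    ],
    "chat": [
        "Did the agent use clear grammar and formatting?",
        "Was the canned response personalized to the customer?",
    ],
}

def scorecard_for(channel: str) -> list[str]:
    """Look up the question set for a ticket's channel."""
    try:
        return SCORECARDS[channel]
    except KeyError:
        raise ValueError(f"No scorecard defined for channel: {channel}")
```

Keeping question sets separate per channel also keeps reporting separate, which is what makes the targeted training David describes possible.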
Addressing Subjectivity and Calibration in QA Scorecard Evaluations
Subjectivity and calibration are common challenges in scorecard evaluations. To address subjectivity, it is important to set clear guidelines for scoring criteria to build trust among agents. David explains the significance of consistent feedback from multiple graders, stating, "If I'm an agent and I have five different people grading me and they're all giving me the same kind of feedback, then I would probably trust in that. But if I was an agent and you graded me, and you gave me a 95, but then Michelle graded me, and she gave me a 76, I'd be like, what is happening here? Haley thinks I'm great, but Michelle thinks I'm not, and they're grading the same types of tickets, so what happened?" Calibration exercises play a crucial role in aligning graders' perspectives, ensuring consistent evaluations across the team.
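One simple way to operationalize the calibration check David describes is to compare multiple graders' scores on the same ticket and flag large gaps. This sketch uses invented names and numbers echoing the 95-versus-76 example; the 10-point tolerance is an assumption, not a MaestroQA default:

```python
# Calibration check: for each ticket graded by multiple people, flag
# tickets where the score spread exceeds a tolerance -- a signal that
# the rubric needs clearer guidelines or a calibration session.
grades = {
    "ticket-101": {"Haley": 95, "Michelle": 76},
    "ticket-102": {"Haley": 90, "Michelle": 88},
}

TOLERANCE = 10  # max acceptable point spread between graders

def needs_calibration(grades, tolerance=TOLERANCE):
    """Return tickets whose grader scores diverge by more than tolerance."""
    return [ticket for ticket, by_grader in grades.items()
            if max(by_grader.values()) - min(by_grader.values()) > tolerance]

print(needs_calibration(grades))  # ['ticket-101']
```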
Additionally, addressing ambiguity in the questions is vital for accurate assessments. David suggests using clear descriptions to define expectations: "If you do have some level of ambiguity in your questions, using the description to lay out, okay, this is what constitutes 'meets expectations,' this is what constitutes 'below expectations,' and so on and so forth, and having that guideline set can help." Explicit guidelines give graders a standardized framework for assessing agent performance, reducing subjectivity. Evaluating the value of each question also matters when scores run consistently high: redundant questions can be eliminated, saving time without compromising accuracy. This strategic approach keeps scorecard evaluations efficient while maintaining the integrity of the QA process.
MaestroQA GraderQA Alignment Reporting
Calibration exercises and clear guidelines empower organizations to overcome subjectivity challenges, establish consistency in evaluations, and ensure fair and accurate assessments of agent performance.
Leveraging AI for Enhanced QA Scorecard Automation
Streamlining targeted QA with AI classifiers can significantly enhance efficiency and accuracy in the scoring process. Specialized classifiers can be built for specific QA criteria, such as identifying high-effort chat interactions based on key phrases and patterns. These classifiers can then be integrated into automated workflows alongside the scorecard, enabling a hyper-targeted QA approach.
Furthermore, AI classifiers can automate compliance-related questions by analyzing keywords and phrases within customer interactions. For instance, if the question pertains to agent verification, a classifier can be trained to identify specific macros or predetermined text that should be used. This eliminates the need for manual grading, as the classifier can flag instances where the verification requirements were not met, allowing QA teams to focus on more complex evaluations. The combination of AI classifiers and scorecards not only streamlines the QA process but also ensures consistent and reliable evaluations while reducing manual effort.
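A keyword check is a simplified stand-in for the verification classifier described above; the phrases below are illustrative, and a production classifier would be trained on real interactions rather than a hand-written list:

```python
import re

# Flag transcripts where none of the required verification phrases or
# macros appear. Tickets flagged True get routed to a human grader;
# the rest can auto-pass the compliance question.
VERIFICATION_PHRASES = [
    r"verify your identity",
    r"confirm the last four digits",
    r"date of birth on the account",
]

def verification_missing(transcript: str) -> bool:
    """True if no verification phrase appears in the transcript."""
    text = transcript.lower()
    return not any(re.search(phrase, text) for phrase in VERIFICATION_PHRASES)

print(verification_missing("Let me verify your identity first."))  # False
print(verification_missing("Thanks for calling, how can I help?"))  # True
```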
MaestroQA’s Custom KPIs Generated by AI Classifiers
By embracing AI technologies, organizations can automate repetitive tasks, enhance accuracy, and free up valuable time for QA teams to focus on strategic initiatives and high-impact evaluations.
Elevate Your CX and Drive Conversions with Optimized QA Scorecards
Revamping scorecards is essential for efficient and effective QA processes. By following the insights and tips shared by David Gunn, organizations can optimize their QA scorecards, align them with company goals, address subjectivity challenges, and leverage AI technologies to enhance automation and personalization. Continually refining and adapting scorecards helps businesses achieve better agent performance, improved customer experiences, and overall operational success. The result is increased customer loyalty, higher levels of efficiency, and a competitive edge in today's customer-centric landscape.
Want to learn more?
Check out the full recording of our chat with Hims & Hers, and learn how they are De-Villainizing QA & Building a Scorecard That Agents Trust. If you would like to learn more about what MaestroQA can do for your business, please request a demo today.
Haley Fortune
Haley is the Marketing Campaign Coordinator at MaestroQA where she spearheads the new Product Webinar series.
Isaac Lee
After spending years wondering where those "calls recorded for feedback and quality assurance" went, Isaac joined MaestroQA. Today, he produces guides and content that help companies take their quality assurance processes to the next level. Connect with him on LinkedIn.
Related articles
De-Villainizing Quality Assurance for Exceptional Customer Service: How Hims & Hers Empowers Agents and Improves Scorecards (May 11, 2023)
Writing the Auto QA Playbook and Revolutionizing Your Customer Support Experience (April 20, 2023)
How Novo is Advancing Quality Metrics for Customer Service Teams with MaestroQA’s AI Classifiers (April 7, 2023)