The Romans favored a gradual, structured process when it came to city-building: they identified and established a strong central area of the city (usually a forum, or marketplace), then added to that core over time by following a pre-established framework. Very cool 😎.
But before I lose everyone who isn’t a history buff, let me get to the point—building a QA scorecard isn’t that different from planning an Ancient Roman city: you start from a strong core that reflects your company’s values and goals, then follow a framework as you build on it over time.
Keep reading to learn more about Roman cities, QA scorecards, and everything in between. We’ll go over what a quality assurance scorecard is, why these scorecards are crucial to your team’s success, and how to build one yourself. We’ll also show real examples of QA scorecards from best-in-class customer experience (CX) teams like MeUndies, SeatGeek, and more.
If you’re ready to go really in-depth into building QA scorecards that generate insights and trustworthy data for your CX team, look no further than our Ultimate Guide to CX QA Scorecards, or keep reading for a handy checklist download at the end of this post.
A Quality Assurance Scorecard (also known as a QA scorecard, quality monitoring scorecard, or call center quality monitoring scorecard) is a rubric against which a QA analyst, team lead, or manager grades an agent’s interactions with a customer.
These scorecards are based on a company’s values and standards for agent-customer interactions. They form the foundation of the QA process and are the way that support teams continually ensure that customer interactions go the way they want them to.
Most quality assurance scorecards are a collection of customer service values and procedures phrased as questions posed to the grader. In the example from MeUndies above, the grader is asked to determine how well the agent personalized their response to the customer, something that helps set MeUndies apart from its competitors on the customer experience front.
MeUndies uses a checkbox in this segment to allow the grader complete flexibility in determining which personalization tools were used, and which were skipped over. Other quality assurance scorecards also use a mix of linear point scales, yes/no dropdowns, and short answer fields to collect feedback for agents.
Quality assurance scorecards serve as a framework for call center agents to improve, by providing measurable metrics that impact customer satisfaction and customer loyalty.
QA scorecards are the backbone of a QA program, so the real question here is “Why do you need a QA program?”
Companies of all sizes have implemented QA programs for their CX teams, so the main reason has nothing to do with size or annual revenue. Rather, companies implement QA for four reasons, which we’ve condensed into a cheesy acronym for you - SAVE.
Teams like LevelUp instituted a QA program to provide a framework (and data!) for managers to coach their agents around. Previously, no such program existed, and agents on different teams would receive different coaching experiences depending on who their manager was.
2. Accountability and Visibility
Managers at WP Engine have to grade agent interactions on top of their many other tasks, and often fell short of their grading targets. CX leadership implemented a QA program to help automate ticket assignment (part of the QA process) and keep managers accountable to their grading goals.
The QA program also gave managers visibility into their agents’ engagement with the training modules assigned in their knowledge management system, and helped them build a high-trust environment where everyone could thrive.
GetUpside had a happy problem on their hands: downloads of their cashback app skyrocketed, but the CX team couldn’t grow at the same pace.
Faced with the prospect of losing the in-house team she had built over two years to be replaced with an outsourced BPO team, Heather Naughton, CX and Operations Manager, chose to double down on her in-house team. She implemented a QA program to help spot and correct inefficiencies in their QA process, and the team eventually brought response times back to their pre-growth spurt rates, and then exceeded them!
Back to my Roman city analogy: to build a call center QA scorecard, start with what’s important to your company, and build from there.
I spoke to our Customer Success Managers Matt and Laura to figure out the best way to build a QA scorecard from scratch:
To which they replied:
Companies building a QA program often consider their company values first, and this makes a lot of sense—your customer-facing team is the most frequent human point of contact a customer has with your brand, so it is essential that they embody your brand’s values.
Step one in the scorecard building process is to take a scrap piece of paper or open a Google Doc (save a 🌲 ppl), write “Company Values” on the left, and fill it up with the values that drive your customer interactions.
On the right, write: Operational KPIs and Metrics.
Is First Contact Resolution an important part of your customer experience? Add that to the list. Are you aiming for your agents to deliver a certain number of interactions per hour? Pencil it in. Are there important regulations that your agents have to follow in their interactions (PII, HIPAA-compliance, etc)? Gotta have that.
To round it off, keep listening in on customer calls and speak with agents. These interactions will help you to identify the pain points both customers and agents are currently facing, as well as potential areas of opportunity to improve. Make note of these areas as well – this could become a question on your QA scorecard.
By cross-referencing these insights with the list you’ve made, you will already have gathered the main ingredients to build your first scorecard.
Once you’ve mapped out the most important brand values, KPIs, and areas of opportunity for your team, you can take these and affix them to a framework. Here are two suggestions from our experts:
Jeremy Watkins, Director of Customer Experience at FCR, suggests a 4C framework to build your scorecard around:
(I know, that’s way more Cs than we promised, but Jeremy has a point.)
Each of these represents a section in your rubric, which you can build out with questions based on the list that you’ve created previously.
Another way to do it is something that Laura suggests:
These pillars are:
Things like tone, understanding context, and empathy go here. Other companies have also added elements like humanity, or use this section to encourage unique interactions, like casually swearing at a customer (with love, of course).
This section can be as simple as a Yes/No question: “Did the agent resolve the issue for the customer?” But it can also involve a linear sliding scale (from 1 point to 5 points, for example) to better capture the nuances in each customer interaction, or to reflect a particularly technical or complex interaction.
Did the agent properly follow internal procedures? We often see checkboxes for this section, where a QA grader can easily tick off each requirement as it gets met.
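If it helps to see those question types side by side, here’s a quick illustrative sketch of how yes/no, linear-scale, and checklist answers can roll up into a single score. The section names, weights, and checklist steps are made-up examples (not a prescribed schema), and your own weighting should reflect what your team values most:

```python
def score_interaction(answers):
    """Convert graded answers into a 0-100 QA score (illustrative only)."""
    # Yes/No question: full credit or none
    resolution = 1.0 if answers["resolved_issue"] else 0.0
    # Linear scale (1-5) normalized to 0-1
    tone = (answers["tone_rating"] - 1) / 4
    # Checklist: fraction of required steps the agent completed
    steps = answers["process_steps"]
    process = sum(steps.values()) / len(steps)

    # Hypothetical weights reflecting what the team values most
    weights = {"resolution": 0.4, "tone": 0.3, "process": 0.3}
    total = (weights["resolution"] * resolution
             + weights["tone"] * tone
             + weights["process"] * process)
    return round(total * 100)

example = {
    "resolved_issue": True,
    "tone_rating": 4,
    "process_steps": {"greeting": True, "tagging": True, "signoff": False},
}
print(score_interaction(example))  # → 82
```

However you weight it, keeping the sections explicit like this makes it easy to show an agent exactly where a score came from.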
Both of these frameworks are a great place to start (and there’s plenty of overlap between them!). You can also look for patterns in the notes you’ve made, identify the main categories that emerge, and use those as the primary sections of your rubric.
With your quality scorecard mapped out, the next thing to do is to start fleshing out the questions that go into it.
We compiled five common questions to give you a head start in your QA scorecard research process - here’s a sneak preview:
Good grammar and a friendly tone of voice (or one that matches your brand values and experience) are essential aspects of any customer-facing team.
If you include this question in your call center quality assurance scorecard, check that it reflects the omnichannel nature of support today – tone of voice and good grammar are a must for every channel, whether your agents provide support over email, chat, or phone. You can achieve this either by wording it to work for both voice- and text-based channels, or by creating different rubrics for each channel.
If a call center agent doesn’t understand the root cause of the ticket well, they might not come up with the best solution—and we all know that it’s better to treat illnesses rather than just symptoms.
Tagging is a critical part of ensuring good data quality—by aggregating tag data, your team can easily identify the issues that keep popping up—and move to correct them.
Customers often struggle to describe their pain/issues to the agent – and that's not their fault! They might be missing important context. Your customer experience team lives and breathes your product, and should be able to figure out what your client is trying to communicate even when they can’t find the right words.
Use this question when empowering agents to be more vigilant in digging up the real issues your customers are facing.
The problem with using pre-programmed macros to answer humans is just that – you're dealing with humans!
Macros can’t fully capture the wide variety and the uniqueness of your human interactions. Using the wrong macro could lead to an awkward answer from the customer's perspective, but it could also mean more work for the agent – trying to customize a macro about billing issues to answer a question about shipping might be more trouble than it's worth, and agents should know that!
So if you’re using macros, take care to select the right one for the task and modify it to give the right resolution for the customer.
This question is helpful when QA-ing everyone from newbie call center agents to seasoned pros. Macros, LMSes, and guidebooks are helpful when onboarding a new team member, but being able to read the nuances of every case and select the right form of resolution is a skill that comes with experience, and can always be honed over time.
Including this in your scorecard will help you keep a lookout for agents who are still building up to that level of awesome. It also gets at the heart of the interaction – did the agent do the right thing – while also taking into account that there are differences between what will make the customer happy, and what actually follows internal protocol.
Many teams start their QA process on spreadsheets. Through a complex combination of formulas, tabs, and reference cells in Google Sheets, it's possible to set up a QA program where you can figure out what works for your team without paying for a tool.
But eventually, spreadsheets and formulas become unwieldy and hard to use for even the greatest Excel whizzes ✨ out there. Imagine having to manage the QA scores and analytics of hundreds of agents on a massive spreadsheet. Just the thought of maintaining a spreadsheet with 100 Excel tabs has me quaking in my chair. It’s safe to say that what works for a team of 10 does not scale to a hundred agents.
Another reason to use QA software - the integrations. Leading QA software programs integrate with all manner of CX tools that your team might currently use, including learning management systems, data warehouses, and helpdesks.
Take a moment to evaluate your company’s current needs and goals, pick a tool (cough* this CX quality management software is pretty fab *cough), and get to building!
At MaestroQA, we are uniquely positioned and privileged to have insights on how hundreds of CX teams do QA. We picked three success stories to give you a little bit of inspiration:
Fullstory regards their customer experience team as defenders of their brand. They started the QA planning process with their product “watchwords” like Empathy, Clarity and Bionics – concepts around which their product was built. They then built their first scorecard with the goal of validating what their CX team was already doing, and to help them to keep improving over time.
With their watchwords in hand, they crafted detailed descriptions of the experiences a customer could expect in every customer interaction, and included them as Yes/No questions for the scorecard, with the belief that a slow, thoughtful, qualitative QA process would present more value than a quantitative QA score.
MeUndies are a great case study of how to maintain quality in the face of a seasonal workforce, as well as a great example of a brand voice-oriented rubric.
Their QA program allows call center agents to track their own growth over time and lets the team pay attention to the needs of each agent and rally behind them if they need it.
In their scorecard, MeUndies included parameters like brand voice, customer satisfaction, personalization, empathy, the use of macros and (get this) use of emojis. I 😍 them already.
To learn more about how they weigh each parameter in their call center quality monitoring scorecard, as well as their philosophy behind scorecard design, read this case study.
Our friends at SeatGeek started looking at a QA program to improve customer satisfaction scores. They’re like that kid in class who gets an A and wonders out loud how they can get an A+ 🤦‍♀️ (but SeatGeek does it for their customers!).
Their first scorecard was based on their operational needs: agent thoroughness, tone, and resolution. To keep it simple, they used a linear 10-point scale on spreadsheets.
As their CX team grew in number, more senior CX agents were incorporated into the QA team. SeatGeek realized that the time was right to implement a QA platform to automate parts of the QA workflow and improve their efficiency.
This new-found efficiency also let them implement more parameters to measure their agents’ performance while maintaining their average Time-to-Grade.
Read about how they did it here.
While Rome wasn’t built in a day, your QA scorecard certainly can be!
You’ve probably realized this while reading, but you already have the main elements of a scorecard on hand. If you’re in any doubt, double check your CX program against these best practices for Quality Monitoring Scorecards that we’ve compiled for your convenience.
You’ve been listening to your customers’ needs and pain points, talking to your agents, and you already live and breathe your brand and its values. The customer examples we included above should also have driven home the point that most call center scorecards start with company values and operational requirements, and then grow from there.
Start off with something manageable that also embodies your company’s values and organizational goals. Complexity can come with experience and the ever-changing needs of your CX team, but for now, it’s a good idea to keep things simple.
To summarize, when building a scorecard:
Here's a handy checklist that you can use as you get started!
Once you’re done building your first QA scorecard, our guide on onboarding your team to your new QA program will help you secure your team’s buy-in and get them the maximum benefit out of QA!
Go forth and QA!