How to build a QA scorecard

Building your first QA scorecard can seem like a daunting task. You probably have a mix of things you need to be grading on, but it can be tricky to figure out how to put it all together into one place (be it a spreadsheet or QA software).

We’ve broken down the process into five key steps. Below, we dive into each one so you’ll have your scorecard up and running in no time 🏃‍♀️

How to build your call center quality scorecard:

Step 1: Identify the Values and Goals that Drive Your Customer Interactions

Your support team is the most frequent point of contact someone has with your company—it’s essential that agents stay on-brand and compliant. For this first step, you need to identify what your brand values are, as well as the goals your support team is trying to achieve. 

We recommend getting out pen and paper and doing this by hand (or on a big whiteboard!) 📝

On the left side, identify your company and brand values. If your support team has established values, include those, too. Is a speedy response something that’s crucial to your values? Does your team have a specific brand voice they need to use in interactions? Jot those down. 

On the right side, take note of your operational goals. Is having a low Average Handle Time one of your team’s goals? Add that on the right side of the list. Do agents have to comply with a particular way of phrasing a request for compliance or legal reasons? Pencil those in. 

Once your list is compiled, you’ll likely notice an overlap between the two sections. With the above examples: if a speedy response is crucial to your brand’s values, it’s a good sign that you’re already measuring AHT (and if you aren’t already measuring AHT, it’s something worth considering!).

To round it off, spend time listening in on customer calls and speaking with agents. These interactions will help you to identify the pain points customers and agents face, and where things could be improved. 

Step 2: Create & refine scorecard questions

Next up, you need to write the questions for your scorecard. Not sure what to include? We’ve got 4 simple yet effective questions to add below.

  1. Did the agent use good grammar and an appropriate tone of voice? 

Let’s start with an easy one. Good grammar and a friendly tone of voice (or one that matches your brand values and experience) are essential for any customer-facing team—regardless of whether your agents provide email, chat, or call support.

  2. Did the agent identify the root cause and tag the ticket correctly? 

This question helps you confirm that the agent understands your tagging system (and how to use it) and your product.

  3. Did the agent select and appropriately modify the right macro? 

Macros exist to help agents be more efficient when answering tickets. Instead of typing out replies to every question, agents can use macros to answer frequently asked questions in one click. But using the wrong macro could lead to an awkward customer experience—agents should know when (and how) to use them.

  4. Did the agent choose the appropriate resolution to the customer’s issue?

Being able to “read between the lines” and understand the nuances of each case—and selecting the right resolution—is a skill that comes with experience. Including this question in your scorecard helps you identify agents who are still building that skill. It also gets at the heart of the interaction by assessing whether the agent did the right thing, weighing internal protocols against what will actually make the customer happy. Traditional metrics like CSAT or NPS can’t provide this level of insight into the customer experience.
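Taken together, these four questions already make up a minimal scorecard. As a rough illustration, here’s one way to represent them and compute a per-ticket score in Python; the structure, question ids, and equal Yes/No weighting are all assumptions, not a prescribed format:

```python
# Minimal sketch of a scorecard as a list of questions (illustrative names).
scorecard = [
    {"id": 1, "text": "Did the agent use good grammar and an appropriate tone of voice?"},
    {"id": 2, "text": "Did the agent identify the root cause and tag the ticket correctly?"},
    {"id": 3, "text": "Did the agent select and appropriately modify the right macro?"},
    {"id": 4, "text": "Did the agent choose the appropriate resolution to the customer's issue?"},
]

def grade(answers):
    """Score one ticket, treating every question as equal-weight Yes/No.

    `answers` maps question id -> True (pass) / False (fail).
    Returns the percentage of questions passed.
    """
    passed = sum(1 for q in scorecard if answers.get(q["id"], False))
    return 100 * passed / len(scorecard)
```

For example, `grade({1: True, 2: True, 3: False, 4: True})` returns 75.0, since three of the four questions passed.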

Step 3: Organize your questions 

There are two main ways to organize the questions on your scorecard: most teams use either the 4Cs framework or the pillar framework.

4Cs framework: 

You may have noticed that lots of your scorecard questions share similar themes. That’s totally normal, as most questions are related to one of four categories: communication skills, customer connection, compliance and security, and correct content. Some teams organize their scorecard around these categories.

Communication Skills 

How well do your agents communicate? 

For phone conversations, you might want to evaluate tone of voice, pace of communication (e.g. talking too fast or slow), or the excessive use of filler words (e.g. um, ah, and mmmmkay). For text-based channels, grammar and spelling are critical for communicating clearly, concisely and in a way that’s on brand. The messaging should also look good visually, which means paragraphs should be broken up logically with proper line breaks. 

Customer Connection 

Do agents make a real connection with your customers? 

Do these interactions provide customers with experiences that differentiate your brand from the competition? 

Some common points of connection that make it into scorecards are: 

• Greeting customers warmly and using their names wherever possible

• Listening carefully, acknowledging issues, and repeating back what was heard to ensure everyone is on the same page 

• Responding empathetically to the customer’s mood and tone 

• Communicating a willingness to help and take initiative

Compliance and Security 

Do agents follow all essential policies and procedures to keep the customer and the company safe? 

Do they handle PII in the right way and protect the log-in information of customers? 

Security is a critical component of quality review. At a minimum, security means properly authenticating customers before disclosing or changing information on their accounts. Depending on the industry, you’ll often hear acronyms like PII, PCI, and HIPAA. Failure to comply with these regulations reduces trust with customers and could result in significant legal problems for your company, so it’s wise to make sure agents are regularly graded on compliance and security. 

Correct and Complete Content 

Do agents provide correct and complete answers and use the right tools to arrive at those answers? 

Are all internal processes followed? 

Providing wrong or incomplete answers defeats the purpose of having a customer service team. After all, customers reach out to gain access to information that will solve their specific challenges. When customers receive bad information, they’ll either lose confidence in your team or call back in the future—leading to unnecessary spikes in call volume.

The pillar framework:

Some companies prefer the pillar framework, which aligns questions around three main pillars: soft skills, issue resolution, and procedure. 

Soft skills can be anything from tone, to empathy, to understanding context. Companies may also add elements such as friendliness, humanity, or going the extra mile to ensure a great customer experience. 

Issue resolution can be as simple as a Yes/No question of “Did the agent resolve the issue for the customer?” It can also involve questions with a linear sliding scale (i.e., from 1 point to 5 points) to capture the nuances of each customer interaction or to reflect a particularly technical or complex interaction. 

Procedure is another simple section that makes sure the agent properly follows internal procedures. Lots of teams ask procedural questions in a checkbox format, which makes it easier for a QA analyst to tick off requirements. 
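Because the pillar framework mixes Yes/No questions, 1–5 sliding scales, and procedural checkboxes, each answer type has to be normalized before it can roll up into a single score. Here’s a hedged sketch of one way to do that; the type names and the simple averaging are illustrative assumptions, not a standard:

```python
# Sketch: normalize mixed question types to a 0-1 score, then average them.
# Question types and the equal weighting are illustrative assumptions.

def normalize(question_type, answer):
    """Map an answer to a 0-1 score based on its question type."""
    if question_type == "yes_no":        # e.g. "Did the agent resolve the issue?"
        return 1.0 if answer else 0.0
    if question_type == "scale_1_5":     # linear sliding scale: 1 -> 0.0, 5 -> 1.0
        return (answer - 1) / 4
    if question_type == "checkbox":      # procedure checklist: fraction of boxes ticked
        return sum(answer) / len(answer)
    raise ValueError(f"unknown question type: {question_type}")

def pillar_score(graded):
    """Average the normalized answers within one pillar, as a percentage.

    `graded` is a list of (question_type, answer) pairs.
    """
    scores = [normalize(qtype, ans) for qtype, ans in graded]
    return 100 * sum(scores) / len(scores)
```

So a pillar graded as `[("yes_no", True), ("scale_1_5", 3)]` scores 75.0: a full pass on the Yes/No question averaged with the midpoint of the sliding scale.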

Step 4: Decide where to build your scorecard

You now know what your scorecard looks like...but where does it go?

There are two basic options that most teams use: either a spreadsheet tool (like Google Sheets or Excel) or QA software built specifically for grading. 

Spreadsheets are often the starting point for smaller teams or teams building out their first QA program. While they may work in the short term, they don’t scale with teams as they grow beyond a few agents. We think MaestroQA’s a pretty great option 😉

Step 5: Test out your scorecard

Your QA scorecard needs to work for all of your graders, managers, and agents to be successful...but you won’t know if it’ll work for your team unless you give it a try.

We recommend testing your scorecard prior to rolling out to your organization. There are 3 best practices we usually advocate for when testing: make sure you assemble a small feedback group, make it easy for them to share feedback, and be sure to give your scorecard test the time it deserves (3-4 weeks for most teams).

Assemble a Small Feedback Group 

Form a team that will test the new scorecard and make sure to include a wide range of roles. When selecting team members, look for people who are familiar with your CX goals and willing to provide constructive, honest feedback. 

Then, agree on a methodology for using the new scorecard. If this is your company’s first scorecard, graders could immediately begin using it to assess all tickets during the test period. Remember to keep agents in the loop, too. Reassure them that any tickets graded with the new rubric are for testing purposes only and will not influence their KPIs.

Make It Easy for Team Members to Share Feedback 

We’re big fans of creating a dedicated Slack channel to collect real-time feedback from graders. Encourage graders to post anything about the new scorecard that seems unclear, time-consuming, or misaligned with your values and policies. 

Calibration sessions—meetings that get all of your graders talking about the new scorecard—are another great way to solicit feedback and identify potential issues. To maximize the impact of each calibration session, use your new scorecard to grade one or more tickets prior to the meeting. Compare your grades to those of other graders and then use the meeting as a forum for overcoming misalignment.
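If your graders record numeric scores during the test period, you can quantify that misalignment instead of eyeballing it. A minimal sketch, assuming every grader scores each question on the same scale; the threshold value is an arbitrary assumption you’d tune for your own rubric:

```python
# Sketch: flag scorecard questions where graders disagree in a calibration
# session. Grades and the disagreement threshold are illustrative.
from statistics import pstdev

def flag_misalignment(grades, threshold=1.0):
    """Return question ids whose grades spread more than `threshold`.

    `grades` maps question id -> list of scores, one per grader.
    A high standard deviation suggests the question needs discussion
    or rewording before the scorecard rolls out.
    """
    return [qid for qid, scores in grades.items() if pstdev(scores) > threshold]
```

With grades like `{"tone": [5, 5, 4], "resolution": [5, 1, 3]}`, only `"resolution"` exceeds the threshold and gets flagged as a talking point for the session.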

Test Your Scorecard for the Right Amount of Time

Three to four weeks of focused testing should be enough for most teams. That said, wrapping up testing as quickly as possible is not the primary objective. What’s more important is ensuring that graders understand each question and that your new scorecard is an effective communicator of feedback 📈