Creating a quality form for your customer service team can be a scary task. Questions abound. How many items should it hold? What questions matter most? Which ones don’t? What type of scoring method should we use?
What makes it even scarier is that the rubric (and the questions within it, and how they’re scored) is the foundation of a great quality assurance program. A bad rubric can lead to scores that don’t represent what a quality interaction actually means to your brand. Or it can inflate scores, leaving you with metrics your team can’t actually use to improve (GASP!).
But as Jeremy Watkin, long-time QA professional, tells us, creating a quality management form doesn’t have to be scary. There are some tried and tested methods that companies can use to think through the process of creating a quality form that works really well for their unique business (and their unique customer base).
These articles walk you through various methods of thinking about what should go into your agent scorecard:
This article walks you through the steps that every company should take before building out a call center quality assurance form of any kind (in a spreadsheet or an omnichannel QA platform).
The first step is to really think about your company’s brand, how you want to be perceived by your customers, and what good looks like for your specific company and customers. This includes:
- What you call your customers
- What the philosophy or mission of your company is and (equally importantly) what the mission of your support team is
- What good looks like for your company and support team (e.g., is there a list of things that make up a “good” customer interaction for your team?)
The next step is to think about quality in three separate areas:
- Accuracy: Did the agent provide the right answer? In the right way? Were all internal processes followed?
- Compliance: Did the agent handle PII in the right way? Did they protect the customer’s log-in information?
- Connection: THIS ONE IS LAST-BUT-DEFINITELY-NOT-LEAST, WOW! Did the agent have an authentic interaction with the customer that will differentiate your brand from the other companies that customer has talked with? Was it real? Was it human? And so on...
Then, think about how you’ll use this customer service quality assurance checklist: do you create a different rubric for each channel, or use the same form for all of them? And what should you look for in a QA tool to support your team’s goals?
Don’t be scared!
(A fun read – the only difference between fear and courage is the action that you take).
There are two schools of thought around how many scorecards companies should have. Some believe that you should have a different form for each support channel. After all, each channel requires a very different set of communication skills. For instance – tone of voice matters over the phone, and doesn’t really exist for other channels. And grammar might be more important in an email than over SMS, where an agent could be a lot more casual with a customer.
Others think that one scorecard should be applicable for every customer interaction. After all, each channel should be aligning to a standard of excellence that applies to every interaction a customer has with your brand, right? So maybe there is a way to create a form that highlights universal best practices across any channel.
This article goes into the four pillars that every quality monitoring scorecard should have…
- Communication Skills – How well did the agent communicate the message?
- Customer Connection – Did the agent make a human connection with the customer?
- Compliance and Security – Did we follow all essential policies and procedures to keep the customer and the company safe?
- Correct and Complete Content – Did we give out correct AND complete answers and use our tools effectively to arrive at those answers?
...and how these pillars can mean slightly different things on different communication platforms.
It also goes into the benefits of omnichannel customer service rubrics, and considerations for creating this type of form, including:
- Keep your form relatively simple
- Create a quality definitions guide
- Use N/A for certain questions
- Grade the entire interaction
- Slice and dice by channel and question in your reporting
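To make the N/A and per-channel ideas concrete, here is a minimal sketch (in Python, with made-up question names, channels, and a simple pass/fail scale – your actual form and scoring will differ) of how a scorecard might exclude N/A answers from an interaction’s score while still letting you slice results by channel:

```python
from collections import defaultdict

# Hypothetical graded interactions: each answer is 1 (pass), 0 (fail),
# or None (N/A -- the question didn't apply, e.g. "tone" on email).
graded = [
    {"channel": "phone", "answers": {"tone": 1, "grammar": None, "accuracy": 1}},
    {"channel": "email", "answers": {"tone": None, "grammar": 0, "accuracy": 1}},
    {"channel": "email", "answers": {"tone": None, "grammar": 1, "accuracy": 1}},
]

def interaction_score(answers):
    """Score one whole interaction, ignoring N/A questions
    so they don't drag the percentage down."""
    applicable = [v for v in answers.values() if v is not None]
    return sum(applicable) / len(applicable) if applicable else None

def scores_by_channel(interactions):
    """Average the interaction scores per channel for reporting."""
    buckets = defaultdict(list)
    for item in interactions:
        score = interaction_score(item["answers"])
        if score is not None:
            buckets[item["channel"]].append(score)
    return {ch: sum(s) / len(s) for ch, s in buckets.items()}

print(scores_by_channel(graded))
# e.g. {'phone': 1.0, 'email': 0.75}
```

Grading the entire interaction (rather than individual messages) keeps the unit of analysis consistent across channels, and the channel field is what lets you slice and dice the same universal form in your reporting.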
Livin’ la vida omnichannel!
This article is for people who’ve been thinking about their quality forms for a while, and they have a list of things that they know they don’t know (wait...what?).
Do you have a pretty good sense about what’s going wrong on your team, and you know there must be a way to make sure these things are accounted for in your call center quality assurance guidelines?
These questions and frustrations are a good place to start in creating your first rubric – don’t get bogged down in getting the perfect scorecard right off the bat. A great rubric takes iteration over time as you gather data on your team’s performance. It’s a huge win to just get started, consistently review tickets, and share feedback with agents in a more structured way.
To be or not to be, that is the Q!!
FullStory has a simple mission – they believe that everyone benefits from a more perfect online experience. They want to be part of making peoples’ online experiences better. Within their mission, they have some watchwords (what FullStory calls their core/brand values internally) to guide them.
These watchwords are empathy, clarity, and bionics. They give everyone at the company a framework for making decisions, guiding both their product choices and how they operate internally.
When the support team at FullStory was creating their rubric, they really wanted to make sure that everything the support team was doing was in line with what marketing, sales, and product were up to – after all, part of their job is to make sure their customers have a more perfect online experience with their brand.
So they built their QA rubric around their core values very explicitly. Learn more about what they did here.