Customer Experience is a highly complex and ever-evolving industry, and nowhere is that more apparent than in the vast number of terms and jargon that make up the CX alphabet soup - QA, CSAT, NPS, and LMS - just to name a few.
In a growing industry like ours, it gets even more confusing when people with different definitions come together to talk about CX - what do you really mean when you say you do calibrations once a month?
To get everyone drinking from the same bowl of soup (we’re really stretching the analogy here), we’ve put together a glossary of the key terms used in the worlds of CX and Quality Assurance. It’s a pretty long list, so we loosely organized the terms into four different buckets: key QA processes, industry-specific terms, metrics, and other concepts you need to know.
Also known as: Quality Control, Grading
Quality Assurance (QA) is the process by which CX teams ensure that their interactions with customers meet their organization’s standards. While these standards differ widely from company to company, they typically have the 4C’s in mind when establishing a QA program: communication skills, customer connection, compliance and security, as well as correct and complete content.
The main process in QA is grading. Grading happens when a QA specialist or team leader randomly selects tickets from an agent’s queue and evaluates if they meet (or don’t meet) the company’s quality standards (aka the 4C’s). The terms “grading” and “QA” are often used interchangeably.
Also known as: new hire training, induction
Onboarding is the training process that teaches newly hired agents everything they need to know about interacting with customers. This usually includes product training and knowledge, basic troubleshooting, and CX tone-of-voice training. The length of onboarding programs varies from company to company, but they typically run from two to eight weeks.
QA features prominently in the onboarding process, and frequent QA early on can help new employees understand what makes (or breaks) a response. New employees are usually introduced to a sandbox environment to apply what they’ve learned without fear of real-world ramifications if something goes wrong. Senior agents (or onboarding specialists) grade these test tickets, deliver feedback, reinforce learnings, and identify areas of improvement. An alternative approach: instead of a sandbox environment, some teams have agents answer actual tickets early on, but they’ll heavily QA them.
Also known as: uptraining, upleveling
Related: Learning Management Systems (LMS), Learning and Development (L&D), Knowledge Base (KB)
Training is the process in which an agent is assigned learning material or coaching based on areas of improvement that have been identified through QA. It is also the last part in our “Classic Loop” that describes how QA impacts Customer Service training.
Grading allows teams to identify areas that require improvement, and assign targeted training materials. These materials are usually hosted on a Learning Management System (LMS) that tracks an agent’s progress and serves as a knowledge base for the team.
To close the loop, agents are then graded again in a subsequent round of QA, and their improvement over time can be tracked in a quantitative manner.
At some companies, a full Learning and Development (L&D) team exists to keep the knowledge base up-to-date, create new training when necessary, and run onboardings for new hires. These teams thrive on QA data! It helps them to evaluate the efficacy of their programs and identify gaps in their training or knowledge base that require further improvement.
Coaching is the process in which an agent receives feedback from a grader, a manager, or a peer. These coaching sessions are typically conducted on a 1:1 basis, and feedback is given based on the agent’s QA data.
Modern QA solutions allow managers to spend more time on 1:1 coaching sessions with agents instead of grading. Rather than pinpointing individual errors in tickets, most systems allow managers to coach using a much larger dataset.
Over time, coaching has evolved into a more inclusive and democratic process. One such improvement is the introduction of appeals to the coaching process. An appeal is the process by which an agent challenges the grade given to them, usually based on extenuating circumstances that were not taken into consideration.
During the appeals process, agents share their side of the story and a grader reevaluates the given grade. This helps build trust amongst agents and managers, and helps with the agent experience (see below!).
The nature of customer service means that some parts of the grading process are subjective. Graders A and B might give scores differing by just 1 point for an agent’s tone in a chat interaction. For a scorecard graded on a 5-point scale, that one-point difference can mean up to a 20% swing in that agent’s QA score 🤯
Calibrations aim to remove subjectivity by having graders grade the same ticket separately, then come together to discuss the score the ticket should have received. Some QA tools automatically assign tickets for calibration to ensure that graders are always in line with agreed-upon grading standards.
If you want to learn more about calibration, this panel with Stitch Fix dives deep into how they implemented a best-in-class calibration program.
Scorecards (or rubrics) are the backbone of every QA program - they provide the tangible way to grade someone on the quality of their interaction.
Scorecards used to be built by the CX team on a spreadsheet (just imagine an Excel file tracking hundreds of agents over thousands of interactions, and the associated anxiety 🥺). But these days, scorecards often reside in QA platforms that are fully customizable, can automatically (and randomly!) pick tickets for grading, and report on long-term QA data on both the agent and team level. Graders have reported a 10x increase in tickets graded when moving from spreadsheets to QA software.
If you’re looking to create your first scorecard, this guide will help you get started. If you’re a seasoned QA pro hoping to level up your scorecards, check out our guide to call center quality monitoring scorecards, which covers the topic in more depth.
Touchpoints and interactions refer to every point of contact that a company has with a customer. In CX, that would refer to every customer support action logged with a customer in the CRM, regardless of channel (phone, chat, email, or social media).
VoC refers to the process of capturing customers’ expectations, preferences, and aversions. If you’re curious about why people are writing in, and what is causing them to have negative experiences with your company, VoC programs can help.
CX teams are uniquely positioned to capture these insights, and QA programs can ensure this data is captured consistently. Product and backend teams use VoC data to plan their product roadmaps and engineering sprints to ensure that the product meets the evolving needs of the customer.
Personally Identifiable Information (or Protected Health Information in the healthcare space) refers to any data or personal information that can be used to identify specific individuals. This ranges from addresses and birthdays to Social Security numbers.
Most QA scorecards are built with PII compliance in mind, because the legal and reputational ramifications of not protecting PII can be extremely damaging to the company.
Modern QA software has the added benefit of automatically assigning tickets to graders, ensuring the right ticket is graded at the right time.
For example, if your team wants to grade all DSAT tickets (we talk about DSAT in the next section!), and a random sample of 5 normal tickets per agent, automations assign those tickets to graders seamlessly.
These automations have become more powerful over time, allowing CX management to specify trends and patterns that might be problematic in the future (and nip them in the bud!). For growing teams, automations also ensure that each agent is graded at the frequency that their experience and performance requires - you’d grade a two-year agent who consistently receives 90+ QA scores a lot less frequently than a new hire.
Also known as: grader QA
Quis custodiet ipsos custodes? (who will guard the guards themselves?) is probably the only Latin phrase I know, and is the basis of grade-the-grader, or grader QA.
In this process, graders are scrutinized to ensure that they meet the agreed-upon standard of grading that was established during calibration. Grader QA can also help managers report on the efficiency and accuracy of their graders based on the number of tickets graded and appeals they receive.
All very meta, I know.
Here’s where we open up the real can of … Campbell’s Alphabet Soup. Metrics are the direct output of many CX programs, so defining them is essential to ensuring we’re comparing apples to apples.
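The equation referenced below, in its most common form (exact definitions vary by team and channel, so treat this as a sketch):

```
AHT = (Total Talk Time + Total Hold Time + Follow-up Time) / Number of Tickets Handled
```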
If you skimmed past the fancy equation, Average Handle Time is simply the average amount of time taken to handle a ticket, from the moment it is submitted to the moment an agent finishes the tasks required to complete the customer interaction.
AHT shouldn’t be taken as a success metric - you don’t want agents to rush to close tickets in order to keep their AHT low (since some customers and issues need more time than others).
Rather, AHT can be used for assessing the efficiency of the CX operation as a whole - which lets CX management establish performance benchmarks for new agents and inform decisions around team staffing levels.
If you wanted a benchmark to compare your team against, Call Center Helper Magazine published a study finding that the industry standard for AHT is just over 6 minutes, but keep in mind that they also found a wide variance between industries, so your mileage may vary.
FCR is the percentage of contacts that are resolved on the first interaction with the customer. FCR rates give CX leaders a good indication of customer satisfaction (CSAT), because no one likes having to reach out again with the same unresolved issue!
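As a quick illustration, FCR is straightforward to compute. The ticket counts below are made up for the example:

```python
def first_contact_resolution(resolved_first_contact: int, total_resolved: int) -> float:
    """FCR as a percentage of resolved tickets."""
    return resolved_first_contact / total_resolved * 100

# e.g. 340 of 425 resolved tickets needed no follow-up contact
print(round(first_contact_resolution(340, 425), 1))  # 80.0
```

Teams usually track this weekly or monthly alongside CSAT, since the two tend to move together.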
The number of tickets entering the queue, or the number of customer interactions initiated, over a given period of time. Teams usually look at ticket volume on a weekly or monthly basis.
This metric indicates how efficiently an agent deals with tickets in the queue - as in, how many tickets they can resolve each hour. A falling tickets-per-hour metric should not be cause for alarm, though. QA data and notes taken by graders usually show the bigger picture - in many cases, the agent was simply taking the time to properly walk the customer through the steps to resolution.
Tickets per X active users gives teams an idea of how they are doing with regards to customer education. For a company that’s rapidly growing, ticket volumes are naturally going to go up. But if this metric is trending downward, it might mean the team is doing well at educating customers and preventing issues from becoming tickets.
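The normalization itself is simple. A minimal sketch, with hypothetical ticket and user counts:

```python
def tickets_per_thousand_users(tickets: int, active_users: int) -> float:
    """Ticket volume normalized per 1,000 active users."""
    return tickets / active_users * 1000

# e.g. 620 tickets in a month with 50,000 monthly active users
print(round(tickets_per_thousand_users(620, 50_000), 1))  # 12.4
```

Because the denominator grows with the business, this metric stays comparable month over month even as raw ticket volume climbs.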
This is a great metric to watch for teams experimenting with self serve CX (where customers are shown a menu of support articles before an actual chat can be requested with an agent), or who are implementing a product marketing campaign.
The GetUpside team tracks Tickets per 1000 Active Users to understand how their CX team is doing in terms of FCR and customer education. Read their case study here.
Customer satisfaction score (CSAT) and its two siblings, DSAT and UNSAT (dissatisfied and unsatisfied rates), are a mixed bag when it comes to QA.
Why? CSAT scores are usually measured through a customer survey at the end of an interaction, often on a binary scale (thumbs up/down) or on a 5-point scale. As with most optional surveys, the data tends to show a little bias. Just think - are you more likely to answer a survey if the interaction:
1. Went exceptionally well (or terribly wrong)?
2. Went exactly as expected?
Chances are, interactions that go as expected (scenario 2) don’t usually result in surveys submitted, meaning CSAT doesn’t usually show the whole picture of an agent’s performance.
Here’s another example: a customer wants a refund for a product they’ve bought, but they don’t meet the criteria for a refund/return. The agent follows policy to a tee, yet the customer is still disappointed they didn’t receive their refund. QA and CSAT metrics will disagree here - the agent would probably score well for QA having followed policy, but receive a bad score for customer satisfaction.
Despite these potential shortcomings as an individual agent performance metric, CSAT scores are still important in customer-centric organizations. Without a doubt, CSAT is a barometer of the success of a company’s customer experience program and a key indicator of whether things need improvement, but teams shouldn’t rely on it to tell the entire story behind an agent’s performance.
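For the binary thumbs up/down case, the calculation is just the share of positive responses. A sketch with made-up survey data (and note that most customers never respond at all, which is exactly the bias described above):

```python
# Hypothetical survey responses on a binary thumbs up/down scale
responses = ["up", "up", "down", "up"]

def csat(survey_responses: list) -> float:
    """CSAT as the percentage of positive responses among those who answered."""
    positive = sum(1 for r in survey_responses if r == "up")
    return positive / len(survey_responses) * 100

print(csat(responses))  # 75.0
```

The denominator is respondents, not total interactions - which is why a small, self-selected sample can swing an agent's CSAT dramatically.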
QA scores are the direct output of a QA program. These scores are usually given as a percentage, or out of 100 possible points.
As we said earlier, scorecards are completely customizable, which means it’s difficult to compare QA scores across different companies.
However, QA scores are a powerful snapshot of how an agent is performing relative to what their company has defined their QA standards to be. QA scores chart an agent’s progress and growth over time, and the individual components that make up the score can be used to assign targeted training where needed.
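Mechanically, a QA score is the share of points earned across a scorecard's sections. A minimal sketch - the section names and point values below are hypothetical, since every company defines its own rubric:

```python
# Hypothetical scorecard for one graded ticket
scorecard = {
    "communication": {"earned": 4, "possible": 5},
    "compliance":    {"earned": 5, "possible": 5},
    "accuracy":      {"earned": 3, "possible": 5},
}

def qa_score(card: dict) -> float:
    """QA score as a percentage of total possible points."""
    earned = sum(section["earned"] for section in card.values())
    possible = sum(section["possible"] for section in card.values())
    return earned / possible * 100

print(round(qa_score(scorecard)))  # 80
```

Keeping the per-section breakdown (rather than just the total) is what makes targeted training possible: the "accuracy" section above is the obvious coaching candidate.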
Agent Experience refers to the holistic view of how empowered, efficient, and effective your agents are. Simply put, happy agents = happy customers!
This trend was best described by Bonni Poch, CX Training Manager at Staples, as “moving from catching to coaching” in our webinar titled Why Fortune 500 Companies are Replacing Legacy CX Systems with Zendesk and MaestroQA.
CX teams have caught on to the fact that a positive agent experience generally leads to better customer interactions. QA has evolved as a result, focusing more on empowering agents and giving them the leeway to make judgment calls on how to best help customers, rather than ensuring they follow a script.
The shift to omnichannel CX refers to the practice of having multiple channels of CX support where a customer can reach out to a CX team, all within the same dashboard. Most CX solution providers like Zendesk, Talkdesk, and Kustomer enable you to meet your customers where they are, be it on phone, chat, social media, or email.
This trend also allows more CX self-service than previously possible, thanks to the advent of CX chatbots and more user-friendly support pages. The rise of Omnichannel CX has also led to the increase in importance of ...
As support channels get more and more complex, teams are building increasingly complex CX tech stacks to support their agents. As a result, the quantity and quality of available CX software integrations is an important point of consideration when selecting a QA tool.
Not a new trend, but all the more crucial as more privacy laws spring up around the world (think CCPA, GDPR, HIPAA, and FERPA). With these developments, CX teams are held to increasingly higher standards, and compliance is a catchall term that describes a company’s efforts to maintain those standards, whether legal or policy-based.
We hope this article helped - tweet at us @MaestroQA if there are other terms you think we should include!
For a better understanding of the state of QA, including what the latest trends and metrics to watch are, look no further than our annual conference, The Art of Conversation. All panels with our guest speakers from leading CX teams like Zendesk, Mailchimp and Peloton can be requested here.