MaestroQA Case Study: Using QA Data with Zendesk

Challenge 🤔

QA and training were linked only through 1:1s, and feedback was reactive

Implication 😭

Team-wide improvement was hard, and systemic issues couldn't be addressed

Solution 💪

QA data used to create trainings and address issues causing bad CSAT (including systemic ones)

Impact 📈

Bad CSAT decreased through targeted training!



Quality assurance data can tell you how things are going on your support team, down to the smallest of details, if you know where to look. It can tell you which agents are struggling with which sections of which rubrics, on which tier of support, for which industries of customers (if you set it up right – and yes, that's a mouthful).

Below is an overview of how Zendesk (a CRM company with support, sales, and customer engagement products) uses QA data to make a business impact and give customers irresistible experiences.

Challenge: Before QA and Training worked together

Before the training and QA teams worked as closely together as we do now, there were few, if any, large-scale initiatives to improve the quality of support. All feedback was individual-based and shared primarily in coaching conversations with direct managers.

There was little to no reporting or analysis we could point to when discussing trends or action items at any level beyond the individual.

Our QA program worked well for guiding managers on using QA to coach, but did very little to lift our full team of advocates across the board. As our team grew larger, and as we committed to best-in-class support, this became unsustainable – we couldn't target the full team with improvements and training.

We weren't addressing the systemic, "uncontrollable" issues plaguing our workflows (things happening outside the business that were impacting customers and the support team).

This was primarily because we didn’t have clear reporting tools since we were evaluating advocates in spreadsheets...sooo many spreadsheets. 

Implication: Reactive training, no team-wide improvements, and no reporting on company issues impacting CX

It was hard to tell what issues our advocates were facing beyond the individual level. We could be reactive and address issues 1:1, but team-wide improvements were lost, and company issues impacting the support team couldn't be identified or addressed.

Solution #1: How QA and Training work together now

QA data should drive training at the agent and team level. We have weekly meetings between QA and training, where we work together on the training content that comes out of our QA data each week. 

On the agent level, QA data is used in 1:1s to help coach agents on the specific areas that they can work on. On the team level, we use reporting to create weekly trainings on relevant improvement areas.

We call some of these microlearnings – quick trainings on small things agents can improve. For example, if we notice that advocates aren't filling out ticket information often enough, we might create an infographic on how to do it, and why it's important for the team as a whole. We also look at failures by product area and region, so we can target our microlearnings to the applicable teams.

With our current partnership, we're much quicker to adapt to trends in the QA data, and we can even predict certain trends and develop content and trainings that have an almost real-time impact on our support team's improvement. Recently, we began staffing our chat channel with our more technical advocates, many of whom had never worked on a live channel before. Because we could spot the dips in quality for these advocates, we supplied microlearning trainings that resulted in a 5% week-over-week increase in QA scores on the chat channel for them.
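As a concrete illustration of that kind of trend-spotting, here is a minimal sketch of a week-over-week QA-score check for one cohort on one channel. The data file and column names (graded_at, channel, cohort, qa_score) are hypothetical stand-ins, not Zendesk's actual schema or tooling.

```python
import pandas as pd

# Hypothetical export of graded QA evaluations; columns are illustrative.
scores = pd.read_csv("qa_grades.csv", parse_dates=["graded_at"])

# Focus on the chat channel and the newly staffed technical cohort.
chat = scores[(scores["channel"] == "chat") & (scores["cohort"] == "technical")]

# Average QA score per week, then the week-over-week percent change.
weekly = chat.resample("W", on="graded_at")["qa_score"].mean()
wow_change = weekly.pct_change().mul(100).round(1)

# A sustained positive value here is the kind of lift described above.
print(wow_change)
```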

Solution #2: Deep dive into CSAT Root Cause 

Prior to this root-cause project, we didn’t know what was driving dissatisfaction (DSAT, or bad CSAT). 

Now we audit every DSAT ticket and determine the root cause of the negative experience, using a list of DSAT categories we developed in close partnership with leaders on our team. We break the data out by channel, product, customer segment, plan level, and region to understand exactly where things are breaking for our customers.

This data as a whole gives us a better sense of what’s causing negative experiences for customers, and which actions we need to take, either as a support team or as a business, to improve customer pain points.
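To make that breakdown concrete, here is a minimal sketch of the kind of pivot this analysis implies, assuming a hypothetical CSV of audited DSAT tickets. The column names (root_cause, channel, product, segment, plan_level, region) are illustrative, not a description of Zendesk's actual tooling.

```python
import pandas as pd

# Hypothetical export: one row per audited DSAT ticket, each tagged
# with its root-cause category during the audit.
tickets = pd.read_csv("dsat_audits.csv")

# Break DSAT out by each dimension the team looks at.
for dim in ["channel", "product", "segment", "plan_level", "region"]:
    counts = (
        tickets.groupby([dim, "root_cause"])
        .size()
        .unstack(fill_value=0)
    )
    # Share of each root cause within the dimension shows where a
    # driver (e.g. billing resolution time) is concentrated.
    share = counts.div(counts.sum(axis=1), axis=0).round(2)
    print(f"\nDSAT root-cause share by {dim}:\n{share}")
```

Even a simple pivot like this surfaces which root causes dominate in which slice of the business, which is what points a team toward a targeted training rather than a generic one.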



Impact: Changes made based on root cause analysis projects

There are two examples: one is advocate-skills-based, and the other is an issue we identified at the company level.

We noticed through QA that we might have an issue with soft skills and handling for very technical issues over chat. Our root cause analysis confirmed that this was a large driver of DSAT, so we targeted the team members taking these types of tickets with a soft-skills-for-chat training. That led to a direct increase in performance and a drop in bad CSAT for that channel.

We also identified an issue with billing-related questions. Billing can be a contentious topic for some customers, and we found that a significant portion of dissatisfaction on these tickets was due to time to resolution while we were waiting on our partners on the Finance team.

We created a training on escalation paths for the advocacy team (aka when you should actually escalate to another team), as well as on the finance tools and questions that advocates can handle themselves, plus a long-term training plan to improve further on this.
