How QA Programs Improve CSAT – When You Apply a Growth Mindset

Company Name
MeUndies 🤭
Challenge 🤔

CSAT and QA score plateaued (neither was bad)

Implication 😭

When QA scores are high, but CSAT could be higher, you can do better!

Solution 💪

Have a growth mindset around QA, where you update QA standards whenever CSAT plateaus

Impact! 📈

Increased CSAT (in this case 96% – 99%)

Customers’ experiences matter more now than they did even ten years ago – if a support interaction isn’t on-brand, they’ll feel it, and it will matter to them. Applying a growth mindset in QA can lead to continual improvements in customer experience and CSAT. 

MeUndies has seen this first-hand, as they’ve updated rubrics and raised standards for their distributed teams multiple times in the past few years. If you put a growth mindset toward your quality program, you can unlock improvements in CX (and CSAT) that never stop.

Setting the context: 

Industry: E-commerce/Retail 

Quick overview of the company (one or two sentence description): MeUndies is a direct-to-consumer underwear and apparel company. 

Who are your customers?: The MeUndies customer prioritizes their self-expression, but not at the cost of comfort. They've had the realization that underwear is too important a daily necessity to settle for cheaply made, uncomfortable products from the stale brands that existed prior to MeUndies.

How many agents?: Like many support teams, our team size fluctuates between the holiday season and post-holiday season. As of right now we’re a team of 53; by December 2020 we’ll be 82! 

Distributed team? How many locations for agents?: All of our customer experience agents are based in the Philippines. Similar to many other teams, our agents are working remotely at the moment! 

Do you use an outsourced team? What’s the breakdown of in-house vs outsourced?: While the majority of the customer experience team is based in the Philippines, we have operational roles in Los Angeles (HQ) that support our Philippines team. These operational roles include: Quality Assurance, Learning & Development, Retention, and Social CX. 

Do graders or team leads do 1:1s with agents? Or is it someone else?: 

Both! Team Leads are in charge of their team’s overall performance, and they hold weekly 1:1s with each agent on their team. QA specialists will schedule 1:1s with our “developing” and “probation” agents specifically about their ticket responses and QA goals. The goal of these 1:1s is to dive deeper into the more granular pieces of an interaction’s makeup and talk through specific tickets and problem areas. 

How many team leads, how many graders?: We have 5 team leads that directly manage our agents and 3 QA graders. 

How many channels?: We have four channels for customer inquiries: Email, Chat, Social Media, and SMS. 

What kind of ticket volume are we talking (approximate): In 2019 we received almost 300,000 customer ticket inquiries. 

Most common request that your support team gets: The top 5 customer inquiries we receive are: return requests, order status updates, order change requests, order cancellations, and promo code requests. 

Biggest challenge that your support team faces: The biggest challenge we face as a team is finding alignment across the entire team on how we handle interactions. 

How many customers (approximate):  While I can’t tell you how many customers we have, I will say that as of last week we’ve sold 14 million pairs of underwear. That’s a whole lotta butts to cover ;-) 

Is there any seasonality to your business? Yup! We operate in 2 different seasons: holiday and post-holiday. The holiday season starts with Halloween marketing in late-September and goes until St. Patrick’s Day in March. 

Do you have a dedicated training team? We do have a dedicated training team! In Los Angeles, Caylen McDonald oversees our entire L&D program, alongside her counterpart in the Philippines.

In one or two sentences, what’s the purpose of your QA program? The purpose of our QA program is to provide consistent feedback directly to our agents, ensuring that they are always learning and growing and that we are providing our customers with the best service possible (measured by CSAT). 

What are your roles in the QA <> training loop that you’ve built out?  How do you all work with agents, and how does your Quality Assurance program lead to better CSAT?

Ro, Caylen and Lauren make up the operational function of our Customer Experience team: 

  1. Lauren: Quality Assurance Specialist who is leading our QA program
  2. Caylen: Learning and Development Coordinator 
  3. Ro: Director of Customer Experience for MeUndies. 

In terms of the feedback loop, it starts with Lauren and her QA team. They’re on the frontlines, going through hundreds of agent and customer interactions every week. Through their grading, they identify learning and development opportunities and report them to Caylen and her team. Caylen then takes those opportunities and builds actionable lessons in our L&D platform: CheekSquad University. Where does Ro come in? She doesn’t... just kidding! As the manager of our CX operations, she makes sure that all the moving pieces of our programs are effective in making an impact on our key metrics: CSAT and NPS. 

To summarize: Caylen, the Learning and Development Coordinator, builds out trainings and lessons in Lessonly. Lauren makes sure that QA works for the individual agent, makes sure QA is actionable, and turns QA reporting into training. And Ro is the nucleus of this team, tying it all together. 

When you first started using MaestroQA - you created a rubric, and you started using QA with agents on your team, and QA score went up over time, along with CSAT. And then eventually you hit a point where QA score was really good, but plateaued, as did CSAT. Tell us about that?

Ro Nattiv – Director of CX

You’re right. When we first started our QA program, our CSAT was about 4.85. We have seen improvements year over year, and after two years our CSAT stands at a 4.94 average to date. This is great: it’s a true sign that QA has helped us improve over time. But unless we are at a 5.0, there is always room for improvement.

At that point, CSAT wasn’t bad at all - so why did you go through the trouble of redoing your rubric? 

Lauren Manser – Senior QA Specialist

The thing that makes MeUndies different from other programs we have seen is that we are truly never happy with where we’re at. Just because our CSAT isn’t bad doesn’t mean there aren’t areas for improvement. Until every interaction is a 5-star interaction, there is work to be done! 

We look at QA scores against customer interactions, and analyze how we can make our interactions even better across the board. What are we missing? What are the inconsistencies? What needs to be more black and white? What makes our awesome interactions awesome? We often make small changes to our rubrics – tweaks to wording, adding one or two points, maybe changing the weight of how much some points are worth. Making little adjustments helps our agents know what we want them to focus on and allows us to target our problem areas.

It’s about breaking down the mechanics behind QA regularly to tell a bigger story than our metrics. 

Caylen McDonald – Learning and Development Coordinator 

I joined MeUndies in July of 2019, and our team CSAT was a 4.95. Part of my onboarding was deep-diving into the interactions to understand where the team was at, and what I saw was not a 4.95 team overall. So I asked the team as a whole, “Do we really think our team is a 4.95? Is that representative of what we are truly producing?” That reinvigorated the team to take a look at where we were and make our next push to achieve more. 

In our last big revisit, we took a handful of tickets with a “100%” QA score, and the three of us dove in and really asked the question “Is this ticket really deserving of 100%?” To answer that, we each wrote out what our “perfect” response would be, and compared what each of ours looked like. We were able to discover the common threads between our responses, and the high points each of us brought to the table. 

Once compiled, we were able to nail down 4 major characteristics they all had in common, then looked back at the original ticket and re-graded it from this new perspective. The score dropped to about 65%. This gave us a jumping-off point for not only a new QA rubric, but also an opportunity for L&D to align with it. It’s not about tearing your rubric apart simply to switch things up; it’s about taking a close look at whether or not your rubric truly determines whether a ticket is meeting the level of service your team strives to provide. 

Lauren Manser – Senior QA Specialist

And that brought us back to a really scary thing: not only questioning our QA program, but our own team. It’s hard to turn to your team and tell them that even though the metrics say we are doing awesome (at this point a 96% average QA score), we have to re-evaluate and remind ourselves that, regardless of what our numbers are telling us, we have to push ourselves to a constantly higher standard. And some of that stress came before the rubric was even changed. Just by doing the exercise Caylen talked about, the QA mindset had already shifted and QA grades started dropping. That happens too; it’s not always a rubric change you have to reckon with. It could be a mindset shift, or even a new grader. 

So you redid your rubric? Can you give some examples of some changes that you made?

Lauren Manser – Senior QA Specialist

We have had multiple iterations of our rubric. Whenever we feel like some aspect is not getting the right amount of attention from the team, we adjust our rubric to push that initiative. We do this by adjusting the weight of a category, or adding in additional areas of interest that were not measured previously. 
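As a rough sketch of the mechanics behind that, a weighted rubric score is just each category’s mark scaled by its weight. The category names, weights, and marks below are hypothetical, for illustration only, not MeUndies’ actual rubric:

```python
# Sketch of a weighted QA rubric score. Categories, weights, and marks
# are hypothetical examples, not an actual production rubric.

def rubric_score(marks, weights):
    """Return a 0-100 score: each category's mark (0.0-1.0) times its weight."""
    total_weight = sum(weights.values())
    earned = sum(marks[cat] * weights[cat] for cat in weights)
    return round(earned / total_weight * 100, 1)

weights = {"accuracy": 40, "brand_voice": 30, "personalization": 20, "grammar": 10}
marks = {"accuracy": 1.0, "brand_voice": 0.5, "personalization": 1.0, "grammar": 1.0}

print(rubric_score(marks, weights))  # 85.0
```

Bumping a category’s weight (say, personalization from 20 to 30) makes missed points there drag the overall grade down more, which is how a small rubric tweak redirects agents’ focus.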

For example: a different branch of our CX team wanted to test a Sales Program using our agents, essentially encouraging them to upsell. This test was tied to an incentive, but not all agents were taking it seriously or attempting to work the sales approach into their interactions. So we added it into our rubric, not only to highlight whether the agent was taking the opportunity to sell, but also whether they were doing it in an appropriate way. By tying it into the rubric, the agents were more focused on how it played a part in their metrics, and it allowed us to really dig into these sales interactions. Not only that: when we started to see our CSAT drop, we could link it directly to these sales interactions through QA.  

It brought up a really important blind spot in our communication: the QA/L&D program was not involved in the sales education. We started seeing a dip in CSAT scores for these sales interactions, and it became clear that the agents were not given the proper training or tools to pull off this change at that time. So we scrapped the test and removed Sales from our interactions and our rubrics.

Caylen McDonald – Learning and Development Coordinator 

Another great example of rubric changes was brand voice. If anyone has seen our ads, it’s pretty clear that our brand voice is a huge part of how we interact with customers. Not only do we have to teach something that is not super tangible, we have to teach agents who live in a different country how to grasp this very “California” way of communicating. 

Originally, we had a pretty broad statement on our rubric, like “Portrayed the MeUndies brand voice in interaction”, and it was worth a lot of points. But because it was so broad, there was a lot of room for different interpretations and questions, and it was hard for agents to know what to do. 

It quickly became clear that the agents did not have a great grasp of brand voice, and our rubric wasn’t helping that. We had to come up with a way to teach brand voice, something that tends to be intangible, and make it more tangible. We were able to define 4 major characteristics of our brand voice based on a house system (yes, like Harry Potter)! We had agents take a personality quiz to find their dominant house, and provided pop culture references for each house. 

This gave the agents a tangible hold on the characteristics we are looking for. Then we tied them into our interactions and how they should show up in our customer conversations. Each of these characteristics is reflected in our rubric as non-graded feedback, constantly reminding the agent which characteristics they are best utilizing and which ones require more focus, without impacting their overall grade. 

From there, we were able to take the characteristics of brand voice, and break them down in the rubric into smaller pieces. An agent could see within the rubric which pieces they missed out on, and why. It’s all about providing the correct resources, along with the rubric updates to ensure the team is successful.

Inevitably, QA score takes a hit with each new rubric redo - how do you make sure agents are okay with this? 

Ro Nattiv – Director of CX

Great question! I think the majority of support professionals can identify with one of our primary struggles: getting your entire team to embrace a Quality Assurance program and be “OK” with changes. To be perfectly honest, QA is generally not considered a popular program within any support team. If that’s not the case with your team, err... let’s talk! But the truth is, our agents aren’t always 100% on board with the changes that are made. 

Generally speaking, many agents view our QA specialists as overly critical, nit-picky, and merciless. This is especially true when we make a change to one of our rubrics. What our agents tend to overlook is the purpose of our QA team and program, which is to help our agents transform into agent superstars: truly, the best possible versions of themselves. 

At the base level, one thing we do is incentivize based on averages – if the average QA scores across the board are lower than they were last month, there is still the opportunity to achieve your normal bonus based on where you fall relative to the mean. This helps the agents buy in at the base level. 
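A minimal sketch of that mean-relative incentive might look like the following; the exact eligibility rule here is an assumption for illustration, not MeUndies’ actual bonus policy:

```python
from statistics import mean

# Hypothetical sketch: bonus eligibility judged against the team mean,
# so a rubric change that lowers everyone's scores doesn't wipe out
# bonuses for agents who still perform above average.

def bonus_eligible(agent_score, all_scores, margin=0.0):
    """Eligible if the agent is at or above the team mean (minus an optional margin)."""
    return agent_score >= mean(all_scores) - margin

scores = [72, 80, 85, 90, 78]  # team QA scores after a rubric change; mean is 81.0

print(bonus_eligible(85, scores))  # True
print(bonus_eligible(72, scores))  # False
```

Because the threshold moves with the team average rather than sitting at a fixed score, agents aren’t penalized as a group when a tougher rubric drops everyone’s numbers at once.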

But as a leadership team, WE are okay with taking a hit in our QA metrics, because we have seen it churn out better work from our agents. And no matter what QA change we have made, the scores always bump back up to where they were before. Regardless of whether it is a rubric change or some operational challenge that impacts our interactions, our high standards make it so we bounce back faster. And even more importantly, we are able to see who truly embraces the changes. Who is willing to admit they aren’t perfect, accept feedback, and push themselves to be better? These are the agents who end up being promoted.

How do you go through the process of figuring out the stuff that could be better when you’re redoing your rubrics?

Caylen McDonald – Learning and Development Coordinator 

One of our main motivations for revisiting rubrics is that we never want to be complacent with our performance. We know the level of service we want to provide to our customers, and we have to continually hold ourselves accountable to see if that is matched by our actual performance. It’s super easy to look at a team average QA score of 97% and think “Awesome, we are doing great and we don’t have anything to worry about.” But that’s not the case. That’s when we really want to take a hard look at the work that’s coming out and revisit our rubric to determine how we can best push our team for more. We take a lot of different approaches to revisiting a rubric from here. 

And it’s not just us looking at our current QA rubric. It’s feedback from the graders (questions they have when grading), the agents (confusion about what a rubric aspect means), the TLs (which disputes they continually submit), and L&D (what the agents are currently learning) that all go into it. 

And then when you do redo a rubric, can you describe the process through which you take information from the L&D and QA process and make sure it results in better CX and better CSAT? 

Lauren Manser – Senior QA Specialist

It’s all about the focus of that rubric. If we have seen personalization take a big hit recently, we might bump up its point value on the rubric to shine more light on it and give the agents more consistent feedback on that subject. Personalization is super important to our interactions, and we need to make sure it is properly weighted in their QA grades. Adding more weight to that section makes it more obvious if points are still being missed and allows us to dig into those further. We’d then try to figure out whether agents have the right tools and resources to earn those points on the rubric.

Caylen and I work super closely to ensure that we are both on the same page when it comes to QA & L&D updates. We want to make sure that any change to the rubric does not come unless the agent is equipped to handle it – which sometimes means we have to make new trainings. 

Caylen McDonald – Learning and Development Coordinator 

Honestly, that is one of the main roadblocks we are trying to sort out today. Before I joined the team, QA and L&D were not closely linked, and it has become apparent over the past year or so that that’s a detriment to our team. We are coming up with new ways to ensure that QA, L&D, and our on-the-ground leadership are all aligned. Right now, our agents are getting a ton of feedback from all directions, and it can become overwhelming quickly, especially if it varies. So here are our first couple of steps. 

Each month, the QA team provides the L&D team with a report on team trends. Common categories include the most frequent point deductions, agent tenure, and so on, to best identify areas of improvement. Our L&D team is currently tracking the 2020 results thus far to build out an extended learning calendar for the rest of the year. We’ll dive into the specifics to understand the root of what’s missing and develop refreshers and workshops that best enhance our process. We can then tie each training directly to the individual category on the rubric and track its reflected performance in QA.
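A monthly trend report like the one described could start from something as simple as counting docked points per rubric category across all graded tickets. The field names and categories below are hypothetical, for illustration:

```python
from collections import Counter

# Hypothetical sketch of a monthly QA trend report: tally how often each
# rubric category loses points, so L&D can target refreshers where they
# matter most. Field names and categories are made up for this example.

gradings = [  # each grading lists the categories where points were docked
    {"agent": "a1", "tenure_months": 3, "docked": ["brand_voice"]},
    {"agent": "a2", "tenure_months": 14, "docked": ["personalization", "brand_voice"]},
    {"agent": "a3", "tenure_months": 6, "docked": []},
    {"agent": "a1", "tenure_months": 3, "docked": ["brand_voice"]},
]

def docked_by_category(gradings):
    """Count point deductions per rubric category across all graded tickets."""
    return Counter(cat for g in gradings for cat in g["docked"])

print(docked_by_category(gradings).most_common())
# [('brand_voice', 3), ('personalization', 1)]
```

From a tally like this, the most-docked category becomes the obvious candidate for the next refresher or workshop, and re-running the same count after the training shows whether it moved the needle.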

What was the thing getting you through the tough times or times that felt a little turbulent with changing rubrics and getting your BPO team on board?  

Ro Nattiv – Director of CX

What really gets us through those tough times are the moments when it “clicks” with the team. The best example I can give is when an agent who is consistently struggling with their CSAT scores not only embraces the feedback they receive from their QA specialist, but also seeks out 1:1 time with the QA team. When they commit to taking the feedback and turning it into action, they almost always start to see a positive change in their CSAT scores. That’s when we start to see agent “buy-in” with our QA program. 

Through our QA program, we have really been able to set new standards for ourselves. And it has played a huge role in our CSAT, not just in the actual metrics, but in the ideology behind it. Everyone on the team knows the level we are striving for and works hard to meet it. That’s probably the biggest change we have seen over recent months. We have seen the agents and our TLs really grasp onto the growth mindset, pushing themselves to always strive for improvement, both on an individual level and as a team. It’s a true sign that our obsession with growth has fully become part of the CheekSquad culture and DNA. 
