QA score and CSAT score weren’t correlated 👉 so agents weren't incentivized to impact CX
Members weren’t getting the right outcomes, and CSAT was lower than we wanted it to be
Changed rubric to focus primarily on the member experience, and minimally on process
CSAT has gone up 6%, and it’s stayed up!
Industry: Subscription box service
Quick company overview: FabFitFun is a quarterly subscription that delivers a selection of 8-10 full-size beauty, fashion, fitness, and wellness products curated by the FabFitFun team. Members also get access to TV content, a community forum, and exclusive flash sales & offers.
Who are your customers? Women looking to focus on growth and discovery through self-care 💅
How many agents? 250+
Distributed team? How many locations for agents? Yes! We have folks in LA and the Philippines
Do you use an outsourced team? What’s the breakdown of in-house vs outsourced? 90% outsourced agents, 10% in-house agents
Do graders or team leads do 1 to 1s with agents? Or is it someone else? Graders and team leads hold 1 on 1s with agents together
How many team leads, how many graders? 1 TL to 11 reps, 1 grader to 2 teams
How many channels? Chat, email, phones, social
Approximate ticket volume? About 150k reach-outs a month
Most common request? Information about membership and questions about orders.
Biggest challenge that your support team faces? Constantly evolving product (every season!) and making sure reps are incentivized in the right way.
Approximately how many customers? Over 1,000,000 members
Is there any seasonality to your business? Yes, we deliver boxes seasonally
Do you have a dedicated training team? Yes!
In one or two sentences, what’s the purpose of your QA program? To ensure our CX team creates an amazing experience for our customers that's aligned with our company goals. CSAT is a company-wide metric, so we also use QA to make sure customers are getting the best possible experiences.
We have a lot of outsourced reps, and we weren’t seeing a correlation between CSAT and QA scores – their CSAT could be amazing, but their QA score could be bad, and vice versa. When this happens, you know something is up. In our case, our QA score wasn’t checking the right thing (which is making the customer happy), so we needed to shift around what we were looking at.
We were looking too strictly at areas that weren’t moving the needle in terms of the customer experience - we were QAing primarily against internal rules, regulations, and processes.
The consequence here is that we were pushing agents toward, and constantly reinforcing, things that weren’t impacting CSAT! We were incentivizing them to focus on strict rule-following, which ultimately wasn’t moving the needle on how we were making our customers feel.
Reps were a bit scared to go the extra mile for customers because they were worried it might negatively impact their QA score. So if a customer needed help with something that was slightly outside the rules, but would have been a good exception to make for the customer, the customer wasn’t getting that good experience.
Right - we were in a situation where customers ultimately weren’t getting the right outcomes, because our reps felt like their hands were tied and they had to follow the rules too closely (because of the very process-oriented way that we were QAing).
During this time, our CSAT was lower than we wanted it to be.
As we talked about, the rubric was very process- and rule-oriented. But since that wasn’t ultimately mapping back to CSAT, we changed our rubric to focus entirely on the customer experience and the things that really impact the customer.
We’re now asking things like, “Did the rep do everything they could to make the customer experience as frictionless as possible?” This includes things like looking at past interactions, account notes, background information related to the issue, etc.
We’re also asking: “Did we do everything we could to take care of the member?” This includes using all avenues, bringing in Team Leads if needed, etc. And we’re asking if any mistakes were made that impacted the member experience.
We do still have a “tools” section that covers internal processes. So we’re still tracking whether or not the right internal protocols are followed, and we still coach on this stuff. But this doesn’t impact the rep’s QA score – the only thing that affects their score is whether or not they did the right thing for the customer.
And fabulously, tools scores haven’t dropped since we implemented this change – reps still really care about doing the right thing, and they know why we’re coaching on this still.
Since we changed our rubric to focus more on customer experience (and less on process), we’ve seen CSAT go up 6%, and it’s stayed up.
We also have a positive correlation between CSAT and QA now, and reps feel more empowered to do what they think is right to take care of our members. They feel confident getting creative and finding the right solution for people, instead of feeling stuck behind rules. They’re chatting with teammates about complicated cases, taking ownership over their solutions, and members are getting custom solutions when they call in.
And reps are happier too. Before, I think they might have felt like they wanted to help a member on a call but couldn’t. Now they feel like they have the power to do their jobs, which is to make our members happy.