Building an Insightful and Transparent QA Program by Focusing on DSAT Tickets

Handy’s Challenge:

QA scores inflated by the high volume of simple tickets

Negative Implication:

QA program became less effective at identifying areas of improvement, and complex interactions slipped through the cracks

The Solution:

Implemented an automated QA workflow to grade only tickets with negative CSAT scores

Impact at Handy:

Handy can easily get to the root cause of complex interactions and drive real performance improvements


A large majority of customer interactions at Handy are pretty straightforward. Simple questions like "Can you help me reschedule my booking?" make up about 90% of inbound tickets. Tickets like these require minimal training and are hard to get wrong.

For most agents, this led to artificially high QA scores - which prompted the CX leadership at Handy to investigate further. They realized that by reviewing mostly simple tickets, they weren't uncovering actionable insights - the main reason they had wanted a QA program in the first place!

Handy decided to dig deeper into the story behind those high QA scores to understand how their team was truly performing and get to work fixing their QA program.


Challenge: High QA scores meant a lack of insight and impetus to improve the CX program at Handy

For Handy’s leadership team, having QA scores that averaged 92% was no cause for celebration. It was all too easy for a manager to see a high score when preparing for a coaching session and assume that the agent was doing well. 

The truth was that the majority of interactions were straightforward ones - like rescheduling cleaning appointments - where there was little room for the agent to mess up. In turn, QA'ing mainly "easy" interactions meant the team wasn't grading the tougher, more complex interactions where the real learning and growth opportunities were.

The evidence was in the numbers: when graders stumbled across more complex interactions, QA scores dropped drastically. While easy interactions often landed grades in the high 90s, difficult interactions frequently scored in the mid-50s. The gap led to confusion and a drop in morale for agents used to seeing near-perfect scores.


Solution: Implemented QA filters to grade only DSAT tickets, and took a data-driven approach to agent coaching

The team at Handy turned to MaestroQA for help. 

With automations and tags in the MaestroQA platform, Handy's leadership team smoothly transitioned to grading only DSAT tickets - those with a negative customer satisfaction score attached. MaestroQA's category-leading Zendesk integration made operationalizing this a breeze: the team could now cut through the clutter of easy, perfectly handled tickets and focus on trouble areas.
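
MaestroQA's automations handle this filtering out of the box, but the underlying idea is simple enough to sketch against Zendesk's public API. The snippet below is a minimal illustration, not Handy's actual setup - the subdomain and credentials are placeholders. It pulls the ticket IDs behind every "bad" satisfaction rating, i.e. the DSAT queue a grader would work from:

```python
import requests

# Placeholder values for illustration only.
SUBDOMAIN = "yourcompany"
AUTH = ("agent@example.com/token", "YOUR_API_TOKEN")

def fetch_dsat_ticket_ids():
    """Return ticket IDs whose satisfaction rating is 'bad' (DSAT),
    via Zendesk's List Satisfaction Ratings endpoint."""
    url = f"https://{SUBDOMAIN}.zendesk.com/api/v2/satisfaction_ratings.json"
    params = {"score": "bad"}
    ticket_ids = []
    while url:
        resp = requests.get(url, params=params, auth=AUTH)
        resp.raise_for_status()
        data = resp.json()
        ticket_ids += [r["ticket_id"] for r in data["satisfaction_ratings"]]
        url, params = data.get("next_page"), None  # follow pagination
    return ticket_ids

if __name__ == "__main__":
    print(f"{len(fetch_dsat_ticket_ids())} DSAT tickets queued for QA review")
```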

The Handy team actively encouraged agents to flag mishandled tickets for review as well. They accomplished this through a “QA score credit” - agents who flagged their own tickets for review received a small bump to their QA scores to help offset the negative impact that grading a mishandled ticket might have.
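
Handy hasn't published the exact size of the credit, but the mechanic itself is easy to express. A minimal sketch, assuming a flat per-flag bonus capped at a perfect score (the 3-point figure is hypothetical):

```python
def adjusted_qa_score(raw_score: float, self_flagged: bool,
                      credit: float = 3.0) -> float:
    """Apply a 'QA score credit' for self-flagged tickets.

    The 3-point credit is an illustrative figure, not Handy's actual
    number. The idea is that the bump softens the hit from grading a
    mishandled ticket without erasing it, so flagging stays honest.
    """
    return min(raw_score + credit, 100.0) if self_flagged else raw_score

# e.g. a mishandled ticket graded at 55 that the agent flagged themselves:
# adjusted_qa_score(55.0, self_flagged=True) -> 58.0
```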

The new MaestroQA instance also meant that Handy could take a very scientific approach to their onboarding and continue to uplevel their CX program across all areas. The team could now A/B test their onboarding program as they expanded rapidly, tweak training protocols between onboarding classes, and keep the changes that worked best.
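
The case study doesn't detail Handy's exact analysis, but the core comparison is straightforward: treat each onboarding class as a cohort and test whether a tweaked training protocol actually moves QA scores. A hedged sketch with made-up numbers:

```python
from scipy import stats

# Hypothetical QA scores for two onboarding cohorts; real data would
# come from graded tickets in MaestroQA. Suppose cohort B's class got
# a tweaked training protocol.
cohort_a = [58, 64, 71, 55, 62, 68, 60, 66]
cohort_b = [72, 69, 78, 74, 65, 80, 71, 76]

# Welch's two-sample t-test: is the difference in mean QA score
# large enough to justify keeping the tweak?
t_stat, p_value = stats.ttest_ind(cohort_b, cohort_a, equal_var=False)
print(f"mean A = {sum(cohort_a) / len(cohort_a):.1f}, "
      f"mean B = {sum(cohort_b) / len(cohort_b):.1f}, p = {p_value:.3f}")
```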


Impact: A more transparent and insightful QA workflow with high agent participation

Agents at Handy quickly embraced the new normal. They realized that the new focus on DSAT tickets surfaced many more areas to improve, even if it meant their average scores would no longer sit in the 90s.

The ability for agents to flag tickets - both their own and their teammates' - for review led to a collaborative culture that sought the best results for the team as a whole. The QA score credit incentivized agents to submit more tickets for grading than before, giving leadership better insights into the CX program. It also changed QA's reputation: rather than being seen as punitive, agents started to see QA as a collaborative function that they had a stake in.

QA also let Handy test and validate which training protocols worked best for getting agents onto the queue, allowing them to quickly and confidently scale the team to meet increasing CX demand.
