Phone and email
Peer review QA program
Before MaestroQA, the support team at Illuminate Education was doing what many companies in the early phases of a quality assurance program do – spot-checking tickets reactively, typically only when they were made aware of an issue (a bad CSAT score or a customer complaint). And, like many companies in this phase, they were managing the process in spreadsheets.
Matt Dale, VP of Support, and Kallen Bakas, Director of Support, didn’t feel like they fully knew what was going on in agent-customer interactions, and they didn’t have a strategy or process for team improvement unless an issue was escalated (in which case they’d give the agent feedback).
Giving feedback was a cultural value of their company – but the feedback was often vaguely positive and non-actionable, and wasn't actually helping anyone improve their skills.
Additionally, one of Illuminate Education’s core values is continuous improvement, and they knew that they could improve the program. They just needed a more robust framework and process to create the feedback culture that they wanted to have – a culture in which feedback moves both up and down the hierarchical ladder, and in which feedback is constructive, actionable, and ultimately improves the customer experience.
This is their story:
The last thing we wanted was a top-down system for quality management – we wanted the team to continue learning from one another and helping each other both up and down the hierarchical structure.
To do this, we developed a peer review program using MaestroQA. While team leads do grade occasionally, the majority of the grading comes from agents reviewing each other's work.
One challenge associated with peer review (which is less of an issue on teams that have just a few people grading) is keeping every grader aligned in their standards for quality.
We handle this through extensive training. We've had three major trainings in the past seven months, where we cover the basics with agents (new and tenured), the best ways to use MaestroQA, and how to give constructive feedback to teammates.
Team leads receive additional training, separate from the agents'. These sessions focus more on how to use the tools in MaestroQA, and how those tools can be used to identify what agents need help with.
We also create a training video every time a new MaestroQA feature is rolled out, covering how the team can use the feature to drive the QA process and provide better constructive feedback.
Sometimes team leads work with individual agents – they'll regrade tickets that a peer agent has already graded, to show the agent where their grading (and, by extension, their understanding of quality and feedback generally) can improve.
Calibrations are used to coach agents on how to grade tickets, and how to give more constructive feedback. With this strategy, we’re using MaestroQA to help agents get better at support and supporting our product, as well as coaching their peers. The calibration process allows us to continue to nurture our team and help agents grow in both areas.
During busy times, agents often deprioritize their QA assignments to keep up with incoming customer requests. To account for this, we created a policy that lets agents grade fewer tickets during these periods, and we set up automations that deliver regular, small grading assignments to their inboxes. That way, grading work never piles up, and agents can grade consistently even when they're busy.
Sometimes an agent disagrees with how a peer graded one of their tickets, which creates healthy debate. In these cases, a team lead will regrade the ticket, and a dialogue follows about how the team lead thinks about the score and the interaction – resulting in a better understanding of both how tickets should be handled and how they should be graded.
In each 1:1 that agents have with team leads, agents bring an example of a graded ticket of theirs, and the two talk about it together.
Sometimes junior agents grade the work of tenured agents, which might seem odd (someone who's new to the company giving feedback to someone who really knows their stuff).
What we've found, though, is that it's an incredible learning experience – the junior agent gets to see how a skilled colleague handles situations that they'll eventually face themselves.
The peer review setup also creates a unique alignment across the team on what the standard of quality really is. Because all agents are so deeply involved in the quality review process, they have intimate knowledge of the component parts of a quality interaction, the common mistakes within each component (not just their own mistakes, but those of the entire team), and how all of the pieces come together to create a positive customer experience.
A big part of the success of our peer review program is that we built our quality rubric as a team. In the early days of adoption, we put together a committee responsible for creating, testing, and iterating on our rubric. We considered together what good looked like for our company, and everyone agreed upon what we came up with.
The biggest benefit of our peer review program is that it's changed the way people on our team give feedback to one another. People have grown to really enjoy the feedback culture because it's now associated with learning and improving, rather than with negative reviews (which is what people often expect from feedback).
This mental shift has been great. Colleagues now help each other be the best they can possibly be, and they expect the same help in return. Part of the reason is that the kind of feedback people give each other has changed – the quality of the feedback has improved dramatically.
Before, people were very cautious about the feedback they gave in a way that was inhibiting. There was a lot of positivity and encouragement, but not a lot of honesty around what could be improved. Now feedback is very real, very actionable, and helps people learn.
Another huge win is that the team's reporting has gotten a lot better. With a peer review program, we can see how agents are performing as they help customers. And because grading comes from the entire team, far more tickets get graded than our team leads could ever cover alone.
Data from MaestroQA and the peer review program have given team leads a much clearer picture of how people are performing.
One unexpected outcome of the peer review program is that there's now much more thought and discussion around how we can all improve as CX professionals in the soft-skill components of our job (beyond Illuminate-specific skills). Instead of focusing only on improving our knowledge of, and communication about, the product, we spend a lot of time talking about how to give better support generally.
One example is helping each other figure out the best way to de-escalate an angry customer. MaestroQA has given the team a framework to work on and discuss these kinds of nuanced, soft-skill issues.
Another big thing is that the team now has a very clear idea of how people need to grow. We really use 1:1s to address areas where agents can grow based on feedback from the peer review program with MaestroQA. Maestro has been the foundation for us to work on improving our team.
One major way we consider the return on MaestroQA is that before the tool and the peer review program, we didn't really know which agents weren't performing well, or who needed help. So we had people supporting customers who weren't delivering the best possible customer experience. It's hard to quantify the cost of that, but it would have been risky if it had gone on for too long.
It’s not really possible to quantify the benefits of having a team of people who are continually pushing each other to succeed. But there’s undoubtedly a synergy in everyone working together to improve how we support customers that makes us stronger than the sum of our parts.