Needed to scale a QA program to keep up with growth in their CX team 📈
Brand values had to show through for customers AND agents
Keeping a uniform Agent Experience across continents 🗺
Powering and supporting emotional event experiences 🤩
Think about the last time you bought tickets. (You didn’t really have to close your eyes but hey—whatever works). Whether for your partner’s favorite band, ringside WWE seats with your college mates, or to bring your kid to their first ever ballgame, I’m sure there were a lot of emotions at the root of that purchase.
SeatGeek built an entertainment ticket search engine with the knowledge that scoring those seats can be an intense affair. Recognizing that tech only goes so far, they also put together a personable, human CX team to ensure that their clients received the support that they needed.
When the time came to add a QA program, they wanted the same humanity and care to be shown to their agents as well. This is how SeatGeek used MaestroQA to build a more human QA process:
SeatGeek’s QA program started the same way most QA programs do – on a spreadsheet.
That quality scorecard checked three simple operational parameters – Agent Thoroughness, Resolution, and Tone.
From those humble beginnings, both the CX team and the QA workflow grew quickly. Soon, a spreadsheet was no longer efficient at SeatGeek’s operational scale. At the same time, it became clear that the QA process could (and should) reinforce the unique, human interactions that SeatGeek’s agents had become known for. A revamp was in the cards.
As part of their effort to scale up their CX operations, SeatGeek brought in a BPO. This team, based in the Philippines, provided much-needed cover for the in-house team and allowed SeatGeek to offer round-the-clock support to its customers. With more team members across more locations than ever, SeatGeek enlisted MaestroQA’s help in designing and implementing an easy-to-use rubric that would scale with their organization.
The first step was moving the QA process off archaic spreadsheets and into the cloud. With MaestroQA, SeatGeek’s QA teams were now able to set up automations that took care of assigning QA specialists to grade tickets and exporting individualized PDF reports for every agent. This led to incredible time savings for the QA team, and allowed them to redirect their efforts toward providing better coaching to the agents.
The increased scale of operations also laid bare a potential issue with consistency—as senior agents and new QA specialists were being brought in to increase the number of tickets graded, it became difficult to ensure that grading was being done in a consistent manner.
For example, SeatGeek found that there were discrepancies between the grades that different QA analysts would give the same ticket. The differences were subtle (up to a ±10% variance between analysts), but SeatGeek’s QA team could foresee potential Agent Experience problems arising from them.
For a QA program that was built on humanity, it was important that SeatGeek’s agents could trust that the process would give consistent, reliable scores that they could use to further improve themselves.
On the surface, solving this was a simple matter of switching the rubric from a 10-point scale to one that graded out of 5 instead. But behind the scenes, there was a much more complex process of getting everyone on the same page: the QA team had to secure their BPO’s buy-in to use the same scale, and then train all QA graders to calibrate their scoring on the new, simplified rubric.
In addition, a simple framework was developed and circulated amongst the agents, clearly spelling out the differences between grades and showing what was expected to get a perfect score.
Through a seemingly simple switch from a 10- to 5-point scale, SeatGeek reduced the variability in the scores QA specialists were handing out, while wiping out the guesswork involved when an agent received a grade. Moving the QA process onto MaestroQA allowed SeatGeek to scale without the usual growing pains that an expanding CX team faces (can you imagine manually generating reports for hundreds of agents or calculating QA scores across 60 different spreadsheet tabs? *shudders*).
Writing this piece has made me really tempted to join SeatGeek’s CX team (but don’t everyone quit your jobs at the same time!). It sounds like a truly nurturing place to work, where the agents are always kept front and center throughout the QA process.
This was clearest when SeatGeek implemented improvements to their QA program – the agents were consulted at every step, and every care was taken to ensure that these changes would help them excel in their roles.
The QA team implemented a new policy: every score that was not a 5 (out of 5) would require the QA specialist to give written feedback alongside the numerical score. This gave QA specialists pause before handing out anything less than a stellar grade, and ensured that thoughtful, written feedback became the norm for their QA process.
QA specialists at SeatGeek are encouraged to be kind, thoughtful, but firm when it comes to giving written feedback to agents. It’s easy to get caught up in a flurry of graded tickets, but SeatGeek’s QA team takes special care to accentuate the positive and to ensure that all feedback is given with agent improvement in mind.
After exporting individualized reports for each agent, the QA team then provides in-person coaching to the CX team, and where that is not possible, provides clear and constructive comments in writing.
The initial questions on SeatGeek’s rubric were more operational than values-based, and the team realized that the rubric could not capture the humanity that they were trying to convey in their customer interactions. So, two additional parameters were added to SeatGeek’s QA scorecard to help drive more unique and human interactions.
Agents were now (in addition to the original parameters) being graded on:
This included grammatical errors and typos, but also checked that the agent fully understood the context and explanations their customers were giving. This change encouraged agents to take a step back and empathize with customers, instead of rushing through as many tickets as possible.
By far the frontrunner for Best Scorecard Parameter (trust us, it’s going to be a thing). Adding this parameter meant that an interaction could go something like this:
Customer: Hey, I have a quick question about my tickets to the Gators game.
CX: *answers the question and any other follow ups*
CX: So the Gators have been doing well, haven’t they? I’ve heard great things about Kyle Trask recently.
(chatter about organized sport ensues)
And it would be completely fine. Encouraged, in fact. SeatGeek has taken the emotional nature of their business and turned it into a CX advantage by encouraging their agents to have human, unique conversations with their customers.
By adding these new parameters, SeatGeek was able to operationalize the authenticity and customer-first communication that they’ve come to be known for, and both their customers and their CX agents are better for it.
You might’ve heard the phrase “happy spouse, happy house” bandied about over a drink (or two, or more) before. You can almost see where I’m going with this: happy agents probably lead to happy customers. That’s not quite what’s happening over at the SeatGeek CX team, though. In fact, there’s one key difference – they would still treat their agents in the same, human way whether or not it had an impact on Customer Experience. It just so happens to help in this case!
The nature of SeatGeek’s business meant that delivering thoughtful, human experiences across every CX interaction was a must-have for success. By designing a simple, human scorecard and prioritizing every agent’s growth, they have managed to consistently deliver the human experiences they’ve become known for, while also rapidly scaling their CX team.
Want to learn more about how you can build a values-based scorecard for your organization?👇