One of the best parts about good restaurants is having the wait staff guide you through your meal at the perfect pace. A savvy waiter or waitress never rushes you, but they also don’t let you linger at your table when you’re clearly ready for the check.
Customer support teams can benefit from the same mindset: thoroughly resolving issues without wasting a caller’s precious time. To accomplish that, Quality Assurance (QA) managers need to get familiar with Average Handle Time (AHT).
AHT is a critical metric for QA teams to track in order to help agents resolve customer service requests efficiently without sacrificing the quality of the customer’s experience. If your goal is to reduce AHT, it’s important to organize your tools, customer service training, and tech before an agent answers a single call.
But before we talk about specific ways to cut down your AHT, let’s cover the basics.
Average Handle Time (AHT) is a customer service metric that indicates the average time it takes for a support agent to close a ticket, including hold time, call time, and necessary follow-ups. AHT is a common KPI for contact centers that want a tangible way to measure efficiency.
Any interaction—whether via phone, email, chat, or social media—that’s drawn out longer than necessary is a low-quality one. On the flip side, you can’t have a quality customer service interaction if the customer feels like an agent is rushing them or cutting corners for the sake of closing a ticket.
QA managers need to pay close attention to AHT because the stakes are high. Microsoft’s 2019 State of Global Customer Service Report notes that 61% of people have cut ties with a brand due to a poor customer service experience. And one of the most common causes of a poor customer experience is—you guessed it—sitting on hold. In fact, nearly 60% of people are frustrated with long hold times.
You can’t improve what you don’t measure, which is especially true when it comes to how efficiently agents resolve customers’ questions. That’s where the AHT calculation comes in.
Calculating AHT is simple:
AHT = (total call time + total hold time + follow-up time) / total number of calls
Here’s a quick example of calculating AHT:
Let’s say you have 100 calls that take 600 minutes with a total hold time of 200 minutes and 300 minutes of follow-up work.
(600 + 200 + 300) / 100 = 11
So, your AHT is 11 minutes.
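The formula and the worked example above can be sketched in a few lines of Python (the function and variable names here are illustrative, not part of any particular QA tool):

```python
def average_handle_time(total_call_minutes, total_hold_minutes,
                        total_followup_minutes, total_calls):
    """Return AHT in minutes: (call time + hold time + follow-up) / calls."""
    if total_calls == 0:
        raise ValueError("total_calls must be greater than zero")
    return (total_call_minutes + total_hold_minutes
            + total_followup_minutes) / total_calls

# The example above: 100 calls, 600 minutes of call time,
# 200 minutes of hold time, 300 minutes of follow-up work.
aht = average_handle_time(600, 200, 300, 100)
print(aht)  # 11.0
```

Keeping the three inputs separate (rather than pre-summing them) makes it easy to see which component is inflating your AHT.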
So what counts as a good AHT? That’s like asking how long it should take to prepare a meal: are you serving instant ramen or a four-course dinner?
AHT benchmarks can vary depending on several factors, such as the industry you’re in, the complexity of your business, or how well-established your customer support team is.
That said, there are some ballpark figures for certain industries. According to data sourced by Call Centre Helper, a good AHT for telecommunications companies is about 8.5 minutes, while the benchmark for financial services is 4.75 minutes.
With a rough industry benchmark in mind, QA managers can take a sample of a month’s worth of customer support tickets, evaluate the AHT for that sample period, and compare it against the benchmark.
It’s important to note that a low AHT shouldn’t be the sole indicator of customer support success. If agents are under constant pressure to close tickets faster than last time, they might make mistakes that cost more time in the long run. That’s like a server handing a guest their check before giving them a chance to order dessert.
However, if your team is consistently battling a backlog of support tickets, there are some strategies you can implement to boost efficiency.
Reducing AHT isn’t about shortcuts or hacks—it requires QA managers to be proactive with their training and processes. This way, every moment a customer spends with an agent goes toward resolving their issue.
Quality audits help QA managers identify their team’s weak spots that can contribute to bloated AHT metrics.
“If you want to drive change in any metric, whether it’s First Call Resolution, AHT, or CSAT, you have to start with the quality of the experience,” says Justin Junious, Customer Experience Lead at monday.com.
One way to scale up grading for customer experience (CX) teams is with advanced quality assurance scorecards. That’s how MaestroQA helped monday.com increase their volume of quality audits by 48% within three months.
By ramping up their monthly quality audits, monday.com cut down their AHT from 24.1 minutes to 16.9 minutes—nearly a 30% improvement. That’s the power of using data—not guesswork—to improve the customer experience.
Some companies might be tempted to throw new agents into the mix, especially if the support team is overwhelmed with incoming tickets. But under-trained agents can cause bottlenecks in your queue, thus increasing your AHT.
The most impactful agent training is informed by data—not legacy systems or cookie-cutter presentations. Take Zola, an online wedding planning service, for example. To identify training gaps, their QA team uses MaestroQA to score new agents’ support tickets on key criteria to share customized feedback during coaching sessions.
“Our goal is to get agents the lessons they need quickly and create a real-time feedback loop,” said Rachel Livingston, Senior Director of Operations at Zola. “Agents are expected to complete their lessons within a few days, and supervisors usually reconnect on the topic in future coaching sessions.”
Additionally, Zola syncs MaestroQA with Lessonly (a leading team training software) to expedite agent onboarding, identify training gaps, and collaborate on new opportunities. For example, based on the analysis of past interactions, Zola’s CX team identified the need to focus on training lessons for brand voice and call de-escalation.
By closing the feedback loop between QA and training, Zola cultivated more confident agents, productive coaching sessions, and happier customers.
Tracking down answers to recurring questions can quickly eat up the clock—especially if resources are unorganized. A digital, easy-to-navigate knowledge base can help with this. But knowing exactly what information to include and whether it’s actually helping agents can be confusing.
The ride-sharing app Lyft ran into these challenges while trying to codify internal knowledge for customer service training purposes. By providing a steady drip of CX data, MaestroQA helped identify the aspects of the business causing the most confusion for new agents.
These insights laid the foundation for a single source of truth that put product and policy knowledge at their fingertips, which ultimately improved Lyft’s first call resolution (FCR) rates.
Additionally, since senior agents gained more time to focus on the tickets in their queue, their AHT dropped significantly.
In 2019, 25% of interactions between brands and customers were automated by artificial intelligence (AI)—a number that’s expected to grow 40% by 2023—and for good reason. Without automation, manual, repetitive tasks drag agents (and the business as a whole) down.
For example, in 2019, the online fitness company ClassPass spent the equivalent of 6,250 days—that’s over 17 years—chatting with 1.5 million contacts to cancel their subscriptions. QA data revealed that ClassPass’ support team was spinning its wheels, leading to unrealized revenue and inaccurate forecasting, on top of a lot of wasted time.
"We decided to fully automate the process," said Sydney McDowell, CX enablement Lead at ClassPass. “Now we have zero cancellation chats handled by agents.”
Automation can’t (and shouldn’t) fully replace human interaction, but it can give customer support teams the breathing room they need to take on more tickets—and close them faster.
Responding to questions with “Let me check...” or “I think you should...” doesn’t just erode trust between support staff and customers; it wastes time as well.
These instances can be avoided with templated or “canned” responses, which are pre-written answers that ensure concise and consistent communication between agents and customers.
Let’s say a customer needs clarification about a refund policy. The agent can simply reference the approved response for that SKU rather than relying on a subjective interpretation, which could be lengthy or inaccurate—both of which extend AHT.
Taking this a step further, QA managers can create specific scorecards that track the effectiveness of templated responses, and identify those that need to be tweaked. These insights can also inform training, ensuring new agents avoid ineffective communication strategies.
QA managers can even A/B test templated responses to gauge how well they perform in specific scenarios.
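A lightweight way to run such a comparison is to pull the handle times for tickets answered with each template variant and check whether the gap between their averages is bigger than random noise. Here is a minimal sketch using only Python’s standard library; the function name and the sample data are hypothetical, and for a formal significance test you’d want a proper statistics package:

```python
from statistics import mean, stdev
from math import sqrt

def compare_templates(times_a, times_b):
    """Compare mean handle times (minutes) for two response templates.

    Returns the difference in means (B minus A) and a Welch-style
    t-statistic: a rough signal of whether the gap exceeds noise.
    """
    mean_a, mean_b = mean(times_a), mean(times_b)
    standard_error = sqrt(stdev(times_a) ** 2 / len(times_a)
                          + stdev(times_b) ** 2 / len(times_b))
    t_stat = (mean_b - mean_a) / standard_error
    return mean_b - mean_a, t_stat

# Hypothetical handle times for tickets answered with each template
template_a = [11.2, 9.8, 12.5, 10.1, 11.9]
template_b = [8.4, 9.1, 7.8, 8.9, 9.5]

diff, t = compare_templates(template_a, template_b)
print(f"Template B changes AHT by {diff:+.1f} minutes (t = {t:.1f})")
```

A strongly negative difference (and a t-statistic well away from zero) suggests template B genuinely shortens handle time rather than just looking better in a small sample.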
AHT is one of the most commonly tracked call center KPIs, and lowering it can be a great goal. That said, QA managers can’t afford to let speed become the priority over effective customer support.
AHT is just one piece of the CX puzzle that should be viewed within the context of other metrics such as First Contact Resolution and QA scores.
Of course, agents should strive for efficiency. But if they’re over-eager to close support tickets, that can backfire in the form of callbacks for unresolved issues. An interaction that takes longer than your normal AHT isn’t necessarily a bad thing, especially if a customer is responding well to the agent who’s guiding them towards a solution.
Remember to think of CX like the dining experience at your favorite restaurant: equal parts efficient and enjoyable.
QA managers hold the insights that separate average customer support teams from all-star customer support teams. Want to see how QA data can improve your AHT (and more)? Take a tour of MaestroQA and request a free demo.