The Dangers of 90%+ QA Scores


If your team's QA scores are in the 90% range, that might reflect genuinely excellent work. Be wary, though: it might instead be the result of a flawed QA program.

Here are common reasons we’ve uncovered that inflate QA scores:

1) QAing “easy” tickets

Sometimes graders gravitate toward easier tickets because they are quicker to grade. But when graders avoid difficult tickets, QA scores misrepresent the actual quality of your service.

Directors and managers should review the difficulty of the tickets being graded to see if this is happening on their team.

One potential solution is to assign tickets for QA or select them at random, rather than letting graders choose their own.
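
As a rough illustration, here is a minimal sketch of random ticket sampling in Python. The ticket structure, field names, and sample size are hypothetical placeholders, not a real schema.

```python
import random

def sample_tickets_for_qa(tickets, sample_size, seed=None):
    """Randomly select tickets for QA so graders can't cherry-pick easy ones.

    `tickets` can be any list of ticket records (dicts, ORM objects, etc.);
    the structure used below is purely illustrative.
    """
    rng = random.Random(seed)  # optional seed for reproducible audits
    sample_size = min(sample_size, len(tickets))
    return rng.sample(tickets, sample_size)

# Hypothetical usage: pull last week's closed tickets and grade 25 at random.
tickets = [{"id": 101, "agent": "sam"}, {"id": 102, "agent": "alex"}]  # placeholder data
for ticket in sample_tickets_for_qa(tickets, sample_size=25):
    print("Assign for QA:", ticket["id"])
```

The point of the random draw is simply to remove grader discretion from ticket selection, so the graded sample looks like the overall ticket population.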

2) QAing leniently

QA scores can create a false sense of confidence on your team, even when insidious problems are going unsurfaced. We see this happen in three common ways.

Scenario 1 – the quality assurance scorecard is too rigid. A Yes / No option doesn't capture nuanced cases, and a grader doesn't want to assign a "No" because it feels too harsh. This situation requires reworking your scorecard design.

Scenario 2 – different graders have different understandings of what constitutes a quality interaction. The solution here is calibration: multiple graders grade the same ticket blindly and then compare notes.
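
As a loose illustration of what a calibration session surfaces, the sketch below compares several graders' scores on the same ticket and flags the scorecard questions where they disagree. The grader names, questions, and scores are made up.

```python
from collections import defaultdict

# Hypothetical calibration data: each grader scored the same ticket blindly.
calibration_scores = {
    "grader_a": {"tone": 1, "accuracy": 1, "process": 0},
    "grader_b": {"tone": 1, "accuracy": 0, "process": 0},
    "grader_c": {"tone": 1, "accuracy": 1, "process": 1},
}

# Group scores by scorecard question, then flag questions where graders split.
by_question = defaultdict(list)
for grader, scores in calibration_scores.items():
    for question, score in scores.items():
        by_question[question].append(score)

for question, scores in by_question.items():
    if len(set(scores)) > 1:
        print(f"Discuss '{question}': graders split {scores}")
```

The disagreements, not the scores themselves, are the output that matters: they become the agenda for the calibration discussion.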

Scenario 3 – a culture that views 90% as an A, and believes only As create a positive atmosphere, can end up sacrificing the quality of the customer experience. Setting the expectation that a 70% is acceptable goes a long way toward mitigating this problem.

3) Requiring too little from agents

An agent might be following the process you've trained them on, but the process itself is flawed. This often shows up as heavy reliance on macros.

This is the most difficult issue to address because it requires much deeper changes across the department. With a glass-half-full mindset, though, it also presents a tremendous opportunity.

We recommend starting by looking at tickets where the agent followed the correct process, then checking whether the customer completed the relevant follow-up steps.

For example, if a customer asks to reset their password and the agent responds, per the process, with a templated set of instructions, check how often the customer actually resets the password and logs back in.
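
To make this concrete, here is a minimal sketch of joining password-reset tickets against product events to estimate how often the customer actually completed the reset and logged back in. The ticket and event structures, field names, and the seven-day window are assumptions for illustration, not a real schema.

```python
from datetime import timedelta

def followed_up(ticket, events, window_days=7):
    """Return True if the customer both reset their password and logged in
    within `window_days` of the agent's templated reply.

    Illustrative structures:
    ticket = {"customer_id": ..., "replied_at": datetime}
    events = [{"customer_id": ..., "type": "password_reset" | "login", "at": datetime}, ...]
    """
    deadline = ticket["replied_at"] + timedelta(days=window_days)
    relevant = [
        e for e in events
        if e["customer_id"] == ticket["customer_id"]
        and ticket["replied_at"] <= e["at"] <= deadline
    ]
    types = {e["type"] for e in relevant}
    return {"password_reset", "login"} <= types

def resolution_rate(tickets, events):
    """Share of password-reset tickets where the templated fix actually landed."""
    if not tickets:
        return 0.0
    return sum(followed_up(t, events) for t in tickets) / len(tickets)
```

A low resolution rate on tickets that scored well in QA is a strong signal that the process, not the agents, is the problem.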

This can also uncover discrepancies between your process and customer satisfaction scores (CSAT). It doesn’t matter if your agent is following the process if the process isn’t solving the customer’s problem and delivering an effortless experience.

Ultimately, your QA program should drive higher customer satisfaction. But if you are inflating scores or not expecting enough of your agents, you might not be getting the full benefit of your program.
