MaestroQA: How Headspace Maintains Brand Voice Consistency

Initial Challenge:

Managing outsourcer quality in spreadsheets ☠️

Unique complexity:

A brand voice that is extremely important!

Biggest wins:

Efficiency and 96% CSAT

Superpower:

Encouraging beginners to continue meditating!


Headspace is a global leader in meditation and mindfulness through its meditation app and online content, with more than 34.5 million members across 190 countries. Founded in 2010 by co-founder Andy Puddicombe and CEO Rich Pierson, it’s built on Puddicombe’s deep knowledge of the time-honored tradition and practice of meditation, coupled with his expertise in translating those teachings into modern-day applications.

The support team works to capture Puddicombe’s authenticity and passion for meditation – it’s their job to make sure users feel encouraged by Headspace to continue their practice. With a dispersed team (partially outsourced, and partially in-house in Santa Monica) this poses a bit of a challenge. But Headspace has a robust quality assurance program to aid them in overcoming this obstacle.

The following is a Q&A with Michael Luevanos, Quality Assurance Manager, and Kenny Bonilla, Quality Assurance Specialist, at Headspace.

What makes the Headspace support team unique?

The Headspace support team is committed to going the extra mile for the user. If customers are experiencing any kind of difficulty accessing their sessions, the support team will contact every person within the org who may have insight into the topic or problem at hand. They’re also ingrained within various departments at Headspace, and act as a voice for the user while product decisions are actively being made.

What makes your customers unique?

Customers have all sorts of backgrounds and varying experience with meditation. We want to support their journey with Headspace, as well as their meditation practice in general. We also make sure to meet customers where they are, in terms of technical proficiency and language options. Meditation is something we think people can benefit from throughout their lifetimes, so our customer age range is quite large, and we want to support our users at all ages.

On which channels do you offer support to customers?

You can reach Headspace over email, phone, and chat (when email volume permits). We also integrated self-service tools in the past year, so our users can sometimes come to their own resolutions in real time.

Do you use Zendesk for all of those?

We use Zendesk for all except our self-service tool, Solvvy.

What else do you integrate with?

Since most of our channels are supported through Zendesk, we don’t integrate with a ton of other products. Mainly it’s MaestroQA, Plecto, and Solvvy.

Do you think being a subscription business has changed the way that you provide support?

Being a subscription business does change the way we provide support, because we’re at a touchpoint in the middle of the customer lifecycle – if customers don’t have a positive experience, it could impact their desire to re-subscribe, or to use the app at all, and we take that very seriously. So we aim to resolve user issues in totality, and simultaneously encourage the continuation of their meditation practice.

How do you think about customer retention? Is it baked into your support strategy at all? Your QA strategy?

Customer retention is baked into our support strategy, but not overtly. The more people utilize the content, the more likely they are to re-subscribe. We think about retention as doing all we can to enable users to continue their journey.

What percent of tickets do you QA?

We grade 2% of each agent’s interactions.
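
As an illustration of what sampling at that rate could look like, here is a minimal sketch in Python – the `tickets` structure, field names, and per-agent minimum are hypothetical, not a description of Headspace’s actual tooling:

```python
import math
import random

def sample_for_qa(tickets, rate=0.02, seed=None):
    """Randomly select a fixed fraction of each agent's tickets to grade.

    `tickets` is a list of dicts with at least an "agent" key. The rate is
    applied per agent, so low-volume agents still get at least one review.
    """
    rng = random.Random(seed)
    by_agent = {}
    for ticket in tickets:
        by_agent.setdefault(ticket["agent"], []).append(ticket)

    sample = []
    for agent_tickets in by_agent.values():
        k = max(1, math.ceil(len(agent_tickets) * rate))  # never skip an agent
        sample.extend(rng.sample(agent_tickets, k))
    return sample
```

Applying the rate per agent, rather than across the whole queue, keeps a low-volume agent from going a month with nothing graded.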

How were you QAing before you started using MaestroQA?

Before MaestroQA, we were using rubrics built in Google Sheets. They were poorly organized, difficult to change, and hard to read.

What do you do with QA data?

We primarily utilize quality data to develop coaching for agents and to clarify expectations. We are constantly making updates, releasing new features, or launching partnerships that our entire team needs to be proficient in. Their QA scores help us understand where knowledge gaps exist, and where effort needs to be directed.

We also use QA data to develop agent reports that explain high and low points in agent performance. From MaestroQA reporting, we can figure out where agents can improve and where they’re excelling.

What would you say MaestroQA helps you with most?

MaestroQA has helped us streamline the process outside of Google Sheets, and it has been a huge time saver. With the time we save in the QA process, we work on improving QA for the team: we updated the rubric, added (then removed) an auto-fail section, and weighted the sections in our rubric to be solutions-oriented, and now we’re advancing a new coaching initiative that includes more advanced, personalized agent coaching with lessons attached, based on QA scores and other metrics. We’re also able to integrate new team members into our QA process without taking much time, and we QA and calibrate much more with MaestroQA than we could with Google Sheets.

We also use MaestroQA’s reporting functionality to make more accurate coaching suggestions based on individual and team performance. For instance, many of our agents have been with Headspace for over a year, so they consistently receive passing scores. Since we don’t have negative scores to review with them, we instead look at their field averages to see which are lowest and where they could use a refresher training – this could mean tagging, greeting, closing, etc.

This matters most because it improves agent morale – we’re able to give agents accurate feedback on their progress over time even when they’re performing well.
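
As a minimal sketch of that field-average review – the score records and rubric field names below are hypothetical examples, not Headspace’s actual rubric:

```python
from collections import defaultdict

def lowest_rubric_fields(graded_tickets, n=2):
    """Average each rubric field across an agent's graded tickets and
    return the n weakest fields as candidate refresher topics.

    `graded_tickets` is a list of dicts mapping rubric fields to scores
    in [0, 1], one dict per graded interaction.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for scores in graded_tickets:
        for field, score in scores.items():
            totals[field] += score
            counts[field] += 1
    averages = {field: totals[field] / counts[field] for field in totals}
    return sorted(averages, key=averages.get)[:n]

# An agent with passing overall scores can still show a weak spot:
print(lowest_rubric_fields([
    {"tagging": 0.8, "greeting": 1.0, "closing": 0.9},
    {"tagging": 0.7, "greeting": 1.0, "closing": 1.0},
]))  # ['tagging', 'closing']
```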

Who QAs, and what processes do you have in place around sharing QA feedback?

We have two QA Specialists from the Customer Experience team grading interactions for both Santa Monica and our third-party BPO partner. We are currently updating our process for sharing QA feedback, but it will include a PDF of all graded interactions for the agent to review, so they fully understand why they were marked down and where they’re excelling.

Leads from our BPO are also involved in QAing within MaestroQA, so they have a firsthand understanding of agents’ work, and they provide follow-up coaching based on feedback from HQ in Santa Monica.

Were there any unexpected ways in which MaestroQA improved your team?

One thing that’s very important for us is tone – our tone needs to be consistent no matter what a user emails in about, or how much time has passed since they last emailed in. No matter what, users need to feel welcomed and listened to, and they need to get a timely, accurate solution.

MaestroQA made it easier for us all to be aligned on standards and expectations for customer communications. We edit rubrics based on calibrations, and make sure that Puddicombe’s tone of voice shines through in customer interactions. We started by calibrating once a week, then moved to a monthly cadence, which has proven sufficient.

With MaestroQA we have been able to save a lot of time, make changes that benefit the team, and stay consistent with tone regardless of the time that has passed since the user last reached out.

We average about a 96% CSAT at any given time. MaestroQA has helped here: even during busy seasons when it takes us longer to respond, our CSAT stays high because our quality stays consistent.

I think MaestroQA has had the most apparent success with new hires who come out of training into “nesting.” From nesting until they’re full-time in production, we see a lot of progress and success from QA. New (nesting) agents are QAed more than tenured agents in the beginning: they’re QAed on a weekly cadence (which includes an in-person meeting) for their first month, to ensure they get the coaching resources they need early on.

What makes your QA rubrics unique?

We have focused our rubrics to grade for solutions-oriented responses. We want to cut down on back and forth for the user’s benefit, while also making sure that they feel heard during the exchange.

In alignment with solutions-oriented responses, we have weighted grades that put more emphasis on solutions. If an agent does not provide a full and complete solution, they can be drastically marked down.

We have a very full and dynamic Help Center that covers billing issues, FAQs, feature assistance, meditation assistance, navigation assistance, etc., so we grade agents on whether these self-service tools are offered as a resource for the user based on the situation. This is important because over 30% of our inquiries can be solved with self-service answers – saving our team time, and keeping more agents free to field fresh customer inquiries.

If an agent goes above and beyond in their interactions, they get a +5% bonus. At the end of the month, we give the agent with the highest grade and the most bonuses – from both our BPO team and Santa Monica – a thoughtful gift and card to say, “thank you for providing outstanding support to our community.”
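
Purely as an illustration of how a weighted, solutions-heavy grade with a +5% bonus could be computed, here is a minimal sketch – the section names and weights are invented, not Headspace’s actual rubric:

```python
def weighted_qa_score(section_scores, weights, above_and_beyond=False):
    """Combine per-section rubric scores (each 0-1) into one grade,
    with an optional +5% bonus for going above and beyond."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    score = sum(section_scores[section] * weight
                for section, weight in weights.items())
    if above_and_beyond:
        score += 0.05  # the +5% above-and-beyond bonus
    return round(score * 100, 1)

# Hypothetical weights that emphasize the solution section:
WEIGHTS = {"solution": 0.5, "tone": 0.2, "accuracy": 0.2, "process": 0.1}

# A half-complete solution drags an otherwise perfect ticket down to 75.0:
print(weighted_qa_score(
    {"solution": 0.5, "tone": 1.0, "accuracy": 1.0, "process": 1.0}, WEIGHTS))
```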

How do you think about the way brand voice fits into your support team and QA process?

Headspace has three strategic anchors: creative excellence, simplifying complexity, and authentic expertise. Our brand voice is a combination of these ideas, and Puddicombe’s feelings on authenticity around our content.

Our QA rubric was created with Puddicombe’s notes and these anchors in mind – we want our agents to capture this voice, and we grade them on it. It is certainly a challenge, though, because brand voice in customer support requires a balance between the expectations people have of Headspace and the realities of running a business.

That said, we try never to leave the user with an “oh well” feeling, and we seek to advance their practice and encourage them to meditate.

What was the transition to MaestroQA like?

It was very easy. We developed the whole QA program from the ground up, so a lot of the initial work was coming up with a rubric that could grade all of our agent interactions. Once we had that set up, we started to figure out how to track these interactions and what type of coaching we could provide with our insights.

How do you think about managing the outsourced team?

We think of our outsourced team as a part of the Headspace team. Ongoing collaborations have ensured that we are all grading from the same point of reference:

  • On a daily basis, we’re in communication via Slack.
  • On a weekly basis, we meet and speak very openly with agents and leaders.
  • On a monthly basis, we have calibration sessions.
  • On a quarterly basis, we visit our team there, and we’ve even brought their leadership team in to visit our Santa Monica headquarters.

How do you make sure that your outsourced agents are using your brand voice, and providing the same level of customer experiences that your in-house team is?

We make sure that outsourced agents capture our brand voice through our QA process – we use the same rubric for the in-house team in Santa Monica and the BPO team. Both teams are also graded on solution-based responses – if information is not clearly relayed, or the information provided is not accurate, we follow up with the agent directly with coaching and suggestions.

Through QA, we learned that many agents at our BPO weren’t personalizing their responses. It’s important to do this so the user feels heard from the start. We had to reinforce this consistently by providing examples, approving what was written, and positively reinforcing the desired behavior. Once we were monitoring this behavior, our agents did become more personal in their interaction style.
