The Art of Training with Harry's Razors and FuboTV


Many companies use QA to inform their training, and then to measure how well agents are incorporating that training into their day-to-day jobs. QA can reveal what agents are doing really well, and where onboarding or uptraining might need to be adjusted. Two companies with this classic feedback loop are Harry's Razors and FuboTV.

During this panel session at The Art of Conversation 2019, Nick Martin of Harry's and Abdullah Kahn of FuboTV share their insights on what makes for effective onboarding programs and ongoing training as it relates to their support teams. Here's what they have to say:


To kick things off, could you start, Nick, by giving a little context about Harry's and how your support team's structured? Then, Abdullah, we’d love to hear the same in relation to FuboTV.

Nick Martin:

Harry's is a men's care and grooming company. We started by making shaving better and have since evolved into a broader range of men's care and grooming products. As a direct-to-consumer business, it's really important for us to have strong relationships with our customers. We have meaningful conversations across five different channels, mostly related to navigating customers through the website and distribution-related issues. We also offer a subscription service, so a lot of our contacts are related to modifications and cancellations along that subscription.

Abdullah Kahn:

FuboTV is a streaming service and has been around for about five years. We started off as a soccer streaming company in the US, but now we stream more than that and compete in the space with Sling, DIRECTV NOW, and YouTube TV. We're the smallest of the bunch, so a lot of people don't know about us, which is actually great because we get to play around with our tools and technology and get them working.

Can you tell us how you train and onboard new hires, in terms of the process you have in place?

Nick Martin:

Yeah, certainly. One really important note is that we start with the hiring. We're really intentional about hiring for skills and competencies. Because of that, our onboarding is focused on the rest: the tools, the processes, a bit of knowledge. We have about a week-long onboarding class, which is pretty intensive, built around education, exposure, and experience. From there, regular coaching continues, but that's when the formal training ends.

Just to quickly dig in: your team is mostly virtual, and they come on site for that one week, right?

Nick Martin:

Yeah, we actually have an extremely diverse workforce: part-time, remote, in-house, full-time, and we've had a third party in the past. That first week of training is great for culture building. We're also moving toward training that can be accessed remotely at any moment; a lot of our uptrainings (ongoing trainings) have that capability through recordings. But right now, it's really important for everyone to come together onsite for that first week.

Abdullah Kahn:

We have a direct-hire, work-from-home team spread out across the US and South America. We also have people in India and, I believe, Portugal who support our product directly. Then, we have a BPO partner in South Asia that we've brought on.

Initial training was done through Google Docs. Over time, we realized that wasn't working effectively, so we brought on a tool called Lessonly that helped us streamline and deliver training to our at-home reps. Onboarding now lasts about two weeks.

We've begun to segment reps by contact type. The first segment we carved out was retention, because it's really easy to join Fubo and leave Fubo, unlike a TV company where you're stuck with a two-year contract. So we focused on retention and started creating training around how to handle customers who want to leave and how to sell the value. Next, we plan to expand into sales, so we'll have four distinct groups: retention agents, sales agents, customer service, and tech support.

And how do you guys measure the success of your training programs, whether they're working well or not?

Nick Martin:

Elisa, who's here, is our senior manager of quality, training, and development. She's set up a really cool training program around pre- and post-assessments. You can dig in and look at areas where we over-indexed, or areas where we fell a little short.

Then, when you corroborate that with some of the early quality evidence coming from Maestro, you get the full picture. It's continuous iteration on how we make that training better.

Abdullah Kahn:

We've created a closed-loop process: when a new product launches or a business change happens, the training team trains the agents. The next week, our QA team does its regular performance monitoring but also specifically reviews the contacts related to the new training that was imparted. They score those, and we have a weekly meeting with all of the department heads where QA gives feedback to product, customer service, and training on what's working and what's not.

Nick Martin:

Very early in the tenure of new associates, we have regular meetings where that conversation happens. There should be a very consistent feedback loop from quality to training to the manager or team lead every single day during those first couple of weeks.

That's really interesting to hear, that the QA scoring process feeds directly back into the new hire onboarding program.

You mentioned something very interesting earlier about hard skill training and soft skill training. It's something we hear from our customers all the time: how do you train on soft skills? Hard skills are mostly product knowledge: this is what you have to do, here are your procedures and processes. Do you find it harder to train on soft skills? What techniques do you use to actually train them, and why do we care about soft skills now?

Abdullah Kahn:

We focus heavily on soft skills. Part of our QA scorecard is a customer experience section, which is all about soft skills: how you interact and how you handle the contact across chat, email, phone, and our social media channels. Do you empathize with the customer? What kinds of words do you use, especially if you're chatting with a customer?

How do you sense what a customer is feeling without actually speaking to them? That's a challenge. Soft skills are something some people have naturally and some people don't, so we've used different tools to train individuals on things like: How do you bring inflection in? How do you start empathizing?

Then, when we do our QA sessions, we really focus on giving feedback around soft skills. If a customer is using all caps, maybe you should not put a smiley face at the end of that chat, or if a customer is screaming, maybe you don't get upset back at them. We say, "Match the customer's tone but not when they're upset."

So, a lot of focus is put on that individual interaction. Especially in the soft skills section, I really don't believe in yes-or-no questions. It has to be subjective, based on the particular contact. In the past, people have used yes/no questions: Were you empathetic? Did you treat the customer properly? That doesn't work. You really have to evaluate it at an individual level.

Interesting. When you're talking about teaching people how to find the right inflection, how do you do that? You mentioned some tools, too; I'd be curious how those tools help you do that, or how Lessonly helps you do that.

Abdullah Kahn:

There are best practices out there. We play audio samples of conversations and say, "Listen, this is how you could have done better."

Nick Martin:

Yeah. We definitely have soft skills in our quality scorecard, and we do some formal training at the beginning, especially around experience engineering. We're big believers in the effortless experience: positive positioning, experience engineering, learned techniques to reduce a customer's perceived effort in the conversation. But what it really comes down to is that we invest heavily in our people managers.

So, when we think about soft skills, it transitions into coaching. We'll say to associates, "These are things we're going to go over together. Can you come to me with a few things you thought you did well, and a few places where you might have been able to improve the experience? What are some things you could have said or done, in your soft skills technique, to reduce the customer's perceived effort?"

Because we invest in those people managers, our team leads, they spend a lot of time continuing that soft skill progression throughout the associate's experience.

Could you explain what positive positioning is, and give an example of what it looks like?

Nick Martin:

Yeah. There's some really cool research on just the word "no," for example, which obviously goes back to when we were children and heard the word "no." I'm going to throw out a stat that is definitely wrong here: there's something like a 30% reduction in emotional response when you hear the word no. So, we focus on positive positioning.

A really good example is if someone asks, "Hey, can you deliver to Brazil?" which right now, for Harry's, the answer is “no.” But the real answer is, "We're really excited to deliver to Canada and the US. If you're stopping by, I have a couple stores I could hook you up with," right?

So that's a “yes.” It's not a “no,” and it changes the entire conversation.

We've touched a little on continuous training, and a little on how QA has impacted your training programs. Both of you have given me examples of how insights uncovered through QA, from tenured folks, have fed back into your training programs. Could you share specific examples as well? I think that'd be really interesting for the audience.

Nick Martin:

Definitely. We actually have a key result in our OKRs for quality pushing into training. Last year, we targeted three quality-based trainings that came out of our quality assessments.

Really good examples from last year include our voice and tone training. We also had a root cause training, which is about inbound analysis: why is a customer contacting us? It's really interesting because putting the wrong root cause down doesn't affect the customer experience much in that moment, but it does affect our data and how we can help customers in the future, which is really fun.

Then, we had a security token training, because we have to verify the person is who they say they are. From quality, we identified that as an area of opportunity for a certain part of our team, so we put it into a training module.

Abdullah Kahn:

So there's no contract with FuboTV, and we also have a no-refund policy. The way you sign up right now is to go to our website, click "Start free trial," enter your credit card info, and sign up for an account.

There's a disclosure that after seven days we will charge you at a particular date and time, and you're charged at that exact second. A lot of customers miss that, but we had to start enforcing that we couldn't give refunds.

When customers would say, "Hey, I want a refund. I got charged. I didn't know I was going to be charged," reps would say something like, "Yes, you did. No refunds."

So, our CSAT dropped 10%, 15%, just because those customers started giving us bad CSAT scores. CSAT responses on the good interactions, where we made sales and saved customers, were 5% of the total, and those refund CSATs were 95%. So, our CEO and CFO came to me and said, "What the hell's going on?"

So, we looked at it. Operations, QA, and training sat down, listened to those calls, and saw what the reps were doing. They were literally saying, "No refunds." So we started to build some soft skills around that: "You know, I understand, sir. I don't know how you missed the disclosure, but I'm going to cancel your account, and you can still view the service until the end of your billing cycle."

So, instead of saying "no," we added soft skill verbiage and trained reps to sell the value of the service. They can ask, "Did you know the new season starts this week?" That kind of thing. It brought our CSAT up by about 5% and reduced my Better Business Bureau complaints a little, but it really helped with how we position that policy back to customers.

That's really interesting, identifying a policy change, its impact on CSAT, and then developing soft skill techniques to approach it.

Both of you have dedicated folks for training and dedicated folks for quality assurance. So, when it comes to new hires, trainers handle the onboarding. As agents finish the onboarding program and become more tenured, how are trainers and QA specialists involved in training moving forward? Does QA get involved in continuous training? How do you find that balance once agents are past the initial onboarding period?

Nick Martin:

Our new product launches are ad hoc rather than regular. We just launched a face mask in collaboration with Heyday, so we needed the trainer to come in and work with the product professional, the person who knows the nuts and bolts of whatever ingredients are in it. So the trainer is very involved with both ad hoc and recurring training programs. The way it works right now, the quality team makes a recommendation based on the data and then definitely remains present. We only have a team of 27 associates, so they're going to be there, but the trainer takes over with the knowledge and adult learning techniques.

Abdullah Kahn:

Our onboarding training is pretty standard, but our training team at times really hates our product team, because the product team is continuously churning out updates. I shouldn't be saying this; no one is recording this, right?! We're in the middle of a business change right now, and the guy who owns that aspect of the business didn't even know about it. He found out when Brad, our training guy, was sitting in a meeting with the larger team. He was like, "Wait, what? When is this happening?"

So, a lot of our training is continuous business-change training, and even after onboarding, training stays on point. We send out a pre-shift note with quick updates on anything we're focusing on that we've found through QA or ops. Then there's a weekly or biweekly training that we conduct on any major product changes. QA is tied right in with it.

QA says, "Hey, listen, I heard this on the calls today, and you know what? Let's get this feedback out to the team." Training puts it out in our pre-shift notes, and we discuss it in our weekly training meeting as well.

It would be awesome to learn about some of the changes your teams have gone through. What are some of the most interesting lessons, the things that, looking back, you wish you had done differently or would have done differently? No regrets, of course, but what lessons would you love to share with folks who might go through similar changes from a training perspective?

Nick Martin:

Yeah, I'll answer from a training and quality perspective. We've had so many different iterations of our team as a hypergrowth startup; I mentioned earlier remote, in-house, part-time, full-time. We had a BPO for two years, when our business was a little more volatile and we really needed the ability to flex up and flex down.

During that process, one of the successes of our quality and training program was realizing we were using an average QA score from Maestro as our metric, just an average. But then you look at our contacts: as I mentioned earlier, a shave plan modification is a really easy contact. It's something most people can do on the website, and when it comes to us, it's quick and easy. The handle time is short, it's a one-touch reply, and our CSAT and CES on it are very, very high.

So, looking at an average quality metric, we were seeing, "Hey! Everything's great. It's wonderful." But the qualitative notes coming from our auditors said, "It's really not. There's something here with our third party that is just not the consistent quality we want to provide."

The learning from that was to change to an audit failure rate to get a better understanding, instead of benchmarking against the average. We started to see, "Oh, yeah, there are areas of opportunity here. There are targets we can concentrate on to create a consistently better quality experience no matter where our associate is sitting." That was something we learned a couple of months in.

So some of these customer requests were very easy to solve, resulting in a 99% QA score, when the reality was that there were real opportunities to improve. By moving to an audit failure rate, you're better able to identify actual areas for improvement.

Nick Martin:

Yeah, certainly, because they're very easy to solve and they're the bulk of our contacts, so your average is just going to skew high.

Right. We see that a lot, too. People have 98% scores, but it's not truly representative; they don't necessarily think they have superior customer service.

Nick Martin:

Yeah. We started corroborating with a more holistic approach. Even your internal recognition program, your kudos, whatever you're doing to give kudos and praise: does it match up with their quality? If you have a voice-of-customer tool (we use Stella), does that match up with their quality experience? Taking a holistic rather than one-sided approach has been helpful.
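To make the difference between the two metrics concrete, here's a minimal sketch in Python, using made-up scores rather than Harry's actual data: a handful of failed audits barely dent the average, but a failure rate surfaces them immediately.

```python
# Hypothetical QA audit scores: lots of easy, near-perfect contacts
# plus a couple of badly failed ones (the kind an average hides).
scores = [99, 98, 100, 97, 99, 45, 99, 100, 52, 98]
PASS_THRESHOLD = 80  # assumed cutoff for passing an audit

average = sum(scores) / len(scores)
failure_rate = sum(s < PASS_THRESHOLD for s in scores) / len(scores)

print(f"Average QA score:   {average:.1f}")       # 88.7 -- looks healthy
print(f"Audit failure rate: {failure_rate:.0%}")  # 20% -- one in five audits failed
```

The average says everything is fine; the failure rate says one in five audited contacts fell below the bar, which is exactly the signal Nick's team was missing.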

Abdullah Kahn:

I joined Fubo about a year and a half ago, roughly, or a little more than that. We really didn't have a platform for support, so I was able to build a roadmap, which was pretty clear. Looking back at the end of 2018 with the team, I don't think we missed anything in that roadmap for any reason we owned.

So, I believe that when you're creating processes and procedures, don't think one month, two months, three months out. Instead, think a year, a year and a half, two years out, and then plug everything into that plan. That's what kept us from sitting here today saying, "In hindsight, could we have done something better?"

Building a clear vision and having a roadmap of what you want to incorporate and not taking on a lot of things at once really helped us.

Thank you both for taking the time to answer my questions. I would love to open this up and see, do people in the audience have questions?

Audience Member:

I really wanted to go back to something you said, Abdullah, about discussing bad customer interactions in front of everyone. We tried something similar recently, but it was really awkward and no one liked it or wanted to do it. So I wanted you to talk me through the value you found in that technique versus approaching it in private.

Abdullah Kahn:

Sure. We don't do it very often, but we do put out examples of contacts where we could have done better. Usually we make sure the rep involved in the chat remains anonymous. I think once or twice we've put a phone call out there, but only when we really wanted to make the point that something is not okay. Usually, by that time, in the one or two cases we did it, the person wasn't even there anymore, which made it easier.

Audience Member:

Hi. I have two questions. One is about how big a team needs to be before you feel like you need a dedicated QA team or person. The second is how you chart things out: what percentage of tickets needs to be reviewed to get credible results?

Nick Martin:

I'll go. I can't give you an answer for how big a team needs to be, but I can tell you what the catalyst was for us: resources. None of us have all of them. We started with quality first, to get a common language for a manager to speak with an associate. We started that when we signed on our third party, around year two of Harry's.

Abdullah Kahn:

It's a very interesting question. From my perspective, if I have the budget and money for it, I'll have independent QA from the very beginning, because the role of a team leader or supervisor is slightly different from the role of QA and training. But when you work with a BPO, or even internally in a large organization, it depends on how long your contacts are and whether you want every single rep monitored every week. We have done time-and-motion studies for our contact types, our team leads, and our quality assurance specialists. To give you the high-level math: let's say your average contact is eight minutes. You'd want your QA to monitor some contacts that run longer, some that run shorter, and some right at eight minutes.

So, on average, we expect the QA to spend eight minutes listening to or reading a contact once. Then he or she has to write notes, go back, rewind, or fast-forward. So it usually takes about twice the length of the contact to monitor and score it. From there, you do the math: What is your average contact time? How many contacts per rep do you want monitored in a week? How many pieces of feedback do you want given? And do you want the team lead to give that feedback, or the QA analyst who actually listened to the call and wrote those notes?

In the businesses I've been in, based on handle time, it's usually about one QA rep per 25 to 30 agents, with an average handle time of eight or nine minutes, doing two monitorings per rep per week and giving feedback. So, it depends. If your contacts are two minutes, you might need one QA rep for 50 reps, but if your contacts are 20 minutes, you might need one for 10. It all varies.
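Abdullah's rule of thumb reduces to simple arithmetic. Here's a minimal sketch (the helper function and parameter names are ours, not Fubo's; the figures are the ones he cites) of how handle time and monitoring targets translate into pure scoring hours, before you add time for delivering feedback:

```python
def weekly_scoring_hours(avg_handle_min: float,
                         agents: int,
                         monitorings_per_agent: int = 2,
                         review_multiplier: float = 2.0) -> float:
    """Hours of pure monitoring/scoring work per week, using Abdullah's
    rule that scoring a contact takes roughly twice its handle time
    (listen or read once, write notes, rewind, fast-forward)."""
    review_min = avg_handle_min * review_multiplier
    return agents * monitorings_per_agent * review_min / 60

# His examples: two monitorings per rep per week at various handle times.
print(weekly_scoring_hours(8, 30))   # 16.0 hours for 30 agents on 8-min contacts
print(weekly_scoring_hours(2, 50))   # ~6.7 hours for 50 agents on 2-min contacts
print(weekly_scoring_hours(20, 10))  # ~13.3 hours for 10 agents on 20-min contacts
```

In each case the scoring load lands in the same ballpark, which is why the agent-to-QA ratio he quotes scales inversely with handle time; the rest of a QA analyst's week goes to writing up and delivering feedback.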

Audience Member:

How do you position qualitative feedback versus quantitative feedback, especially outside of your own team?

Nick Martin:

So, Dee is our quality analyst, and she puts together a lot of project proposals based on the quality data she's looking at. As a company, Harry's leads with data; we try to make the majority of our decisions from data we're confident in. That's how she positions most of her project proposals: starting with three bullet points of data, but never leaving out a few example quotes or an employee experience quote as part of the process. It's storytelling. You won't have an emotional pull if you don't include the qualitative, but we lead with the data.

Abdullah Kahn:

Likewise. Even though the scoring on parts of our form is subjective, there are scores behind it. So we present data to the larger organization on which questions or sections of the form reps are doing well on and where they're struggling. Internally, we look at all of the back-end subjective information and read the notes and feedback that's given, but to the outside team, it's: our QA score is 90, our experience score is 75, and our first call resolution score is 60.

Cool. I think we have time for one more question. I know brand is a huge part of the culture you're both building. What do you do to bring your brand voice into your training programs?

Nick Martin:

That's really hard. The first thing I mentioned was hiring, and I think hiring is a very important part of it. A lot of our hiring is based on referrals, so people are familiar with, and have sort of flirted with, our brand voice from the start. After that, we give our associates a lot of autonomy. It's really important for quality to give associates a lot of autonomy, but we coach, asking questions like: How do you think this fits with our brand? What voice do you think we could use here? We also work through examples. I'd love to get to a place where we have peer review; we don't have that yet, and I think it's a good opportunity. But exposure and experience, shadowing and reverse shadowing with tenured associates, is usually where you really start to feel out that brand voice. Having every single onboarding include a couple of hours with some of our more tenured agents, reverse shadowing them, is probably where you start to get that voice.

Abdullah Kahn:

I'm going to answer a little differently. We haven't yet gotten into brand building on the experience side. One reason is that most of our staffing is with our BPO, and they're abroad, so they need to be educated on our product first. So a lot of the brand building we do is internal: educating them on what the NFL is and what the NBA is, teaching them the product itself so they can speak intelligently with consumers. This wasn't quite your question, but that's what we're struggling with and prioritizing right now: building that aspect of the business.
