
Navigating Legal Risks of AI in Employee Performance Management

Leanna Merrell
October 31, 2024

As AI technology becomes increasingly embedded in employee performance management, HR processes, and customer support, companies encounter both transformative opportunities and emerging risks. In a recent webinar, legal expert Stacey Chiu, Senior Associate at Michelman & Robinson, joined Vasu Prathipati, CEO of MaestroQA, to explore these risks and opportunities. Together, they offered insights into how companies can maximize AI’s potential while protecting themselves against compliance and legal risks.

In this blog, we’ll unpack their discussion and share actionable strategies for adopting AI responsibly in employee evaluations and recruiting.

AI Bias, ADA Compliance, and Legal Risks

AI tools bring undeniable speed and efficiency to employee evaluations, but they also come with significant risks. Chief among these is a lack of contextual understanding, especially when it comes to meeting ADA (Americans with Disabilities Act) compliance standards. As Stacey noted, while AI might speed up certain processes, it doesn't inherently know when to adjust its calculations for employees who need special accommodations.

ADA Compliance and Bias Risks in AI Evaluations

One of the biggest risks of using AI in performance assessments is that it can overlook accommodations and nuance, leading to unintentional discrimination. For instance, an AI system might evaluate productivity by looking at keystrokes or task completion rates. But without human oversight, the system might penalize an employee with arthritis who needs regular breaks or an employee with ADHD who requires flexible working hours.

This unintentional bias can expose companies to serious legal repercussions. Stacey explained, “For someone with arthritis or ADHD, an AI system might penalize them for reduced performance without considering these factors. Without human oversight, you’re opening yourself up to significant legal risks.” This points to a clear and pressing need for companies to balance AI’s power with human review, especially in performance evaluations where ADA compliance is crucial.

Steps for Ensuring Fair and Compliant AI Evaluations

Preventing these pitfalls doesn’t require a complete overhaul—just a few straightforward actions can help companies ensure fairness and avoid bias in AI-driven assessments. Regular audits of AI performance evaluations, particularly for ADA compliance, are a good start. Companies should also ensure they have a human “check” on AI outcomes, especially in any assessment that may impact an employee’s job security, compensation, or progression.

Another key step is documentation. By keeping thorough records of how performance evaluations are conducted, companies can not only track for compliance but also provide a defensible process if discrimination claims arise. These simple practices make a significant difference in keeping AI-driven evaluations both accurate and legally sound.
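
To make the audit step concrete, here is a minimal sketch in Python, using hypothetical field names and thresholds, of the kind of periodic check a QA or HR team might run: it compares AI-generated scores for employees with recorded accommodations against those without and flags any gap for human review. It illustrates the practice described above; it is not a MaestroQA feature or a legal standard.

```python
from statistics import mean

# Hypothetical records: each row is one AI-scored performance evaluation.
evaluations = [
    {"employee_id": "e1", "ai_score": 82, "has_accommodation": False},
    {"employee_id": "e2", "ai_score": 74, "has_accommodation": True},
    {"employee_id": "e3", "ai_score": 88, "has_accommodation": False},
    {"employee_id": "e4", "ai_score": 71, "has_accommodation": True},
]

def average_score(rows, accommodated):
    """Mean AI score for one group; assumes the group is non-empty."""
    return mean(r["ai_score"] for r in rows if r["has_accommodation"] == accommodated)

def audit_score_gap(rows, max_gap=5.0):
    """Flag the batch for human review if the group gap exceeds a chosen threshold."""
    gap = average_score(rows, False) - average_score(rows, True)
    if gap > max_gap:
        print(f"Score gap of {gap:.1f} points exceeds {max_gap}; route batch to human review.")
    else:
        print(f"Score gap of {gap:.1f} points is within the {max_gap}-point threshold.")

audit_score_gap(evaluations)
```

A gap alone doesn't prove bias, but it tells reviewers where to look, which is exactly the division of labor between AI and humans discussed throughout this post.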

Safeguarding with a "Human in the Loop" Approach

AI can be a powerful tool for identifying performance trends and flagging areas for improvement. However, leaving the final judgment to AI alone, especially in evaluations that impact real people, risks missing the context that only a human can bring. Without the perspective of a manager or team lead, AI might misinterpret performance data, leading to potential inaccuracies and, in some cases, unintentional biases.

Why Human Oversight Matters in AI Evaluations

In many cases, AI might pick up on surface-level metrics, like call completion times or task efficiency, without understanding the factors behind them. For example, an AI might flag an agent for not meeting productivity benchmarks, missing the fact that the agent was handling particularly complex cases or dealing with customer escalations. This is where "Human in the Loop" (HITL) comes into play. HITL combines the speed and scale of AI with the insight and judgment that only humans can provide. In this model, AI provides the preliminary insights while human reviewers add context, ensuring that evaluations are fair and meaningful.
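
As a rough illustration of that flow, the sketch below uses hypothetical types and thresholds (it is not MaestroQA's implementation): the AI pass only flags outliers, and every flagged case stays pending until a human reviewer records context and a final decision.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Case:
    case_id: str
    metric: str                        # e.g. "handle_time"
    ai_flag: str                       # the AI's preliminary observation
    reviewer_note: Optional[str] = None
    final_decision: Optional[str] = None

def ai_preliminary_review(handle_times: dict) -> list:
    """AI pass: flag surface-level outliers; it never issues a final decision."""
    flagged = []
    for case_id, minutes in handle_times.items():
        if minutes > 15:  # illustrative benchmark, in minutes
            flagged.append(Case(case_id, "handle_time", "above benchmark"))
    return flagged

def human_review(case: Case, note: str, decision: str) -> Case:
    """Human pass: add context (complex case, escalation, accommodation) and decide."""
    case.reviewer_note = note
    case.final_decision = decision
    return case

pending = ai_preliminary_review({"c1": 22.5, "c2": 9.0})
for case in pending:
    # Nothing is finalized until a reviewer signs off with context.
    human_review(case, note="Agent handled an escalated billing dispute.", decision="no action")
    print(case)
```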

A Practical Approach: MaestroQA’s Copilot Tool

To support this balance of technology and human insight, MaestroQA developed Copilot, a feature built for AI calibration. Copilot allows AI to highlight potential issues in employee performance while leaving the final decision to trained human reviewers. This setup not only supports compliance but also ensures that performance reviews account for individual circumstances. For instance, while Copilot can point to patterns or trends across calls or cases, it's the human review that adds the crucial context and final validation.

As MaestroQA’s CEO, Vasu, puts it, “Use AI to tell you where to look, but let humans make the final judgment to ensure fairness.” This combination of AI-driven insights with human oversight helps organizations get the most out of AI without sidelining the human element—keeping evaluations accurate, fair, and legally sound.

For companies adopting AI in performance management, a “human in the loop” approach offers a practical solution. By involving people in the final evaluation stage, businesses can benefit from AI’s efficiency while making sure assessments align with organizational values and legal requirements.

Emerging Legal Trends in AI Use

As AI continues to play a bigger role in hiring and performance management, states are beginning to take action, implementing regulations to manage the risks AI can pose. These early moves offer a preview of the stricter oversight likely on the horizon.

Current Regulations and the Path Forward

In New York City, the Bias Audit Law (Local Law 144) requires companies using AI in recruiting to conduct annual bias audits, ensuring that their AI tools aren't discriminating against protected groups. The law applies to automated tools used to evaluate job candidates and mandates both independent audits and transparency about the tool's selection criteria. Illinois has a similar rule: companies must notify candidates when AI tools are used to evaluate video interviews. While these laws are relatively new, they're a clear signal that AI in HR is on regulators' radar.

"These regulations are just the beginning. Similar laws will likely emerge across the U.S. in the coming years," Stacey noted, indicating that regulatory momentum is building.

The NYC Bias Audit Law is especially noteworthy as it mandates that employers not only conduct independent audits for bias but also publicly report their findings. This level of transparency may soon be the standard, urging companies nationwide to adopt similar safeguards preemptively.
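
For a sense of what such an audit measures, the law centers on comparing selection rates across demographic categories. The following is a minimal, hypothetical sketch of an impact-ratio calculation in that spirit; an actual audit must follow the law's own definitions and be performed by an independent auditor.

```python
# Hypothetical outcomes of an automated screening tool, grouped by category.
outcomes = {
    "category_a": {"selected": 45, "total": 100},
    "category_b": {"selected": 30, "total": 100},
}

# Selection rate per category: selected / total.
rates = {group: v["selected"] / v["total"] for group, v in outcomes.items()}
highest = max(rates.values())

# Impact ratio: each category's selection rate divided by the highest rate.
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / highest:.2f}")
```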

The Need for Proactive Compliance

Staying on top of these evolving requirements is crucial. Even if a company isn’t directly impacted by these laws today, preparing for future compliance can save time and resources down the road. Companies that are proactive—reviewing AI implementations, working with legal advisors, and conducting internal bias audits—will be better positioned when regulations expand. A structured review process now can help companies avoid disruptions if and when nationwide regulations take effect. Companies using software for AI compliance in HR can establish these proactive checks to stay ahead of regulations.

The Role of QA Teams in AI Monitoring

QA teams play a key role in monitoring and auditing AI processes to ensure compliance. By actively reviewing AI outputs for fairness and accuracy, QA teams can prevent unintended biases before they become compliance issues. This ongoing review process not only safeguards companies from legal exposure but also builds a culture of accountability and fairness.

As AI regulation tightens, companies that take these proactive steps will be better prepared for the future, creating a strong foundation for AI adoption that respects both legal standards and ethical practices.

Proactive Compliance: Why Early Action Matters

When it comes to compliance, waiting until regulations force action is a risky approach. AI in HR and employee evaluations brings with it the potential for discrimination and privacy issues, and regulators are already starting to set their sights on these areas. Acting now—before regulations become stricter—can help companies avoid legal exposure and costly adjustments.

The Risks of Delaying Compliance

One of the key takeaways from Stacey’s insights is that waiting until something goes wrong can be disastrous. As she put it, “The legal system usually catches up when something catastrophic happens. By then, it’s too late.” Companies that delay compliance may find themselves scrambling to address regulatory demands after the fact, often at a high cost and with limited options.

Benefits of Early Compliance Initiatives

Companies can take a few practical steps today to build a defensible compliance framework. Regular bias audits, human oversight, and clear documentation are essential. By documenting AI processes, tracking how decisions are made, and having a clear audit trail, companies create a system that’s both defensible and fair.

Documentation, Stacey noted, isn’t just about having records on hand. It’s also about being ready for a possible regulatory future where companies may need to show exactly how AI systems function, especially in cases involving protected classes or accommodations. Companies can also avoid sudden disruptions by adopting a system of bias checks and audits now, making compliance more manageable if new laws come into effect.
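
As one way to picture that documentation, the sketch below appends a record for each AI-assisted evaluation to a simple audit trail, capturing what the AI produced, who reviewed it, what context they added, and when. The fields are hypothetical, not a prescribed format.

```python
import json
from datetime import datetime, timezone

def log_ai_assisted_review(path, record):
    """Append one AI-assisted evaluation record to a JSON-lines audit trail."""
    record["logged_at"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_assisted_review("audit_trail.jsonl", {
    "employee_id": "e42",
    "ai_model_version": "2024-10",         # which model produced the suggestion
    "ai_output": "productivity below benchmark",
    "human_reviewer": "manager_07",
    "reviewer_context": "employee on an approved reduced schedule",
    "final_outcome": "meets expectations",
    "accommodation_considered": True,
})
```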

Actionable Compliance Steps

For companies looking to stay ahead of the curve, the steps are straightforward but impactful:

  1. Regularly Review and Update AI Models: Keep AI algorithms current, and test them periodically to ensure they align with compliance standards.
  2. Train Employees on AI Use: Make sure employees understand how AI supports (not replaces) human judgment in evaluations.
  3. Document AI-Assisted Performance Reviews: Maintain detailed records for every AI-influenced decision. This not only supports compliance but also provides transparency if questions arise.
  4. Establish a Structured QA Process for AI Compliance: Regularly review outputs and address any discrepancies early (see the sketch after this list).
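
For step 4, one simple pattern is to sample a share of AI-scored evaluations each cycle, have a human re-grade them, and watch the disagreement rate as an early-warning signal. The sketch below is a hypothetical illustration of that pattern, not a specific product workflow.

```python
import random

# Hypothetical batch of AI-scored evaluations from one review cycle.
ai_scored = [{"case_id": f"c{i}", "ai_grade": "pass" if i % 4 else "fail"} for i in range(40)]

def sample_for_human_review(rows, rate=0.1, seed=7):
    """Pull a reproducible random sample of AI-scored evaluations for human re-grading."""
    rng = random.Random(seed)
    return rng.sample(rows, max(1, int(len(rows) * rate)))

sampled = sample_for_human_review(ai_scored)

# A human grader re-scores the sample; the re-grades are stubbed here for illustration.
human_grades = {row["case_id"]: "pass" for row in sampled}

disagreements = sum(1 for row in sampled if row["ai_grade"] != human_grades[row["case_id"]])
rate = disagreements / len(sampled)
print(f"Disagreement rate on sampled cases: {rate:.0%}")
if rate > 0.2:  # illustrative threshold
    print("Escalate: revisit the AI scoring criteria before the next cycle.")
```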

Staying proactive about compliance isn’t just a matter of avoiding penalties—it’s an investment in sustainable, responsible AI use that safeguards both companies and employees. By taking steps now, organizations position themselves for success in an increasingly regulated AI landscape.

AI’s Role in Performance Evaluations and Legal Compliance

AI can support HR teams in identifying performance trends, surfacing potential areas for growth, and standardizing certain elements of evaluations. However, when it comes to the final assessment, human judgment is essential to ensuring evaluations are both fair and legally compliant. AI works best as a guide—helping managers spot issues and focus their efforts on areas that may need attention—rather than as the final decision-maker in employee evaluations.

AI as a Supplement, Not a Replacement

The goal in using AI for performance evaluations should be to help HR teams make more informed decisions without replacing the human touch. For example, AI might highlight a dip in productivity, but a manager is still needed to interpret why that dip occurred—perhaps the employee was tackling a particularly complex project or had recently returned from leave. AI tools lack this kind of situational awareness, which is crucial for understanding an employee’s full performance context.

“AI’s power lies in its ability to support human judgment, not replace it. This proactive approach can prevent bias and ensure fair evaluations,” Stacey explained in the webinar, emphasizing AI’s role as a supplemental resource rather than a standalone judge.

Handling AI-Reviewed Data Responsibly

Data from AI assessments should be handled with care to avoid potential bias or misinterpretation. AI can be helpful in backing up a manager’s perspective with data, but HR teams should treat AI evaluations as just one part of the overall assessment. For instance, using AI to track call metrics in customer support can provide objective data on performance trends. However, without human oversight, these metrics alone can overlook factors like call difficulty, client satisfaction, or special accommodations for employees who may work at different paces due to health needs.

Best Practices for Ethical AI in Evaluations

Implementing AI in performance reviews with an ethical approach involves a few best practices. First, establish clear criteria for when and how AI will be used in evaluations. This includes creating guidelines for human oversight, ensuring that every AI-assisted assessment is reviewed by a manager or HR specialist before it informs any final decisions. Transparency with employees is also critical—make sure they understand that AI is a supportive tool, not a replacement for human assessment, and provide clarity around how AI data contributes to their reviews.
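
One lightweight way to make those criteria explicit is a written policy that tooling can check against: which metrics AI may score, whether human sign-off is required before anything is finalized, and what employees are told. The sketch below uses hypothetical policy fields purely as an illustration.

```python
# Hypothetical evaluation policy, kept alongside the QA process documentation.
EVALUATION_POLICY = {
    "ai_may_score": ["handle_time", "resolution_rate"],
    "human_review_required_before_final": True,
    "employee_disclosure": "AI-generated metrics inform, but do not decide, your review.",
    "record_retention_years": 3,
}

def can_finalize(assessment, policy=EVALUATION_POLICY):
    """An assessment may be finalized only if the policy's conditions are met."""
    if policy["human_review_required_before_final"] and not assessment.get("human_reviewer"):
        return False
    # Reject assessments built on metrics the policy does not allow AI to score.
    return all(metric in policy["ai_may_score"] for metric in assessment["ai_metrics"])

print(can_finalize({"ai_metrics": ["handle_time"], "human_reviewer": None}))          # False
print(can_finalize({"ai_metrics": ["handle_time"], "human_reviewer": "hr_lead_02"}))  # True
```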

By implementing these best practices, companies can avoid common pitfalls associated with AI, such as unintentional bias or a lack of contextual accuracy. Done right, AI becomes a valuable tool that enhances performance management while upholding fairness and compliance with employment standards.

Want to learn more?

If you missed the live webinar, the recording is now available! Watch it here.

To explore how MaestroQA’s Copilot feature can help your team adopt AI responsibly, schedule a demo with us.
