Using Monitoring Data in Performance Reviews: Legal Risks and Best Practices
Using monitoring data in performance reviews is the practice of incorporating employee activity metrics from workforce tracking software into performance evaluation processes, subject to legal requirements under AI employment decision laws in New York City, Colorado, and Illinois that mandate bias audits and employee notice. Many employers treat monitoring data as objective evidence. It is not, and the legal frameworks now in force reflect that reality.
Why Monitoring Data Is Not the Objective Evidence Employers Assume It Is
Monitoring data in performance reviews feels objective because it is quantitative. An employee was active for 4.2 hours and idle for 3.1 hours. Those are numbers. Numbers feel factual. But the inference from those numbers to a performance conclusion requires a chain of assumptions that courts, regulators, and researchers are now examining closely: that computer activity is a valid proxy for work output, that the activity classification rules correctly identify productive versus non-productive behavior for this employee's specific role, that no legitimate factors (disability accommodation, caregiving responsibility, medical appointment) explain activity patterns the software flags as concerning, and that the data's automated output does not systematically disadvantage a protected class.
Each of these assumptions can fail. When they do, the monitoring data that felt objective becomes the center of a legal dispute. Three overlapping legal frameworks now govern when and how employers in the United States can use monitoring data in employment decisions, with New York City leading the way and other jurisdictions following rapidly.
NYC Local Law 144: The First Major AEDT Regulation
New York City Local Law 144, effective July 5, 2023, requires employers and employment agencies that use automated employment decision tools (AEDTs) in hiring, promotion, or performance evaluation decisions affecting New York City employees to complete an independent bias audit, publish the results publicly, and notify employees at least 10 business days before the tool is used in a decision affecting them.
What Qualifies as an AEDT Under Local Law 144
An AEDT under Local Law 144 is a computational process that substantially assists or replaces discretionary decision-making in employment decisions. The key phrase is "substantially assists or replaces." Raw monitoring data reviewed by a manager who exercises independent judgment is unlikely to qualify. An automated productivity score that the manager reviews and uses directly in setting a performance rating may qualify. An automated flag system that identifies employees as "underperforming" based on activity thresholds and routes them to performance improvement plans is more clearly within scope.
The New York City Department of Consumer and Worker Protection has issued guidance indicating that it will examine whether the tool meaningfully constrains or influences manager discretion. If a manager could not realistically deviate from the monitoring data output in practice, even if technically permitted to do so, the tool may qualify as an AEDT. HR leaders using monitoring software in New York City performance processes should have employment counsel assess their specific workflow against this standard.
The Bias Audit Requirement
Employers subject to Local Law 144 must commission an independent bias audit before using the AEDT and annually thereafter. The audit must analyze whether the tool produces disparate impact on the basis of sex, race/ethnicity, and intersectional combinations of those categories, and it must be conducted by an independent auditor, not the tool's vendor. Results must be published on the employer's website. For monitoring software used in performance decisions, this means analyzing whether the software's activity classifications, productivity scores, or flagging systems produce systematically different outcomes across protected class groups.
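To make the audit arithmetic concrete, here is a minimal sketch of one common impact-ratio formulation: each group's rate of a favorable outcome divided by the highest group's rate. The group labels, counts, and the definition of "favorable" are invented for illustration, and an independent auditor may apply a different methodology.

```python
# Sketch of an impact-ratio calculation. All inputs are illustrative,
# not real audit data.
def impact_ratios(outcomes):
    """outcomes: dict mapping group -> (favorable_count, total_count)."""
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    top = max(rates.values())
    # Each group's favorable-outcome rate relative to the best-off group.
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical: 45 of 60 in group_a and 24 of 40 in group_b received
# a "meets expectations" activity classification.
sample = {"group_a": (45, 60), "group_b": (24, 40)}
print(impact_ratios(sample))  # group_a: 1.0, group_b: 0.8
```

A ratio well below 1.0 for any group is the kind of result an auditor would examine further; the published audit must report these ratios for each required category.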
The Employee Notice Requirement
Employers must notify employees at least 10 business days before using an AEDT in a decision affecting them. The notice must identify what tool is being used and what data it relies on. For ongoing performance monitoring incorporated into annual reviews, legal counsel's guidance is needed on whether a standing notice at hire satisfies this requirement or whether decision-specific notice is required each time performance review data is compiled.
Colorado AI Act (SB 24-205): Disclosure and Appeal Rights
Colorado's Artificial Intelligence Act (SB 24-205), effective February 1, 2026, takes a different approach from NYC Local Law 144. Rather than requiring pre-use bias audits, Colorado requires deployers of high-risk AI systems used in consequential employment decisions to notify employees that a high-risk AI system is being used, disclose the purpose and nature of the system, and provide employees with the opportunity to appeal decisions made with AI assistance.
What Constitutes a High-Risk AI System Under Colorado Law
Colorado's Act defines a high-risk AI system as one that makes, or is a substantial factor in making, a consequential decision, including employment decisions such as hiring, firing, promotion, and performance evaluation. An AI-powered monitoring system that analyzes employee activity, generates performance-relevant output (productivity scores, anomaly flags, attendance pattern analysis), and feeds that output into a performance review system may qualify. The key question is whether the AI system's output is a substantial factor in a consequential employment decision.
The Appeal Right Requirement
Colorado's requirement that employees receive the opportunity to appeal AI-assisted decisions is meaningful for monitoring use cases. If a performance rating is lowered based in part on monitoring data processed by an AI system, the employee must have a defined process to contest that outcome. Employers should establish a documented appeal process before using AI-assisted monitoring data in performance decisions affecting Colorado employees.
Illinois and the National Trend
Illinois's AI Video Interview Act is narrower but signals a consistent legislative direction: state legislatures are regulating employer use of AI in employment decisions. The pattern across New York, Colorado, and Illinois suggests that employers who build governance frameworks for monitoring data in performance decisions now will be positioned for compliance as additional states enact similar requirements over the next two to three years.
Title VII Disparate Impact: The Discrimination Risk Built Into Activity Metrics
Title VII of the Civil Rights Act prohibits employment practices that, while facially neutral, produce a disparate impact on a protected class unless the employer can demonstrate business necessity. Monitoring data in performance reviews creates disparate impact risk through a mechanism called proxy discrimination: activity metrics that appear race-neutral and gender-neutral may correlate with protected characteristics in ways that produce systematically different performance scores across groups.
Common Proxy Variables in Monitoring Metrics
Continuous presence metrics measure how consistently an employee maintains active computer status throughout the workday. Employees who take more frequent breaks, step away from their computers to handle caregiving responsibilities, or have medical conditions affecting their ability to maintain continuous focus score lower on continuous presence metrics. Caregiving responsibilities correlate with gender (women disproportionately hold caregiving roles), and some medical conditions correlate with protected disabilities. A presence metric that disadvantages these patterns may produce disparate impact on women and employees with disabilities.
Standard hours scoring measures productivity during core business hours. Employees with religious observance requirements, school pickup schedules, or medical treatment appointments may perform their best work outside standard monitoring windows. Scoring that rewards standard-hours presence over total output quality can disadvantage employees based on religion, sex, or disability.
The Employer's Burden If Disparate Impact Is Shown
If an employee demonstrates that a monitoring-based performance metric produces a statistically significant disparate impact on a protected class, the burden shifts to the employer to demonstrate that the metric is job-related and consistent with business necessity. Demonstrating business necessity for a continuous presence metric in a knowledge work role, where output quality is the actual performance measure, is difficult. Employers should conduct group-level analysis of monitoring-based performance metrics before using them in reviews to identify potential disparate impact before it generates a claim.
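A group-level check of the kind described above can be sketched as a two-proportion z-test on, for example, the rate at which a metric flags employees in each group. The counts below are invented, and the significance threshold and choice of test are judgment calls for counsel and whoever performs the analysis.

```python
import math

# Hedged sketch: does the flag rate differ significantly between two
# groups? Counts are hypothetical.
def two_prop_z(flagged_a, n_a, flagged_b, n_b):
    p_a, p_b = flagged_a / n_a, flagged_b / n_b
    pooled = (flagged_a + flagged_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 50 of 100 employees in one group flagged as
# "low activity" versus 30 of 100 in another.
z, p = two_prop_z(50, 100, 30, 100)
print(f"z={z:.2f}, p={p:.4f}")  # roughly z=2.89, p=0.004
```

A small p-value does not prove discrimination; it indicates that the metric's outcomes differ by group more than chance would explain, which is the point at which the job-relatedness analysis becomes essential.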
ECPA and Data Repurposing: Using Monitoring Data Beyond Its Stated Purpose
The Electronic Communications Privacy Act (ECPA) governs employer monitoring of electronic communications. Many employer monitoring policies describe the purpose of monitoring in terms of security, productivity management, or policy compliance. Using that data in performance reviews may not align with the stated purpose, creating ECPA tension and, more practically, creating a basis for employee claims that the monitoring was used for an undisclosed purpose.
The more significant risk is contractual. Employee monitoring policies are often incorporated by reference into employment agreements or employee handbooks. If the policy states that monitoring data is used "for productivity and security purposes" and the employer then uses that data to support a performance improvement plan or demotion, the employee may have a reasonable argument that the data was used for a purpose outside the disclosed scope. This is not an ECPA claim in most cases, but it weakens the employer's position in any subsequent dispute over the performance action.
The Solution: Accurate Policy Language
Employers who intend to use monitoring data in performance reviews should say so in their monitoring policy. The policy should specifically state that activity data may be used in performance evaluation processes, that managers have access to individual activity data for performance management purposes, and that the data may be retained and referenced in performance reviews and employment decisions. Accurate policy language does not create new legal risk; it eliminates the repurposing argument that inaccurate or incomplete language creates.
Wrongful Termination Using Monitoring Data: The Chain of Custody and Corroboration Requirements
Monitoring data in wrongful termination cases faces two practical evidentiary challenges: authentication and corroboration. First, authentication: the employer must be able to demonstrate that the monitoring data is accurate, has not been manipulated, was collected from the correct employee's device, and represents the relevant time period. Digital records presented without authentication documentation are subject to challenge in arbitration and litigation. eMonitor maintains audit logs that support authentication, but employers must preserve these logs for the period during which an employment claim could be filed.
The Corroboration Principle
Employment law practitioners have developed a practical standard called the corroboration principle for monitoring-based terminations: monitoring data alone is not a sufficient basis for termination. This principle reflects both legal risk management and evidentiary reality. Monitoring data shows activity patterns. It does not explain them. A termination supported only by low activity scores, without manager observation of poor work quality, output deficiencies documented against clear standards, or client complaints, is vulnerable to challenge on the grounds that the monitoring data was misinterpreted.
The corroboration principle does not prohibit using monitoring data in termination decisions. It requires that monitoring data is accompanied by other evidence: manager observation notes contemporaneous with the monitoring period, documented performance conversations where the employee was given notice of the concern and an opportunity to respond, output quality evaluations, and where relevant, comparison data showing that similarly situated employees in comparable roles were not terminated for the same activity patterns.
Documentation Standards for Defensible Termination
Employers who use monitoring data in termination decisions should maintain: a written record of the monitoring data reviewed and the period it covers, documentation of who accessed the data and when, notes from any performance conversations where monitoring data was discussed with the employee, evidence that the employee was given notice of the performance concern and an opportunity to improve, documentation that a human decision-maker reviewed all evidence and made the termination decision, and evidence that similarly situated employees were treated consistently. This documentation package provides the corroboration that makes a monitoring-based termination defensible.
A Defensible Framework for Using Monitoring Data in Performance Reviews
The legal landscape for monitoring data in performance reviews is complex but navigable. The following framework reflects current best practices for employers seeking to use monitoring data without creating unnecessary legal exposure.
Step 1: Assess Whether Your Use Triggers AEDT Requirements
Work with employment counsel to evaluate whether your monitoring software's output, in the way you use it in performance decisions, qualifies as an AEDT under NYC Local Law 144. If you have New York City employees and use any automated scoring or flagging from your monitoring software in performance reviews, the analysis is necessary. If the review concludes that AEDT requirements apply, commission a bias audit before your next performance review cycle.
Step 2: Update Your Monitoring Policy to Reflect Actual Use
Review your monitoring policy language against the actual uses of monitoring data in your HR processes. If performance review use is not disclosed, add it. The disclosure should be specific: "Activity data collected through [software name] may be used by managers and HR in performance evaluation processes, including regular performance reviews, performance improvement plans, and employment decisions."
Step 3: Conduct a Group-Level Disparate Impact Review
Before incorporating monitoring metrics into performance ratings, analyze whether those metrics produce systematically different outcomes across gender, race, or disability status in your workforce. Group-level analysis does not require statistical significance testing for small organizations, but it should be documented. If disparate impact is identified, evaluate whether the metric is genuinely job-related before using it in performance decisions.
Step 4: Establish the Corroboration Standard in Your Performance Process
Document in your performance management procedures that monitoring data serves as one input among several and that employment decisions require corroborating evidence beyond monitoring data alone. Train managers on what corroboration means in practice and on how to document the basis for performance ratings in a way that does not rely solely on monitoring data outputs.
Step 5: Preserve Monitoring Records with Authentication Integrity
Establish a retention policy for monitoring data used in employment decisions that aligns with the statute of limitations for employment claims in your jurisdiction. In practice, this means at least three years (the FLSA limitations period for willful violations) and up to four years or longer for some state discrimination claims. Ensure that the data retention system maintains audit logs that can authenticate the records if they are needed in arbitration or litigation.
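A retention-deadline calculation under such a policy might look like the following sketch. The claim periods are placeholders to be replaced with the actual limitations periods counsel identifies for your jurisdiction.

```python
from datetime import date

# Illustrative only: retain monitoring records used in an employment
# decision through the longest potentially applicable limitations
# period. These periods are placeholders, not legal advice.
CLAIM_PERIODS_YEARS = {"flsa_willful": 3, "state_discrimination": 4}

def retention_deadline(action_date: date) -> date:
    longest = max(CLAIM_PERIODS_YEARS.values())
    try:
        return action_date.replace(year=action_date.year + longest)
    except ValueError:
        # action_date was Feb 29 and the target year is not a leap year
        return action_date.replace(year=action_date.year + longest, day=28)

print(retention_deadline(date(2026, 2, 1)))  # 2030-02-01
```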
How eMonitor Positions Activity Data for Defensible Use
eMonitor provides activity data as managerial information, not automated verdicts. The platform does not generate performance ratings, employee rankings, or automated flags that recommend employment decisions. Managers see activity trends, time allocation, and productivity patterns that require human interpretation. This design choice is both philosophically aligned with transparent monitoring and practically significant for the legal frameworks now in force.
Because eMonitor's output requires human interpretation before informing a performance decision, it is less likely to qualify as an AEDT under NYC Local Law 144's "substantially assists or replaces discretionary decision-making" standard than AI scoring tools that generate direct performance recommendations. This matters for employers seeking to use monitoring data in reviews without triggering the bias audit requirement.
At $3.50 per user per month, eMonitor gives HR leaders and general counsel a monitoring platform designed for the regulatory environment that now exists, not the one that existed five years ago when most monitoring software governance frameworks were written. The platform's activity data exports include authentication metadata that supports the chain of custody requirements for defensible employment decisions.
Frequently Asked Questions
Can employers use monitoring data in performance reviews?
Employers can use monitoring data in performance reviews, but specific legal requirements apply in some jurisdictions. NYC Local Law 144 requires independent bias audits and employee notice before automated employment decision tools are used in performance decisions. Colorado's AI Act (SB 24-205) requires disclosure and appeal rights. Using monitoring data as one factor among several, with human review required for all employment decisions, reduces legal exposure and produces more defensible performance evaluations.
What is NYC Local Law 144 and does it apply to monitoring?
NYC Local Law 144, effective July 5, 2023, requires employers to conduct independent bias audits and publish results before using automated employment decision tools (AEDTs) in hiring, promotion, or performance decisions affecting New York City employees. Monitoring software that generates automated productivity scores used in performance reviews may qualify as an AEDT. Employers must provide employee notice at least 10 business days before such a tool is used in a decision affecting them.
Does activity monitoring data qualify as an automated employment decision tool?
Activity monitoring data qualifies as an automated employment decision tool under NYC Local Law 144 when the software substantially assists or replaces discretionary decision-making in employment decisions. Raw activity logs reviewed by a manager exercising independent judgment are less likely to qualify. Automated productivity scores, performance rankings, or flagging systems that directly inform employment decisions are more likely to qualify and trigger bias audit and notice requirements.
What is the Colorado AI Act requirement for monitoring data?
Colorado's Artificial Intelligence Act (SB 24-205), effective February 2026, requires deployers of high-risk AI systems in employment decisions to disclose to employees that a high-risk AI system is being used and give employees the opportunity to appeal decisions made using AI. If monitoring software's activity scoring is used as an input to an AI performance decision system, disclosure and appeal rights are required for Colorado employees.
What is disparate impact risk from monitoring data in performance reviews?
Disparate impact occurs when a facially neutral employment practice disproportionately disadvantages a protected class. Monitoring metrics that penalize non-linear work patterns, frequent breaks, or non-standard hours can proxy for gender, disability, or caregiving status. If activity-based performance metrics produce systematically lower scores for women, employees with disabilities, or caregivers, Title VII and the ADA expose employers to discrimination liability even without discriminatory intent.
Can an employee be fired based on monitoring data alone?
Terminating an employee based solely on monitoring data creates significant legal risk. Terminations have been successfully challenged where the employer could not authenticate digital records, where monitoring data contradicted other evidence of performance, or where the termination had disparate impact on a protected class. The corroboration principle requires that monitoring data be supported by direct observation, output quality evidence, or manager assessment before it supports a termination decision.
How should monitoring data be presented in a performance review?
Monitoring data in performance reviews serves as one input among several, not a primary verdict. Present activity trends over a meaningful period rather than isolated incidents. Pair activity data with output quality evidence, manager observation, and client or peer feedback. Frame data as a starting point for conversation: "The activity data shows this pattern. What context should I understand about this period?" This approach is both legally safer and more effective as a performance management tool.
What documentation is needed when using monitoring data in a performance review?
Documentation for monitoring data in performance reviews requires records of what data was collected and the collection period, what automated scoring or analysis was applied, who reviewed the data and what conclusions they drew, what other evidence was considered alongside monitoring data, that employee notice was provided where required by applicable law, and that a human reviewer made the final employment decision rather than an automated system.
How does eMonitor help employers use data defensibly in reviews?
eMonitor positions activity data as conversation-starting information rather than automated verdict output. The platform provides raw activity trends, time allocation data, and productivity patterns that managers review and interpret with human judgment. eMonitor does not generate automated performance scores or employment decision recommendations, which reduces exposure under NYC Local Law 144's AEDT definition and supports the corroboration principle required for defensible performance decisions.
What notice must employers give employees about monitoring in performance decisions?
Under NYC Local Law 144, employers using automated employment decision tools must notify employees at least 10 business days before using such a tool in a decision affecting them. Under Colorado's AI Act, employees must be informed that a high-risk AI system is being used and given appeal rights. Several states, including New York, Connecticut, and Delaware, also require general monitoring notice at the start of employment, separate from the performance decision-specific notice requirements under AI employment laws.
Related Reading
General Counsel Legal Guide
The complete legal framework for employee monitoring in 2026.
Read guide
Monitoring False Positives
Why monitoring data misleads managers and how to reduce false flags.
Read article
Employee Monitoring Legal Guide 2026
State-by-state compliance requirements for monitoring programs.
Read guide