
By Adam Bouka
Employers have long used reductions in force (“RIFs”) as a high-risk but familiar response to economic pressure, restructuring, or strategic change. Traditionally, employers evaluated RIF-related risk through relatively discrete lenses: compliance with the Worker Adjustment and Retraining Notification (“WARN”) Act, potential discrimination claims, and the adequacy of internal documentation.
Today, that approach may no longer be sufficient.
As employers increasingly rely on data-driven tools, including AI-assisted systems, in their RIF processes, those reductions are becoming more complex. They may now rest on structured, data-rich decision systems that can be reconstructed and challenged under multiple legal frameworks. At the same time, regulators—particularly in jurisdictions like California—are making clear that using these tools does not necessarily reduce liability exposure.
It could actually increase it.
The Old Model: RIF Risk in Silos
Traditionally, employers approached RIF planning through three largely independent workstreams:
- WARN compliance, focused on headcount thresholds and notice timing
- Discrimination risk, typically assessed after selections were made, using adverse impact analyses of the selection data
- Documentation, designed to articulate legitimate, non-discriminatory reasons
While each of these remains relevant, this framework reflects an earlier era—one in which decision-making was more discretionary: less structured, less data-driven, and harder to evaluate systematically.
How AI Changes RIF Risk
Employers increasingly rely on structured, data-driven inputs to make workforce decisions, including performance scoring systems, workforce analytics platforms, and AI-assisted decision tools. These tools are often adopted to promote efficiency and consistency. But as reliance on them expands, they do not simply change how decisions are made; they change the inputs to those decisions and how the decisions are later evaluated.
AI-assisted processes can generate defined inputs, structured outputs, and embedded weighting of criteria. This creates systems that can be reverse-engineered in litigation, allowing challenges not just to outcomes, but to how those outcomes were produced. At the same time, tools designed to promote consistency can amplify risk: a flawed input or weighting may affect an entire population, increasing exposure to disparate impact claims and even class-wide challenges. What might once have been isolated decisions can become systemic by design.
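To make the amplification point concrete, here is a minimal, hypothetical sketch of the kind of weighted scoring model such tools embed. The criteria, weights, and figures are invented for illustration only; the point is that one set of weights is applied uniformly, so a flawed weight or a biased input shifts the entire ranked population at once.

```python
# Hypothetical RIF retention-scoring sketch (illustrative only; the
# criteria and weights below are invented, not any vendor's model).

WEIGHTS = {
    "performance": 0.6,
    "skills_match": 0.3,
    # If absences proxy for protected status (e.g., disability or
    # family leave), this single weight biases every decision at once.
    "recent_absences": -0.1,
}

def retention_score(employee: dict) -> float:
    """Weighted sum of criteria; lower scores are selected for the RIF."""
    return sum(WEIGHTS[k] * employee[k] for k in WEIGHTS)

employees = [
    {"id": "A", "performance": 4.0, "skills_match": 3.5, "recent_absences": 2},
    {"id": "B", "performance": 3.8, "skills_match": 4.0, "recent_absences": 9},
    {"id": "C", "performance": 3.2, "skills_match": 3.0, "recent_absences": 1},
]

# Rank employees from lowest to highest retention score.
for e in sorted(employees, key=retention_score):
    print(e["id"], round(retention_score(e), 2))
```

The same explicitness is what makes such a process reconstructable in litigation: the weights, the inputs, and the resulting ranking are all recoverable artifacts.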
These risks are compounded by opacity and scale. Many AI systems—particularly vendor-provided tools—are difficult to explain, yet employers remain responsible for demonstrating that decisions are job-related, consistently applied, and free from unlawful bias. An inability to explain how a decision was made may itself become a liability. In addition, AI-assisted processes generate broader and more structured evidentiary records, including underlying data, rankings, and internal communications, expanding the scope of discovery. As a result, increased reliance on AI may not reduce legal risk—it could transform RIF decisions into structured systems that are more transparent, more easily reconstructed, and more vulnerable to scrutiny.
Litigation Is Catching Up: AI as Decision-Maker
As I discussed last year, courts are now confronting these systems directly. In Mobley v. Workday, Inc., a federal court allowed discrimination claims to proceed based on allegations that an AI-powered platform was not merely applying employer-defined criteria but participating in employment decisions. The court accepted an “agent” theory of liability, finding it plausible at the pleading stage that employers had effectively delegated traditional hiring functions—such as screening and rejecting candidates—to the system.
That development reinforces the same concern highlighted previously: employers cannot avoid liability by relying on third-party tools. Even where an AI system is vendor-provided, the employer remains responsible for its outcomes and for the information fed into it.
While Mobley arises in the hiring context, its implications extend beyond applicant screening. If an AI system is treated as part of the decision-making process itself, it cannot be characterized as a neutral tool. Instead, it becomes subject to the same scrutiny—and the same requirement that employers be able to explain, justify, and defend the outcomes it produces. That framing is particularly relevant in the RIF context, where employers increasingly rely on structured scoring, ranking, and analytics tools to identify employees for termination.
Regulation Is Catching Up: AI Use Triggers Obligations
Regulators are codifying these same concerns.
As discussed in prior guidance, California’s framework governing automated decision systems makes clear that AI tools used in employment—including in evaluation and termination decisions—are subject to the same anti-discrimination standards as traditional processes. Employers cannot treat these tools as insulated from liability or presumptively free of bias. They must be governed, auditable, and explainable.
Other jurisdictions are going further. Colorado’s Artificial Intelligence Act (SB 24-205), for example, imposes affirmative obligations on employers using “high-risk” AI systems—including those influencing employment decisions—to use reasonable care to prevent algorithmic discrimination. The law requires risk management policies, impact assessments, and oversight mechanisms designed to ensure accountability.
These developments reflect a broader shift: as reliance on AI expands, so does the expectation that employers can justify, audit, and defend how those systems are used.
Practical Guidance: Rethinking RIF Planning
Employers should assume that any system used to inform RIF decisions—including AI tools, rankings, or scoring models—will be discoverable and subject to scrutiny. That makes front-end diligence critical: conduct pre-decision impact and consistency reviews, and ensure the process is explainable. In jurisdictions like California, reliance on technology is not a shield—it may increase scrutiny—so employers must be prepared to defend both the process and the outcome.
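As one concrete example of a pre-decision impact review, the EEOC’s four-fifths rule is a common first-pass screen: if one group’s rate of a favorable outcome falls below 80% of the most favored group’s rate, the selection warrants closer review. Applied to a RIF, the favorable outcome is retention. The sketch below runs that screen on invented data; it is a heuristic, not a substitute for a statistical analysis conducted under counsel.

```python
from collections import defaultdict

def retention_impact_ratios(records):
    """Four-fifths-rule screen for a RIF.

    records: iterable of (group, retained) pairs.
    Returns {group: ratio of the group's retention rate to the
    highest group's}; ratios under 0.8 are conventionally flagged.
    """
    totals, retained = defaultdict(int), defaultdict(int)
    for group, was_retained in records:
        totals[group] += 1
        retained[group] += int(was_retained)
    rates = {g: retained[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}

# Invented example: 100 employees per group.
records = (
    [("under_40", True)] * 90 + [("under_40", False)] * 10 +
    [("40_plus", True)] * 68 + [("40_plus", False)] * 32
)
for group, ratio in retention_impact_ratios(records).items():
    flag = "  <- review" if ratio < 0.8 else ""
    print(f"{group}: impact ratio {ratio:.2f}{flag}")
```

Running a screen like this before selections are finalized, rather than after, leaves room to correct a flawed criterion while the process can still be defended.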
Documentation should also be built in real time. Records should clearly define selection criteria, reflect how they were applied, and tie decisions to legitimate business objectives—and specifically document any aspect of the process involving AI.
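One way to operationalize real-time documentation is to capture a structured record at the moment each selection is made. The fields below are an invented sketch, not a compliance checklist; the point is that the criteria as applied, the role of any AI tool, the human reviewer, and the business rationale are logged together.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RIFDecisionRecord:
    """Hypothetical real-time record of one RIF selection decision."""
    employee_id: str
    criteria_applied: dict       # criterion -> score, as actually used
    business_objective: str      # the legitimate rationale for the RIF
    ai_tools_used: list          # any AI/analytics tools involved
    ai_output_summary: str       # what the tool produced, verbatim
    human_reviewer: str          # who reviewed (or overrode) the output
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = RIFDecisionRecord(
    employee_id="E-1042",
    criteria_applied={"performance": 3.1, "skills_match": 2.8},
    business_objective="Consolidation of duplicated support functions",
    ai_tools_used=["vendor ranking platform"],
    ai_output_summary="Ranked 14th of 60 on retention score",
    human_reviewer="HR business partner, second-level review",
)
print(record.decided_at.isoformat())
```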
Takeaway
As employers increasingly rely on AI and data-driven tools, RIF-related risk is changing in kind—not just degree. In jurisdictions like California, reliance on automated systems does not reduce liability—it heightens the expectation that employers can explain, audit, and defend their decisions.
At the same time, emerging litigation suggests AI may be treated not merely as a tool, but as a participant in the decision-making process, making outcomes directly attributable to the employer. RIF decisions are no longer just business judgments—they are structured systems subject to reconstruction and challenge.
