
By Adam Bouka
Artificial Intelligence (AI) is transforming how companies find, evaluate, and hire talent—but it’s also raising red flags among regulators and courts. Two big developments in May 2025 show that HR teams must take a closer look at their hiring tools to avoid legal and compliance risks.
Let’s break it down.
What’s Happening in California?
California is preparing to implement new civil rights regulations that are likely to impact the use of automated-decision systems (ADS) in employment and other state-supported programs. These rules—expected to take effect as soon as July 1, 2025—aim to prevent discrimination based on protected characteristics such as race, gender, age, disability, or religion. While the regulations don’t ban AI tools outright, they make it unlawful to use any system, automated or not, that results in discriminatory outcomes.
What Counts as Discriminatory?
The new rules target AI tools that analyze candidates’ voices, facial expressions, personality, or availability—especially if those tools lead to biased outcomes.
Example: If an AI tool interprets not smiling during a video interview as a sign of unfriendliness, that could unfairly penalize candidates from cultures where smiling less is common.
Bottom line: If an AI tool results in different outcomes for people in protected groups, it could violate the law—even if there’s no intent to discriminate.
What About the Workday Lawsuit?
At the same time, a major collective action lawsuit against Workday, the popular HR tech provider, is moving forward in federal court. The claim? That its AI-powered hiring software discriminated against applicants over age 40.
- The lead plaintiff, Derek Mobley, is a Black man over 40 with anxiety and depression. He says he applied to 100+ jobs using Workday’s systems and was rejected every time.
- On May 16, 2025, a judge ruled that his age discrimination case can proceed as a nationwide collective action under the Age Discrimination in Employment Act (ADEA), potentially involving hundreds of thousands or even millions of job seekers.
This case is a wake-up call for employers: Even if you didn’t build the AI tool yourself, you can still be liable for the discriminatory impact of third-party algorithms used in your hiring process.
What Should HR Teams Do Now?
Whether you’re in California or not, these developments show that AI compliance is now an HR priority. Here’s your action plan:
1. Review Your Tools
Audit your hiring systems—especially those involving AI. Do they analyze resumes, screen video interviews, or give “fit scores”? If yes, ask for proof they’re bias-tested.
2. Demand Transparency from Vendors
If you use third-party platforms like Workday, ask for:
- Documentation of bias testing
- Clear explanations of how decisions are made
- Contracts that protect you from legal risk
3. Keep a Human in the Loop
Don’t let AI make the final call. Ensure someone in HR reviews and can override automated decisions.
4. Track Outcomes
Analyze hiring data regularly. Are you seeing unexplained gaps by age, race, or gender? These could be signs of disparate impact, a legal red flag (see the sketch after this list for one simple way to check).
5. Form an AI Governance Team
Create a cross-functional team (HR, legal, IT) to set policies, vet systems, and monitor ongoing use of AI in employment.
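To make step 4 concrete: if your applicant-tracking system can export outcomes by group, even a short script can surface the kind of gap regulators look for. Below is a minimal sketch in Python of the EEOC’s “four-fifths rule” heuristic, under which a group whose selection rate falls below 80% of the highest group’s rate is a common signal of potential disparate impact. The data, group labels, and function names here are invented for illustration; a real adverse-impact audit would also involve statistical significance testing and legal review.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute the selection rate per group from (group, selected) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def four_fifths_check(rates, threshold=0.8):
    """Flag any group whose rate is below `threshold` of the top group's rate."""
    top = max(rates.values())
    return {g: (r, r / top, r / top < threshold) for g, r in rates.items()}

# Hypothetical screening outcomes: (age_band, advanced_past_screen)
records = [
    ("under_40", True), ("under_40", True), ("under_40", True), ("under_40", False),
    ("40_plus", True), ("40_plus", False), ("40_plus", False), ("40_plus", False),
]

for group, (rate, ratio, flagged) in four_fifths_check(selection_rates(records)).items():
    print(f"{group}: rate={rate:.2f}, ratio to top={ratio:.2f} -> {'REVIEW' if flagged else 'ok'}")
```

On this toy data, the over-40 group advances at one-third the rate of the under-40 group, well below the 0.8 threshold, so the script flags it for review. A flag like this isn’t proof of discrimination, but it tells you where to look before a regulator or plaintiff does.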
Why It Matters
California’s regulations and the Workday lawsuit are just the beginning. With the federal government scaling back enforcement, states and private lawsuits are picking up the slack. This means more legal exposure for companies using AI—especially if they’re not watching closely.
HR isn’t just a user of these tools anymore. You’re now the first line of defense against AI-driven bias. AI can help you hire better, faster—but only if it’s used responsibly and fairly. Take these changes seriously, get ahead of the curve, and make sure your hiring process is both efficient and equitable.