Workplace Discrimination Lawsuits: Juries Won't Blame AI, They'll Blame You

Workplace discrimination claims remain one of the most significant areas of legal exposure for employers, from hiring to firing and everything in between. An employer's reliance on artificial intelligence (AI), especially in the hiring process, does not lessen this exposure. If anything, AI can increase it, particularly with respect to disparate impact claims. Before implementing AI tools in hiring, here is what you need to know.

An aggrieved employee often uses one of two theories to prove a discrimination claim: (1) disparate treatment or (2) disparate impact.

Disparate Treatment

Claims of disparate treatment—or treatment claims—involve specific, identifiable acts. "The company took an adverse action against me," an employee might claim, for example, "because of my disability," or because of another protected characteristic, such as race, color, religion, sex, age, national origin, veteran status, or genetic information. "The company intended to discriminate against me," the employee argues. "This was done on purpose." Treatment claims, in short, center on allegations of intentional discrimination.

Disparate Impact

Claims of disparate impact—or impact claims—center on allegations of unintentional discrimination. Where treatment claims focus on the intent behind an employer's actions, impact claims focus on the outcome. With impact claims, an employer's actions are often facially neutral: They are neither designed nor intended to discriminate against anyone, but the actions have a discriminatory impact on a group defined by a protected characteristic, or so an employee might allege. Say, for example, an employer hires only individuals over six feet tall. Height is not a protected characteristic, and the policy is facially neutral. Men, however, tend to be taller than women, so the policy would have a disparate impact on women. It would be discriminatory.

Impact claims are often filed on a class basis and, compared to treatment claims, can be far more expensive to defend. The reason is that, with impact claims, the employer's intentions, good or bad, are never the focal point. The focus is on the outcome: what happened because of the employer's actions, not why the employer did what it did. To evaluate that outcome, parties usually rely on statistical comparisons and hire experts to provide them.
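
To make the statistical side concrete, here is a minimal sketch of the kind of selection-rate comparison experts run, using the height example above. The numbers are hypothetical, and the simple two-group comparison is an assumption for illustration; one common benchmark, the four-fifths rule from the EEOC's Uniform Guidelines, generally treats a selection rate below 80% of the highest group's rate as evidence of adverse impact.

```python
# Hypothetical numbers illustrating a disparate impact comparison.
# Imagine a facially neutral six-foot height minimum applied to
# 100 male and 100 female applicants.

def selection_rate(hired: int, applicants: int) -> float:
    """Share of a group's applicants who were selected."""
    return hired / applicants

men = selection_rate(hired=48, applicants=100)    # 48% selected
women = selection_rate(hired=12, applicants=100)  # 12% selected

# Adverse impact ratio: lowest selection rate over highest.
# Under the four-fifths rule, a ratio below 0.80 is generally
# treated as evidence of disparate impact.
impact_ratio = min(men, women) / max(men, women)

print(f"Selection rate (men):   {men:.0%}")
print(f"Selection rate (women): {women:.0%}")
print(f"Impact ratio: {impact_ratio:.2f} (four-fifths threshold: 0.80)")
```

Here the ratio is 0.25, well under 0.80, which is why a height minimum like this would invite an impact claim even though no one intended to screen out women.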

Potential Risks in Using AI

Against this backdrop, employers have been turning to AI with ever-increasing urgency, especially for hiring and other HR functions. Employers, for example, often rely on AI for the initial review of résumés and applications. Whether pre-programmed or specifically designed for an employer, these tools then evaluate, select, and discard applicants, usually in the blink of an eye. In the eyes of the law, those actions are evaluated no differently than if people had performed them. It does not matter that AI was involved.

This is where AI can prove costly for employers. If a company uses AI and its use results in a disparate impact on, say, female applicants, the fact that AI was involved will have no bearing on the impact claims that follow. The use of AI is in no way a defense to discrimination claims.

Similarly, why the employer implemented AI tools has no bearing on impact claims. Even if the employer turned to AI to help eliminate discrimination in the workplace, it does not matter. With impact claims, the employer's intentions are not relevant; the focus is on what the employer did and on the outcome that followed.

Finally, it does not matter whether the AI was flawed or poorly programmed, or whether it worked perfectly. Once an employer uses the AI, the employer is responsible for the outcome. This is true even if the employer did not know exactly what the AI was evaluating. Think of using AI like making a hiring decision. If an employer "hires" the AI and puts it in a position to make employment decisions, like selecting and rejecting applicants, the employer is responsible for those decisions. No court or jury is going to blame the AI; they are going to blame the employer.

Takeaways

So, what can employers do to mitigate these risks? There is no one-size-fits-all answer. But if you or your company are considering an AI tool, here are some questions to ask:

  • What characteristics or data points will this tool be analyzing and considering?
  • Are those characteristics or data points protected by a workplace law, like Title VII, the Age Discrimination in Employment Act (ADEA), or the Americans with Disabilities Act (ADA)?
  • Have we run this tool against any data samples to evaluate whether it might result in a disparate impact on any protected group or class? (A sketch of this kind of check follows this list.)
  • What does any licensing or purchasing agreement say about indemnification or contribution in the event the company is sued over its use of this tool?
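
As to the third question, a pre-deployment check can start with something as simple as tallying the tool's pass rates by group on a labeled applicant sample. The sketch below is illustrative only: the screening function `ai_screen` and the data format are assumptions for this example, not any vendor's actual interface.

```python
# Minimal audit sketch, assuming a hypothetical vendor-supplied screening
# function `ai_screen(resume) -> bool` (illustrative; not a real API).
from collections import defaultdict

def audit_tool(ai_screen, sample_applicants):
    """Report the tool's pass rates by group for a labeled sample.

    `sample_applicants` is a list of (resume, group) pairs; the group
    label is used only for the audit and is never fed to the tool.
    """
    passed, total = defaultdict(int), defaultdict(int)
    for resume, group in sample_applicants:
        total[group] += 1
        if ai_screen(resume):
            passed[group] += 1

    rates = {group: passed[group] / total[group] for group in total}
    highest = max(rates.values())
    for group, rate in sorted(rates.items()):
        flag = "  <-- below four-fifths of the top rate" if rate < 0.8 * highest else ""
        print(f"{group}: pass rate {rate:.0%}{flag}")
```

A real audit would use representative data and statistical testing designed with counsel and, often, an outside expert; this sketch only shows why the question belongs on the list.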

These questions will not provide all of the answers – no set of questions could. But they will help flag important issues and mitigate the risks of using AI.

For additional information or if you would like assistance in reviewing company policies related to the use of AI, please contact the authors, Zachary B. Busey or Catherine A. Karczmarczyk, or any member of the Labor & Employment Group.
