
AI in HR Investigations: Game-Changer or Pandora's Box?

Artificial intelligence (AI) may well be the phrase of 2025. It is written and spoken about everywhere, by everyone, and is marketed as the newest, shiniest, and most efficient tool for every industry to consider and put into practice.

But "artificial intelligence" is a broad term, and it grows broader still as different technologies are combined to create new AI systems. Given that ever-expanding definition, it is important to take inventory of the different types of AI and determine which, if any, is best suited to a workplace investigation.

Ways AI Can Help in Workplace Investigations

Data Management, Analysis, and Pattern Recognition

Deploying AI in a workplace investigation can streamline an often long and intensive process by organizing, synthesizing, and digesting large amounts of data. Because AI tools analyze documents as they sort them, an investigator can run searches to identify patterns, involved parties, and problematic documents, and to build visual timelines of the events at issue.

Interview Support and Information Synthesis

AI tools can also quickly analyze and cross-reference large volumes of documents to detect irregularities, including fake or altered evidence. Another benefit of using AI tools in an internal investigation is identifying additional witnesses who did not initially appear relevant but are flagged through analysis of a document's metadata, which shows, for example, who last edited the document. When preparing for interviews, AI tools can be supplied with a fact pattern to role-play witness responses and suggest follow-up questions, and they can take notes and transcribe the interview in real time.

Report Generation, Recommendations, and Documentation

When drafting the report, AI can be used to summarize the facts of a case or witness statements, draw conclusions, and cite specific evidence. After a conclusion is reached, AI can provide recommendations based on the facts and conclusions in the report and ultimately draft a proposed outcome letter with action plans based on the investigation, findings, and recommendations. AI tools may also offer creative recommendations an investigator would not have initially considered.

Concerns, Compliance Considerations, and Downfalls of AI

State Laws, Federal Laws, and Discrimination Liability

Many states have begun enacting legislation regulating the use of AI in the workplace, much of which requires employee consent before a program is deployed and regular maintenance of the programs once in use. The Equal Employment Opportunity Commission (EEOC), for example, has issued guidance reinforcing the principle that investigators can be held liable for discrimination even when it results from the use of AI. Investigators using AI in employment investigations also remain exposed to liability under the Fair Labor Standards Act (FLSA), the Family and Medical Leave Act (FMLA), and the Employee Polygraph Protection Act (EPPA), depending on the recommendation resulting from the investigation.

Biases Embedded in AI Programs

AI tools are created and maintained by humans, who inevitably carry implicit biases, and there is a significant risk that those biases become embedded in AI programs. Embedded bias could, for example, cause a tool to exclude outlier data, such as a complaint raised twenty years ago, perpetuating the problem. Beyond a program's built-in biases, prompts can themselves be written in a biased manner, producing biased outputs. Bias can also surface in other ways: an AI notetaking system may misinterpret the speech of a witness with a strong accent, and tools may reach inaccurate conclusions, favor one class over another in their recommendations, or treat witnesses unfairly.

AI Hallucinations and the Black Box Problem

AI tools can hallucinate, meaning a program outputs false information that appears true and accurate. Hallucinations occur most often when a deployer is searching for specific information, and they can stem from training biases or from the sheer complexity of the program. Compounding the problem is the "black box" issue: the lack of transparency about how a system arrives at its decisions makes it harder to address bias or to assess the validity of its outputs.

Data Breaches

Investigations involve sensitive information, from internal procedures to employee data, making data breaches a significant risk. An in-house program allows investigators to control privacy settings, whereas a third-party program may use the information it receives to continue training its models. Third-party vendors may also lack transparency about how data is stored and used, leaving many questions unanswered. Beyond data security, an employer needs to consider confidentiality, privilege implications, and potential waivers.

Mitigating the Risks

Identifying the Specific and Intentional Needs of the Investigator

Identifying the investigator's specific needs and the intended uses of AI in investigations allows the investigator to decide whether an in-house program, a third-party vendor, or a hybrid approach is best. The appropriate mitigation steps differ depending on which type of program is deployed. If a third-party vendor is used, for instance, investigators need to have in-depth conversations with the vendor and research its data security practices.

Conducting Tests on Programs

Bias audits are one test that can be performed on a program to limit disparate impacts; conducting them allows investigators to flag potential disparate impacts and reduce liability for violations of local, state, or federal laws. Another option is "parallel testing," in which a human and an AI program perform the same task and the results are compared for differences and similarities. Reviewing the discrepancies can prompt an investigator to reconsider bias in the tool's decisions or to evaluate the tool's efficiency.

Using Quality AI Tools

To mitigate the risk of hallucinations, investigators should use high-quality data; drawing on a diverse range of data also minimizes the possibility of bias and yields more effective outputs. Defining clear bounds on what the tool can and cannot do will further reduce, and ideally eliminate, hallucinations, which often result from a lack of constraints.

Policies, Record Keeping, and Education

Establishing internal policies governing the use of AI in investigations, and outlining ways to challenge AI findings, can help prevent overreliance on the technology. Once AI has been deployed, investigators should keep detailed notes and documentation of how it was used, which can help shield an investigator from liability in government investigations or lawsuits. The records should detail any audits performed, outcomes of central portal monitoring, security issues, and other functional issues identified by users. Educating users and investigators on AI tools is also crucial to mitigating the risks.

Human Oversight

The most important mitigation tool is human oversight. Consistently checking outputs for errors and interpreting them in the context of the investigation addresses many of the risks of deploying these tools, maintains the integrity of investigations, and ensures errors are spotted and corrected. Ultimately, human oversight keeps AI tools from making the "ultimate workforce decision."

Many questions remain unanswered about the use of AI in workplace investigations. The good news is that employers and investigators retain the autonomy to decide if, how, and when to deploy AI, and which investigations are best suited to incorporate it.

The Firm's attorneys are prepared to assist you in navigating the complexities surrounding the use of AI tools in workplace investigations. Please reach out to Chaitra Gowda, Alex Moylan, or any member of Baker Donelson's Labor & Employment Group, or any member of the Firm's AI Group.

Macy Hamlett, a summer associate at Baker Donelson, contributed to this article.
