
How Employers Can Begin Preparing for the EEOC's Focus on AI

Diversity Matters Newsletter

For years, employers have increasingly automated recruiting and hiring using artificial intelligence (AI) and other algorithmic tools. While such automation adds value, researchers have cautioned that the technology may produce biased or discriminatory results, largely because the data fed into it (e.g., search terms, traits of ideal candidates) may reflect past biases and unlawful discrimination. We previously wrote about these concerns in a past edition. Tellingly, the EEOC has examined the issues of people analytics, big data, and AI in hiring and other employment decisions since 2016, even holding a public meeting on the implications of big data in the workplace. These and other actions demonstrate the EEOC's early concern that systemic discrimination issues might arise from the use of AI and big data.

The pandemic, along with widely reported problems such as facial detection software misidentifying Black women's faces and algorithms producing biased results, has highlighted potential concerns underlying these technologies. Social distancing during the pandemic required remote work and drove greater use of AI and related technology throughout the employment process, including interviewing, recruiting, and hiring. The use of such technologies is expected to continue to increase. Indeed, in December 2020, ten United States senators sent a joint letter to the chair of the EEOC inquiring about the agency's "oversight authority for hiring technologies," noting that as businesses begin to reopen in accordance with COVID-19 guidance, "some companies will seek to hire staff more quickly" and "are likely to turn to technology to manage and screen large numbers of applicants to support a physically distant hiring process." The senators went on to note that "Black and Latino workers are experiencing significantly higher unemployment rates than their white counterparts," with the "gap between Black and white workers [being] the highest it's been in five years." The senators indicated that the Commission is tasked with ensuring that these hiring technologies do not act as "built-in headwinds for minority groups," and explained that there should be proactive efforts to effectively oversee their use.

Subsequently, on October 28, 2021, EEOC Chair Charlotte A. Burrows announced that the Commission was "launching an initiative to ensure that artificial intelligence (AI) and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws that the agency enforces."1 Notably, Title VII of the Civil Rights Act of 1964 prohibits facially neutral policies and procedures that disproportionately screen out a group because of race, color, religion, sex, or national origin, and the Americans with Disabilities Act provides similar protections for individuals with disabilities. The initiative will examine how these technologies are being applied during recruiting and hiring and will provide guidance not only to employers but also to applicants, employees, vendors, and technology developers.

As part of these efforts, the EEOC is hosting listening sessions to gather more information on how these technologies are used and how they could adversely impact others, most recently focusing a session on the impact on people with disabilities. As the EEOC continues its efforts, it is important for employers using these technologies to be proactive. It will not be enough to simply say you relied on the vendor's or software developer's representations about what they are doing to prevent potential violations of federal, state, or local employment laws. Employers should create measures to ensure that vendors are vetted and that the technologies (in whatever form) used for recruiting, interviewing, hiring, testing, or other aspects of the employment relationship are validated and audited for potential biases and legal concerns. In short, employers should ensure the technologies are actually doing what they are supposed to do in a lawful manner. This includes having a working knowledge of which algorithms are being used; understanding what information is being used, its source, and how it is used throughout the process; and determining whether there are any concerning patterns in the results (e.g., are certain groups always screened out, are people who live in certain zip codes always screened out, is the candidate pool diverse?). Employers should also continue to train human resources and management employees on best practices when navigating the hiring process to add additional layers of bias interrupters. This training might include DEI-related topics such as inclusive leadership, interview techniques, communication skills, reducing the influence of implicit bias, and how to properly use the technologies.
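One common first-pass check for the kind of screening pattern described above is the "four-fifths rule" of thumb from the EEOC's Uniform Guidelines on Employee Selection Procedures: a group's selection rate below 80 percent of the highest group's rate is often treated as initial evidence of adverse impact. The Python sketch below is purely illustrative, not legal advice or a substitute for a proper validation study; the group names and counts are hypothetical.

```python
# Illustrative adverse-impact check using the four-fifths rule of thumb.
# All group labels and numbers below are hypothetical examples.

def four_fifths_check(outcomes):
    """outcomes: {group: (selected, applicants)}.
    Returns {group: (selection_rate, passes_four_fifths)} where a group
    "passes" if its rate is at least 80% of the highest group's rate."""
    rates = {g: s / a for g, (s, a) in outcomes.items()}
    benchmark = max(rates.values())
    return {g: (rate, rate / benchmark >= 0.8) for g, rate in rates.items()}

# Hypothetical outcomes from an automated resume-screening tool
outcomes = {
    "group_a": (48, 100),  # 48% of group_a applicants passed the screen
    "group_b": (30, 100),  # 30% of group_b applicants passed the screen
}

results = four_fifths_check(outcomes)
# group_b's ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged
```

A ratio below 0.8 is only a signal to investigate further, not a legal conclusion; statistical significance and practical context also matter, which is why the article recommends formal validation and auditing rather than a single metric.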

As the use of AI and related algorithms and technologies continues to increase, employers should be diligent about ensuring there are multiple measures in place to overcome claims that an organization's use of these technologies is discriminatory or otherwise adversely impacting groups of people based on their protected traits. This goal cannot be separated from the need to invest in the company's human capital and to continue human interaction, the importance of which was elevated by the pandemic. To discuss potential training, best practices, and policy and procedure considerations that are best for your company, feel free to contact the author or another member of Baker Donelson's Labor & Employment team.

1 https://www.eeoc.gov/newsroom/eeoc-launches-initiative-artificial-intelligence-and-algorithmic-fairness
