Bloomberg Law
Jan. 12, 2023, 10:30 AM
Updated: Jan. 12, 2023, 3:55 PM

EEOC Targets AI-Based Hiring Bias in Draft Enforcement Plan (1)

J. Edward Moreno
Reporter

The US Equal Employment Opportunity Commission will turn its enforcement attention to the artificial intelligence tools that employers use to hire workers and that can introduce discriminatory decision-making, according to a new agency playbook.

The EEOC’s draft Strategic Enforcement Plan, published in the Federal Register Tuesday, includes updates that for the first time take into account “employers’ increasing use of automated systems, including artificial intelligence or machine learning,” to make hiring and recruiting decisions. The commission also announced earlier this month that it would host a public hearing Jan. 31 on the use of AI tools in employment decisions.

The enforcement plan gives the agency’s attorneys a road map for action. It is informed in part by a series of listening sessions that took place last year, as well as comments submitted through the Federal Register. The five-member commission—which is currently down a member—will vote on a final version of the plan.

The document builds upon the previous SEP, adopted in 2018, adding “emerging and developing issues” like AI bias, discrimination related to the Covid-19 pandemic, and violations of the newly enacted Pregnant Workers Fairness Act, which requires employers to grant reasonable accommodations for pregnant employees.

The document also seeks to expand the agency’s focus on vulnerable workers, including those with intellectual or developmental disabilities and those with limited English proficiency.

Eye On AI

The EEOC is signaling in its draft SEP that it intends to enforce federal nondiscrimination laws equally, whether the discrimination takes place through old-fashioned recruiting or through automated tools, said Victoria Lipnic, a former Republican EEOC commissioner.

“Employers need to take note: if allegations of discrimination are made they are not going to be allowed to claim ‘it wasn’t me; the algorithm did it!’” she said in a statement.

According to a February 2022 survey from the Society for Human Resource Management, 79% of employers use AI for recruitment and hiring.

Employers may use AI tools to screen resumes for qualities and experiences that fit the role they need to fill. That may run afoul of nondiscrimination laws if the tools, either deliberately or inadvertently, reject candidates based on protected characteristics like age or gender.

The last time the EEOC formally weighed in on hiring tools was in 1978, when it issued guidelines establishing a “four-fifths rule,” which flags potential adverse impact when a hiring test’s selection rate for a protected group is less than 80% of the rate for the comparison group.
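As a rough illustration of the arithmetic (a minimal sketch, not the EEOC’s methodology; the function name and applicant counts below are hypothetical):

```python
def four_fifths_check(selected: int, applicants: int,
                      ref_selected: int, ref_applicants: int) -> bool:
    """Return True if a protected group's selection rate is at least
    four-fifths (80%) of the reference group's rate, per the 1978
    Uniform Guidelines rule of thumb. A False result suggests possible
    adverse impact; it is not a legal determination."""
    group_rate = selected / applicants          # protected group's selection rate
    ref_rate = ref_selected / ref_applicants    # comparison group's selection rate
    return group_rate / ref_rate >= 0.8

# Hypothetical numbers: 30 of 100 protected-group applicants advance (30%)
# vs. 50 of 100 in the comparison group (50%). The ratio is 0.6, below 0.8.
print(four_fifths_check(30, 100, 50, 100))  # False -> warrants closer scrutiny
```

Under those assumed numbers, the tool’s outcomes would fall below the four-fifths threshold and invite closer review.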

But a key enforcement barrier for the EEOC will be identifying cases of alleged discrimination caused by AI tools in the first place, according to Alex Engler, a fellow at the Brookings Institution. There is currently no federal requirement for employers to disclose the AI technology they use.

“There are a number of limitations to agency authorities around AI systems that require a Congressional change,” Engler said. “In a list of things you would need from Congress to meaningfully oversee algorithms, a disclosure requirement is one.”

The EEOC launched its first lawsuit in this space last year, suing English-language tutoring services company iTutorGroup for allegedly programming its online recruitment software to automatically reject older applicants.

According to the complaint, an applicant who was initially rejected realized they may have been discriminated against only after resubmitting an identical application with a more recent birth date and then being offered an interview.

“That’s one of the tricky parts of this area, because depending on the nature of the tool it may not be visible to employees,” said Jenn Betts, a shareholder at Ogletree Deakins.

The EEOC, alongside the Department of Justice, issued guidance in May indicating that employers have a responsibility to inspect artificial intelligence tools for disability bias and should have plans to provide reasonable accommodations.

The guidance specified that an employer’s use of algorithmic decision-making tools can violate the Americans With Disabilities Act if the system intentionally or unintentionally screens out an employee with an ADA-covered disability.

It gave the example of a chatbot used for hiring that screens out applicants who indicate while interacting with the AI that they have gaps in their employment history. If those gaps are related to a disability, this may be an ADA violation, according to the commission.

Other hiring tools, such as personality tests and camera sensors, can also introduce bias against workers with disabilities, according to advocates.

The EEOC guidance, along with the pending implementation of a New York City law that requires employers to conduct an independent audit of the automated tools they use, has led employers to take a closer look at their AI systems, according to Betts.

“There have been a couple developments over the last year that have, from a compliance and risk mitigation perspective, focused in-house counsel on the necessity to have really robust review pre-implementation and on an ongoing basis,” she said.

More Guidance Needed

But beyond that, guidance on the use of AI as it relates to other protected groups in the workplace has been sparse.

In response to the lack of direction on avoiding discrimination while using such tools, the Institute for Workplace Equality issued a report with advice for employers adopting these technologies. The report suggests employers be transparent with and seek consent from applicants when using AI tools, and audit those systems regularly.

“Given employers’ increasing use of automation and AI, the EEOC will need to continue stepping up in these areas, perhaps through additional guidance and certainly by watching for cases where employers are using AI tools,” said Lipnic, who took part in the creation of the report.

The public has until Feb. 9 to submit comments through the Federal Register. The EEOC may adjust the draft SEP with those comments in mind before the final panel vote.

The commission currently has a Democratic chair, Charlotte Burrows, and a 2-2 partisan split. President Joe Biden’s nominee to fill the fifth seat on the commission, Democrat Kalpana Kotagal, was not confirmed last year.

Both Burrows and Republican Commissioner Keith Sonderling have expressed an interest in tackling the use of biased AI tools by employers.

Sonderling has suggested using commissioner charges or directed investigations to target AI bias where victims may not be aware they were being evaluated by such tools.

In cases where victims do come forward, the EEOC’s general counsel’s office is tasked with deciding whether to bring a suit. Biden tapped civil rights attorney Karla Gilbride to serve as the agency’s general counsel last year, a post that’s been vacant and run by career staff since March 2021.

Biden renominated both Gilbride and Kotagal on Jan. 4, giving them another shot at confirmation in a new Congressional session with a majority Democratic Senate.

(Adds information about EEOC's May guidance on hiring and AI and a statement from the interview with Betts. Clarifies that the Institute for Workplace Equality issued the report on hiring technology.)

To contact the reporter on this story: J. Edward Moreno in Washington at jmorenodelangel@bloombergindustry.com

To contact the editors responsible for this story: Rebekah Mintzer at rmintzer@bloombergindustry.com; Genevieve Douglas at gdouglas@bloomberglaw.com
