November 2, 2024

Despite a growing perception that artificial intelligence-powered hiring tools are being used to discriminate against certain races and genders, the civil rights framework around employment discrimination may not be equipped to take on the technology.

AI platforms are well known for having diversity, equity, and inclusion ideology written into their algorithms. If a user asks most AI tools what they think of DEI, they will respond with a glowing report about how important the ideology is in the modern workplace and society. That ideological dedication could mean that, when it comes to using the tools for employment decisions, DEI may be in the driver’s seat.

“If not carefully designed or properly used, these tools — they could be used to discriminate,” Commissioner Keith Sonderling of the Equal Employment Opportunity Commission told the Washington Examiner. “The access to the data and the ability to act on it through AI allows discrimination to be scaled to the likes we’ve never seen before.”

DEI has become increasingly controversial in recent years and has taken significant public relations blows this year in particular — from the ouster of former Harvard President Claudine Gay to the security failures under former Secret Service Director Kimberly Cheatle to GOP criticisms of Vice President Kamala Harris as the new Democratic nominee for president.

While DEI programs became popular, particularly among corporations, during the 2020 George Floyd riots as a way to curry favor with supporters of a “racial reckoning” in American society, recent political pushback and the perception that DEI is inherently discriminatory against certain groups, such as white and Asian people or men, have resulted in many corporations paring back their dedication to the ideology.

However, even as corporations claim to be scaling down their DEI bureaucracies, many companies still use AI tools, with DEI ideology ingrained in their core functionality, to aid hiring decisions.

“Let’s say you do not want to hire a woman for a role (which of course is unlawful),” Sonderling said. “Before the AI technology, you have to go through every resume and say ‘is this a female sounding name?’ ‘Did they go to a women’s college?’ versus these algorithms that can look at millions of resumes in a millisecond, and separate them all quickly by race, sex, and ethnicity or really any other protected characteristic that the EEOC says you are not allowed to factor into a hiring decision and with a few clicks, eliminate all those applicants from the hiring pool.”

“You can also find candidates unlawfully using these tools to include them unlawfully in the hiring pool based on protected characteristics such as race, sex, national origin, etc., which is just as unlawful as excluding them — so it goes both ways,” the commissioner added.

A recent Freedom Economy Index study revealed that over 75% of employees or job seekers surveyed believe that AI tools are being used to screen out certain applicants who do not fit “preferred” DEI profiles being used to fulfill corporate diversity quotas.

Over 37% of respondents believe they have experienced DEI-related discrimination, and only 23.8% do not believe they have. Some who responded to the survey recalled personal experiences with DEI in hiring.

“I have a Hispanic last name but am white,” one respondent said. “I have encountered disappointment when I get on a Zoom or Teams video call and my interviewer sees I am not Hispanic.”

“Been rejected for employment for being a straight white man,” another job seeker said. “Almost every time I’d gotten an interview was when I opted out of identifying my race or sex on my application.”

While political ideology is not a protected characteristic under federal civil rights law, 63% of respondents feared that if they were found to be Republican or conservative, the information would have a negative impact on their careers.

Nearly 77% of the respondents said they would take less money in order to avoid a DEI-oriented workplace.

But there is a disconnect between the perception of discrimination and workers’ ability to seek justice through the civil rights framework provided by the EEOC. It is not clear whether any cases regarding AI-fueled discrimination are before the EEOC, as the commission’s law enforcement function does not allow disclosure of pending cases, and there are no known private causes of action regarding AI-DEI discrimination, either.

While Sonderling maintains that the EEOC will get to the bottom of any potential employment discrimination, one major issue with DEI discrimination through an AI tool is that most employees will never even know if they have been subjected to the technology in the first place.

“For the EEOC to take action if there is discrimination, employees have to complain. Generally, we do not start our own investigations. Employees have to basically know or feel like they’re discriminated against, and then bring a case to us where we would investigate,” Sonderling explained. “And the tricky part of this [is] without consent, without employees knowing that they’re being subject to these algorithms, it is unlikely they will be able to know they are potentially discriminated by AI. It’s very hard for them to bring a case of discrimination against an algorithm when they have no idea that they were even subject to an algorithm during their interview process.”

There are no federal consent requirements affording applicants or employees the knowledge that they are being evaluated in part with the assistance of an AI tool. Aside from a few states, such as New York, Colorado, and California, most states do not require companies to disclose that they are using the technology, either.

“Without such a requirement, it would be very hard for the employees or the EEOC to know that AI’s involved to then investigate if there is discrimination,” Sonderling said. “So when asked about enforcement, the biggest issue is disclosure.”

Disclosure requirements bring with them a whole host of other issues as well, including the likelihood of rampant litigation following any employment decision that does not go in the candidate’s favor.


“For companies and for AI vendors, it doesn’t necessarily make sense for them to voluntarily disclose what technology you’re using,” Sonderling said. “They could be setting themselves up for a lawsuit if the candidate who doesn’t get the job wants to blame the AI and not the actual business’s reasons for not making the hire.”

Whatever private lawsuits might occur challenging the use of AI, Sonderling explained that the EEOC’s purview is with the company itself, not the AI vendor, leaving all liability for hiring decisions at the feet of the employer.
