
OpenAI’s GPT Is a Recruiter’s Dream Tool. Tests Show There’s Racial Bias

Bloomberg [unpaywalled] – “Recruiters are eager to use generative AI, but a Bloomberg experiment found bias against job candidates based on their names alone. Companies tend to hire the most at the start of the year, mainly because hiring budgets have been set and go into effect in the first quarter. “Everybody came back to work, and it’s been kind of insane,” Becker said in a recent interview. In her professional groups and in forums for human resources and recruiting, everyone is buzzing about the same thing: using new artificial intelligence tools to ease the workload.

In the race to embrace artificial intelligence, some businesses are using a new crop of generative AI products that can help screen and rank candidates for jobs — and some think these tools can even evaluate candidates more fairly than humans. But a Bloomberg analysis found that the best-known generative AI tool systematically produces biases that disadvantage groups based on their names. OpenAI, which makes ChatGPT, the AI-powered chatbot that can churn out passable song lyrics and school essays, also sells the AI technology behind it to businesses that want to use it for specific tasks, including in HR and recruiting. (The company says it prohibits GPT from being used to make an automated hiring decision.) Becker, who has tested some of these AI-powered hiring tools, said that she’s skeptical of their accuracy. OpenAI’s underlying AI model, which is developed using a vast number of articles, books, online comments and social media posts, can also mirror and amplify the biases in that data.

To understand the implications of companies using generative AI tools to assist with hiring, Bloomberg News spoke to 33 AI researchers, recruiters, computer scientists and employment lawyers. Bloomberg also carried out an experiment inspired by landmark studies that used fictitious names and resumes to measure algorithmic bias and hiring discrimination. Borrowing methods from these studies, reporters used voter and census data to derive names that are demographically distinct — meaning they are associated with Americans of a particular race or ethnicity at least 90% of the time — and randomly assigned them to equally qualified resumes. When asked to rank those resumes 1,000 times, GPT-3.5 — the most broadly used version of the model — favored names from some demographics more often than others, to an extent that would fail benchmarks used to assess job discrimination against protected groups. While this test is a simplified version of a typical HR workflow, it isolated names as a source of bias in GPT that could affect hiring decisions. The interviews and experiment show that using generative AI for recruiting and hiring poses a serious risk of automated discrimination at scale…”
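To make the experimental setup concrete, here is a minimal sketch of the procedure the excerpt describes: attach demographically distinct names to otherwise identical resumes, collect many "top candidate" picks from a model, and compare per-group selection rates against the EEOC's four-fifths rule, a widely used adverse-impact benchmark. The name pools, the resume placeholders, and the `rank_resumes` stub are illustrative assumptions, not Bloomberg's actual data, prompts, or code.

```python
import random
from collections import Counter

# Hypothetical name pools standing in for the demographically distinct
# names Bloomberg derived from voter and census data (names associated
# with one group at least 90% of the time).
NAME_POOLS = {
    "group_a": ["A. Example1", "A. Example2"],
    "group_b": ["B. Example1", "B. Example2"],
    "group_c": ["C. Example1", "C. Example2"],
    "group_d": ["D. Example1", "D. Example2"],
}

# Eight equally qualified resume bodies (placeholder text).
RESUMES = [f"<resume body {i}>" for i in range(8)]

def rank_resumes(labeled_resumes):
    """Stub for the model call. In the real experiment GPT-3.5 was asked
    to rank the resumes; here a 'top candidate' is picked at random so
    the sketch runs without an API key. Swap in a real model call to
    reproduce the test."""
    return random.choice(labeled_resumes)

def run_trials(n_trials=1000):
    """Run repeated rankings, counting which group's resume is picked top."""
    wins = Counter()
    for _ in range(n_trials):
        groups = list(NAME_POOLS)
        random.shuffle(groups)
        labeled = []
        # Pair each resume with a name from a shuffled group, so the name
        # (and its group association) is the only systematic difference.
        for resume, group in zip(RESUMES, groups * 2):
            name = random.choice(NAME_POOLS[group])
            labeled.append((group, f"{name}\n{resume}"))
        top_group, _ = rank_resumes(labeled)
        wins[top_group] += 1
    return wins

def four_fifths_check(wins, n_trials):
    """EEOC four-fifths rule: each group's selection rate should be at
    least 80% of the highest group's rate; falling below that threshold
    suggests adverse impact. Each group fields the same number of
    candidates per trial, so the constant cancels out of the ratio."""
    rates = {g: wins[g] / n_trials for g in NAME_POOLS}
    best = max(rates.values()) or 1
    return {g: (rate, rate / best >= 0.8) for g, rate in rates.items()}

if __name__ == "__main__":
    n = 1000
    wins = run_trials(n)
    for group, (rate, passes) in four_fifths_check(wins, n).items():
        print(f"{group}: top-pick rate {rate:.3f}, "
              f"four-fifths {'pass' if passes else 'FAIL'}")
```

With the random stub, every group's top-pick rate hovers near 25% and the four-fifths check passes; Bloomberg's finding was that, with GPT-3.5 doing the ranking, some groups' rates diverged enough to fail this kind of benchmark.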
