
Report – disparate impact theory and bias rooted in algorithms

Lauren Kirchner, a senior reporting fellow at ProPublica, writing in The Atlantic – When Discrimination Is Baked Into Algorithms: “…Over the past several decades, an important tool for assessing and addressing discrimination has been the “disparate impact” theory. Attorneys have used this idea to successfully challenge policies that have a discriminatory effect on certain groups of people, whether or not the entity that crafted the policy was motivated by an intent to discriminate. It’s been deployed in lawsuits involving employment decisions, housing, and credit. Going forward, the question is whether the theory can be applied to bias that results from new technologies that use algorithms. The term “disparate impact” was first used in the 1971 Supreme Court case Griggs v. Duke Power Company. The Court ruled that, under Title VII of the Civil Rights Act, it was illegal for the company to use intelligence test scores and high school diplomas—factors which were shown to disproportionately favor white applicants and substantially disqualify people of color—to make hiring or promotion decisions, whether or not the company intended the tests to discriminate. A key aspect of the Griggs decision was that the power company couldn’t prove their intelligence tests or diploma requirements were actually relevant to the jobs they were hiring for…

So how will the courts address algorithmic bias? From retail to real estate, from employment to criminal justice, the use of data mining, scoring software, and predictive analytics programs is proliferating at an exponential rate. Software that makes decisions based on data like a person’s zip code can reflect, or even amplify, the results of historical or institutional discrimination. “[A]n algorithm is only as good as the data it works with,” Solon Barocas and Andrew Selbst write in their article “Big Data’s Disparate Impact,” forthcoming in the California Law Review. “Even in situations where data miners are extremely careful, they can still effect discriminatory results with models that, quite unintentionally, pick out proxy variables for protected classes.”
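For readers unfamiliar with how disparate impact is typically quantified, the sketch below illustrates the “four-fifths rule,” a rough screen long used by U.S. enforcement agencies: if a group’s selection rate falls below 80% of the highest group’s rate, the outcome is flagged as potentially adverse. This is not from Kirchner’s article or the Barocas and Selbst paper; the function names and the numbers are hypothetical, chosen only to make the concept concrete.

```python
# Minimal, illustrative sketch of the "four-fifths" disparate impact screen.
# All group names and counts below are hypothetical.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / total

def disparate_impact_ratio(rates: dict) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes produced by an automated screening model, by group.
outcomes = {
    "group_a": {"selected": 48, "total": 100},
    "group_b": {"selected": 30, "total": 100},
}

rates = {g: selection_rate(v["selected"], v["total"]) for g, v in outcomes.items()}
ratio = disparate_impact_ratio(rates)

print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
print("Potential adverse impact" if ratio < 0.8 else "Passes four-fifths screen")
```

In this made-up example the ratio is 0.63, below the 0.8 threshold, which is the kind of statistical disparity a plaintiff might point to even when no protected attribute appears anywhere in the model’s inputs, for instance when a zip code acts as a proxy.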
