When you search for something on the internet, do you scroll through page after page of suggestions — or pick from the first few choices?
Because most people choose from the top of these lists, they rarely see the vast majority of the options, a pattern that creates the potential for bias in everything from hiring to media exposure to e-commerce.
In a new paper, Cornell University researchers introduce a tool they’ve developed to improve the fairness of online rankings without sacrificing their usefulness or relevance.
“If you could examine all your choices equally and then decide what to pick, that may be considered ideal. But since we can’t do that, rankings become a crucial interface to navigate these choices,” said computer science doctoral student Ashudeep Singh, co-first author of “Controlling Fairness and Bias in Dynamic Learning-to-Rank,” which won the Best Paper Award at the Association for Computing Machinery SIGIR Conference on Research and Development in Information Retrieval.
“For example, many YouTubers will post videos of the same recipe, but some of them get seen way more than others, even though they might be very similar,” Singh said. “And this happens because of the way search results are presented to us. We generally go down the ranking linearly and our attention drops off fast.”
The researchers’ method, called FairCo, gives roughly equal exposure to equally relevant choices and avoids preferential treatment for items that are already high on the list. This can correct the unfairness inherent in existing algorithms, which can exacerbate inequality and political polarization and curtail personal choice.
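The core idea the paper describes is a controller that nudges under-exposed items upward in the ranking until each item's accumulated exposure is proportional to its relevance. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes relevance is already known (the paper estimates it from biased click feedback), and the gain `lam`, the logarithmic position-bias model, and the variable names are all simplifying assumptions.

```python
import numpy as np

def position_bias(n):
    # Examination probability drops off quickly with rank (a DCG-style model).
    return 1.0 / np.log2(np.arange(n) + 2)

def fairco_rank(relevance, exposure, lam=0.01):
    # Exposure each item has received so far per unit of relevance.
    merit = exposure / np.maximum(relevance, 1e-9)
    # Error term: how far each item lags behind the most-exposed item.
    err = np.maximum(merit.max() - merit, 0.0)
    # Rank by relevance plus a correction that boosts under-exposed items.
    return np.argsort(-(relevance + lam * err))

relevance = np.array([0.51, 0.49])   # two nearly equally relevant items
exposure = np.zeros(2)
bias = position_bias(2)

for _ in range(10_000):
    ranking = fairco_rank(relevance, exposure)
    exposure[ranking] += bias        # the item at rank i gains bias[i] exposure

print(exposure / exposure.sum())     # ~[0.51, 0.49] rather than ~[0.61, 0.39]
```

Run long enough, the correction term makes the two items trade the top slot often enough that their exposure shares track their relevance shares, instead of the first-ranked item permanently capturing the lion's share of attention.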
“What ranking systems do is they allocate exposure. So how do we make sure that everybody receives their fair share of exposure?” said Thorsten Joachims, professor of computer science and information science, and the paper’s senior author. “What constitutes fairness is probably very different in, say, an e-commerce system and a system that ranks resumes for a job opening. We came up with computational tools that let you specify fairness criteria, as well as the algorithm that will provably enforce them.”
Ranking algorithms aim to surface the items most relevant to searchers, but because the vast majority of people choose one of the first few items in a list, small differences in relevance can lead to huge discrepancies in exposure. For example, if 51% of the readers of a news publication prefer opinion pieces that skew conservative, and 49% prefer essays that are more liberal, all of the top stories highlighted on the home page could conceivably lean conservative, according to the paper.
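To see how a small relevance gap snowballs, here is a back-of-the-envelope calculation. The 51/49 split mirrors the paper's example; the logarithmic attention drop-off is a common modeling assumption, not a figure from the paper.

```python
import numpy as np

# Ten candidate stories: five lean conservative and are preferred by 51% of
# readers, five lean liberal and are preferred by 49% (illustrative values).
relevance = np.array([0.51] * 5 + [0.49] * 5)

# Sorting purely by estimated relevance fills every featured slot with the
# marginally more popular side.
top_slots = np.argsort(-relevance)[:5].tolist()
print(sorted(top_slots))  # [0, 1, 2, 3, 4]: all five top stories lean conservative

# Attention falls off sharply with rank (a DCG-style drop-off model).
exposure = 1.0 / np.log2(np.arange(10) + 2)

conservative_share = exposure[:5].sum() / exposure.sum()
print(f"{conservative_share:.0%}")  # ~65% of total attention for a 2-point edge
```

A 2-percentage-point preference gap thus translates into a complete lockout of the top slots and roughly two thirds of all reader attention, which is the amplification effect the researchers set out to control.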
“When small differences in relevance lead to one side being amplified, that often causes polarization, where some people tend to dominate the conversation and other opinions get dropped without their fair share of attention,” Joachims said. “You might want to use it in an e-commerce system to make sure that if you’re producing a product that 30% of people like, you’re getting a certain amount of exposure based on that. Or if you have a resume database, you could formulate safeguards to make sure it’s not discriminating by race or gender.”
The research was partly supported by the National Science Foundation and by Workday.
Story Source:
Materials provided by Cornell University. Original written by Melanie Lefkowitz.