Web

NIST research effort to measure bias in results we get from search engines: ‘Fair Ranking’

NIST wants to correct a bias problem in info retrieval that can have ‘real-world consequences’

The National Institute of Standards and Technology (NIST) launches a research effort to measure bias in search engine results.

“The perfect search engine would be like the mind of God” — Sergey Brin

Search engines based on artificial intelligence algorithms often inherit bias from previous searches, which NIST says can have ‘real-world consequences’.

“It’s now recognized that systems aren’t unbiased. They can actually amplify existing bias because of the historical data the systems train on,” said Ellen Voorhees, a NIST computer scientist.

“The systems are going to learn that bias and recommend you take an action that reflects it.”

As part of its long-running Text Retrieval Conference (TREC), which is taking place this week at NIST’s Gaithersburg, Maryland, campus, NIST has launched the Fair Ranking track this year, an incubator for a new area of study that aims to bring fairness to search research.

The track was proposed and organized by researchers from Microsoft, Boise State University, and NIST, who hope to find strategies for removing bias by first developing sound ways to measure the amount of bias in data and search techniques.

“We would like to develop systems that serve all of their users, as opposed to benefiting a certain group of people,” said Asia Biega, a postdoctoral researcher at Microsoft Research Montreal and one of the track’s co-organizers.

“We are trying to avoid developing systems that amplify existing inequality.”

Another problem the effort proposes to solve is the appearance of the same answers at the top of the list every time a particular search term is run.

Search Neutrality

NIST’s efforts lean towards supporting the principle of ‘search neutrality’, a concept that came into popular use in the context of the Internet around 2009.

The New York Times defined it as:

“A principle that search engines should have no editorial policies other than that their results be comprehensive, impartial and based solely on relevance.”

Under the measure proposed by NIST’s research track, a ‘fair algorithm’ wouldn’t generate the same list in the same order in response to a query, but would instead give other articles a chance to appear.

This means the same query will surface both well-known and lesser-known items in the list. As NIST says, “It would contain answers relevant to the searcher’s needs, but it would vary in ways that would be quantifiable.”
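As a rough illustration of the idea, and not a description of NIST’s actual methodology, the sketch below shows a stochastic ranker that shuffles items whose relevance scores are nearly tied, so that repeated runs of the same query expose different items near the top. The function names, the tie threshold, and the all-equal relevance scores are assumptions made purely for illustration.

```python
import random
from collections import defaultdict

def stochastic_rank(docs, scores, tie_eps=0.01):
    """Rank docs by score, but randomly permute documents whose scores
    are within tie_eps of each other, so repeated queries give
    near-equally relevant items a chance to appear near the top."""
    ordered = sorted(docs, key=lambda d: scores[d], reverse=True)
    ranking, group = [], [ordered[0]]
    for doc in ordered[1:]:
        if scores[group[-1]] - scores[doc] <= tie_eps:
            group.append(doc)          # still within the tie group
        else:
            random.shuffle(group)      # break the tie randomly
            ranking.extend(group)
            group = [doc]
    random.shuffle(group)
    ranking.extend(group)
    return ranking

# Hypothetical example: 100 equally relevant results for one query.
docs = [f"dentist_{i}" for i in range(100)]
scores = {d: 1.0 for d in docs}
top5_counts = defaultdict(int)
for _ in range(10_000):
    for d in stochastic_rank(docs, scores)[:5]:
        top5_counts[d] += 1
# Each document lands in the top five roughly 500 times out of 10,000.
```

Over many repeats, each equally relevant item reaches the first handful of positions about equally often, and how far those counts deviate from an even split is one simple way the variation could be quantified.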

Voorhees admits, though, that this can’t be the only criterion for determining fairness, and that a single research project isn’t enough to solve such a broad societal problem. However, she says, it’s a start.

“It’s important for us to be able to measure the amount of bias in a system effectively enough that we can do research on it,” she said. “We need to measure it if we want to try.”

What Skeptics Say

There has been some skepticism about search neutrality. Skeptics argue that making search engines behave ‘neutrally’ will not produce the desired goal of neutral search results.

After all, the best search results are often not just the most prestigious and renowned, but also the most useful. In other words, the inherited bias in a search often works in most users’ favor.

For example, someone looking for the best dentist in town wants to know the ones that are most renowned, and therefore most searched for, as well as geographically nearby. A searcher with a toothache cannot be expected to want a neutral search.

James Grimmelmann, the author of “Internet Law: Cases and Problems,” wrote in an essay:

“Search is inherently subjective: it always involves guessing the diverse and unknown intentions of users. Most of the common arguments for search neutrality either duck the issue or impose on search users a standard of ‘right’ and ‘wrong’ search results they wouldn’t have chosen for themselves.”

“Search engines help users avoid the websites they don’t want to see; search neutrality would turn that relationship on its head. As currently proposed, search neutrality is likely to make search results spammier, more confusing, and less diverse,” he adds.

Only the Best Answers at the Top

However, NIST argues that this approach causes a problem when there are too many worthwhile answers. Very few searchers look beyond the first page, which buries the rest of the results that could have been worth looking into.

“The results on that first page influence people’s economic livelihood in the real world,” Biega said. “Search engines have the power to amplify exposure. Whoever is on the first page gets more.”

So, going back to the dentist example: if there are a hundred equally good dentists in town, every searcher will only see the same five on the first page, so the rest will never get much business even though they offer the same quality.
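To put a rough number on that concentration, here is a minimal calculation under a standard position-bias assumption from the ranking literature (the geometric drop-off rate of 0.5 is an illustrative choice, not a figure from NIST or the article):

```python
# Assume the chance a searcher examines the result at rank k decays
# geometrically: exposure(k) = GAMMA ** k, with rank 0 at the top.
GAMMA = 0.5  # illustrative drop-off rate

def exposure(rank, gamma=GAMMA):
    return gamma ** rank

total_exposure = sum(exposure(k) for k in range(100))  # 100 equally good dentists
top5_exposure = sum(exposure(k) for k in range(5))
print(f"Share of exposure going to the top 5: {top5_exposure / total_exposure:.1%}")
# Prints roughly 96.9% under this assumed drop-off rate.
```

Under this assumption, five results absorb nearly all of the attention even though the other 95 dentists are, by construction, just as good, which is exactly the kind of inequality a fair ranking measure is meant to capture.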

NIST will make the official call for participation in the Fair Ranking track in December, ahead of the 2020 TREC, which will take place November 18-20, 2020, in Gaithersburg, Maryland.

Navanwita Sachdev

An English literature graduate, Navanwita is a passionate writer of fiction and non-fiction as well as being a published author. She hopes her desire to be a nosy journalist will be satisfied at The Sociable.
