Business

Amazon proposes ethical guidelines on facial recognition software use

After months of public outcry over potential ethical abuses of Amazon’s facial recognition software Rekognition, Amazon proposes a set of guidelines to policymakers.

Outside groups testing Rekognition “have refused to make their training data and testing parameters publicly available”

In response to claims that Amazon’s Rekognition software could be used to abuse civil and human rights, Amazon came up with a set of five guidelines for policymakers to consider as potential legislation and rules in the US and other countries.

  1. Facial recognition should always be used in accordance with the law, including laws that protect civil rights.
  2. When facial recognition technology is used in law enforcement, human review is a necessary component to ensure that the use of a prediction to make a decision does not violate civil rights.
  3. When facial recognition technology is used by law enforcement for identification, or in a way that could threaten civil liberties, a 99% confidence score threshold is recommended.
  4. Law enforcement agencies should be transparent in how they use facial recognition technology.
  5. There should be notice when video surveillance and facial recognition technology are used together in public or commercial settings.
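The third guideline, a 99% confidence threshold for law-enforcement identification, maps to a simple filtering step on a service's per-match confidence score. The sketch below is a hypothetical illustration: the `Match` type and `filter_matches` helper are not part of any Amazon API, though Rekognition does return a similarity score per candidate match that could be filtered this way.

```python
# Hypothetical sketch: applying the 99% confidence threshold that Amazon
# recommends for law-enforcement identification. The Match type and
# filter_matches helper are illustrative, not from Amazon's API; real
# services such as Rekognition return a comparable per-match score.
from dataclasses import dataclass


@dataclass
class Match:
    subject_id: str
    confidence: float  # percent, 0-100


def filter_matches(matches, threshold=99.0):
    """Keep only candidate matches at or above the confidence threshold."""
    return [m for m in matches if m.confidence >= threshold]


candidates = [
    Match("person-a", 99.4),
    Match("person-b", 92.1),  # below threshold: a plausible false match
    Match("person-c", 99.9),
]
print([m.subject_id for m in filter_matches(candidates)])
# → ['person-a', 'person-c']
```

A lower default threshold (Rekognition's tests by outside groups reportedly used 80%) would let the 92.1% candidate through, which is precisely the kind of false match the guideline is meant to prevent.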

Last month Amazon shareholders filed a resolution demanding that Amazon stop selling facial recognition software Rekognition to government and law enforcement, citing concerns of potential civil and human rights abuses.

Read More: Shareholders tell Amazon to stop selling Rekognition facial recognition tech to govt

On Thursday Michael Punke, VP of Global Public Policy at AWS, responded to the claims stating, “In recent months, concerns have been raised about how facial recognition could be used to discriminate and violate civil rights. You may have read about some of the tests of Amazon Rekognition by outside groups attempting to show how the service could be used to discriminate.”

“Facial recognition is actually a very valuable tool for improving accuracy and removing bias”

The “tests of Amazon Rekognition by outside groups” include those by the American Civil Liberties Union (ACLU), which we have documented many times on The Sociable.

Read More: ACLU files FOIA request demanding DHS, ICE reveal how they use Amazon Rekognition

A study by the ACLU found that Rekognition incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime.

The members of Congress who were falsely matched with the mugshot database used in the test included Republicans and Democrats, men and women, and legislators of all ages, from all across the country.

Nearly 40% of Rekognition’s false matches in the test were of people of color, even though they made up only 20% of Congress.

Read More: ‘450 Amazon employees tell Bezos to kick Palantir off AWS’

“In each case we’ve demonstrated that the service was not used properly”

Disputing the accuracy of tests by outside groups such as the ACLU and MIT, Amazon explained on Thursday, “In each case we’ve demonstrated that the service was not used properly; and when we’ve re-created their tests using the service correctly, we’ve shown that facial recognition is actually a very valuable tool for improving accuracy and removing bias when compared to manual, human processes.”

Additionally, Amazon accused groups like the ACLU of withholding their research, claiming, “These groups have refused to make their training data and testing parameters publicly available, but we stand ready to collaborate on accurate testing and improvements to our algorithms, which the team continues to enhance every month.”

“New technology should not be banned or condemned because of its potential misuse. Instead, there should be open, honest, and earnest dialogue among all parties involved to ensure that the technology is applied appropriately and is continuously enhanced,” wrote Punke.

Read More: Big tech employees voicing ethical concerns echo warnings from history: Op-ed

“AWS dedicates significant resources to ensuring our technology is highly accurate and reduces bias, including using training data sets that reflect gender, race, ethnic, cultural, and religious diversity,” he added.

Amazon developed the proposed guidelines after months of talking to “customers, researchers, academics, policymakers, and others to understand how to best balance the benefits of facial recognition with the potential risks.”

Read More: Amazon, Palantir are aiding mass deportations of govt ‘undesirables’: report

“It’s critical that any legislation protect civil rights while also allowing for continued innovation and practical application of the technology.”

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
