Last week, within 24 hours of each other, both Amazon and Microsoft pledged not to sell facial recognition tools to US police departments, acknowledging the concerns of researchers and activists who say the technology is biased and has grave potential for misuse. Their statements came just days after IBM announced, in a letter to Congress, that it would no longer offer general-purpose facial recognition software.
The decisions follow growing pressure on firms to respond to the death in police custody of George Floyd. Facial recognition technology has been criticised for some time over potential bias, with studies showing that most algorithms are more likely to wrongly identify the faces of black people and other minorities than those of white people.
“The move by Amazon, as sudden as it appears, comes after years of pressure by civil society groups, investors, and other advocates calling on the company to stop selling Rekognition to law enforcement,” said Isedua Oribhabor, US Policy Analyst at Access Now.
In the letter to Congress, IBM’s CEO Arvind Krishna said the “fight against racism is as urgent as ever”, and set out three areas where the company wanted to work with Congress: police reform, responsible use of technology, and broadening skills and educational opportunities.
“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. He also said the company would no longer develop or research the technology.
“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
Facial recognition software has improved greatly over the last decade thanks to advances in artificial intelligence. At the same time, the technology — because it is often provided by private companies with little regulation or federal oversight — has been shown to suffer from bias along lines of age, race, and ethnicity, which can make the tools unreliable for law enforcement and security and ripe for potential civil rights abuses.
In 2018, research by Joy Buolamwini and Timnit Gebru revealed for the first time the extent to which many commercial facial recognition systems (including IBM’s) were biased. This work and the pair’s subsequent studies led to mainstream criticism of these algorithms and ongoing attempts to rectify bias. The technology has also come under fire for its role in privacy violations.
Also in 2018, the American Civil Liberties Union found that Rekognition, Amazon’s facial recognition software, incorrectly matched 28 members of Congress to faces picked from a database of 25,000 public mugshots.
Another company, Clearview AI, came under heavy scrutiny earlier this year when it was discovered that its facial recognition tool, built with more than 3 billion images compiled in part from scraping social media sites, was being widely used by private sector companies and law enforcement agencies. Clearview has since been issued numerous cease and desist orders and is at the centre of a number of privacy lawsuits. The European Data Protection Board (EDPB) said last week that using the facial recognition technology developed by Clearview could be illegal in the EU.
Facial recognition and other surveillance software pose serious risks for human rights including freedom of assembly and privacy, some of which can be mitigated only by comprehensive and robust privacy and data protection laws. Yet in many countries, the right to privacy is often overlooked as governments introduce controversial measures to limit the transmission of the coronavirus.
This week, Norway decided to halt its COVID-19 track and trace app and delete all data collected so far, after the Norwegian Data Protection Authority criticised the app, which was run by the Norwegian Institute of Public Health (NIPH).
In the UK, health secretary Matt Hancock has rolled out a similar app, which centralises data collection and does not allow the collected data to be deleted. The British NGO Big Brother Watch stated that the government’s contact tracing strategy “raises many questions and offers few answers.”
“Why does the UK government, unlike other democracies, want a state-controlled contact tracing app? Why do they want a centralised database? Why are Amazon, Google and Microsoft handling the app data?”