Like all biometric surveillance technologies, facial recognition threatens our private lives. Recently, it has mainly been denounced for its racist bias. But by concentrating criticism on its discriminatory biases, do we not risk seeing the technology simply improved, and thus made unavoidable? Meanwhile, some organisations are campaigning for its outright prohibition.
The end of 2020 was marked by a series of warnings about the shortcomings of artificial intelligence, particularly with regard to facial recognition. Increasingly used in the field of security, this technique makes it possible to identify people by analysing their facial features. As we have already mentioned here, several studies accuse it of racist bias, showing, for example, that a black woman is far more likely to be misidentified by these systems than a white man.
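To make concrete how such misidentifications can happen, here is a minimal, purely illustrative sketch (not taken from any system mentioned in this article) of the matching step at the heart of most facial recognition pipelines: each face is reduced to a numeric "embedding" vector, and two faces are declared to belong to the same person when their vectors are similar beyond a threshold. The embedding size, the threshold value and the function names are our own assumptions.

```python
# Illustrative sketch only: how a typical face recognition "match"
# decision works once faces have been converted into embeddings.
# The embedding size (128), threshold (0.6) and names are assumptions,
# not the workings of any specific product discussed in this article.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe: np.ndarray, candidate: np.ndarray,
             threshold: float = 0.6) -> bool:
    """Declare a match when similarity exceeds the threshold.

    A threshold tuned on faces from one demographic group can sit in
    the wrong place for under-represented groups, producing more false
    positives for them: one mechanism behind the disparities that the
    studies cited above describe.
    """
    return cosine_similarity(probe, candidate) >= threshold

# Toy usage with two random 128-dimensional embeddings.
rng = np.random.default_rng(0)
probe = rng.normal(size=128)
candidate = rng.normal(size=128)
print(is_match(probe, candidate))
```

The point of the sketch is that the system never "recognises" anyone; it only reports that a similarity score crossed a threshold, and where that threshold sits depends on the data the system was trained and tuned on.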
At the end of November 2020, the UN Committee on the Elimination of Racial Discrimination raised the alarm in a report on the increasing use of facial recognition by the police. One of the committee’s experts, Verene Shepherd, told AFP (Agence France-Presse, the world’s oldest news agency):
“There is a great danger that artificial intelligence will reproduce and reinforce biases and aggravate or lead to discriminatory practices”.
Shortly after the publication of this UN report, a warning was issued at the European level: for the European Union Agency for Fundamental Rights (FRA), European states should strengthen their legislation to protect fundamental rights in the face of artificial intelligence. In its 100-page report, the FRA cites the example of the British Court of Appeal, which ruled in August 2020 that the facial recognition programme used by the Cardiff police might carry racial or gender bias.
The American Scarecrow
In the US, at least three people have been wrongly arrested after law enforcement misidentified them via facial recognition software. All three are African-American men.
In an article published in late December, the New York Times tells the story of Nijeer Parks. In February 2019, police in Woodbridge, New Jersey, were called to a hotel where a theft (of candy) had allegedly been committed. The suspect, an African-American man, admits the facts, apologises and presents the officers with a driver’s licence. When the officers realise that the licence is not in order and then spot what appears to be cannabis in his pockets, the suspect flees behind the wheel of his car, almost running over a police officer. The next day, the authorities submit the photo from the fraudulent driver’s licence to facial recognition software in an attempt to identify the fugitive.
The software identifies Nijeer Parks, a 33-year-old local resident. Parks denies the facts and points out that he hardly resembles the photo on the licence (“The only thing we have in common is the beard”, he says). That does not matter to the police, who, citing his prior criminal record, place him in detention for ten days.
For lack of evidence, the charges are dropped in November 2019. For his part, Parks obtains confirmation via Western Union that, at the time of the theft of which he was wrongly accused, he was about 30 miles from the scene. According to the New York Times, Parks, who had to pay some $5,000 in legal fees to defend himself, has sued the police, the prosecutor and the city of Woodbridge for arbitrary arrest and detention and violation of his civil rights. During the proceedings, although innocent, Parks had considered signing an agreement with the prosecutor to plead guilty: “I sat down with my family and discussed it. I was afraid to go to trial. I knew I would get 10 years if I lost”.
“Racist and unreliable results”
The first such case reported in the press, also detailed by the same specialist journalist (Kashmir Hill) in the New York Times, concerned an arrest in January 2020 for acts committed in Michigan in October 2018.
The American Civil Liberties Union (ACLU) also documented the story in a video: Robert Williams was arrested at his home, in front of his wife and his two daughters, who were left in shock. On the basis of a blurry CCTV image, Williams, 42, was accused of stealing watches from a store he said he had not set foot in for four years. Held in custody for 30 hours, he was finally released on bail. “It looks like the computer was wrong”, a police officer reportedly admitted during his interrogation, while Williams held a photo of the suspect against his own face to point out the difference.
In the description of the video, the NGO writes:
“Robert is likely not the first person to be wrongfully arrested or interrogated based off of a bogus face recognition hit. There are likely many more people like Robert who don’t know that it was a flawed technology that made them appear guilty in the eyes of the law. When you add racist and broken technology to a racist and broken criminal legal system, you get racist and broken outcomes”.
The case received heavy media coverage during a summer marked by numerous actions of the anti-racist movement Black Lives Matter.
This questioning of the US police, during an election year and in the wake of the death of George Floyd, an African-American man killed by police officers during an arrest, was accompanied by various announcements from major American companies active in artificial intelligence. In June, Microsoft, IBM and Amazon announced that they would stop – or suspend – selling their facial recognition solutions to law enforcement agencies because of the risk of discrimination.
Instrumentalised anti-racism
A “cheeky” position, to say the least, according to La Quadrature du Net. In a press release published on 22 June 2020, the association for the defence of rights and freedoms on the Internet denounced a certain hypocrisy on the part of these major industrial players, accused of “using the anti-racist struggle to rebuild their image”:
“Through these statements, these companies try to guide our answers to the classic question that arises for many new technologies: is facial recognition bad in itself or is it just misused? The answer to this question from companies that make a profit from these technologies is always the same: the tools are neither good nor bad, it is how they are used that matters. Thus, an announcement that resembles a pushback against the deployment of facial recognition is in fact a validation of its usefulness. Once the State has established a clear framework for the use of facial recognition, companies will be free to deploy and sell their tools.”
La Quadrature thus denounces a sleight of hand: “the debate about algorithmic biases focuses attention, while at the same time questions of respect for personal data are ignored”. Contacted by CTRLZ, the journalist Olivier Tesquet, author of the book À la trace : Enquête sur les nouveaux territoires de la surveillance, agrees:
“Critical discourse should not focus solely on the technical efficiency of facial recognition, as this depoliticizes the fundamental question: do we, as a society, need this type of device?”
Two approaches to techno-criticism
The difference in approach between the United States and the old continent stems, according to the journalist, from the profile of the activists involved in these issues:
“The Anglo-Saxon approach is more pragmatic, especially in the American academic community. Some researchers are campaigning for ethical algorithms, and many of them have worked in the industry, for example at Google. These are people who have seen the inside of the machine and have come out of it, whereas in Europe the critical discourse, which is much sharper, comes rather from civil society, from people outside the companies concerned”.
Whether the criticism of facial recognition is conciliatory or radical, it still needs to be heard. In these times of health and economic crisis, the protection of personal data does not seem to be at the heart of citizens’ priorities. “We feel real resistance”, says Tesquet, for whom “the face has the character of an inviolable sanctuary, and is not treated like the other personal data handed over to platforms”. He acknowledges that “the generalisation of tools presented as playful, such as mobile applications for facial ageing, can create a phenomenon of habituation and weaken our immune defences with regard to surveillance issues”. All the more so as these applications “make it possible to build up databases on which algorithms are trained in relative opacity”.
Several initiatives to ban mass biometric surveillance were launched at the end of 2020, such as the Reclaim Your Face campaign.
This movement, which brings together several organisations at the continental level, is banking on the European Citizens’ Initiative (ECI) which, as Wikipedia points out, gives “a right of political initiative to a gathering of at least one million citizens of the European Union, coming from at least a quarter of the member countries”. Judging by the counter visible on the site, fewer than 13,000 signatures have been collected so far.
A GOOD LINK: “Just because it can stop neo-Nazis doesn’t mean that facial recognition is a good technology”, to be read on Slate.com, published after the violent events at the US Capitol in Washington on January 6, 2021.