70 academics ask the Government for a moratorium before facial recognition becomes widespread

RENFE recently published a tender to develop a facial recognition and analysis system that, among other things, should make it possible to identify the gender, ethnicity or even the emotional state of passengers. Its image processing would also be used to raise alerts about “fights” or “antisocial attitudes”.

In the end, the railway operator withdrew the tender, but it is just one example of how the use of these systems is beginning to spread among passenger transport companies, security firms, and educational, workplace, health, leisure and other settings: a move towards mass adoption that worries some experts.

The signatories request that the Government of Spain create a commission of inquiry and establish a moratorium on the use of facial recognition and analysis systems until the Cortes Generales and the European institutions have debated the matter.

Up to 70 university professors, lecturers, researchers and professionals in the fields of Philosophy, Computing and the Social Sciences have signed a petition, an open letter that more experts can still join, asking the Government of Spain to set up a commission of inquiry to study the need to establish a moratorium on the use and commercialization of facial recognition and analysis systems by companies, both public and private.

The moratorium would remain in place until the Cortes Generales and the European legislative institutions have debated which uses of these systems should be allowed, if any, in what way, under what conditions, with what guarantees and for what purposes, given that these systems “have potentially harmful effects on the well-being, interests and fundamental needs and rights of the Spanish population in general”.

The signatories call for swift intervention by the Government “before these systems continue to expand and become de facto standards, despite the interference they entail in people’s private sphere without their explicit consent.”

Intervention is requested before these systems continue to expand and become standards, despite the interference they entail in people’s private sphere without their consent.

“At stake,” they add, “are fundamental issues of social justice, human dignity, equity, equal treatment and inclusion. Systems for recognizing and analyzing images of people (of their faces, gestures, hairstyles, postures and body movements, clothing, and skin textures and/or colors) and, by extension, the machine learning algorithms that underpin them computationally, have serious problems that have been widely documented and discussed by the scientific community and by governmental and civil entities.”

Five major problems

The signatories highlight five difficulties. One is that associating a person with a certain characteristic or tendency (usually represented as an individual score) on the basis of population statistics “is highly problematic”, especially when a system makes operational decisions that affect individuals according to predictions that are valid only at the group level.

Another drawback is that there are no accepted scientific models in psychology, anthropology or sociology indicating that a type of nose, a particular grimace or a way of walking are adequate predictors of future individual behavior.

For example, the probability of committing a crime, of learning history or engineering, or of performing well in a certain job does not depend on any of the variables that facial recognition methods collect and analyze in order to classify people and make decisions about them.

These systems are also regarded as black boxes, whose opacity makes it difficult to know how they reach their decisions and on what criteria: “Although it is technically and theoretically possible for them to be more transparent, the current ones are not specifically designed to allow the kind of accountability that meets the requirements of a democratic society.”

Boarding gate with biometric facial scanners at Atlanta airport (USA). / John Paul Van Wert / Rank Studios 2018

In addition, these systems are not very robust, since the quality of their results depends heavily on contextual factors, leading to both false positives and false negatives. The results may be incorrect, for example, if the lighting conditions at deployment differ from those used during training.
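
As a rough illustration of this fragility, the sketch below (synthetic data and scikit-learn, chosen here purely for illustration and not taken from the letter) trains a simple classifier under one “lighting” condition and evaluates it under another; accuracy drops sharply when the deployment conditions differ from the training ones.

```python
# Minimal sketch, assuming synthetic 2-D "embeddings" as stand-ins for images:
# a classifier trained under one lighting condition degrades when the
# deployment conditions shift, producing more false results.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_faces(n, offset):
    """Two classes with means at +1/-1; `offset` mimics a global darkening."""
    labels = rng.integers(0, 2, n)
    centers = np.where(labels[:, None] == 1, 1.0, -1.0)
    features = centers + offset + rng.normal(0.0, 0.5, (n, 2))
    return features, labels

X_train, y_train = make_faces(2000, offset=0.0)   # well-lit training data
X_same, y_same = make_faces(1000, offset=0.0)     # same conditions at test time
X_dark, y_dark = make_faces(1000, offset=-1.5)    # darker conditions at test time

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy, same lighting:", clf.score(X_same, y_same))  # close to 1.0
print("accuracy, poor lighting:", clf.score(X_dark, y_dark))  # much lower
# The model itself has not changed; only the input conditions have,
# yet the rate of false positives and false negatives rises sharply.
```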

Their possible benefits do not outweigh the potential negative effects, especially for groups and collectives that often suffer injustice and discriminatory treatment

And, finally, “it is easy for a sampling bias to significantly affect the predictive quality of a system when different groups are not equally represented in the training data”. This is the case with radiology applications that show highly promising predictive success rates in fair-skinned people, but much worse ones when the skin is dark.
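
The sketch below (again synthetic and illustrative rather than drawn from the letter, with scikit-learn assumed) shows the mechanism: when one group supplies 95% of the training examples, the model mostly learns that group’s pattern, and its accuracy on the under-represented group is far lower even though overall accuracy can look good.

```python
# Minimal sketch, assuming a synthetic setup in which the predictive signal
# appears in a different feature for each group (a crude stand-in for group
# differences such as skin tone). Group A dominates the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, signal_feature):
    """Binary labels; which feature carries the signal differs by group."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 2))
    X[:, signal_feature] += np.where(y == 1, 1.5, -1.5)
    return X, y

# Training set: 95% group A (signal in feature 0), 5% group B (signal in feature 1).
Xa, ya = make_group(1900, signal_feature=0)
Xb, yb = make_group(100, signal_feature=1)
clf = LogisticRegression(max_iter=1000).fit(np.vstack([Xa, Xb]),
                                            np.concatenate([ya, yb]))

Xa_test, ya_test = make_group(1000, signal_feature=0)
Xb_test, yb_test = make_group(1000, signal_feature=1)
print("accuracy on group A:", clf.score(Xa_test, ya_test))  # high
print("accuracy on group B:", clf.score(Xb_test, yb_test))  # barely above chance
# Averaged over both groups the accuracy still looks decent, because group A
# dominates; the errors are concentrated on the minority group.
```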

Possible benefits do not outweigh negative effects

For all these reasons, the signatories conclude: “Given the serious deficiencies and risks these systems present, the possible benefits they might offer do not in any way outweigh their potential negative effects, especially for groups and collectives that often suffer injustice and discriminatory treatment: among others, women, LGTBIQ+ people, racialized people, migrants, people with disabilities, and people in situations of poverty and at risk of social exclusion.”

Rights: Creative Commons.
