Facial recognition tools and their use in Spain

Dozens of academics, professionals and activists from various fields have called on the Spanish Government to ban the use of facial recognition tools in Spain until there is a law to regulate them. The request comes at a time when the technology is already being used in both public and private settings.

The petition’s signatories are calling for a moratorium on the use and marketing of facial recognition and analysis systems by public and private companies. They want the European legislative institutions to debate which of these tools may be used, in what way, under what conditions, with what guarantees and for what purposes.

The petitioners argue that the Government must regulate the technology before its use expands any further. In short, facial recognition does not currently fall under any specific law safeguarding citizens’ rights, and they fear that if nothing is done it will be the law that ends up adapting to established practices rather than the other way round.

The signatories point out that the technology intrudes into people’s private lives without their explicit consent, calling into question fundamental issues of social justice, human dignity, equity, equal treatment and inclusion.

The use of facial analysis programmes can also raise civil rights issues. Specifically, the signatories say that assigning a person to a group on the basis of their biometric traits or data is highly problematic because it perpetuates stereotypes, regardless of the field in which it is used. For example, assuming that a person may be dangerous, or likely to default on a loan, simply because others who resemble them are, rests on an unfair premise.

There is ample evidence that associating postures, gestures, facial features, skin colour, hairstyles or clothing with supposed problematic behaviour or with intellectual and financial capacity can result in racist, classist or sexist classifications.

Furthermore, facial recognition has produced false positives and false negatives on many occasions, because its accuracy depends heavily on how the underlying artificial intelligence is trained and on what kind of images. If a system is trained mostly on photographs of white men, or on images taken under particular lighting conditions, to name two examples, its analysis will tend to be less accurate for black people or under different lighting.
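This training-data effect is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not code from the petition or from any real facial recognition system: it trains a toy scikit-learn classifier on synthetic two-dimensional “face embeddings” in which one group is heavily over-represented in the training set, then measures accuracy separately per group. The group names, feature distributions and sample sizes are all invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, offset):
    """Synthetic 2-D 'face embeddings' for one demographic group:
    two classes (match / no-match) whose cluster centres are shifted
    by `offset`, standing in for group-specific appearance statistics."""
    X0 = rng.normal(loc=[0.0 + offset, 0.0], scale=1.0, size=(n // 2, 2))
    X1 = rng.normal(loc=[2.0 + offset, 2.0], scale=1.0, size=(n // 2, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

# Training set: group A is heavily over-represented (illustrative numbers).
Xa, ya = make_group(1000, offset=0.0)  # majority group
Xb, yb = make_group(50, offset=3.0)    # minority group, shifted distribution
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

clf = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced, equally sized test sets for each group.
for name, offset in [("group A (over-represented)", 0.0),
                     ("group B (under-represented)", 3.0)]:
    X_test, y_test = make_group(1000, offset)
    print(f"{name}: accuracy = {clf.score(X_test, y_test):.2%}")
```

The numbers are synthetic, but the mechanism they illustrate, error rates that track the composition of the training data, is exactly the one the petitioners describe: the learned decision boundary fits the majority group well and misclassifies a large share of the under-represented group.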

There are, therefore, multiple reasons – both technical and ethical – for creating a commission to investigate the need for a moratorium, a step the signatories consider essential and urgent. To conclude, they also suggest that this commission should be an independent body composed of scientists, jurists, experts in ethics and artificial intelligence, and members of civil society, particularly from the groups most likely to be affected by these systems.

