Independent research questions the automatic facial recognition system of London’s Metropolitan Police

The results of a study carried out by two academics from the University of Essex in collaboration with London’s Metropolitan Police have just been published[1], highlighting the fact that only a third of the identifications made by the automatic facial recognition system are correct. In the remaining cases, the people identified do not correspond to those actually being sought.

The study, which has been echoed internationally[2], also raises doubts about whether the Metropolitan Police is entitled to use this artificial intelligence system at all. First, there is no legal basis for using the system generically: given that it limits rights, it fails to meet the requirement of being applied in accordance with the law. Secondly, the need to use this technology has not been justified: unless it is shown that the problem cannot be resolved by other, less intrusive means, the impact it may have on the rights of those affected cannot be assessed (a point also made more recently in a report by the Surveillance Camera Commissioner[3]).

Even the construction of the watchlist against which the faces captured by the cameras are compared does not seem to follow clear and uniform criteria for choosing who is included. The list contains people sought both by the judiciary and by the police, and not all of them have committed an offence.

At the operational level, the results were very poor. Of the 46 identifications made by the system, only 26 were considered credible by the officers involved, and in four of those cases the people identified as wanted were not stopped, as they blended into the crowd. Of the remaining 22, only eight led to the arrest of the sought-after person, while in the other 14 the person actually stopped did not correspond to the one being sought. Nor does the decision-making process once the camera image is received appear to have been correct in several cases; among other deficiencies, officers intervened hastily.
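The figures above can be checked with a short calculation (a minimal sketch; the variable names are ours, chosen for illustration and not taken from the study):

```python
# Breakdown of the trial figures reported in the study.
system_identifications = 46   # alerts raised by the system
credible_matches = 26         # alerts officers judged credible
not_stopped = 4               # credible matches lost in the crowd

stops = credible_matches - not_stopped   # people actually stopped: 22
correct_stops = 8                        # stops matching the wanted person
incorrect_stops = stops - correct_stops  # wrong identifications: 14

# Share of credible matches that turned out to be correct —
# roughly the "only a third" figure cited in the study.
accuracy = correct_stops / credible_matches
print(f"{stops} stops, {incorrect_stops} of them incorrect")
print(f"accuracy among credible matches: {accuracy:.0%}")
```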

It is important, however, to acknowledge the Metropolitan Police’s own collaboration in the research. Indeed, the use of this tool was assessed over a three-year trial period, promoted by the police force itself, during which tests of its functioning were carried out (too focused on purely technical questions, according to the University of Essex study). The trial period ended in July 2019. The results of this observation as a whole must serve to modify how the system is used in the future[4].

[1] See https://48ba3m4eh2bf2sksp43rq8kk-wpengine.netdna-ssl.com/wp-content/uploads/2019/07/London-Met-Police-Trial-of-Facial-Recognition-Tech-Report.pdf

[2] See http://www.polizei-newsletter.de/links.php?L_ID=638

[3] See https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/786392/AFR_police_guidance_of_PoFA_V1_March_2019.pdf

[4] See https://www.met.police.uk/live-facial-recognition-trial/
