Wearing specially designed glasses can prevent people from being identified by facial recognition systems. This has been demonstrated by a team of researchers from Carnegie Mellon University, Pittsburgh, who have designed glasses that achieve exactly this.
The deception is treated as an attack against these systems, and the researchers' starting point was to devise a method that was both inconspicuous and physically feasible, two factors which had not been taken into consideration in previous research. The first, inconspicuousness, means that the system must not detect that someone is trying to evade it. With the second, they wanted the method to work against current identification systems (previous studies were based on obsolete technology), especially those using machine learning algorithms.
Using coloured glasses whose frames were printed on photographic paper and worn by the researchers, a person detected by a facial recognition system either is not identified at all or is even identified as another person. These results were achieved both against a commercial facial recognition system and against generic systems based on machine learning algorithms.
The authors acknowledge that the effectiveness of their method may be limited by external factors which influence how images are captured, such as lighting or distance from the camera. They also admit that the glasses may be inconspicuous to recognition systems but not to humans, who may find them odd-looking. Despite this, they point out that these systems are not infallible and have their vulnerabilities. Furthermore, they make it clear that a line of future work is exploring ways of mitigating the risks this type of attack poses to facial recognition systems.
Research like this shows us that technological advances, although clearly an asset for security purposes, are fallible, and we must be aware of their weaknesses when deciding to deploy or use them.
- “Want to beat facial recognition? Get some funky tortoiseshell glasses”. The Guardian, 4 November 2016
- Sharif, M.; Bhagavatula, S.; Bauer, L. & Reiter, M. "Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition". In Proc. CCS, 2016. [PDF | VIDEO]
Machine learning algorithms start from a database of labelled images, with multiple images per person. For each person, they analyse all the available images and derive a pattern or model which characterises, distinguishes and identifies that person. When they receive a new image, they compare it against the stored patterns to decide whether it corresponds to any of the people in the database.
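The enrolment-and-matching loop described above can be sketched in a few lines. This is a minimal illustration only: plain feature vectors stand in for images (real systems extract features with a deep network first), and every name and the distance threshold below are assumptions, not details from the paper.

```python
# Hypothetical sketch: one model per person, built by averaging that
# person's feature vectors, then nearest-model matching for new images.
import math

def enroll(labelled_images):
    """Average each person's feature vectors into a single model."""
    models = {}
    for person, images in labelled_images.items():
        n = len(images)
        models[person] = [sum(vals) / n for vals in zip(*images)]
    return models

def identify(models, image, threshold=1.0):
    """Return the person whose model is nearest to the new image,
    or None when no model is close enough (an unknown face)."""
    best_person, best_dist = None, math.inf
    for person, model in models.items():
        dist = math.dist(model, image)  # Euclidean distance
        if dist < best_dist:
            best_person, best_dist = person, dist
    return best_person if best_dist <= threshold else None
```

An evasion attack of the kind the researchers describe works by perturbing the captured image (here, the feature vector) just enough to push it past the threshold of the true model, or closer to someone else's model.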