Researchers from the University of Cambridge in the UK, the National Institute of Technology, and the Indian Institute of Science have developed a deep learning approach to the problem of "Disguised Facial Identification" — how to identify people at protests who have covered their faces with masks, hats, scarves, beards, or glasses.
Using deep learning and a dataset of pictures of individuals wearing various disguises, the researchers trained a neural network to identify obscured faces with some degree of reliability.
Sociologist Zeynep Tufekci shared the work on Twitter, noting such technology could easily become a tool of oppression, with authoritarian states using it to identify anonymous protesters and stifle dissent.
Not So Accurate
However, despite its alarming implications, there is cause for relief for privacy advocates.
The results recorded by the researchers were far less accurate than even current industry-level standards.
When a subject wore a cap, sunglasses, and a scarf, the system identified them only 55 percent of the time. It was also trained on a small dataset, and its methodology is of questionable strength.
On the last point, the researchers' system doesn't actually match disguised faces to mugshots or portraits. Instead, it uses "facial keypoints" — the distances between facial features such as the eyes, nose, and lips — as a proxy for an individual's identity.
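To make the keypoint idea concrete, here is a minimal sketch of how pairwise distances between detected facial landmarks could be turned into an identity "signature" and compared. This is an illustration of the general technique, not the researchers' actual pipeline (which uses a neural network to detect the keypoints in the first place); the function names and the Euclidean matching scheme are assumptions for demonstration.

```python
import math
from itertools import combinations

def keypoint_signature(keypoints):
    """Build an identity proxy from facial keypoints.

    `keypoints` is a list of (x, y) coordinates for detected
    features (eye corners, nose tip, lip corners, etc.).
    The pairwise distances between them form a vector that
    stands in for the person's identity.
    """
    return [math.dist(a, b) for a, b in combinations(keypoints, 2)]

def match_score(sig_a, sig_b):
    """Euclidean distance between two signatures; lower means a closer match."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(sig_a, sig_b)))

# Toy example: three keypoints yield three pairwise distances.
sig = keypoint_signature([(0, 0), (3, 4), (6, 8)])
print(sig)                     # [5.0, 10.0, 5.0]
print(match_score(sig, sig))   # 0.0 — identical signatures match perfectly
```

The weakness the article alludes to is visible here: two different people with similar facial geometry would produce similar distance vectors, so the signature is a rough proxy rather than a true identifier.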
Of course, despite these shortcomings, many researchers around the world are working in the field.
Facebook, for instance, has already trained neural networks to recognize personal characteristics such as hair, body shape, and posture. Facial recognition systems that work on portions of the face have also been developed, and other non-facial identification methods, such as AI-powered gait analysis, recognize individuals with a high degree of accuracy.
One system for identifying masked individuals developed at the University of Basel in Switzerland recreates a 3D model of the target's face based on what it can see. The researchers forecast significant development in the area in future, but still believe there will always be some way or another of fooling a machine, no matter how sophisticated its underlying tech.
V for Vendetta
For instance, Jack Clark, Strategy and Communications Director at OpenAI, suggests wearing a rigid mask that covers an entire face would give facial recognition systems nothing to go on — and other researchers have developed patterned glasses that are designed to trick and confuse AI facial recognition systems.
"In the long run, perhaps this will increase the likelihood of people using masks like the V for Vendetta one adopted by Anonymous instead of soft ones like scarves, balaclavas, and so on. I also think that the datasets and underlying machine learning techniques will need to get dramatically better and larger for this sort of approach to be tractable and practical — especially when dealing with diverse groups of protesters," Clark blogged.
Nonetheless, there are indications law enforcement agencies are happy to embrace facial recognition software despite its flaws.
In August, at London's famed Notting Hill Carnival, the Metropolitan Police used real-time facial recognition to scan people attending the annual event.
Prior to the carnival, authorities assembled a "bespoke dataset" of images of over 500 individuals who were either banned from attending or wanted for arrest, then set up cameras at one of the Carnival's main thoroughfares.
According to a report from human rights group Liberty, one attendee was successfully identified using this system — but his arrest warrant was out-of-date, and there were at least 35 false positives. The police nonetheless deemed the rollout a success.
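Taking Liberty's figures at face value — one genuine match against at least 35 false positives — a quick back-of-envelope calculation shows how poor the system's precision was in practice (and since the false-positive count is a lower bound, the true precision may be even worse):

```python
# Figures from Liberty's report on the Notting Hill Carnival deployment.
true_positives = 1    # the single match (on an out-of-date warrant)
false_positives = 35  # "at least 35" flagged attendees who were not on the list

# Precision: of everyone the system flagged, what fraction was correct?
precision = true_positives / (true_positives + false_positives)
print(f"precision: {precision:.1%}")  # prints "precision: 2.8%"
```

In other words, roughly 97 percent of the people the system flagged at the Carnival were flagged in error.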