
BGU Researchers Use Makeup to Defeat Facial Recognition Tech

October 7, 2021

Research News

VICE — Researchers have found a new and surprisingly simple method for bypassing facial recognition software using makeup patterns.

A new study from Ben-Gurion University of the Negev found that software-generated makeup patterns can be used to consistently bypass state-of-the-art facial recognition software, with digitally and physically applied makeup fooling some systems at a success rate as high as 98 percent.

In their experiment, the researchers designated their 20 participants as blacklisted individuals so their identification would be flagged by the system. They then used a selfie app called YouCam Makeup to digitally apply makeup to the facial images according to a heatmap that targets the most identifiable regions of the face. A makeup artist then replicated the digital makeup on the participants using natural-looking makeup in order to test the target model’s ability to identify them in a realistic situation.
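The heatmap-guided step above amounts to ranking facial regions by how much they contribute to identification and concentrating makeup on the top-ranked ones. A minimal sketch of that selection logic follows; the region names, saliency scores, and `top_regions` helper are illustrative assumptions, not data or code from the study.

```python
# Hedged sketch: pick the most identity-revealing facial regions from a
# saliency heatmap, so makeup can be concentrated there. The region names
# and scores below are illustrative assumptions, not the study's data.

def top_regions(heatmap, k=3):
    """Return the k region names with the highest saliency scores."""
    ranked = sorted(heatmap.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical saliency per facial region (higher = more identifying).
saliency = {
    "nose_bridge": 0.91,
    "eyebrows": 0.84,
    "cheekbones": 0.77,
    "chin": 0.42,
    "forehead": 0.35,
}

targets = top_regions(saliency, k=3)
print(targets)  # ['nose_bridge', 'eyebrows', 'cheekbones']
```

In the study's physical phase, the makeup artist's job was effectively to alter exactly these high-saliency regions while keeping the overall look natural.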

“I was surprised by the results of this study,” Nitzan Guettan, a doctoral student and lead author of the study, told Motherboard. “[The makeup artist] didn’t do too many tricks, just saw the makeup in the image and then she tried to copy it into the physical world. It’s not a perfect copy there. There are differences but it still worked.”

The researchers tested the attack method in a simulated real-world scenario in which participants wearing the makeup walked through a hallway to see whether they would be detected by a facial recognition system. The hallway was equipped with two live cameras that streamed video to the MTCNN face detector, which evaluated the system’s ability to identify each participant.

“Our attacker assumes a black-box scenario, meaning that the attacker cannot access the target FR model, its architecture, or any of its parameters,” the paper explains. “Therefore, [the] attacker’s only option is to alter his/her face before being captured by the cameras that feeds the input to the target FR model.”
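In an embedding-based face recognition pipeline of the kind targeted here, flagging a blacklisted person reduces to comparing the probe face's embedding vector against stored blacklist embeddings and thresholding the similarity. A minimal sketch of that matching step, with made-up vectors and an assumed 0.6 threshold (not values from the paper), looks like this:

```python
import math

# Hedged sketch of the matching step in an embedding-based FR system:
# a probe face is flagged if its embedding is close enough (cosine
# similarity above a threshold) to any blacklisted identity's embedding.
# The vectors and the 0.6 threshold are illustrative assumptions.

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_flagged(probe, blacklist, threshold=0.6):
    """True if the probe matches any blacklisted embedding."""
    return any(cosine(probe, ref) >= threshold for ref in blacklist)

blacklist = [[0.9, 0.1, 0.4], [0.2, 0.8, 0.5]]

print(is_flagged([0.88, 0.12, 0.41], blacklist))  # True: near first entry
print(is_flagged([-0.7, 0.1, -0.6], blacklist))   # False: far from both
```

A successful makeup attack in this framing shifts the probe embedding just far enough that no blacklist similarity crosses the threshold, so the system never flags the wearer; the attacker needs no access to the model to achieve this.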

The digital experiments achieved a 100 percent success rate against both the FaceNet model and the LResNet model, according to the paper.

Read more on Vice >>