Facial recognition systems are controversial, to say the least. Amazon made headlines last week over supplying law enforcement agencies with face-scanning tech. Schools in China are using facial recognition cameras to monitor students. And studies show that some facial recognition algorithms have built-in biases against certain ethnicities.

Concerns about encroaching AI-powered surveillance systems motivated researchers in Toronto to develop a shield against them. Parham Aarabi, a professor at the University of Toronto, and Avishek Bose, a graduate student, created an algorithm that can disrupt facial recognition systems on the fly by applying subtle transformations to images.

“Personal privacy is a real issue as facial recognition becomes better and better,” Aarabi said in a statement. “This is one way in which beneficial anti-facial-recognition systems can combat that ability.”

Products and software that purport to defeat facial recognition are nothing new. In a November 2016 study, researchers at Carnegie Mellon designed spectacle frames that could trick systems into misidentifying people. And in late 2017, researchers at MIT fooled an algorithm into labeling a picture of a 3D-printed turtle as a rifle, while a separate team at Kyushu University showed that altering a single pixel could fool an image classifier.

Above: The researchers’ anti-facial recognition system in action.

Image Credit: University of Toronto

But theirs is one of the first solutions to use AI itself, according to Bose and Aarabi.

Their algorithm, which was trained on a dataset of 600 faces, spits out a real-time filter that can be applied to any picture. Because the filter alters only highly specific, individual pixels in the image, its changes are almost imperceptible to the human eye.
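As an illustration (not the authors' code), here is a minimal sketch, assuming 8-bit RGB images, of how such a pixel-level filter could be applied while keeping every change within a few intensity levels. The `perturbation` array is a hypothetical stand-in for the filter a trained network would produce.

```python
import numpy as np

def apply_filter(image, perturbation, eps=4):
    """Add a per-pixel perturbation to an 8-bit RGB image, clipping each
    change to +/- eps intensity levels so the result stays visually
    indistinguishable from the original."""
    delta = np.clip(perturbation, -eps, eps)       # bound the change per pixel
    out = image.astype(np.int16) + delta.astype(np.int16)
    return np.clip(out, 0, 255).astype(np.uint8)   # keep valid pixel values

# A random stand-in for the filter a trained generator would emit.
image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
perturbation = np.random.randint(-4, 5, size=image.shape)
filtered = apply_filter(image, perturbation)
assert np.abs(filtered.astype(int) - image.astype(int)).max() <= 4
```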

The two researchers employed adversarial training, a technique that pits two neural networks against each other: a "generator" that produces outputs from data and a "discriminator" that detects fake data fabricated by the generator. In Aarabi and Bose's system, one network works to identify faces, while the other plays the generator's role and learns perturbations that disrupt the first network's facial recognition.
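The paper's exact architecture isn't reproduced here, but the shape of such a two-network game can be sketched. In this hypothetical PyTorch snippet, every module and parameter is a stand-in: a toy `detector` outputs a face-confidence score, and a `generator` learns a bounded perturbation that drives that score down while a penalty term keeps the change nearly invisible.

```python
import torch
import torch.nn as nn

# Toy stand-ins: the real system would use a trained face detector and a
# learned perturbation network; these modules only illustrate the game.
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1), nn.Sigmoid())               # face-confidence score in [0, 1]
generator = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())   # raw perturbation in [-1, 1]

for p in detector.parameters():                  # the detector is the fixed adversary
    p.requires_grad_(False)

opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
eps, lam = 0.03, 10.0                            # perturbation bound, visibility penalty
loader = [torch.rand(4, 3, 64, 64) for _ in range(10)]  # dummy face batches in [0, 1]

for images in loader:
    delta = eps * generator(images)              # small, image-conditioned perturbation
    perturbed = (images + delta).clamp(0, 1)
    conf = detector(perturbed)                   # how confidently faces are still found
    # The generator wants low detection confidence and a barely visible change.
    loss = conf.mean() + lam * delta.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```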

In the research paper, which is due to be presented at the 2018 IEEE International Workshop on Multimedia Signal Processing, Bose and Aarabi claim that their algorithm reduced the proportion of detectable faces from nearly 100 percent down to 0.5 percent.

They hope to make the neural network available in an app or website.

“Ten years ago these algorithms would have to be human-defined, but now neural nets learn by themselves — you don’t need to supply them anything except training data,” said Aarabi. “In the end they can do some really amazing things. It’s a fascinating time in the field, there’s enormous potential.”
