Researchers Build 'Privacy Filter' That Confuses Facial Recognition AI

Illustration of a facial recognition system. Image credit: teguhjatipras (CC0 1.0 Creative Commons) via Pixabay

Attempts are being made to bring facial recognition technology to consumers in ways that do not feel threatening. Social media sites, for example, can tag a person in an uploaded photo within seconds, even when the uploader has thousands of online friends. Tech companies have also made a conscious push to ship computers and smartphones that authenticate payments and other tasks simply by recognizing the owner’s face.

However, these efforts cannot overshadow the steady stream of reports about AI-powered facial recognition services that come close to invading people’s privacy.

While these two perceptions of facial recognition remain up for debate, University of Toronto (U of T) Professor Parham Aarabi and alumnus Avishek Bose recently came up with what could be considered a remedy.

According to a press release, Aarabi and Bose led a group of U of T engineering researchers that formulated an algorithm to “disrupt facial recognition systems.”

As facial recognition illustrations often show, a machine scans a person’s face and maps its features through a neural network, and the patterns that network learns are specific to each face detection technology. It is this neural pattern that Aarabi and Bose’s invention disrupts.

The anti-facial recognition solution relies on a deep learning technique known as adversarial training, which pits two neural networks against each other. To achieve this, Aarabi and Bose’s team first had to build a face detection network, and then the privacy filter that would attack and confuse it.
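
As a rough illustration of that two-network setup, the sketch below pits a toy face detector against a toy perturbation network in PyTorch. Everything here, the tiny architectures, the 32x32 stand-in images, and the training loop, is an assumption for illustration; it is not the researchers’ actual model.

```python
# Minimal sketch of adversarial training between a detector and a filter.
# All names and sizes are illustrative assumptions, not the U of T design.
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Stand-in face detector: outputs a face/no-face logit per image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
        )
    def forward(self, x):
        return self.net(x)

class PerturbationNet(nn.Module):
    """Stand-in 'privacy filter': adds a small, bounded perturbation."""
    def __init__(self, eps=0.03):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        # Bound the perturbation so the change stays visually subtle.
        return torch.clamp(x + self.eps * self.net(x), 0.0, 1.0)

detector, attacker = TinyDetector(), PerturbationNet()
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
a_opt = torch.optim.Adam(attacker.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

faces = torch.rand(16, 3, 32, 32)  # toy stand-in for face crops
ones = torch.ones(16, 1)           # label: "a face is present"

for _ in range(100):
    # 1) Detector trains to keep recognizing both clean and filtered faces.
    d_opt.zero_grad()
    filtered = attacker(faces).detach()
    d_loss = bce(detector(faces), ones) + bce(detector(filtered), ones)
    d_loss.backward()
    d_opt.step()

    # 2) Attacker trains to make the detector miss the filtered faces.
    a_opt.zero_grad()
    a_loss = -bce(detector(attacker(faces)), ones)  # maximize detector error
    a_loss.backward()
    a_opt.step()
```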

Pitting the two AI systems against each other resulted in an “Instagram-like filter” that tweaks “very specific pixels in the image.” Bose explained further: “If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they’re less noticeable. It creates very subtle disturbances in the photo, but to the detector, they’re significant enough to fool the system.”
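
In the same spirit, a single image can be nudged against the detector’s gradient so that face-relevant pixels become “less noticeable” to the model while the photo stays visually unchanged. This hedged sketch applies one FGSM-style step, a standard adversarial-example technique rather than necessarily the team’s exact method, to the toy detector from the previous sketch; the epsilon value is an illustrative guess at “subtle.”

```python
# Hedged sketch: one gradient step that lowers a detector's face score.
# `detector` is the toy model from the previous sketch; eps is assumed.
import torch

def privacy_filter(image, detector, eps=0.01):
    """Shift each pixel slightly in the direction that lowers the
    detector's face-confidence logit (one FGSM-style step)."""
    image = image.clone().requires_grad_(True)
    score = detector(image.unsqueeze(0)).squeeze()  # face-presence logit
    score.backward()
    # Step *against* the gradient: make the face less detectable.
    perturbed = image - eps * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```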

Aarabi and Bose tested their privacy filter on an industry-standard dataset of 600 faces covering a “wide range of ethnicities, lighting conditions and environments.” Against a detector that initially found nearly every face, their AI-disrupting filter reportedly reduced the proportion of detected faces to 0.5 percent.
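
A before-and-after comparison of this kind can be approximated, under the same toy assumptions, by measuring the fraction of test images in which the detector still reports a face once the filter is applied. The sketch below reuses the toy detector and privacy_filter defined above; the random stand-in images and the zero-logit decision threshold are assumptions of this example, not details from the study.

```python
# Illustrative before/after measurement of detection rate.
# Reuses `detector` and `privacy_filter` from the sketches above.
import torch

def detection_rate(detector, images):
    """Fraction of images whose face-presence logit exceeds 0 (assumed threshold)."""
    with torch.no_grad():
        return (detector(images) > 0).float().mean().item()

test_faces = torch.rand(100, 3, 32, 32)  # stand-in test set
filtered = torch.stack([privacy_filter(img, detector) for img in test_faces])
print(f"unfiltered: {detection_rate(detector, test_faces):.1%}")
print(f"filtered:   {detection_rate(detector, filtered):.1%}")
```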
