Hiding Images From Prying AIs

Researchers in Singapore have developed a way of distorting images so that they will not be recognized by machines but are still intelligible to humans.

AsianScientist (Aug. 19, 2020) – In a development that could help preserve privacy online, researchers at the National University of Singapore (NUS) have found a way to conceal the contents of images from machines while leaving them recognizable to humans. Their findings have been published in the Proceedings of the 27th ACM International Conference on Multimedia.

Even the most eagle-eyed among us can only scan through a few photographs each second. Computers, on the other hand, are capable of performing billions of calculations in the same amount of time. Computer vision algorithms help platforms like Facebook and Instagram to automatically tag a user in photos, while Google’s own image recognition technology can group photos by the people it detects in them. On the flip side, preventing machines from harvesting personal data from images has now become a key concern for maintaining digital privacy.

In the present study, a team led by the dean of the NUS School of Computing, Professor Mohan Kankanhalli, has developed a technique that safeguards sensitive information in photos by making subtle changes that are almost imperceptible to humans but render selected features undetectable by known algorithms.

Currently available techniques for distorting images tend to ruin the aesthetics of the photograph, as the image needs to be heavily altered to fool the machines. To overcome this limitation, the researchers developed a ‘human sensitivity map’ that quantifies how humans react to visual distortion in different parts of an image across a wide variety of scenes.

The development process started with a study involving 234 participants and a set of 860 images. Participants were shown two copies of the same image and they had to pick out the copy that was visually distorted. After analyzing the results, the research team discovered that human sensitivity is influenced by factors including illumination, texture, object sentiment and semantics.

Using this human sensitivity map, the team fine-tuned their technique to apply visual distortion with minimal disruption to the image aesthetics by injecting the distortions into areas with low human sensitivity.
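The core idea can be illustrated with a short sketch. This is not the researchers' actual method (which crafts perturbations specifically to defeat recognition algorithms); it is a simplified, hypothetical illustration in which random noise is weighted by the inverse of a precomputed human sensitivity map, so that distortion lands mainly where viewers are least likely to notice it. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def apply_masked_distortion(image, sensitivity, strength=0.05, seed=0):
    """Inject perturbation mainly where human sensitivity is low.

    image: float array in [0, 1], shape (H, W) or (H, W, C).
    sensitivity: float array in [0, 1] with the same spatial shape,
        where 1.0 means humans readily notice distortion there
        and 0.0 means they do not.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-1.0, 1.0, size=image.shape)

    # Weight the noise by (1 - sensitivity): strong distortion where
    # viewers are unlikely to notice, none where they surely would.
    weight = 1.0 - sensitivity
    if image.ndim == 3 and sensitivity.ndim == 2:
        weight = weight[..., None]  # broadcast over color channels

    distorted = image + strength * weight * noise
    return np.clip(distorted, 0.0, 1.0)

# Toy example: a flat grey image with one highly sensitive pixel.
img = np.full((4, 4), 0.5)
sens = np.zeros((4, 4))
sens[0, 0] = 1.0          # humans would notice any change here
out = apply_masked_distortion(img, sens)
```

In this toy run, the pixel marked as highly sensitive is left untouched while the rest of the image receives small perturbations. In the actual study, the perturbation would additionally be optimized so that known recognition algorithms fail on the result.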

An AI algorithm will identify a cat in the picture on the left but will not detect a cat in the picture on the right. Photo credit: National University of Singapore.

“It is too late to stop people from posting photos on social media in the interest of digital privacy. However, the reliance on artificial intelligence (AI) is something we can target, as the threat from human stalkers pales in comparison to the might of machines. Our solution enables the best of both worlds as users can still post their photos online safe from the prying eye of an algorithm,” said Kankanhalli.

End users can apply this technology to mask vital attributes in their photos before posting them online, and social media platforms could also integrate it into their systems by default.

The team next plans to extend this technology to videos, which is another prominent type of media frequently shared on social media platforms.


The article can be found at: Shen et al. (2019) Human-imperceptible Privacy Protection Against Machines.

———

Source: National University of Singapore; Photo: Shutterstock.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.

Asian Scientist Magazine is an award-winning science and technology magazine that highlights R&D news stories from Asia to a global audience. The magazine is published by Singapore-headquartered Wildtype Media Group.
