Perceptual hashing is widely used to search for similar images in digital forensics and cybercrime investigations. Unfortunately, the robustness of perceptual hashing algorithms in these contexts is not well understood. In this paper, we examine the robustness of perceptual hashing and the security applications that depend on it, both experimentally and empirically.
We develop a series of attack algorithms to subvert perceptual hashing-based image search. The attacks generate adversarial images that substantially enlarge the hash distance to the original image while introducing minimal visual change, so that the original image is not returned, or is ranked low, when the attack image is used as the search query. We design the attack algorithms under a black-box setting, augmented with novel techniques (e.g., grayscale initialization) to improve efficiency and transferability. We evaluate our attacks against the standard pHash as well as its robust variants, and then empirically test them against real-world reverse image search engines, including TinEye, Google, Microsoft Bing, and Yandex. We find that our attack is highly successful on TinEye and Bing, and moderately successful on Google and Yandex.
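To make the attack idea concrete, here is a minimal sketch, not the authors' implementation: a toy 64-bit pHash (32x32 DCT, low-frequency block, median threshold) and a black-box hill climb that proposes small random perturbations and keeps only those that strictly enlarge the Hamming distance to the original hash. A synthetic random array stands in for a real image, and all parameter choices (noise scale, iteration count) are illustrative assumptions.

```python
import numpy as np

def dct2(block):
    """Naive 2-D DCT-II built from an explicit cosine basis (no SciPy needed)."""
    n = block.shape[0]
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n)
    basis = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    return basis @ block @ basis.T

def phash(img):
    """Toy pHash: DCT of a 32x32 grayscale image, keep the 8x8
    low-frequency block, threshold at the median -> 64 boolean bits."""
    low = dct2(img)[:8, :8].ravel()
    return low > np.median(low)

def hamming(h1, h2):
    """Number of differing hash bits."""
    return int(np.count_nonzero(h1 ^ h2))

# Black-box hill climbing: no gradients, only hash-distance queries.
rng = np.random.default_rng(0)
orig = rng.uniform(0, 255, (32, 32))   # stand-in for a real grayscale image
orig_hash = phash(orig)

adv = orig.copy()
for _ in range(300):
    # Propose a small perturbation; keep it only if the hash distance grows.
    cand = np.clip(adv + rng.normal(0, 8, adv.shape), 0, 255)
    if hamming(phash(cand), orig_hash) > hamming(phash(adv), orig_hash):
        adv = cand

print(hamming(phash(adv), orig_hash))  # hash distance after the attack
```

The accept-only-if-better loop guarantees the hash distance never decreases, mirroring the paper's goal of pushing the attack image out of the search results while each individual perturbation stays visually small.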
Qingying Hao is currently a CS PhD student at the University of Illinois Urbana-Champaign. Her research focuses on the intersection of security and machine learning. Her recent work explores building robust ML systems for effective detection of, and defense against, online attacks.