Accessible and powerful machine learning has its downsides. A recent New York Times article profiled clearview.ai, an unregulated facial recognition service that has downloaded over 3 billion photos of people from the Internet and used them to build facial recognition models for citizens without their knowledge or permission. Clearview.ai demonstrates just how easy it is to build invasive tools for monitoring and tracking using deep learning. So how do we protect ourselves against unauthorized third parties building facial recognition models that recognize us wherever we may go?
In this talk, I will present our system Fawkes, an algorithm and software tool that gives individuals the ability to limit how unknown third parties can track them by building facial recognition models from their publicly available photos. At a high level, Fawkes "poisons" any model that tries to learn what you look like. It takes your personal images and makes tiny, pixel-level changes that are invisible to the human eye, in a process we call image cloaking. You can then use these "cloaked" photos as you normally would: sharing them on social media, sending them to friends, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, the cloaked images will teach the model a highly distorted version of what makes you look like you, thus protecting your privacy.
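To give a flavor of the "tiny, pixel-level changes" idea, here is a minimal, purely illustrative sketch of applying a bounded perturbation to an image. This is not the Fawkes algorithm itself (which computes perturbations by optimizing against feature extractors); the `cloak_image` function, the `epsilon` bound, and the random perturbation are all assumptions made for illustration. The one property it demonstrates is the imperceptibility constraint: no pixel moves by more than a few intensity levels.

```python
import numpy as np

def cloak_image(image, perturbation, epsilon=3.0):
    """Apply a small, bounded perturbation to an 8-bit image.

    The perturbation is clipped so no pixel changes by more than
    `epsilon` intensity levels (out of 255), keeping the change
    imperceptible to the human eye. In a real cloaking system the
    perturbation would be chosen to shift the image's location in a
    recognition model's feature space; here it is just given.
    """
    bounded = np.clip(perturbation, -epsilon, epsilon)
    cloaked = np.clip(image.astype(np.float64) + bounded, 0, 255)
    return cloaked.astype(np.uint8)

# Example: a random perturbation on a dummy 64x64 grayscale image.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
delta = rng.normal(0, 10, size=(64, 64))  # hypothetical perturbation
cloaked = cloak_image(image, delta, epsilon=3.0)

# Verify that no pixel moved by more than epsilon intensity levels.
max_change = np.max(np.abs(cloaked.astype(int) - image.astype(int)))
```

The `np.clip` bound on the perturbation is what keeps a cloaked photo visually identical to the original, even though the aggregate of many such per-pixel shifts can mislead a model trained on it.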
Note the changed time.
Shawn Shan is a Ph.D. student at the University of Chicago. He works in the SAND Lab, co-advised by Professor Ben Y. Zhao and Professor Heather Zheng. His research lies at the intersection of machine learning, security, and privacy, exploring the limitations, vulnerabilities, and privacy implications of neural networks. Shawn received a Bachelor of Science in computer science from the University of Chicago in 2020. He has also spent two summers at Facebook as a software engineer on the privacy team.