Adversaries or allies? Privacy and deep learning in big data era
Author(s) -
Liu Bo,
Ding Ming,
Zhu Tianqing,
Xiang Yong,
Zhou Wanlei
Publication year - 2018
Publication title -
Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.5102
Subject(s) - computer science, deep learning, big data, adversarial system, image (mathematics), information privacy, the internet, artificial intelligence, noise (video), internet privacy, computer security, data science, quality (philosophy), data mining, world wide web, philosophy, epistemology
Summary - Deep learning methods have become the basis of new AI‐based services on the Internet in the big data era because of their unprecedented accuracy. At the same time, they raise obvious privacy issues: deep learning–assisted privacy attacks can extract sensitive personal information not only from text but also from unstructured data such as images and videos. In this paper, we propose a framework to protect image privacy against deep learning tools, along with two new metrics that measure image privacy. Moreover, we propose two different image privacy protection schemes based on these metrics, utilizing the adversarial example idea. The performance of our solution is validated by simulations on two different datasets. Our research shows that image privacy can be protected by adding a small amount of noise that has a humanly imperceptible impact on image quality, especially for images with complex structures and textures.
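
The summary's idea of adding a small, humanly imperceptible noise based on adversarial examples can be illustrated with a minimal sketch. This is a generic FGSM-style perturbation written in PyTorch, not the paper's actual schemes or metrics; the function name, the epsilon value, and the assumption that the "deep learning tool" is a differentiable classifier are all illustrative assumptions.

```python
# Hypothetical sketch: perturb an image so a classifier is less confident
# about a sensitive label, while keeping the change visually small.
import torch
import torch.nn.functional as F

def protect_image(model, image, sensitive_label, epsilon=0.01):
    """Return a copy of `image` (tensor in [0, 1], shape CxHxW) perturbed so
    `model` is less likely to predict `sensitive_label`.
    `epsilon` bounds the per-pixel noise magnitude."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))  # add a batch dimension
    loss = F.cross_entropy(logits, torch.tensor([sensitive_label]))
    loss.backward()
    # Take a small step up the loss gradient: this degrades the attacker's
    # confidence while the visual change stays imperceptible for small epsilon.
    noise = epsilon * image.grad.sign()
    protected = (image + noise).clamp(0.0, 1.0).detach()
    return protected
```

In this sketch the perturbation budget epsilon plays the role of the trade-off the summary describes: larger values give stronger protection against the deep learning tool at the cost of visible degradation, while smaller values keep image quality intact, particularly for images with complex structures and textures where the noise is easily masked.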
