Open Access
Perceptual Carlini-Wagner Attack: A Robust and Imperceptible Adversarial Attack Using LPIPS
Author(s) -
LiMing Fan,
Anis Salwa Mohd Khairuddin,
HaiChuan Liu,
Khairunnisa Binti Hasikin
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3588113
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
Adversarial attacks on deep neural networks (DNNs) present significant challenges by exploiting model vulnerabilities using perturbations that are often imperceptible to human observers. Traditional approaches typically constrain perturbations using ℓp-norms, which do not effectively capture human perceptual similarity. In this work, we propose the Perceptual Carlini-Wagner (PCW) attack, which integrates the Learned Perceptual Image Patch Similarity (LPIPS) metric into the adversarial optimization process. By replacing ℓp-norm constraints with LPIPS, PCW generates adversarial examples that are both highly effective at inducing misclassification and visually indistinguishable from the original images. We evaluate PCW on the CIFAR-10, CIFAR-100, and ImageNet datasets. On ImageNet, adversarial examples crafted using PCW achieve an LPIPS distance of only 0.0002 from clean images, in contrast to 0.3 LPIPS for those produced by the CW and PGD attacks. In terms of robustness, PCW shows superior performance under common image processing defenses such as JPEG compression and bit-depth reduction, outperforming CW and SSAH and rivaling PGD. Additionally, we test PCW against adversarially trained models from RobustBench and find that it maintains high attack success rates, significantly outperforming CW and PGD in this more challenging setting. Finally, we assess the transferability of PCW across model architectures. While LPIPS contributes to perceptual alignment, it does not significantly improve transferability, with results comparable to those of the original CW attack.
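The abstract does not include the authors' reference code, but the core idea (swapping the CW attack's ℓp penalty for an LPIPS distance) can be sketched directly. Below is a minimal, illustrative PyTorch sketch of an untargeted PCW-style objective using the publicly available `lpips` package; the function name `pcw_attack`, the Adam schedule, and all hyperparameter defaults are our own assumptions, not the paper's settings.

```python
# Minimal sketch of a Perceptual CW-style attack (untargeted), assuming a
# PyTorch classifier `model` and the `lpips` package. Names and defaults
# here are illustrative, not the authors' reference implementation.
import torch
import lpips

def pcw_attack(model, x, label, c=1.0, kappa=0.0, steps=200, lr=0.01):
    """Minimize LPIPS(x, x_adv) + c * f(x_adv), where f is the CW margin loss."""
    perceptual = lpips.LPIPS(net='alex').to(x.device)
    # Optimize in tanh space so x_adv stays in [0, 1], as in the original CW attack.
    w = torch.atanh(x.clamp(1e-6, 1 - 1e-6) * 2 - 1).detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = (torch.tanh(w) + 1) / 2
        logits = model(x_adv)
        # CW margin f(x_adv): logit of the true class minus the best other logit,
        # clamped at -kappa (untargeted variant).
        correct = logits.gather(1, label.unsqueeze(1)).squeeze(1)
        wrong = logits.masked_fill(
            torch.nn.functional.one_hot(label, logits.size(1)).bool(), float('-inf')
        ).max(dim=1).values
        margin = torch.clamp(correct - wrong, min=-kappa)
        # LPIPS replaces the usual ||delta||_p term; it expects inputs in [-1, 1].
        dist = perceptual(x * 2 - 1, x_adv * 2 - 1).squeeze()
        loss = (dist + c * margin).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return ((torch.tanh(w) + 1) / 2).detach()
```

The tanh reparameterization is carried over from the original CW formulation to keep pixel values in range without projection; only the distance term changes, which is what the abstract identifies as PCW's contribution.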
