Open Access
StyleMI: An Image Processing Based Method for Detecting Unauthorized Style Mimicry in Fine-tuned Diffusion Models in a More Realistic Scenario
Author(s) - Xinjun Zhang, Shixin Zuo, Yu Sang
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3574053
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
The rapid development of diffusion models and model fine-tuning methods has enabled widespread artistic style mimicry, raising significant concerns about copyright infringement. Existing approaches protect artists from unauthorized style mimicry by introducing adversarial examples, watermarks, or membership inference attacks. However, the former two are vulnerable to image pre-processing and cannot be applied to already published artworks, while the latter requires access to the target model, which is typically unavailable in real-world scenarios. To this end, the task of detecting unauthorized style mimicry without access to the target model is formulated as a novel membership inference problem. The key insight is that images generated by a diffusion model inherently carry information about its training dataset. A style comparison-based method is proposed, consisting of a LoRA-based style transfer module and a CLIP-based style extraction and comparison module. The style transfer module first learns the artist's style, then removes the style of the given image to avoid its influence, and finally transfers the artist's style onto the given image. The style extraction and comparison module uses a diffusion model to extract an image's content information, removes it in a latent semantic space with the pre-trained CLIP model to obtain style-related features, and then compares those features. Experiments on Stable Diffusion demonstrate the effectiveness of the proposed method. The best result achieved a True Positive Rate of 85%, a False Positive Rate of 0%, an Attack Success Rate of 99.29%, a Precision of 100%, and an Area Under the Curve of 1.
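The comparison step described in the abstract, removing content information from an image embedding in a shared latent space and then comparing the style-related residuals, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes CLIP embeddings have already been extracted, and it approximates "content removal" as projecting out the content embedding's direction before taking a cosine similarity. The function names and the projection heuristic are hypothetical.

```python
import numpy as np

def remove_content(image_emb: np.ndarray, content_emb: np.ndarray) -> np.ndarray:
    # Project out the content direction in the shared latent space,
    # keeping the residual as a rough style-related feature.
    # (Hypothetical approximation of the paper's CLIP-based content removal.)
    c = content_emb / np.linalg.norm(content_emb)
    return image_emb - np.dot(image_emb, c) * c

def style_similarity(emb_a: np.ndarray, emb_b: np.ndarray,
                     content_a: np.ndarray, content_b: np.ndarray) -> float:
    # Cosine similarity between content-removed (style) residuals.
    s_a = remove_content(emb_a, content_a)
    s_b = remove_content(emb_b, content_b)
    denom = np.linalg.norm(s_a) * np.linalg.norm(s_b) + 1e-8
    return float(np.dot(s_a, s_b) / denom)
```

A high similarity between a suspect generated image and the artist-style-transferred reference would then support the membership (unauthorized mimicry) hypothesis; the detection threshold would be tuned to trade off the True Positive and False Positive Rates reported above.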
