Open Access
Neural Linguistic Steganalysis via Multi-Head Self-Attention
Author(s) -
Saimei Jiao,
Haifeng Wang,
Kun Zhang,
Yaqi Hu
Publication year - 2021
Publication title -
Journal of Electrical and Computer Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.318
H-Index - 25
eISSN - 2090-0155
pISSN - 2090-0147
DOI - 10.1155/2021/6668369
Subject(s) - steganalysis , steganography , softmax function , computer science , natural language processing , categorization , artificial intelligence , artificial neural network , pattern recognition (psychology) , speech recognition
Linguistic steganalysis indicates the existence of steganographic content in suspicious text carriers. Precise linguistic steganalysis on suspicious carriers is critical for multimedia security. In this paper, we introduce a neural linguistic steganalysis approach based on multi-head self-attention. In the proposed approach, the words in a text are first mapped into a semantic space as hidden representations to better model their semantic features. We then use multi-head self-attention to model the interactions between words in the carrier. Finally, a softmax layer categorizes the input text as cover or stego. Extensive experiments validate the effectiveness of our approach.
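The pipeline the abstract describes — word embeddings, multi-head self-attention over the sequence, then a softmax layer over the two classes (cover vs. stego) — can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' implementation; all dimensions, weight matrices, and the mean-pooling step before the classifier are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: embeddings -> multi-head self-attention
# -> mean pooling -> softmax over {cover, stego}. Dimensions and
# random weights are assumptions, not the paper's configuration.

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo):
    """X: (seq_len, d_model); Wq/Wk/Wv: per-head projection lists."""
    d_head = Wq[0].shape[1]
    heads = []
    for h in range(len(Wq)):
        Q, K, V = X @ Wq[h], X @ Wk[h], X @ Wv[h]    # (seq, d_head) each
        attn = softmax(Q @ K.T / np.sqrt(d_head))    # (seq, seq) weights
        heads.append(attn @ V)                       # weighted values
    return np.concatenate(heads, axis=-1) @ Wo       # back to (seq, d_model)

seq_len, d_model, n_heads, n_classes = 10, 32, 4, 2
d_head = d_model // n_heads

X = rng.standard_normal((seq_len, d_model))          # stand-in embeddings
Wq = [rng.standard_normal((d_model, d_head)) for _ in range(n_heads)]
Wk = [rng.standard_normal((d_model, d_head)) for _ in range(n_heads)]
Wv = [rng.standard_normal((d_model, d_head)) for _ in range(n_heads)]
Wo = rng.standard_normal((d_model, d_model))
Wc = rng.standard_normal((d_model, n_classes))       # cover/stego classifier

H = multi_head_self_attention(X, Wq, Wk, Wv, Wo)     # contextual features
probs = softmax(H.mean(axis=0) @ Wc)                 # pooled -> class probs
print(probs.shape)
```

In a trained model the weights would of course be learned (and typically include residual connections, layer normalization, and positional information); the sketch only shows how the attention heads mix information across word positions before classification.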
