Open Access
Dual Transformers with Latent Amplification for Multivariate Time Series Anomaly Detection
Author(s) - Yeji Choi, Kwanghoon Sohn, Ig-Jae Kim
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3594473
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Anomaly detection in multivariate time series is crucial for applications such as industrial monitoring, cybersecurity, and healthcare. Transformer-based reconstruction methods have recently shown strong performance but often suffer from overgeneralization, in which anomalies are reconstructed too accurately, reducing the separability between normal and abnormal patterns. Prior work has attempted to mitigate this with two-stage frameworks or external memory modules that explicitly store normal patterns and amplify deviations from them. However, such approaches increase model complexity and incur additional computational overhead. In this paper, we propose Dual Transformers with Latent Amplification (DT-LA), a novel framework designed to mitigate overgeneralization within a unified architecture. The core idea of DT-LA is to enhance anomaly separability by jointly leveraging reconstructions in both the input and latent spaces, rather than merely improving reconstruction fidelity. First, we propose the Modified Reverse Huber (MRH) loss, which amplifies meaningful deviations in the latent space by applying inverse scaling; this allows the model to retain informative discrepancies that would otherwise be suppressed, improving its ability to detect subtle anomalies. Second, we incorporate sparse self-attention with entropy-based regularization to capture essential inter-sensor relationships while suppressing redundancy. Third, we refine the anomaly scoring process with a scaled-softmax function that balances relative and absolute deviations to reduce softmax-induced bias. Extensive experiments on four benchmark datasets (SMAP, MSL, PSM, and SMD) show that DT-LA achieves state-of-the-art performance, with F1-scores of 97.02% on SMAP and 98.42% on PSM, highlighting its robustness and practical competitiveness as a single-stage framework.
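The abstract describes the MRH loss only at a high level. As a rough illustration, a standard reverse Huber (berHu) loss applied to latent residuals, combined with an inverse-scaling weight, might look like the sketch below; the threshold `c`, the exact scaling rule, and all identifiers are assumptions for illustration, not the paper's definitions.

```python
import torch

def mrh_loss(z_rec: torch.Tensor, z: torch.Tensor, c: float = 0.2) -> torch.Tensor:
    """Hypothetical Modified Reverse Huber (berHu) loss on latent residuals.

    Standard berHu:  L(r) = |r|              if |r| <= c
                            (r^2 + c^2)/(2c) if |r| >  c
    The 'inverse scaling' named in the abstract is approximated here by
    weighting each residual by the inverse of its (detached) magnitude,
    so subtle deviations are amplified rather than suppressed.
    """
    r = (z_rec - z).abs()
    berhu = torch.where(r <= c, r, (r ** 2 + c ** 2) / (2 * c))
    # Inverse-scaling weights (assumption), normalized to keep the
    # overall loss magnitude comparable to plain berHu.
    w = 1.0 / (r.detach() + 1e-6)
    w = w / w.mean()
    return (w * berhu).mean()
```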
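The entropy-based regularization on the sparse self-attention is likewise only named, not specified. One common reading is to add the mean row-wise Shannon entropy of the attention weights to the training loss, which pushes each query to attend to few keys; the sketch below assumes that reading.

```python
import torch

def attention_entropy(attn: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Mean Shannon entropy of attention rows (lower = sparser).

    attn: (batch, heads, query, key) weights, each row summing to 1.
    Adding this term (times a small coefficient) to the loss encourages
    each query to focus on a few keys, i.e. a sparser inter-sensor graph.
    """
    ent = -(attn * (attn + eps).log()).sum(dim=-1)  # (batch, heads, query)
    return ent.mean()

# Hypothetical usage inside a training step, with q, k of shape (B, H, N, d):
# attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
# loss = recon_loss + lambda_ent * attention_entropy(attn)
```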
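Finally, the scaled-softmax scoring step can be read as combining a softmax over per-sensor errors (relative deviation) with the raw error magnitude (absolute deviation). The sketch below is one plausible instantiation; the temperature and the combination rule are assumptions rather than the paper's formulation.

```python
import torch

def scaled_softmax_score(errors: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Hypothetical scaled-softmax anomaly score.

    errors: (time, sensors) reconstruction errors at each timestep.
    A plain softmax over sensors captures only relative deviation and can
    inflate scores even when all errors are small; multiplying the softmax
    weights back by the absolute errors keeps both signals in the score.
    """
    rel = torch.softmax(errors / tau, dim=-1)  # relative deviation per sensor
    return (rel * errors).sum(dim=-1)          # one score per timestep
```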
