
Spatial attention model‐modulated bi‐directional long short‐term memory for unsupervised video summarisation
Author(s) -
Zhong Rui,
Xiao Diyang,
Dong Shi,
Hu Min
Publication year - 2021
Publication title -
Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/ell2.12111
Subject(s) - computer science, redundancy (engineering), salient, term (time), artificial intelligence, variety (cybernetics), reinforcement learning, frame (networking), focus (optics), long short term memory, unsupervised learning, machine learning, recurrent neural network, telecommunications, physics, quantum mechanics, artificial neural network, optics, operating system
Compared with surveillance video, user‐created videos contain more frequent shot changes, which lead to diversified backgrounds and a wide variety of content. High redundancy among keyframes is a critical issue for existing summarisation methods when dealing with user‐created videos. To address this issue, we designed a salient‐area‐size‐based spatial attention model (SAM), motivated by the observation that humans tend to focus on sizable, moving objects in videos. The SAM is then used as guidance to refine the frame‐wise soft selection probabilities produced by a bi‐directional long short‐term memory model. A reinforcement learning framework, trained with the deep deterministic policy gradient algorithm, is adopted for unsupervised training. Extensive experiments on the SumMe and TVSum datasets demonstrate that our method outperforms the state of the art in terms of F‐score.
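The core idea of the abstract — attention weights derived from per-frame salient-area sizes modulating the BiLSTM's frame-selection probabilities — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the BiLSTM itself is omitted (its raw scores are taken as given), and the function names, the normalisation scheme, and the mixing coefficient `alpha` are all hypothetical assumptions.

```python
import math

def spatial_attention(salient_areas):
    """Normalise per-frame salient-area sizes into attention weights.
    Hypothetical stand-in for the salient-area-size-based SAM:
    frames with larger salient regions receive larger weights."""
    total = sum(salient_areas)
    if total == 0:
        # no salient content anywhere: fall back to uniform weights
        return [1.0 / len(salient_areas)] * len(salient_areas)
    return [a / total for a in salient_areas]

def refine_probabilities(lstm_scores, attention, alpha=0.5):
    """Blend raw BiLSTM frame scores with spatial-attention weights.

    lstm_scores: raw per-frame scores from a (here omitted) BiLSTM,
    squashed to (0, 1) by a sigmoid. The attention weights re-scale
    them so frames with large salient areas get higher selection
    probability. alpha is an illustrative mixing coefficient, not a
    value from the paper.
    """
    n = len(lstm_scores)
    probs = [1.0 / (1.0 + math.exp(-s)) for s in lstm_scores]
    # n * w == 1 for a uniform attention distribution, so uniform
    # attention leaves the sigmoid probabilities unchanged
    return [min(1.0, p * (1.0 - alpha + alpha * n * w))
            for p, w in zip(probs, attention)]
```

Under this sketch, a keyframe selector would threshold or sample the refined probabilities; the unsupervised reward (e.g. diversity/representativeness, as is common in reinforcement-learning summarisers) would then drive the policy-gradient updates described in the abstract.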