Open Access
Robust audio retrieval method based on anti‐noise fingerprinting and segmental matching
Author(s) -
Zhang Xueshuai,
Zhan Ge,
Wang Wenchao,
Zhang Pengyuan,
Yan Yonghong
Publication year - 2020
Publication title -
Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
ISSN - 1350-911X
DOI - 10.1049/el.2019.3086
Subject(s) - computer science, robustness, fingerprint (computing), window (computing), speech recognition, artificial intelligence, pattern recognition, template matching, audio signal processing, matching (statistics), precision and recall, audio signal, fingerprint recognition, pattern matching, computer vision, speech coding
For classical Philips audio retrieval, the short duration and long silent periods of inserted template audio pose a major challenge to robustness in real environments. In this study, a novel audio retrieval method is proposed to address this challenge by modifying both the fingerprinting stage and the matching stage. During fingerprint extraction, silent segments are first detected; a specific fingerprint is then assigned to these segments so that they can be distinguished. In the matching stage, a window-by-window search is performed to locate the inserted audio templates, and each search window is divided into several segments for precise comparison between the template audio and the test audio. A test dataset is constructed by randomly setting the duration of the inserted template audio to between 3 and 5 s. Experimental results show that both mean average precision and recall are significantly improved by the proposed method.
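The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, assuming classical Philips-style 32-bit sub-fingerprints, a frame-energy silence detector, and a per-segment bit-error-rate test; the reserved silence code, energy threshold, segment count, and error threshold are illustrative choices and are not taken from the letter.

import numpy as np

SILENCE_FP = 0x00000000          # hypothetical reserved code for silent frames

def extract_fingerprints(frames, energy_threshold=1e-4):
    # Philips-style 32-bit sub-fingerprints; silent frames are detected by
    # frame energy and mapped to the reserved code (illustrative scheme).
    fps = []
    for frame in frames:
        if np.mean(frame ** 2) < energy_threshold:
            fps.append(SILENCE_FP)
            continue
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        bands = np.array_split(spectrum, 33)   # 33 bands -> 32 difference bits
        energies = np.array([band.sum() for band in bands])
        bits = (np.diff(energies) > 0).astype(int)
        fps.append(sum(int(b) << i for i, b in enumerate(bits)))
    return np.array(fps, dtype=np.uint32)

def segmental_match(template_fps, test_fps, n_segments=4, max_ber=0.35):
    # Window-by-window search: each candidate window is split into segments,
    # and the bit-error rate must stay below max_ber in every segment.
    w = len(template_fps)
    hits = []
    for start in range(len(test_fps) - w + 1):
        window = test_fps[start:start + w]
        accepted = True
        for t_seg, w_seg in zip(np.array_split(template_fps, n_segments),
                                np.array_split(window, n_segments)):
            diff_bits = np.unpackbits(
                np.bitwise_xor(t_seg, w_seg).view(np.uint8)).sum()
            if diff_bits / (32 * len(t_seg)) > max_ber:
                accepted = False
                break
        if accepted:
            hits.append(start)   # frame index of a detected template insertion
    return hits

In this sketch the per-segment test is what distinguishes segmental matching from a single whole-window bit-error-rate comparison: a window is accepted only if every segment matches, which limits the influence of locally corrupted or silent regions.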
