A multimodal fusion system for people detection and tracking
Author(s) -
Yang Mau-Tsuen,
Wang Shih-Chun,
Lin Yong-Yuan
Publication year - 2005
Publication title -
International Journal of Imaging Systems and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.359
H-Index - 47
eISSN - 1098-1098
pISSN - 0899-9457
DOI - 10.1002/ima.20046
Subject(s) - computer science, artificial intelligence, computer vision, kalman filter, feature (linguistics), process (computing), heuristic, scalability, filter (signal processing), philosophy, linguistics, database, operating system
Because a people detection system that considers only a single feature tends to be unstable, many people detection systems have been proposed that extract multiple features simultaneously. These systems usually integrate features using a heuristic method based on the designers' observations and induction. Whenever the number of features to be considered changes, the designer must adjust the integration mechanism accordingly. To avoid this tedious process, we propose a multimodal fusion system that can detect and track people in a scalable, accurate, robust, and flexible manner. Each module considers a single feature, and all modules operate independently at the same time. A depth module is constructed to detect people based on the depth-from-stereo method, and a novel approach is proposed to extract people by analyzing the vertical projection in each layer. A color module that detects the human face and a motion module that detects human movement are also developed. The outputs from these individual modules are fused together and tracked over time using a Kalman filter. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 131–142, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20046
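
The fuse-then-track pipeline described in the abstract can be illustrated with a minimal Python sketch. This is not the authors' implementation: the per-frame module outputs, the gating distance, and the process/measurement noise levels below are illustrative assumptions. The sketch clusters (x, y) person candidates reported by independent modules (e.g. depth, color, motion), averages each cluster into one fused detection, and tracks it with a constant-velocity Kalman filter.

import numpy as np

def fuse(detections, gate=30.0):
    # Cluster (x, y) candidates from different modules that lie within
    # 'gate' pixels of each other; return the mean of each cluster.
    fused, used = [], [False] * len(detections)
    for i, p in enumerate(detections):
        if used[i]:
            continue
        cluster, used[i] = [p], True
        for j in range(i + 1, len(detections)):
            if not used[j] and np.linalg.norm(np.subtract(p, detections[j])) < gate:
                cluster.append(detections[j])
                used[j] = True
        fused.append(np.mean(cluster, axis=0))
    return fused

class KalmanTrack:
    # Constant-velocity Kalman filter over the state [x, y, vx, vy].
    def __init__(self, x, y, dt=1.0):
        self.x = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 100.0
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.1   # process noise (assumed)
        self.R = np.eye(2) * 5.0   # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

if __name__ == "__main__":
    # Hypothetical per-frame outputs of depth, color, and motion modules.
    frames = [
        [(100, 200), (103, 198), (99, 201)],   # all three modules respond
        [(110, 205), (108, 207)],              # one module misses the person
        [(121, 212), (119, 210), (120, 213)],
    ]
    track = None
    for t, dets in enumerate(frames, 1):
        person = fuse(dets)[0]
        if track is None:
            track = KalmanTrack(*person)
        track.predict()
        track.update(person)
        print(f"frame {t}: fused={np.round(person, 1)}, tracked={np.round(track.x[:2], 1)}")

In this toy setup the tracker smooths the fused positions over frames, which mirrors the paper's idea that modules may disagree or fail individually while the fused, Kalman-filtered estimate stays stable.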
