
Development of gesture‐based human–computer interaction applications by fusion of depth and colour video streams
Author(s) -
Dondi Piercarlo,
Lombardi Luca,
Porta Marco
Publication year - 2014
Publication title -
IET Computer Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2013.0323
Subject(s) - computer science, gesture, gesture recognition, usability, robustness (evolution), computer vision, artificial intelligence, Kalman filter, flexibility (engineering), human–computer interaction, biochemistry, chemistry, statistics, mathematics, gene
Hand detection and gesture recognition are two of the most studied topics in human–computer interaction (HCI). The increasing availability of sensors able to provide real‐time depth measurements, such as time‐of‐flight cameras or the more recent Kinect, has helped researchers find increasingly efficient solutions to these problems. With the main aim of implementing effective gesture‐based interaction systems, this study presents an approach to hand detection and tracking that exploits two different video streams: the depth stream and the colour stream. Both hand and gesture recognition are based only on geometrical and colour constraints, and no learning phase is needed. The use of a Kalman filter to track hands guarantees system robustness even when many people are present in the scene. The entire procedure is designed to maintain a low computational cost and is optimised to execute HCI tasks efficiently. As use cases, two common applications are described: a virtual keyboard and a three‐dimensional object manipulation virtual environment. These applications have been tested with a representative sample of non‐trained users to assess the usability and flexibility of the system.
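The abstract mentions a Kalman filter for keeping track of hands even when multiple people are in the scene. Below is a minimal sketch of how such a tracker could look for a hand's 2D image position, using a constant-velocity state model. The state layout, noise covariances, and class name are illustrative assumptions, not the authors' actual design or parameters.

```python
import numpy as np

class HandKalmanTracker:
    """Constant-velocity Kalman filter for a hand's (x, y) image position.

    A sketch only: the paper does not specify its state model or noise
    values; all numeric parameters here are assumed for illustration.
    """

    def __init__(self, x0, y0, dt=1.0):
        # State vector: [x, y, vx, vy]
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)
        # State transition: positions advance by velocity * dt
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        # Measurement model: we observe only (x, y)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.P = np.eye(4) * 100.0   # initial state uncertainty (assumed)
        self.Q = np.eye(4) * 0.01    # process noise (assumed)
        self.R = np.eye(2) * 4.0     # measurement noise (assumed)

    def predict(self):
        """Project the state forward one frame; returns predicted (x, y)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        """Correct the prediction with a measured hand position."""
        z = np.array([zx, zy], dtype=float)
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R   # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In a multi-person scene, the predicted position from `predict()` can be matched against new hand detections (e.g. nearest-neighbour gating) so each tracked hand keeps its identity across frames, which is the kind of robustness the abstract attributes to the Kalman filter.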