Open Access
Multimodal Interaction Recognition Mechanism by Using Midas Featured By Data-Level and Decision-Level Fusion
Author(s) -
Muhammad Habib,
Noor ul Qamar
Publication year - 2017
Publication title -
Lahore Garrison University Research Journal of Computer Science and Information Technology
Language(s) - English
Resource type - Journals
eISSN - 2521-0122
pISSN - 2519-7991
DOI - 10.54692/lgurjcsit.2017.010227
Subject(s) - computer science , sensor fusion , gesture , human–computer interaction , fusion , provisioning , fusion mechanism , artificial intelligence , operating system , philosophy , lipid bilayer fusion , linguistics
Natural User Interfaces (NUIs) based on gestures are an alternative to traditional input devices on multi-touch panels. Growth in sensor technology has increased the use of multiple sensors to address various machine monitoring and compatibility issues, and research on data-level fusion models requires more focus on the fusion of multiple degradation-based sensor data. Midas, a novel declarative language for expressing multimodal interaction patterns, lets developers describe the interaction patterns they require through a multimodal interaction mechanism. As a base interface, the language addresses complexity issues such as inversion of control and intermediary state by means of data fusion, data processing, and data selection, providing high-level programming abstractions.
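
To illustrate the distinction between the two fusion strategies named in the title, the following is a minimal Python sketch, not the paper's Midas language or implementation: it contrasts data-level fusion (merging raw sensor features before a single classification) with decision-level fusion (classifying each modality separately and combining the decisions). The sensor names, feature values, and the toy classifier are hypothetical and for illustration only.

# Illustrative sketch (not Midas): data-level vs. decision-level fusion
# for a two-modality gesture recognizer. All names and thresholds are hypothetical.
from statistics import mean
from collections import Counter

def classify(features):
    # Toy stand-in for a per-modality gesture classifier: label by mean feature value.
    return "swipe" if mean(features) > 0.5 else "tap"

def data_level_fusion(touch, accel):
    # Data-level: merge the raw feature vectors, then classify the fused vector once.
    fused = touch + accel
    return classify(fused)

def decision_level_fusion(touch, accel):
    # Decision-level: classify each modality separately, then vote on the labels.
    votes = [classify(touch), classify(accel)]
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    touch_features = [0.7, 0.8, 0.6]   # hypothetical multi-touch features
    accel_features = [0.2, 0.3, 0.4]   # hypothetical accelerometer features
    print("data-level:    ", data_level_fusion(touch_features, accel_features))
    print("decision-level:", decision_level_fusion(touch_features, accel_features))

The same inputs can yield different labels under the two strategies, which is the practical reason a language such as Midas exposes fusion level as an explicit, declarative choice rather than hard-coding it.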
