Open Access
Design of a multimodal hearing system
Author(s) -
Bernd Tessendorf,
Matjaž Debevc,
Peter Derleth,
Manuela Feilner,
Franz Gravenhorst,
Daniel Roggen,
Thomas Stiefmeier,
Gerhard Tröster
Publication year - 2013
Publication title -
Computer Science and Information Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.244
H-Index - 24
eISSN - 2406-1018
pISSN - 1820-0214
DOI - 10.2298/csis120423012t
Subject(s) - computer science , human–computer interaction , modality (human–computer interaction) , speech recognition , latency (audio) , telecommunications , computer network
Hearing instruments (HIs) have become context-aware devices that analyze the acoustic environment in order to automatically adapt sound processing to the user's current hearing wish. However, in the same acoustic environment an HI user can have different hearing wishes, requiring different behaviors from the hearing instrument. In these cases, the audio signal alone contains too little contextual information to determine the user's hearing wish. Modalities additional to sound can provide the missing information to improve the adaptation. In this work, we review additional modalities to sound in HIs and present a prototype of a newly developed wireless multimodal hearing system. The platform takes into account additional sensor modalities such as the user's body movement and location. We characterize the system regarding runtime, latency, and reliability of the wireless connection, and point out possibilities arising from the novel approach.
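The core idea of the abstract — that one acoustic scene can map to several hearing wishes, and that a second modality can break the tie — can be sketched as a small decision function. This is a minimal illustration, not the paper's actual algorithm; the scene, activity, and program names below are assumptions introduced for the example.

```python
# Hypothetical sketch of multimodal program selection in a hearing instrument.
# The paper does not publish this logic; scene/activity/program labels are
# illustrative assumptions, not the authors' implementation.

def select_program(acoustic_scene: str, activity: str) -> str:
    """Choose a hearing-instrument program from two context modalities."""
    if acoustic_scene == "speech_in_noise":
        # Same acoustic scene, different hearing wishes: a walking user may
        # want omnidirectional awareness of the surroundings, while a seated
        # user more likely wants to follow a conversation in front of them.
        if activity == "walking":
            return "omnidirectional"
        return "directional_speech"
    if acoustic_scene == "music":
        return "music"
    return "default"

# The audio modality alone cannot distinguish these two cases:
print(select_program("speech_in_noise", "walking"))  # omnidirectional
print(select_program("speech_in_noise", "sitting"))  # directional_speech
```

The point of the sketch is only that the body-movement modality changes the output while the acoustic input stays fixed, which is exactly the ambiguity the abstract says sound alone cannot resolve.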
