Varying Microphone Patterns for Meeting Speech Segmentation Using Spatial Audio Cues
Author(s) -
Eva Cheng,
Ian Burnett,
Christian Ritz
Publication year - 2006
Publication title - Lecture Notes in Computer Science
Language(s) - English
Resource type - Book series
SCImago Journal Rank - 0.249
H-Index - 400
eISSN - 1611-3349
pISSN - 0302-9743
ISBN - 3-540-48766-2
DOI - 10.1007/11922162_26
Subject(s) - microphone, speech recognition, computer science, segmentation, microphone array, sensory cue, audio visual, artificial intelligence, multimedia, telecommunications, sound pressure
Meetings, common to many business environments, generally involve stationary participants; participant location information can therefore be used to segment meeting speech recordings into each speaker's 'turn'. The authors' previous work proposed using spatial audio cues to represent speaker locations. This paper studies the validity of spatial audio cues for meeting speech segmentation by investigating the effect of varying the microphone pattern on those cues. Experiments conducted on recordings made in a real acoustic environment indicate that the relationship between speaker location and spatial audio cues strongly depends on the microphone pattern.
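The abstract does not specify which spatial cues the authors extract, so the following is only a minimal sketch of the general idea, assuming a two-channel recording and the standard inter-channel level difference (ICLD) and inter-channel time difference (ICTD) cues, with GCC-PHAT as an illustrative delay estimator; the function name and frame parameters are hypothetical, not taken from the paper.

```python
import numpy as np

def spatial_cues(left, right, frame_len=1024, hop=512):
    """Per-frame ICLD (dB) and ICTD (samples) from a two-channel recording.

    Illustrative sketch only: frame sizes and the GCC-PHAT delay
    estimator are assumptions, not the paper's stated method.
    """
    cues = []
    for start in range(0, len(left) - frame_len, hop):
        l = left[start:start + frame_len]
        r = right[start:start + frame_len]
        # Inter-channel level difference in dB (small floor avoids log of 0).
        icld = 10.0 * np.log10((np.sum(l ** 2) + 1e-12) /
                               (np.sum(r ** 2) + 1e-12))
        # Inter-channel time difference via GCC-PHAT: whiten the cross
        # spectrum so only phase (i.e. delay) information remains.
        L = np.fft.rfft(l, 2 * frame_len)
        R = np.fft.rfft(r, 2 * frame_len)
        cross = L * np.conj(R)
        cc = np.fft.irfft(cross / (np.abs(cross) + 1e-12))
        # Reorder so lag 0 sits at index frame_len, spanning +/- frame_len lags.
        cc = np.concatenate((cc[-frame_len:], cc[:frame_len]))
        ictd = int(np.argmax(cc)) - frame_len
        cues.append((icld, ictd))
    return np.array(cues)
```

Under the stationary-participant assumption, each speaker's (ICLD, ICTD) pair stays roughly constant within a turn, so segmentation reduces to detecting changes or clusters in this cue trajectory over time; the paper's finding that the cue-to-location relationship depends strongly on the microphone pattern suggests that, for example, a directional pattern would reshape the level-difference cue relative to an omnidirectional one.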
