Open Access
A multi-scale investigation of the human communication system's response to visual disruption
Author(s) -
James P. Trujillo,
Stephen C. Levinson,
Judith Holler
Publication year - 2022
Publication title -
Royal Society Open Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.84
H-Index - 51
ISSN - 2054-5703
DOI - 10.1098/rsos.211489
Subject(s) - gesture , computer science , modality (human–computer interaction) , speech recognition , motion (physics) , kinematics , eye tracking , human–computer interaction , artificial intelligence , physics , classical mechanics
In human communication, when speech is disrupted, the visual channel (e.g. manual gestures) can compensate to ensure successful communication. Whether speech also compensates when the visual channel is disrupted is an open question, and one that bears significantly on the status of the gestural modality. We test whether gesture and speech are dynamically co-adapted to meet communicative needs. To this end, we parametrically reduce visibility during casual conversational interaction and measure the effects on speakers' communicative behaviour using motion tracking and manual annotation for kinematic and acoustic analyses. We found that visual signalling effort was flexibly adapted in response to a decrease in visual quality (especially motion energy, gesture rate, size, velocity and hold-time). Interestingly, speech was also affected: speech intensity increased in response to reduced visual quality (particularly in speech-gesture utterances, but independently of kinematics). Our findings highlight that multimodal communicative behaviours are flexibly adapted at multiple scales of measurement, and they call into question the notion that gesture plays a role inferior to that of speech.
