Open Access
Voice over: Audio-visual congruency and content recall in the gallery setting
Author(s) -
Merle T. Fairhurst,
Minnie Scott,
Ophélia Deroy
Publication year - 2017
Publication title -
PLOS ONE
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.99
H-Index - 332
ISSN - 1932-6203
DOI - 10.1371/journal.pone.0177622
Subject(s) - crossmodal, recall, portrait, cognitive psychology, psychology, perception, sensory cue, narrative, communication, visual perception, computer science, linguistics, art, visual arts, neuroscience, philosophy
Experimental research has shown that pairs of stimuli which are congruent and assumed to ‘go together’ are recalled more effectively than items presented in isolation. Does this multisensory memory benefit persist when the stimuli are richer and longer, and encountered in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues.
