Decoders' processing of emotional facial expression — a top‐down or bottom‐up mechanism?
Author(s) -
Wallbott, Harald G.,
Ricci Bitti, Pio
Publication year - 1993
Publication title -
European Journal of Social Psychology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.609
H-Index - 111
eISSN - 1099-0992
pISSN - 0046-2772
DOI - 10.1002/ejsp.2420230408
Subject(s) - facial expression , psychology , schema (genetic algorithms) , emotional expression , decoding methods , expression (computer science) , action (physics) , cognitive psychology , context (archaeology) , face (sociological concept) , top down and bottom up design , action selection , social psychology , communication , computer science , linguistics , neuroscience , perception , information retrieval , telecommunications , paleontology , philosophy , physics , software engineering , quantum mechanics , biology , programming language
Abstract To date, little evidence is available as to how emotional facial expression is decoded, specifically whether a bottom‐up (data‐driven) or a top‐down (schema‐driven) approach better explains the decoding of emotions from facial expression. A study is reported (conducted with N = 20 subjects each in Germany and Italy) in which decoders judged emotions from photographs of facial expressions. Stimuli comprised a selection of photographs depicting both single muscular movements (action units) in an otherwise neutral face and combinations of such action units. Results indicate that the meaning of action units often changes with context; only a few single action units transmit a specific emotional meaning, which they retain when presented in context. The results are replicated to a large degree across the decoder samples in both nations, implying fundamental mechanisms of emotion decoding.