What happens to our representation of identity as familiar faces age? Evidence from priming and identity aftereffects
Author(s) - Sarah Laurence, Kristen A. Baker, Valentina M. Proietti, Catherine J. Mondloch
Publication year - 2022
Publication title - British Journal of Psychology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.536
H-Index - 92
eISSN - 2044-8295
pISSN - 0007-1269
DOI - 10.1111/bjop.12560
Subject(s) - psychology , cognitive psychology , developmental psychology , stimulus (psychology) , priming , repetition priming , cognition , neuroscience
Matching identity in images of unfamiliar faces is error-prone, but we can easily recognize highly variable images of familiar faces – even images taken decades apart. Recent theoretical developments based on computational modelling can account for how we recognize extremely variable instances of the same identity. We provide complementary behavioural data by examining older adults’ representation of older celebrities who were also famous when young. In Experiment 1, participants completed a long‐lag repetition priming task in which primes and test stimuli were the same age or different ages. In Experiment 2, participants completed an identity aftereffects task in which the adapting stimulus was an old or a young photograph of one celebrity and the test stimulus was a morph between the adapting identity and a different celebrity; the adapting stimulus was the same age as the test stimulus on some trials (e.g., both old) or a different age (e.g., adapter young, test stimulus old). The magnitudes of priming and of identity aftereffects were not influenced by whether the prime or adapting stimulus was the same age as the test face or a different age. Collectively, our findings suggest that humans have one common mental representation for a familiar face (e.g., Paul McCartney) that incorporates visual changes across decades, rather than multiple age‐specific representations. These findings make novel predictions for state‐of‐the‐art algorithms (e.g., Deep Convolutional Neural Networks).
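The closing claim about Deep Convolutional Neural Networks can be illustrated with a minimal sketch of how the prediction might be tested on a face-embedding model. The sketch below is not from the paper: the embeddings are random placeholders standing in for DCNN outputs, and the identity and age labels are hypothetical. An age-general representation would predict that same-identity, cross-age similarity exceeds different-identity similarity.

```python
# Hypothetical sketch: compare embedding similarity for the same identity
# across ages vs. different identities. Random vectors stand in for the
# outputs of a face-recognition DCNN; no real model or images are used.
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder 128-d embeddings for two celebrities photographed young and old.
emb = {
    ("celeb_A", "young"): rng.normal(size=128),
    ("celeb_A", "old"):   rng.normal(size=128),
    ("celeb_B", "young"): rng.normal(size=128),
    ("celeb_B", "old"):   rng.normal(size=128),
}

# Prediction of a single age-general identity representation:
# within-identity, cross-age similarity > between-identity similarity.
within_cross_age = cosine(emb[("celeb_A", "young")], emb[("celeb_A", "old")])
between_identity = cosine(emb[("celeb_A", "old")], emb[("celeb_B", "old")])

print(f"Same identity, different age:  {within_cross_age:.3f}")
print(f"Different identity, same age:  {between_identity:.3f}")
```

With real DCNN embeddings substituted for the placeholders, the same comparison would indicate whether the model, like the human participants, tolerates decades of age-related change within an identity.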