
Visual-Spatial Perspective-Taking in Spatial Scenes and in American Sign Language
Author(s) - Kristen Secora, Karen Emmorey
Publication year - 2020
Publication title -
The Journal of Deaf Studies and Deaf Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.862
H-Index - 59
eISSN - 1465-7325
pISSN - 1081-4159
DOI - 10.1093/deafed/enaa006
Subject(s) - american sign language , psychology , mental rotation , comprehension , sign language , cognitive psychology , sentence , perspective (graphical) , cognition , perspective taking , linguistics , artificial intelligence , computer science , social psychology , philosophy , neuroscience , empathy
As spatial languages, sign languages rely on spatial cognitive processes that are not involved in spoken languages. Interlocutors view the signer's hands from different visual perspectives, requiring a mental transformation for successful communication about spatial scenes. It is unknown whether visual-spatial perspective-taking (VSPT) or mental rotation (MR) abilities support signers' comprehension of perspective-dependent American Sign Language (ASL) structures. A total of 33 deaf adult ASL signers completed tasks examining nonlinguistic VSPT ability, MR ability, general ASL proficiency (the ASL Sentence Reproduction Task [ASL-SRT]), and an ASL comprehension test involving perspective-dependent classifier constructions (the ASL Spatial Perspective Comprehension Test [ASPCT]). Scores on the linguistic (ASPCT) and VSPT tasks correlated positively with each other, and both correlated with MR ability; however, VSPT ability predicted linguistic perspective-taking better than MR ability did. ASL-SRT scores correlated with ASPCT accuracy (as both require ASL proficiency) but not with VSPT scores. Therefore, the ability to comprehend perspective-dependent ASL classifier constructions relates to ASL proficiency as well as to nonlinguistic VSPT and MR abilities.