Open Access
Let's Read: Designing a smart display application to support CODAs when learning spoken language
Author(s) -
Katie Rodeghiero,
Yingying Yuki Chen,
Annika M. Hettmann,
Franceli L. Cibrian
Publication year - 2021
Publication title -
Avances en Interacción Humano Computadora
Language(s) - English
Resource type - Journals
ISSN - 2594-2352
DOI - 10.47756/aihc.y6i1.80
Subject(s) - spoken language, computer science, heuristic evaluation, psychology, multimedia, artificial intelligence, world wide web
Hearing children of Deaf adults (CODAs) face many challenges, including difficulty learning spoken languages, social judgment, and greater responsibilities at home. In this paper, we present a proposal for a smart display application called Let's Read that aims to support CODAs in learning spoken language. We conducted a qualitative analysis of English-language online community content to develop the first version of the prototype. We then conducted a heuristic evaluation to improve the proposed prototype. As future work, we plan to use this prototype in participatory design sessions with Deaf adults and CODAs to evaluate the potential of Let's Read to support spoken language learning in mixed-ability family dynamics.