Open Access
Blind Users Accessing Their Training Images in Teachable Object Recognizers
Author(s) -
Jonggi Hong,
Jaina Gandhi,
Ernest Essuah Mensah,
Farnaz Zamiri Zeraati,
Ebrima Jarjue,
Kyungjun Lee,
Hernisa Kacorri
Publication year - 2022
Publication title -
PubMed Central
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/3517428.3544824
Subject(s) - computer science , artificial intelligence , computer vision , object recognition , speech recognition , multimedia
Teachable object recognizers address a very practical need for blind people - instance-level object recognition. However, they assume that users can visually inspect the photos they provide for training, a critical step that is inaccessible to those who are blind. In this work, we engineer data descriptors that address this challenge. They indicate in real time whether the object in the photo is cropped or too small, whether a hand is included, whether the photo is blurred, and how much the photos vary from one another. Our descriptors are built into an open-source testbed iOS app called MYCam. In a remote user study conducted in blind participants' (N = 12) homes, we show how the descriptors, even when error-prone, support experimentation and have a positive impact on the quality of the training set that can translate to model performance, though this gain is not uniform. Participants found the app simple to use, indicating that they could effectively train it and that the descriptors were useful. However, many found the training tedious, opening discussions around the need to balance information, time, and cognitive load.
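The paper does not publish the descriptor implementations here, but two of the checks it names - blur and object size - are commonly approximated with simple image statistics. A minimal sketch, assuming a grayscale photo as a NumPy array and a hypothetical bounding box for the detected object (the thresholds and function names below are illustrative, not MYCam's actual code):

```python
import numpy as np

# 3x3 Laplacian kernel; a low variance of the filtered image suggests blur.
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def laplacian_variance(gray):
    """Variance of the Laplacian response over the image interior."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def blur_descriptor(gray, threshold=100.0):
    """Illustrative blur check; the threshold is an assumed constant."""
    return "blurred" if laplacian_variance(gray) < threshold else "sharp"

def size_descriptor(box, image_shape, min_frac=0.1):
    """Flag the object as too small if its bounding box covers less
    than min_frac of the frame. `box` is (x, y, width, height)."""
    _, _, bw, bh = box
    h, w = image_shape
    return "too small" if (bw * bh) / (w * h) < min_frac else "ok"
```

A descriptor like this can run per captured frame, so feedback (e.g. "photo is blurred") can be spoken back to the user in real time, which is the interaction the abstract describes.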