Automatic detection of Alzheimer’s disease and mild cognitive impairment from spontaneous speech collected during tablet‐based interviews: A preliminary result
Author(s) -
Shinkawa Kaoru,
Kosugi Akihiro,
Kobayashi Masatomo,
Nishimura Masafumi,
Nemoto Miyuki,
Tsukada Eriko,
Ota Miho,
Nemoto Kiyotaka,
Arai Tetsuaki,
Yamada Yasunori
Publication year - 2020
Publication title -
Alzheimer's & Dementia
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.713
H-Index - 118
eISSN - 1552-5279
pISSN - 1552-5260
DOI - 10.1002/alz.042900
Background: As the world's elderly population grows, health monitoring technologies that can automatically detect subtle changes resulting from Alzheimer's disease (AD) have become increasingly important. Speech is one of the most promising clues in this respect, owing to the spread of voice‐based interaction systems on smartphones and tablets. Indeed, previous studies have succeeded in quantifying language dysfunction and identifying AD and mild cognitive impairment (MCI) from speech data collected during neuropsychological tests administered by clinicians. Automating such assessments with computer devices would extend opportunities for assessment and help with the early detection of AD. In particular, if language dysfunction related to AD can be detected from various types of speech data (e.g., question answering and daily conversations), this would widen the scope of application and help improve the currently low worldwide diagnosis coverage.
Method: We developed a tablet‐based application and collected spontaneous speech data during interview tasks from 106 Japanese seniors, comprising 48 healthy controls (HC), 33 with MCI, and 25 with AD. Participants answered nine questions relating to their current condition, yesterday's dinner, games played as a child, and future travel plans. For comparison, we also collected speech data during neuropsychological tests (e.g., verbal fluency and picture description tasks) using the tablet. We extracted vocal and prosodic features from both sets of speech data and then built binary classification models for differentiating MCI or AD from HC using a support vector machine with a feature selection method. We evaluated the models by leave‐one‐subject‐out cross‐validation.
Result: The models using speech data from the neuropsychological tests achieved accuracies of 86.3% for HC vs. AD and 80.2% for HC vs. MCI. The models using spontaneous speech data from the interview tasks achieved comparable accuracies: 87.7% for HC vs. AD and 81.6% for HC vs. MCI.
Conclusion: We demonstrated the feasibility of tablet‐based automatic assessments for detecting patients with MCI and AD. In addition, our results suggest that speech data not only from neuropsychological tasks but also from answers to everyday questions may contain information useful for the early detection of AD.
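The evaluation pipeline described in the Method section — feature selection, an SVM classifier, and leave‐one‐subject‐out cross‐validation — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature matrix here is synthetic placeholder data, and the specific selector (`SelectKBest` with an F-test) and linear kernel are assumptions standing in for whichever feature selection method and SVM configuration the study actually used. Real vocal and prosodic features (e.g., pitch, pause, and speech-rate statistics) would be extracted from the recordings.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic stand-in for the HC vs. AD setting (48 HC + 25 AD = 73 subjects).
rng = np.random.default_rng(0)
n_subjects, n_features = 73, 40
X = rng.normal(size=(n_subjects, n_features))  # placeholder vocal/prosodic features
y = np.array([0] * 48 + [1] * 25)              # 0 = HC, 1 = AD
groups = np.arange(n_subjects)                 # one feature vector per subject

# Feature selection + SVM inside one pipeline, so selection is refit
# on each training fold and never sees the held-out subject.
model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),  # assumed selection method
    ("svm", SVC(kernel="linear")),             # assumed kernel
])

# Leave-one-subject-out: each fold holds out all data from one subject.
scores = cross_val_score(model, X, y, groups=groups, cv=LeaveOneGroupOut())
print(len(scores), round(float(scores.mean()), 3))
```

With one sample per subject, `LeaveOneGroupOut` reduces to leave-one-out, yielding 73 folds; wrapping the selector inside the pipeline is what keeps the per-fold evaluation unbiased. On this random data the mean accuracy is near chance, as expected.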