Poor agreement between the automated risk assessment of a smartphone application for skin cancer detection and the rating by dermatologists
Author(s) -
Chung Y.,
van der Sande A.A.J.,
de Roos K.P.,
Bekkenk M.W.,
de Haas E.R.M.,
Kelleners-Smeets N.W.J.,
Kukutsch N.A.
Publication year - 2020
Publication title -
Journal of the European Academy of Dermatology and Venereology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.655
H-Index - 107
eISSN - 1468-3083
pISSN - 0926-9959
DOI - 10.1111/jdv.15873
Subject(s) - medicine , skin cancer , dermatology , risk assessment , kappa , medical assessment , cancer detection , cancer
Abstract -
Background: Several smartphone applications (apps) with an automated risk assessment claim to be able to detect skin cancer at an early stage. Studies evaluating these apps have mostly shown poor performance; however, all were conducted in patients, with lesions mainly selected by a specialist.
Objectives: To investigate the performance of the automated risk assessment of an app by comparing its assessment with that of a dermatologist for lesions selected by the participants themselves.
Methods: Participants of a National Skin Cancer Day were enrolled in a multicentre study. Skin lesions indicated by the participants were analysed by the automated risk assessment of the app prior to blinded rating by the dermatologist. The ratings of the automated risk assessment were compared with the assessment and diagnosis of the dermatologist. Owing to the setting of the Skin Cancer Day, lesions were not verified by histopathology.
Results: We included 125 participants (199 lesions). The app was unable to analyse 90 lesions (45%), among which were nine basal cell carcinomas (BCC), four atypical naevi and one lentigo maligna. Thirty of the lesions rated high risk by the app (67%) and 21 rated medium risk (70%) were diagnosed as benign naevi or seborrhoeic keratoses. The interobserver agreement between the automated risk assessment and the dermatologist was poor (weighted kappa = 0.02; 95% CI −0.08 to 0.12; P = 0.74).
Conclusions: The performance of the automated risk assessment was poor. Further investigation of the diagnostic accuracy in real-life situations is needed to provide consumers with reliable information about this healthcare application.
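For readers unfamiliar with the agreement statistic reported above, the following is a minimal Python sketch of how a weighted Cohen's kappa between two raters on an ordinal scale can be computed with scikit-learn. The ratings below are hypothetical placeholders, not the study's data; a value near 0, as in the study, indicates chance-level agreement.

    # Minimal sketch: weighted Cohen's kappa for two raters
    # (the app's automated risk rating vs. the dermatologist's).
    # The ratings below are hypothetical, NOT the study's data.
    from sklearn.metrics import cohen_kappa_score

    # Ordinal encoding: 0 = low risk, 1 = medium risk, 2 = high risk
    app_rating  = [2, 1, 0, 2, 1, 0, 2, 0, 1, 2]
    derm_rating = [0, 0, 0, 1, 0, 0, 0, 0, 2, 0]

    # 'linear' weights penalise disagreements by their ordinal distance,
    # so high-vs-low disagreements count more than high-vs-medium ones.
    kappa = cohen_kappa_score(app_rating, derm_rating, weights="linear")
    print(f"weighted kappa = {kappa:.2f}")  # near 0 = chance-level agreement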
