Exploring scoring methods for research studies: Accuracy and variability of visual and automated sleep scoring
Author(s) -
Berthomier Christian,
Muto Vincenzo,
Schmidt Christina,
Vandewalle Gilles,
Jaspar Mathieu,
Devillers Jonathan,
Gaggioni Giulia,
Chellappa Sarah L.,
Meyer Christelle,
Phillips Christophe,
Salmon Eric,
Berthomier Pierre,
Prado Jacques,
Benoit Odile,
Bouet Romain,
Brandewinder Marie,
Mattout Jérémie,
Maquet Pierre
Publication year - 2020
Publication title -
journal of sleep research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.297
H-Index - 117
eISSN - 1365-2869
pISSN - 0962-1105
DOI - 10.1111/jsr.12994
Subject(s) - kappa, pairwise comparison, computer science, Cohen's kappa, artificial intelligence, reliability (semiconductor), machine learning, data mining, statistics, mathematics, power (physics), physics, geometry, quantum mechanics
Sleep studies face new challenges in terms of data, objectives and metrics. This requires reappraising the adequacy of existing analysis methods, including scoring methods. Visual and automatic sleep scoring of healthy individuals were compared in terms of reliability (i.e., accuracy and stability) to find a scoring method capable of giving access to the actual data variability without adding exogenous variability. A first dataset (DS1, four recordings) scored by six experts plus an autoscoring algorithm was used to characterize inter-scoring variability. A second dataset (DS2, 88 recordings), scored a few weeks later, was used to explore intra-expert variability. Percentage agreements and Conger's kappa were derived from epoch-by-epoch comparisons on pairwise and consensus scorings. On DS1, the proportion of epochs in agreement decreased as the number of experts increased, from 86% (pairwise comparisons) to 69% (all six experts). Adding autoscoring to the visual scorings changed the kappa value from 0.81 to 0.79. Agreement between the expert consensus and autoscoring was 93%. On DS2, the hypothesis of intra-expert variability was supported by a systematic decrease, from one dataset to the other, in the kappa scores between autoscoring (used as reference) and each individual expert (from 0.75 to 0.70). Although visual scoring induces inter- and intra-expert variability, autoscoring methods can cope with intra-scorer variability, making them a sensible option to reduce exogenous variability and give access to the endogenous variability in the data.
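
For readers who want to reproduce the epoch-by-epoch metrics named in the abstract, the sketch below shows one way to compute pairwise percentage agreement and Conger's multi-rater kappa from a matrix of stage labels. It is an illustration only, not the authors' pipeline: the array shape (scorers x epochs), the integer stage coding and the function names are assumptions, and the random hypnograms exist solely to make the example runnable.

```python
# Minimal sketch (not the authors' code) of epoch-by-epoch agreement metrics:
# pairwise percentage agreement and Conger's (1980) multi-rater kappa.
# Assumed input: array of shape (n_scorers, n_epochs) with integer stage labels
# (e.g. 0=Wake, 1=N1, 2=N2, 3=N3, 4=REM).
import numpy as np
from itertools import combinations

def pairwise_agreement(scores: np.ndarray) -> float:
    """Proportion of epochs on which two scorers assign the same stage,
    averaged over all scorer pairs."""
    pairs = combinations(range(scores.shape[0]), 2)
    return float(np.mean([np.mean(scores[i] == scores[j]) for i, j in pairs]))

def conger_kappa(scores: np.ndarray) -> float:
    """Conger's multi-rater extension of Cohen's kappa.

    Observed agreement is the mean pairwise agreement; expected agreement
    averages, over scorer pairs, the chance agreement implied by each
    scorer's own stage marginals."""
    n_raters, _ = scores.shape
    stages = np.unique(scores)
    # p[j, k] = proportion of epochs that scorer j labelled with stage k
    p = np.array([[np.mean(scores[j] == k) for k in stages]
                  for j in range(n_raters)])
    p_obs = pairwise_agreement(scores)
    p_exp = np.mean([np.sum(p[i] * p[j])
                     for i, j in combinations(range(n_raters), 2)])
    return float((p_obs - p_exp) / (1.0 - p_exp))

# Toy example: six visual scorers plus an autoscoring algorithm on one recording.
rng = np.random.default_rng(0)
hypnograms = rng.integers(0, 5, size=(7, 900))  # 7 scorers, 900 thirty-second epochs
print(f"pairwise agreement: {pairwise_agreement(hypnograms):.2f}")
print(f"Conger's kappa:     {conger_kappa(hypnograms):.2f}")
```

With random labels the kappa is close to zero by construction; on real hypnograms one would expect values in the range reported above (roughly 0.7-0.8).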
