Open Access
Labeling Poststorm Coastal Imagery for Machine Learning: Measurement of Interrater Agreement
Author(s) -
Goldstein Evan B.,
Buscombe Daniel,
Lazarus Eli D.,
Mohanty Somya D.,
Rafique Shah Nafis,
Anarde Katherine A.,
Ashton Andrew D.,
Beuzen Tomas,
Castagno Katherine A.,
Cohn Nicholas,
Conlin Matthew P.,
Ellenson Ashley,
Gillen Megan,
Hovenga Paige A.,
Over JinSi R.,
Palermo Rose V.,
Ratliff Katherine M.,
Reeves Ian R. B.,
Sanborn Lily H.,
Straub Jessamin A.,
Taylor Luke A.,
Wallace Elizabeth J.,
Warrick Jonathan,
Wernette Phillipe,
Williams Hannah E.
Publication year - 2021
Publication title -
Earth and Space Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.843
H-Index - 23
ISSN - 2333-5084
DOI - 10.1029/2021ea001896
Subject(s) - computer science, inter-rater reliability, set (abstract data type), artificial intelligence, process (computing), data set, focus (optics), training set, machine learning, supervised learning, statistics, artificial neural network, mathematics, rating scale, physics, optics, programming language, operating system
Classifying images using supervised machine learning (ML) relies on labeled training data—classes or text descriptions, for example, associated with each image. Data‐driven models are only as good as the data used for training, which points to the importance of high‐quality labeled data for developing an ML model with predictive skill. Labeling data is typically a time‐consuming, manual process. Here, we investigate the process of labeling data, with a specific focus on coastal aerial imagery captured in the wake of hurricanes that affected the Atlantic and Gulf Coasts of the United States. The imagery data set is a rich observational record of storm impacts and coastal change, but the imagery requires labeling to render that information accessible. We created an online interface that served labelers a stream of images and a fixed set of questions. A total of 1,600 images were labeled by at least two and as many as seven coastal scientists. We used the resulting data set to investigate interrater agreement: the extent to which labelers labeled each image similarly. Interrater agreement scores, assessed with percent agreement and Krippendorff's alpha, are higher when the questions posed to labelers are relatively simple, when the labelers are provided with a user manual, and when images are smaller. Experiments in interrater agreement point toward the benefit of multiple labelers for understanding the uncertainty in labeling data for machine learning research.
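To make the two agreement measures named in the abstract concrete, the sketch below shows one way to compute pairwise percent agreement and nominal Krippendorff's alpha for a small set of categorical image labels. It is a minimal illustration, not the authors' code or data: the "washover" question, the three labelers, and the skipped images (None) are hypothetical stand-ins for the study's labeling setup.

```python
"""Illustrative sketch: percent agreement and nominal Krippendorff's alpha
for a toy label matrix. Rows are labelers, columns are images; None marks
an image a labeler did not label."""
from itertools import combinations
from collections import Counter

def percent_agreement(ratings):
    """Fraction of agreeing labeler pairs, averaged over images with >= 2 labels."""
    per_image = []
    for unit in zip(*ratings):                       # iterate over images (columns)
        vals = [v for v in unit if v is not None]
        if len(vals) < 2:
            continue
        pairs = list(combinations(vals, 2))
        per_image.append(sum(a == b for a, b in pairs) / len(pairs))
    return sum(per_image) / len(per_image)

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data: alpha = 1 - D_o / D_e."""
    # Build the coincidence matrix from all labeler pairs within each image.
    coincidences = Counter()
    for unit in zip(*ratings):
        vals = [v for v in unit if v is not None]
        m = len(vals)
        if m < 2:
            continue
        for a, b in combinations(vals, 2):
            # each unordered pair contributes symmetrically, weighted by 1/(m - 1)
            coincidences[(a, b)] += 1.0 / (m - 1)
            coincidences[(b, a)] += 1.0 / (m - 1)
    n_c = Counter()                                  # marginal totals per category
    for (a, _), w in coincidences.items():
        n_c[a] += w
    n = sum(n_c.values())
    d_o = sum(w for (a, b), w in coincidences.items() if a != b)   # observed disagreement
    d_e = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n - 1)  # expected
    return 1.0 - d_o / d_e

# Hypothetical labels from three labelers answering one categorical question
labels = [
    ["washover", "no_washover", "washover",    None,       "washover"],
    ["washover", "no_washover", "no_washover", "washover", "washover"],
    ["washover", "washover",    "washover",    "washover", None],
]
print(f"percent agreement:    {percent_agreement(labels):.2f}")
print(f"Krippendorff's alpha: {krippendorff_alpha_nominal(labels):.2f}")
```

Unlike raw percent agreement, Krippendorff's alpha corrects for chance agreement and for unbalanced category frequencies, and it accommodates images labeled by different numbers of labelers, which is why the two scores can diverge on the same label set.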
