Open Access
Analytical guidelines to increase the value of community science data: An example using eBird data to estimate species distributions
Author(s) -
Johnston Alison,
Hochachka Wesley M.,
Strimas-Mackey Matthew E.,
Ruiz-Gutierrez Viviana,
Robinson Orin J.,
Miller Eliot T.,
Auer Tom,
Kelling Steve T.,
Fink Daniel
Publication year - 2021
Publication title - Diversity and Distributions
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.918
H-Index - 118
eISSN - 1472-4642
pISSN - 1366-9516
DOI - 10.1111/ddi.13271
Subject(s) - citizen science, occupancy, sample (material), sampling bias, computer science, species distribution, metric (unit), sample size determination, data science, statistics, ecology, geography, habitat, mathematics, engineering, biology, botany, chemistry, operations management, chromatography
Abstract -

Aim: Ecological data collected by the general public are valuable for addressing a wide range of ecological research questions and conservation planning needs, and there has been a rapid increase in the scope and volume of data available. However, data from eBird and other large-scale projects with volunteer observers typically present several challenges that can impede robust ecological inference, including spatial bias, variation in effort and species reporting bias.

Innovation: We use the example of estimating species distributions with data from eBird, a community science or citizen science (CS) project. We estimate two widely used metrics of species distributions: encounter rate and occupancy probability. For each metric, we critically assess the impact of data processing steps that either degrade or refine the data used in the analyses. Because CS data density varies widely across the globe, we also test whether differences in model performance are robust to sample size.

Main conclusions: Model performance improved when data processing and analytical methods addressed the challenges arising from CS data; however, the degree of improvement varied with species and data density. The largest gains we observed in model performance were achieved with 1) the use of complete checklists (where observers report all the species they detect and identify, allowing non-detections to be inferred) and 2) the use of covariates describing variation in effort and detectability for each checklist. Occupancy models were more robust to a lack of complete checklists. Improvements in model performance with data refinement were more evident with larger sample sizes. In general, we found that the value of each refinement varied by situation, and we encourage researchers to assess the benefits in other scenarios. These approaches will enable researchers to more effectively harness the vast ecological knowledge that exists within CS data for conservation and basic research.
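The two refinements highlighted above can be illustrated for the encounter-rate metric: restricting the data to complete checklists so that non-detections can be inferred, and including per-checklist effort covariates in the model. The sketch below is a minimal illustration only, not the authors' workflow; the file name, column names and the use of a plain logistic regression are assumptions made for the example.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical flat file of checklists for one focal species, with zero-filled
# counts; real eBird data use their own schema and need more preparation.
checklists = pd.read_csv("ebird_checklists.csv")

# Refinement 1: keep only complete checklists, on which observers reported all
# species they detected and identified, so that non-detections can be inferred.
complete = checklists[checklists["all_species_reported"] == 1].copy()
complete["detected"] = (complete["species_count"] > 0).astype(int)

# Refinement 2: include covariates describing variation in effort and
# detectability for each checklist.
effort_covariates = ["duration_minutes", "effort_distance_km",
                     "number_observers", "start_hour"]
X = sm.add_constant(complete[effort_covariates])
y = complete["detected"]

# A simple logistic regression stands in for the encounter-rate model.
model = sm.Logit(y, X).fit()
print(model.summary())

# Predicting for a "standard" checklist (e.g. one observer, 1 hour, 1 km,
# early morning) controls for effort when comparing or mapping encounter rates.
standard = pd.DataFrame({"const": [1.0], "duration_minutes": [60.0],
                         "effort_distance_km": [1.0],
                         "number_observers": [1.0], "start_hour": [7.0]})
print(model.predict(standard[X.columns]))
```

In practice the same filtering and covariates feed into whichever distribution model is chosen; as the abstract notes, occupancy models are more robust to a lack of complete checklists, so the filtering step matters most for encounter-rate estimates.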
