Fast, scalable, and automated identification of articles for biodiversity and macroecological datasets
Author(s) - Cornford Richard, Deinet Stefanie, De Palma Adriana, Hill Samantha L. L., McRae Louise, Pettit Benjamin, Marconi Valentina, Purvis Andy, Freeman Robin
Publication year - 2021
Publication title - Global Ecology and Biogeography
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.164
H-Index - 152
eISSN - 1466-8238
pISSN - 1466-822X
DOI - 10.1111/geb.13219
Subject(s) - computer science , bottleneck , scalability , biodiversity , identification (biology) , workflow , data science , machine learning , artificial intelligence , data mining , information retrieval , ecology , database , biology , embedded system
Abstract

Aim: Understanding broad‐scale ecological patterns and processes is necessary if we are to mitigate the consequences of anthropogenically driven biodiversity degradation. However, such analyses require large datasets, and current data collation methods can be slow, involving extensive human input. Given rapid and ever‐increasing rates of scientific publication, manually identifying data sources among hundreds of thousands of articles is a significant challenge, which can create a bottleneck in the generation of ecological databases.

Innovation: Here, we demonstrate the use of general text‐classification approaches to identify relevant biodiversity articles. We apply this to two freely available example databases, the Living Planet Database and the database of the PREDICTS (Projecting Responses of Ecological Diversity in Changing Terrestrial Systems) project, both of which underpin important biodiversity indicators. We assess machine‐learning classifiers based on logistic regression (LR) and convolutional neural networks, and identify aspects of the text‐processing workflow that influence classification performance.

Main conclusions: Our best classifiers can distinguish relevant from non‐relevant articles with over 90% accuracy. Using readily available abstracts and titles, or abstracts alone, produces significantly better results than using titles alone. LR and neural network models performed similarly. Crucially, we show that deploying such models on real‐world search results can significantly increase the rate at which potentially relevant papers are recovered compared with a current manual protocol. Furthermore, our results indicate that, given a modest initial sample of 100 relevant papers, high‐performing classifiers could be generated quickly through iteratively updating the training texts based on targeted literature searches. These findings clearly demonstrate the usefulness of text‐mining methods for constructing and enhancing ecological datasets, and wider application of these techniques has the potential to benefit large‐scale analyses more broadly. We provide source code and examples that can be used to create new classifiers for other datasets.
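The workflow described in the abstract (training a logistic-regression classifier on titles and abstracts, then ranking new search results by predicted relevance) can be sketched in a few lines. This is an illustrative example, not the authors' released code: the toy texts and labels below are invented for demonstration, and the specific feature settings (TF-IDF with word bigrams) are an assumption, not taken from the paper.

```python
# Minimal sketch of abstract-based relevance classification with TF-IDF
# features and logistic regression (scikit-learn). Toy data is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training texts: title + abstract strings with relevance labels
# (1 = relevant to a biodiversity database, 0 = not relevant).
texts = [
    "Long-term population trends of vertebrate species across protected areas",
    "Local biodiversity responses to land use change in terrestrial systems",
    "A novel catalyst improves industrial ammonia synthesis yields",
    "Quantum error correction codes on superconducting qubits",
]
labels = [1, 1, 0, 0]

# TF-IDF over unigrams and bigrams feeding a logistic-regression classifier.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
classifier.fit(texts, labels)

# Score new search results by predicted probability of relevance, so human
# screeners can read the most promising papers first.
new_texts = [
    "Abundance time series for bird populations in European farmland",
    "Thermal conductivity measurements of graphene nanoribbons",
]
probs = classifier.predict_proba(new_texts)[:, 1]
for text, p in sorted(zip(new_texts, probs), key=lambda pair: -pair[1]):
    print(f"{p:.2f}  {text}")
```

In a real pipeline the training set would start from a modest seed of known-relevant papers (the abstract suggests around 100 suffice) and be grown iteratively: classify a batch of search results, have a human verify the top-ranked papers, add the verified texts to the training set, and retrain.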