Machine learning in systematic reviews: Comparing automated text clustering with Lingo3G and human researcher categorization in a rapid review
Author(s) -
Muller Ashley Elizabeth,
Ames Heather Melanie R.,
Jardim Patricia Sofia Jacobsen,
Rose Christopher James
Publication year - 2022
Publication title -
Research Synthesis Methods
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.376
H-Index - 35
eISSN - 1759-2887
pISSN - 1759-2879
DOI - 10.1002/jrsm.1541
Subject(s) - cluster analysis, categorization, computer science, artificial intelligence, context (archaeology), machine learning, precision and recall, conceptual clustering, identification (biology), recall, natural language processing, data mining, fuzzy clustering, psychology, canopy clustering algorithm, paleontology, botany, cognitive psychology, biology
Abstract - Systematic reviews are resource‐intensive. The machine learning tools being developed mostly focus on the study identification process, but tools to assist in analysis and categorization are also needed. One possibility is unsupervised automatic text clustering, in which each study is automatically assigned to one or more meaningful clusters. Our main aim was to assess the usefulness of an automated clustering method, Lingo3G, in categorizing studies in a simplified rapid review, and to compare the performance (precision and recall) of this method with that of manual categorization. We randomly assigned all 128 studies in a review to be coded either by a human researcher blinded to cluster assignment (mimicking two independent researchers) or by a human researcher non‐blinded to cluster assignment (mimicking one researcher checking another's work). We compared the time use, precision, and recall of manual categorization versus automated clustering. Both automated clustering and manual categorization organized studies by population and intervention/context. Automated clustering failed to identify two manually identified categories but identified one additional category that the human researcher did not. We estimate that automated clustering has similar precision to both blinded and non‐blinded researchers (e.g., 88% vs. 89%) but higher recall (e.g., 89% vs. 84%). Manual categorization required 49% more time than automated clustering. Using a specific clustering algorithm, automated clustering can help categorize studies and identify patterns across them in simpler systematic reviews. We found that the clustering was sensitive enough to group studies according to linguistic differences that often corresponded to the manual categories.
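The abstract reports precision and recall when comparing automated cluster assignments against manual category labels. As a minimal illustrative sketch (not the paper's actual code or data), the metrics can be computed from true-positive, false-positive, and false-negative counts; the counts used below are hypothetical placeholders:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts for illustration only (not the study's data):
# 80 agreements, 10 spurious cluster assignments, 10 missed assignments.
p, r = precision_recall(tp=80, fp=10, fn=10)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.89 recall=0.89
```

In the review's design, the manual categories serve as the comparison standard, so a "true positive" would be a study the clustering placed in the same category as the human researcher.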