Open Access
Application of Efficient Data Cleaning Using Text Clustering for Semistructured Medical Reports to Large-Scale Stool Examination Reports: Methodology Study
Author(s) -
Hyunki Woo,
Kyunga Kim,
KyeongMin Cha,
Jin Young Lee,
Han Song Mun,
Soo Cho,
Ji In Chung,
Jeung Hui Pyo,
Kun-Chul Lee,
Mira Kang
Publication year - 2019
Publication title -
Journal of Medical Internet Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.446
H-Index - 142
eISSN - 1438-8871
pISSN - 1439-4456
DOI - 10.2196/10013
Subject(s) - cluster analysis, scale (ratio), computer science, data science, data mining, medicine, artificial intelligence
Background: As medical research based on big data has become more common, the community's interest in, and efforts to analyze, large amounts of semistructured or unstructured text data, such as examination reports, have rapidly increased. However, such large-scale text data are often not readily usable for analysis owing to typographical errors, inconsistencies, or data entry problems. An efficient data cleaning process is therefore required to ensure the veracity of such data.

Objective: In this paper, we propose an efficient data cleaning process for large-scale medical text data that employs text clustering methods and a value-converting technique, and we evaluate its performance on medical examination text data.

Methods: The proposed data cleaning process consists of text clustering and value-merging. In the text clustering step, we suggest using the key collision and nearest neighbor methods in a complementary manner. Words (called values) grouped into the same cluster are expected to comprise a correct value and its erroneous representations. In the value-converting step, the wrong values in each identified cluster are converted into the correct value. We applied this data cleaning process to 574,266 stool examination reports produced for parasite analysis at Samsung Medical Center from 1995 to 2015. The performance of the proposed process was examined and compared with that of data cleaning processes based on a single clustering method. We used OpenRefine 2.7, an open-source application that provides various text clustering methods and an efficient user interface for value-converting with common-value suggestion.

Results: A total of 1,167,104 words in the stool examination reports were surveyed. In the data cleaning process, we discovered 30 correct words and 45 patterns of typographical errors and duplicates. We observed high correction rates for words with typographical errors (98.61%) and for typographical error patterns (97.78%). The resulting data accuracy was nearly 100% based on the total number of words.

Conclusions: Our data cleaning process, based on the combined use of key collision and nearest neighbor methods, provides efficient cleaning of large-scale text data and thereby improves data accuracy.
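To make the clustering step concrete, the following is a minimal Python sketch of the key collision idea, in the spirit of OpenRefine's fingerprint method (illustrative only; the study used OpenRefine 2.7 itself, and the sample report strings here are hypothetical):

```python
# Key-collision clustering via fingerprinting: values that normalize
# to the same key are grouped together. Illustrative sketch, not the
# authors' exact implementation.
import string
from collections import defaultdict

def fingerprint(value: str) -> str:
    """Normalize a value into a collision key: trim, lowercase,
    strip punctuation, and sort the unique whitespace-split tokens."""
    value = value.strip().lower()
    value = value.translate(str.maketrans("", "", string.punctuation))
    return " ".join(sorted(set(value.split())))

def key_collision_clusters(values):
    """Group values whose fingerprints collide; clusters with more
    than one distinct spelling are candidate typo/duplicate groups."""
    clusters = defaultdict(set)
    for v in values:
        clusters[fingerprint(v)].add(v)
    return [vs for vs in clusters.values() if len(vs) > 1]

# Hypothetical report values for illustration
reports = ["No parasite seen", "no parasite seen.", "No parasite seen",
           "Trichuris trichiura", "trichiura Trichuris"]
print(key_collision_clusters(reports))
# -> [{'No parasite seen', 'no parasite seen.'},
#     {'Trichuris trichiura', 'trichiura Trichuris'}]
```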
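Key collision misses typos that change the fingerprint itself (eg, transposed or missing letters), which is where the nearest neighbor method complements it. Below is a minimal sketch of nearest neighbor grouping using Levenshtein edit distance; the greedy single-link grouping and the radius of 2 edits are illustrative assumptions, not OpenRefine's exact algorithm:

```python
# Nearest-neighbor clustering by edit distance: values within a small
# Levenshtein radius of an existing cluster member join that cluster.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def nearest_neighbor_clusters(values, radius=2):
    """Greedy single-link grouping: a value joins a cluster if it is
    within `radius` edits of any member (radius is an assumption)."""
    clusters = []
    for v in values:
        for cluster in clusters:
            if any(levenshtein(v, m) <= radius for m in cluster):
                cluster.add(v)
                break
        else:
            clusters.append({v})
    return [c for c in clusters if len(c) > 1]

# Hypothetical values: two typo variants of "negative" are caught
print(nearest_neighbor_clusters(["negative", "negatvie", "negaitve", "positive"]))
# -> [{'negative', 'negatvie', 'negaitve'}]
```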
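Finally, a sketch of the value-converting step: each wrong value in an identified cluster is mapped to the cluster's correct value. For illustration, this sketch takes the most frequent spelling as the correct one; in OpenRefine, the interface instead suggests a common value that the user confirms or edits:

```python
# Value-converting (merging): build a wrong-value -> correct-value map
# per cluster, then rewrite the corpus. Picking the most frequent
# spelling as canonical is an assumption made for this sketch.
from collections import Counter

def build_conversion_map(clusters, corpus):
    """Map every non-canonical value in each cluster to the cluster's
    most frequent value in the corpus."""
    counts = Counter(corpus)
    mapping = {}
    for cluster in clusters:
        canonical = max(cluster, key=lambda v: counts[v])
        mapping.update({v: canonical for v in cluster if v != canonical})
    return mapping

corpus = ["negative", "negative", "negatvie", "negaitve", "positive"]
clusters = [{"negative", "negatvie", "negaitve"}]  # eg, from the clustering step above
mapping = build_conversion_map(clusters, corpus)
print([mapping.get(v, v) for v in corpus])
# -> ['negative', 'negative', 'negative', 'negative', 'positive']
```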