Open Access
Recent Advances in Representation Learning for Electronic Health Records: A Systematic Review
Author(s) - Xiaocong Liu, Huazhen Wang, Ting He, Yongxin Liao, Jian Chen
Publication year - 2022
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/2188/1/012007
Subject(s) - computer science , categorization , health records , data science , representation learning , graph , artificial intelligence , health care , theoretical computer science
Representation Learning (RL) aims to convert data into low-dimensional, dense real-valued vectors so that reasoning can be carried out in vector space. RL is an important research topic in the analysis of health data. This paper systematically reviews recent research on Representation Learning for Electronic Health Records (EHR). We searched the Web of Science, Google Scholar, and the Association for Computing Machinery Digital Library for papers on EHR RL. On the basis of this literature review, we propose a new taxonomy that categorizes state-of-the-art EHR RL methods into three groups: statistics learning-based RL methods, knowledge RL methods, and graph RL methods. We analyze and summarize their characteristics according to the form of the input data and the underlying learning mechanisms. In addition, we provide evaluation strategies for verifying the quality of EHR representations from both intrinsic and extrinsic perspectives. Finally, we put forward three promising directions for future research. Overall, this survey aims to provide a thorough overview of state-of-the-art developments in EHR RL and to help researchers find the most appropriate methods.
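
The survey itself contains no code; as a rough, hypothetical illustration of the core idea above (mapping EHR data to low-dimensional dense vectors so that similarity can be computed in vector space), the following Python sketch embeds medical codes with a simple skip-gram-style co-occurrence objective. The visits, codes, and hyperparameters are illustrative assumptions and are not taken from the paper or any dataset it covers.

# A minimal sketch, assuming a skip-gram-style co-occurrence objective, of how
# medical codes from patient visits can be embedded into dense vectors. All
# codes, visits, and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy "patient visits": each visit is a set of co-occurring medical codes.
visits = [
    ["ICD:E11", "ICD:I10", "ATC:A10BA02"],        # diabetes, hypertension, metformin
    ["ICD:E11", "ATC:A10BA02", "LOINC:4548-4"],   # diabetes, metformin, HbA1c test
    ["ICD:I10", "ATC:C09AA05"],                   # hypertension, ramipril
    ["ICD:J45", "ATC:R03AC02"],                   # asthma, salbutamol
]

codes = sorted({c for v in visits for c in v})
idx = {c: i for i, c in enumerate(codes)}
V, dim = len(codes), 8

# Separate "target" and "context" embedding tables, as in word2vec.
W_in = rng.normal(scale=0.1, size=(V, dim))
W_out = rng.normal(scale=0.1, size=(V, dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n_neg, epochs = 0.05, 3, 300
for _ in range(epochs):
    for visit in visits:
        for target in visit:
            t = idx[target]
            for context in visit:
                if context == target:
                    continue
                c = idx[context]
                # Positive pair: pull codes that co-occur in a visit together.
                v_t, v_c = W_in[t].copy(), W_out[c].copy()
                grad = sigmoid(v_t @ v_c) - 1.0
                W_in[t] -= lr * grad * v_c
                W_out[c] -= lr * grad * v_t
                # Negative samples: push randomly drawn codes apart.
                for n in rng.integers(0, V, size=n_neg):
                    if n == c:
                        continue
                    v_t, v_n = W_in[t].copy(), W_out[n].copy()
                    grad = sigmoid(v_t @ v_n)
                    W_in[t] -= lr * grad * v_n
                    W_out[n] -= lr * grad * v_t

# Intrinsic check: codes that co-occur should end up closer in vector space.
def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

emb = {c: W_in[idx[c]] for c in codes}
print("sim(diabetes, metformin):  %.3f" % cosine(emb["ICD:E11"], emb["ATC:A10BA02"]))
print("sim(diabetes, salbutamol): %.3f" % cosine(emb["ICD:E11"], emb["ATC:R03AC02"]))

The cosine-similarity comparison at the end is a toy example of the intrinsic evaluation perspective mentioned in the abstract; extrinsic evaluation would instead measure how well the learned vectors support a downstream clinical prediction task.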
