Opening the black box of AI‐Medicine
Author(s) - Aaron I. F. Poon, Joseph J. Y. Sung
Publication year - 2021
Publication title - Journal of Gastroenterology and Hepatology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.214
H-Index - 130
eISSN - 1440-1746
pISSN - 0815-9319
DOI - 10.1111/jgh.15384
Subject(s) - interpretability , artificial intelligence , machine learning , medicine , black box , computer science
Abstract - One of the biggest challenges in utilizing artificial intelligence (AI) in medicine is that physicians are reluctant to trust and adopt something that they do not fully understand and regard as a "black box." Machine learning (ML) can assist in reading radiological, endoscopic, and histological images, suggest diagnoses, predict disease outcomes, and even recommend therapeutic and surgical decisions. However, clinical adoption of these AI tools has been slow because of a lack of trust. Besides clinicians' doubts, patients' lack of confidence in AI‐powered technologies also hampers development. While patients may accept that human error can occur, they are expected to show little tolerance for machine error. In order to implement AI medicine successfully, the interpretability of ML algorithms needs to improve. Opening the black box in AI medicine needs to take a stepwise approach. Incorporating biological explanation and clinical experience into ML algorithms, step by step, can help build trust and acceptance. AI software developers will have to clearly demonstrate that when ML technologies are integrated into the clinical decision‐making process, they actually help to improve clinical outcomes. Enhancing the interpretability of ML algorithms is a crucial step in adopting AI in medicine.
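To make the abstract's notion of "interpretability" concrete, one common model-agnostic technique is permutation importance: shuffle one input feature and measure how much the model's accuracy drops, revealing which features the black box actually relies on. The sketch below is a minimal, hypothetical illustration, not the authors' method; the "opaque" risk model, the feature names (age, CRP, smoker), and the patient records are all invented for demonstration.

```python
# Minimal permutation-importance sketch (pure Python, no external libraries).
# A tiny hand-written "risk score" stands in for an opaque ML classifier;
# the model, features, and data are hypothetical illustrations only.
import random

def opaque_model(age, crp, smoker):
    # Hypothetical black-box classifier: the clinician sees only the output.
    return 1 if (0.03 * age + 0.5 * crp + 0.8 * smoker) > 3.0 else 0

# Hypothetical patient records: (age, CRP level, smoker flag).
patients = [
    (70, 4.0, 1), (45, 1.0, 0), (60, 3.5, 1), (30, 0.5, 0),
    (80, 5.0, 0), (50, 2.0, 1), (65, 4.5, 1), (40, 1.5, 0),
]
# Labels are taken from the model itself, so baseline accuracy is 1.0
# and any drop after shuffling is attributable to the shuffled feature.
labels = [opaque_model(*p) for p in patients]

def accuracy(rows):
    return sum(opaque_model(*r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(feature_idx, seed=0):
    # Shuffle one feature column and measure the accuracy drop:
    # a large drop means the model leans heavily on that feature.
    rng = random.Random(seed)
    column = [p[feature_idx] for p in patients]
    rng.shuffle(column)
    shuffled = [tuple(column[j] if i == feature_idx else c
                      for i, c in enumerate(p))
                for j, p in enumerate(patients)]
    return accuracy(patients) - accuracy(shuffled)

for idx, name in enumerate(["age", "CRP", "smoker"]):
    print(f"{name}: importance = {permutation_importance(idx):.3f}")
```

A clinician inspecting such a report can check whether the model's most influential features agree with biological and clinical knowledge, which is exactly the kind of small, stepwise explanation the abstract argues can build trust.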