Open Access
The black box problem of AI in oncology
Author(s) - Markus Hagenbuchner
Publication year - 2020
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1662/1/012012
Subject(s) - interpretability, black box, artificial intelligence, computer science, machine learning, deep learning, clinical oncology, data science, medicine, cancer
The rapidly increasing amount and complexity of data in healthcare, together with the pace of published research, drug development, biomarker discovery, and clinical trial enrolment in oncology, make AI an approach of choice for developing machine-assisted methods for data analysis and decision making. Machine learning algorithms, and artificial neural networks in particular, drive the recent successes of AI in oncology. The performance of AI-driven methods continues to improve in both speed and precision, giving AI great potential to improve clinical practice. However, acceptance and a lasting breakthrough of AI in clinical practice are hampered by the black box problem: limits in the interpretability of results and in explanatory functionality. Addressing the black box problem has become a major focus of research [1]. This talk describes recent attempts to address the black box problem in AI, discusses the suitability of those attempts for application to oncology, and offers some future directions.
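The abstract refers to interpretability methods without naming any. As a hedged illustration only, and not the method the talk describes, the sketch below shows one widely used family of post-hoc, model-agnostic explanation techniques, permutation feature importance, applied to a breast-cancer classifier. The dataset, model, and scikit-learn calls are our own illustrative choices.

# Illustrative sketch only: permutation feature importance as one
# example of a post-hoc explanation of a "black box" classifier.
# Not the approach proposed in the talk.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Breast-cancer diagnostic data: 30 tumour features, benign/malignant labels.
data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black box" model: accurate, but its internal decision logic is opaque.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data
# and measure the drop in accuracy; large drops mark features the model
# relies on, giving a coarse explanation without opening the box.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")

A technique like this is model-agnostic, which is why post-hoc methods of this kind are often the first line of attack on the black box problem; their limitation, as the abstract notes, is that they explain model behaviour only indirectly rather than providing true explanatory functionality.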
