Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability
Author(s) - Alex John London
Publication year - 2019
Publication title - Hastings Center Report
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.515
H-Index - 63
eISSN - 1552-146X
pISSN - 0093-0334
DOI - 10.1002/hast.973
Subject(s) - predictive power, black box, artificial intelligence, medical knowledge, computer science, data science, machine learning, psychology, epistemology, medicine, medical education, philosophy
Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or a rationale for particular decisions in individual cases, some commentators regard ceding medical decision‐making to black box systems as contravening the profound moral responsibilities of clinicians. I argue, however, that opaque decisions are more common in medicine than critics realize. Moreover, as Aristotle noted over two millennia ago, when our knowledge of causal systems is incomplete and precarious—as it often is in medicine—the ability to explain how results are produced can be less important than the ability to produce such results and empirically verify their accuracy.
