Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care
Author(s) - Ryan Marshall Felder
Publication year - 2021
Publication title - Hastings Center Report
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.515
H-Index - 63
eISSN - 1552-146X
pISSN - 0093-0334
DOI - 10.1002/hast.1248
Subject(s) - accountability , black box , health care , healthcare system , opacity , psychology , law , epistemology , medicine , law and economics , sociology , computer science , political science , artificial intelligence , philosophy
Abstract - The use of opaque, uninterpretable artificial intelligence systems in health care can be medically beneficial, but it is often viewed as potentially morally problematic on account of this opacity: the systems are black boxes. Alex John London has recently argued that opacity is not generally problematic, given that many standard therapies are explanatorily opaque and that we can rely on statistical validation of the systems in deciding whether to implement them. But is statistical validation sufficient to justify the implementation of these AI systems in health care, or is it merely one of the necessary criteria? I argue that accountability, which plays an important role in preserving the patient-physician trust that allows the institution of medicine to function, contributes further to an account of AI system justification. Hence, I endorse the vanishing accountability principle: accountability in medicine, in addition to statistical validation, must be preserved. AI systems that introduce problematic gaps in accountability should not be implemented.
