Federated learning improves site performance in multicenter deep learning without data sharing
Author(s) -
Karthik V. Sarma,
Stephanie A. Harmon,
Thomas Sanford,
Holger R. Roth,
Ziyue Xu,
Jesse Tetreault,
Daguang Xu,
Mona G. Flores,
A. Raman,
Rushikesh Kulkarni,
Bradford J. Wood,
Peter L. Choyke,
Alan Priester,
Leonard S. Marks,
Steven S. Raman,
Dieter R. Enzmann,
Barış Türkbey,
William Speier,
Corey Arnold
Publication year - 2020
Publication title - Journal of the American Medical Informatics Association
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.614
H-Index - 150
eISSN - 1527-974X
pISSN - 1067-5027
DOI - 10.1093/jamia/ocaa341
Subject(s) - generalizability theory, pooling, computer science, artificial intelligence, federated learning, transfer of learning, institution, deep learning, data sharing, machine learning, training set, psychology, medicine, political science, developmental psychology, alternative medicine, pathology, law
Abstract - To demonstrate that federated learning (FL) enables multi-institutional model training without centralizing or sharing the underlying physical data.
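
The record gives only this one-sentence objective; the paper's actual FL setup is not described here. As a minimal illustrative sketch of the general idea the objective names, the following toy federated averaging (FedAvg) loop in NumPy simulates three institutions whose private data never leave the site and where only model weights are exchanged. The linear model, site count, and all names are hypothetical assumptions, not the paper's method.

import numpy as np

# Toy FedAvg sketch (an assumption for illustration, not the paper's implementation).
rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One site's local training: gradient descent on mean squared error.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three simulated institutions, each holding its data privately.
true_w = np.array([1.5, -2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    # Each site trains on its own data; only the updated weights are returned.
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    # The server averages the site weights (equal site sizes assumed).
    global_w = np.mean(local_ws, axis=0)

print("federated estimate:", global_w)  # converges toward true_w

In a real multicenter deployment, the averaging step would run on a coordinating server and each local update inside a site's firewall, which is what lets the global model benefit from every site's data without any data sharing.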