Open Access
Concordance in Breast Cancer Grading by Artificial Intelligence on Whole Slide Images Compares With a Multi-Institutional Cohort of Breast Pathologists
Author(s) -
Siddhartha Mantrala,
Paula S. Ginter,
Aditya Mitkari,
Sripad Joshi,
Harish Prabhala,
Vikas Ramachandra,
Lata Kini,
Romana Idress,
Timothy M. D’Alfonso,
Susan Fineberg,
Shabnam Jaffer,
Abida K. Sattar,
Anees B. Chagpar,
Parker C. Wilson,
Kamaljeet Singh,
Malini Harigopal,
Dinesh Koka
Publication year - 2022
Publication title - Archives of Pathology and Laboratory Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.79
H-Index - 117
eISSN - 1543-2165
pISSN - 0003-9985
DOI - 10.5858/arpa.2021-0299-oa
Subject(s) - concordance, grading (engineering), medicine, telepathology, breast cancer, tumor grade, breast carcinoma, digital pathology, medical physics, nuclear medicine, pathology, radiology, cancer, health care, civil engineering, telemedicine, engineering, economics, economic growth
Context.— Breast carcinoma grade, as determined by the Nottingham Grading System (NGS), is an important criterion for determining prognosis. The NGS is based on 3 parameters: tubule formation (TF), nuclear pleomorphism (NP), and mitotic count (MC). The advent of digital pathology and artificial intelligence (AI) has increased interest in virtual microscopy using digital whole slide imaging (WSI).

Objective.— To compare concordance in breast carcinoma grading between AI and a multi-institutional group of breast pathologists using digital WSI.

Design.— We developed an automated NGS framework using deep learning. Six pathologists and the AI independently reviewed digitally scanned slides from 137 invasive carcinomas and assigned a grade based on scoring of the TF, NP, and MC.

Results.— Interobserver agreement among the pathologists and AI for overall grade was moderate (κ = 0.471). Agreement was good (κ = 0.681), moderate (κ = 0.442), and fair (κ = 0.368) for grades 1, 3, and 2, respectively. Observer-pair concordance between the AI and individual pathologists ranged from fair to good (κ = 0.313–0.606). Perfect agreement was observed in 25 cases (27.4%). Interobserver agreement for the individual components was best for TF (κ = 0.471 each), followed by NP (κ = 0.342), and was worst for MC (κ = 0.233). There was no observed difference in concordance among pathologists alone versus pathologists plus AI.

Conclusions.— Ours is the first study comparing concordance in breast carcinoma grading between a multi-institutional group of pathologists using virtual microscopy and a newly developed WSI AI methodology. Using explainable methods, the AI demonstrated concordance similar to that of the pathologists alone.
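The scoring logic behind the abstract can be made concrete. In the Nottingham system, each of the three components (TF, NP, MC) receives a score of 1–3, the scores are summed, and the total maps to a grade: 3–5 is grade 1, 6–7 is grade 2, and 8–9 is grade 3. The verbal labels used above for κ values ("fair", "moderate", "good") follow the conventional agreement bands (≤0.20 poor, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 good, >0.80 very good). The sketch below is illustrative only; the function names are ours, not the study's code.

```python
def nottingham_grade(tf: int, np_score: int, mc: int) -> int:
    """Combine the three Nottingham component scores into an overall grade.

    Each component (tubule formation, nuclear pleomorphism, mitotic count)
    is scored 1-3; the sum (3-9) determines the grade.
    """
    for score in (tf, np_score, mc):
        if score not in (1, 2, 3):
            raise ValueError("each component score must be 1, 2, or 3")
    total = tf + np_score + mc
    if total <= 5:
        return 1  # grade 1: total 3-5
    if total <= 7:
        return 2  # grade 2: total 6-7
    return 3      # grade 3: total 8-9


def kappa_band(kappa: float) -> str:
    """Map a kappa statistic to the conventional verbal agreement band."""
    if kappa <= 0.20:
        return "poor"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "good"
    return "very good"
```

For example, component scores of (2, 2, 1) sum to 5 and yield grade 1, while the study's overall-grade κ of 0.471 falls in the "moderate" band and the grade-1 κ of 0.681 in the "good" band, matching the labels reported above.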
