Automated quality control assessment of clinical chest images
Author(s) - Willis Charles E., Nishino Thomas K., Wells Jered R., Ai H. Asher, Wilson Joshua M., Samei Ehsan
Publication year - 2018
Publication title - Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1002/mp.13107
Subject(s) - image quality , radiography , digital radiography , medicine , mediastinum , contrast (vision) , nuclear medicine , radiology , computer science , artificial intelligence , image (mathematics)
Purpose: The purpose of this study was to determine whether a proposed suite of objective image quality metrics for digital chest radiographs is useful for monitoring image quality in a clinical setting different from the one in which the metrics were developed.

Methods: Seventeen gridless AP chest radiographs from a GE Optima portable digital radiography (DR) unit ("sub-standard" images; Group 2), 17 digital PA chest radiographs ("standard-of-care" images; Group 1), and 15 gridless (non-routine) PA chest radiographs (images with a gross technical error; Group 3) from a Discovery DR unit were chosen for analysis. Group 2 images were acquired with a lower kVp (100 vs 125) and a shorter source-to-image distance (127 cm vs 183 cm) and were expected to be of lower quality than Group 1 images. Group 3 images were expected to have degraded contrast relative to Group 1 images. Images were anonymized and securely transferred to the Duke University Clinical Imaging Physics Group for analysis using software described and validated previously. Individual image quality was reported in terms of lung gray level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. Metrics were compared across groups. To improve the precision of the means and confidence intervals for routine exams, an additional 66 PA images were acquired, processed, and pooled with Group 1. Three observer studies were conducted to assess whether human observers could identify the images that the algorithm classified as abnormal.

Results: Metrics agreed with published Quality Consistency Ranges with three exceptions: higher lung gray level, lower rib-lung contrast, and lower subdiaphragm-lung contrast. The higher stored bit depth (14 vs 12) accounted for the higher lung gray level values in our images. Values were most internally consistent for Group 1. The most sensitive metric for distinguishing between groups was mediastinum noise, followed closely by lung noise; the least sensitive metrics were mediastinum detail and rib-lung contrast. The algorithm was more sensitive than human observers at detecting images of suboptimal diagnostic quality.

Conclusions: The software appears promising for objectively and automatically identifying suboptimal images in a clinical imaging operation. The results can be used to establish local quality consistency ranges and action limits according to facility preferences.
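Where a facility wants to turn such per-image metrics into action limits, the core check reduces to comparing each reported metric against a locally established quality consistency range and flagging any image that falls outside it. The following is a minimal sketch of that idea in Python; it is not the authors' validated software, and the metric ranges and example values are hypothetical placeholders, not values from the study.

```python
from dataclasses import dataclass

@dataclass
class QualityConsistencyRange:
    """Inclusive lower/upper action limits for one image quality metric."""
    lower: float
    upper: float

    def contains(self, value: float) -> bool:
        return self.lower <= value <= self.upper

# Hypothetical local quality consistency ranges, one per metric name
# reported by the algorithm. The numbers are illustrative only.
LOCAL_QCR = {
    "lung_gray_level": QualityConsistencyRange(0.45, 0.60),
    "lung_noise": QualityConsistencyRange(0.00, 0.12),
    "mediastinum_noise": QualityConsistencyRange(0.00, 0.15),
    "rib_lung_contrast": QualityConsistencyRange(0.20, 0.40),
    "subdiaphragm_lung_contrast": QualityConsistencyRange(0.25, 0.50),
}

def flag_out_of_range(image_metrics: dict) -> list:
    """Return the names of metrics that fall outside the local ranges."""
    flagged = []
    for name, value in image_metrics.items():
        qcr = LOCAL_QCR.get(name)
        if qcr is not None and not qcr.contains(value):
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    # Example: a portable gridless AP image with elevated noise (made-up values).
    example = {
        "lung_gray_level": 0.52,
        "lung_noise": 0.18,           # above the illustrative upper limit
        "mediastinum_noise": 0.21,    # above the illustrative upper limit
        "rib_lung_contrast": 0.28,
        "subdiaphragm_lung_contrast": 0.33,
    }
    print("Out-of-range metrics:", flag_out_of_range(example))
```

In practice the ranges themselves would be derived from a pool of routine standard-of-care images (as the study does by pooling additional PA exams with Group 1) rather than set by hand.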
