Evaluators of image reconstruction algorithms
Author(s) -
Herman, Gabor T.,
Yeung, K. T. Daniel
Publication year - 1989
Publication title -
International Journal of Imaging Systems and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.359
H-Index - 47
eISSN - 1098-1098
pISSN - 0899-9457
DOI - 10.1002/ima.1850010208
Subject(s) - observer (physics), algorithm, computer science, image (mathematics), set (abstract data type), artificial intelligence, mathematics, physics, quantum mechanics, programming language
Abstract -
An image reconstruction algorithm is supposed to present an image that contains the medically relevant information that exists in a cross section of the human body. There is an enormous variety of such algorithms. The question arises: given a specific medical problem, what is the relative merit of two image reconstruction algorithms in presenting images that are helpful for solving the problem? An approach to answering this question with a high degree of confidence is ROC analysis of human observer performance. The problem with ROC studies using human observers is their complexity (and, hence, cost). To overcome this problem, it has been suggested that the human observer be replaced by a numerical observer. An even simpler approach is the use of distance metrics, such as the root mean squared distance, between the reconstructed images and the known originals. For any of these approaches, the evaluation should be done using a sample set that is large enough to provide a statistically significant result. In this paper we concentrate on the numerical observer approach, and we reintroduce in this framework the notion of the Hotelling Trace Criterion, which has recently been proposed as an appropriate evaluator of imaging systems. We propose a definite strategy (based on linear abnormality‐index functions that are optimal for the chosen figure of merit) for evaluating image reconstruction algorithms, and we give details of two experimental studies that embody the espoused principles. Since ROC analysis of human observer performance is the ultimate yardstick for system assessment, a numerical observer approach is justified by showing that it yields “similar” results to a human observer study. Also, since simple distance metrics are computationally less cumbersome than numerical observer studies, one would like to replace the latter by the former whenever doing so is likely to give “similar” results. We discuss approaches to assigning a numerical value to the “similarity” of the results produced by two different evaluators, and we introduce a new concept, called rank‐ordering nearness, which seems to provide a promising approach to experimentally determining the similarity of two evaluators of image reconstruction algorithms.
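
As a concrete illustration of the simplest evaluator mentioned in the abstract, the root mean squared distance between a reconstruction and its known original can be computed as follows. This is a minimal sketch in Python/NumPy; the function name and the exact normalization are our assumptions, not taken from the paper.

```python
import numpy as np

def rms_distance(original, reconstruction):
    """Root mean squared distance between two images of the same shape.

    A minimal sketch of the distance-metric evaluator; the paper may
    use a different normalization.
    """
    diff = np.asarray(original, dtype=float) - np.asarray(reconstruction, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```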
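The Hotelling Trace Criterion is commonly defined as tr(S2^{-1} S1), where S1 is the interclass scatter matrix of the class means and S2 is the average intraclass scatter matrix. The sketch below assumes a two-class setting (e.g., normal versus abnormal) with images flattened into feature vectors; the paper's exact feature choice and class weighting are not reproduced here.

```python
import numpy as np

def hotelling_trace(class_a, class_b):
    """Hotelling Trace Criterion tr(S2^{-1} S1) for two classes of
    feature vectors (rows are samples).

    A sketch under the standard two-class definition; assumes equal
    class weights and a nonsingular intraclass scatter matrix S2.
    """
    a = np.asarray(class_a, dtype=float)
    b = np.asarray(class_b, dtype=float)
    mean_a, mean_b = a.mean(axis=0), b.mean(axis=0)
    grand = (mean_a + mean_b) / 2.0
    # Interclass scatter S1: spread of the class means about the grand mean.
    s1 = (np.outer(mean_a - grand, mean_a - grand)
          + np.outer(mean_b - grand, mean_b - grand)) / 2.0
    # Intraclass scatter S2: average within-class covariance.
    s2 = (np.cov(a, rowvar=False) + np.cov(b, rowvar=False)) / 2.0
    # solve(s2, s1) computes s2^{-1} @ s1 without forming the inverse.
    return float(np.trace(np.linalg.solve(s2, s1)))
```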
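Rank‐ordering nearness is the paper's own concept and its definition is not reproduced here. As a loose stand-in for the underlying idea, that two evaluators are close if they rank a set of reconstruction algorithms similarly, one could compute a rank correlation such as Kendall's tau; the scores below are purely illustrative, not data from the paper.

```python
from scipy.stats import kendalltau

# Hypothetical figures of merit assigned to five reconstruction
# algorithms by two different evaluators (illustrative numbers only).
scores_numerical_observer = [0.91, 0.78, 0.84, 0.60, 0.73]  # higher is better
scores_rms_distance = [0.12, 0.20, 0.25, 0.41, 0.30]        # lower is better

# Negate the distances so both score lists rank "better" the same way.
tau, _ = kendalltau(scores_numerical_observer,
                    [-s for s in scores_rms_distance])
print(f"Kendall tau between the two evaluators' rankings: {tau:.2f}")
```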
