Biomedical Imaging Modality Classification Using Combined Visual Features and Textual Terms
Author(s) - XianHua Han, YenWei Chen
Publication year - 2011
Publication title - International Journal of Biomedical Imaging
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.626
H-Index - 41
eISSN - 1687-4196
pISSN - 1687-4188
DOI - 10.1155/2011/241396
Subject(s) - computer science, artificial intelligence, pattern recognition (psychology), local binary patterns, modality (human–computer interaction), histogram, support vector machine, classifier (UML), margin (machine learning), feature (linguistics), image retrieval, feature extraction, vocabulary, image (mathematics), machine learning, linguistics, philosophy
We describe an approach to automatic modality classification for the medical image retrieval task of the 2010 CLEF cross-language image retrieval campaign (ImageCLEF). This paper focuses on the process of feature extraction from medical images and fuses the different extracted visual features and a textual feature for modality classification. To extract visual features from the images, we used histogram descriptors of edge, gray, or color intensity and block-based variation as global features, and a SIFT histogram as a local feature. For the textual feature of image representation, a binary histogram of predefined vocabulary words from image captions is used. We then combine the different features using normalized kernel functions for SVM classification. Furthermore, for some easily misclassified modality pairs, such as CT and MR or PET and NM, a local classifier is used to distinguish samples within the modality pair and improve performance. The proposed strategy is evaluated on the modality dataset provided by ImageCLEF 2010.
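The core fusion step described in the abstract, combining per-feature kernels after normalization and feeding the result to an SVM, can be illustrated with a minimal sketch. This is not the authors' exact pipeline: the feature dimensions, the RBF kernel choice, the equal weights, and the toy data below are all assumptions made for illustration only.

```python
# Minimal sketch (assumptions, not the paper's exact method): combine several
# feature representations via normalized kernels and train an SVM on the
# precomputed combined kernel. Feature types loosely mirror the abstract:
# a global intensity/edge histogram, a SIFT visual-word histogram, and a
# binary caption-word histogram.

import numpy as np
from sklearn.svm import SVC

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix between rows of X and rows of Y."""
    sq = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq)

def normalized_kernel(K):
    """Normalize a square kernel so that K(x, x) = 1 on the diagonal."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

def combined_kernel(feature_blocks, weights=None):
    """Weighted sum of normalized per-feature kernels (equal weights by default)."""
    n = len(feature_blocks)
    weights = weights or [1.0 / n] * n
    return sum(
        w * normalized_kernel(rbf_kernel(X, X))
        for w, X in zip(weights, feature_blocks)
    )

# Toy data: 100 training images, three feature types of different dimensions.
rng = np.random.default_rng(0)
edge_hist = rng.random((100, 64))       # global edge/intensity histogram
sift_hist = rng.random((100, 500))      # bag-of-visual-words SIFT histogram
caption_hist = (rng.random((100, 200)) > 0.9).astype(float)  # binary word histogram
labels = rng.integers(0, 8, size=100)   # e.g., 8 imaging modality classes

K_train = combined_kernel([edge_hist, sift_hist, caption_hist])
clf = SVC(kernel="precomputed", C=10.0).fit(K_train, labels)
```

A second-stage local classifier for confusable pairs (e.g., CT vs. MR) would, under the same assumptions, be a separate binary SVM trained only on samples of those two classes and applied whenever the global classifier predicts either member of the pair.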