Salient Points for Content-Based Retrieval
Author(s) - Nicu Sebe, Michael S. Lew
Publication year - 2001
Publication title - CiteSeerX (The Pennsylvania State University)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.15.42
Subject(s) - image retrieval , computer science , image texture , feature detection (computer vision) , feature vector , automatic image annotation , content based image retrieval , feature (linguistics) , pattern recognition (psychology) , visual word , artificial intelligence , feature extraction , salient , computer vision , image (mathematics) , image processing , linguistics , philosophy
In image retrieval, global features related to color or texture are commonly used to describe the image content. The use of interest points in content-based image retrieval allows an image index to represent local properties of images. In this paper, we present a wavelet-based salient point extraction algorithm and we show that extracting the color and texture information at the locations given by these points provides significantly improved results in terms of retrieval accuracy, computational complexity, and storage space of feature vectors, as compared to global feature approaches.

In a typical content-based image database retrieval application, the user has an image he or she is interested in and wants to find similar images in the database. A two-step approach to searching the image database is adopted. First, for each image in the database, a feature vector characterizing some image properties is computed and stored in a feature database. Second, given a query image, its feature vector is computed and compared to the feature vectors in the feature database, and the images most similar to the query are returned to the user. The features and the similarity measure used to compare two feature vectors should be effective enough to match similar images while still being able to discriminate dissimilar ones.

In general, the features are computed from the entire image. The problem with this approach is that global features cannot adequately represent an image whose different parts have different characteristics. Therefore, local computation of image information is necessary. Local features can be computed at different image scales to obtain an image index based on local properties of the image, and they need to be sufficiently discriminative to "summarize" the local image information. These features are too time-consuming to be computed for every pixel in the image, and therefore the feature extraction should be limited to a subset of the image pixels, the interest points [9, 11], where the image information is supposed to be the most important. Besides saving time in the indexing process, these points may lead to a more discriminative index because they are related to the visually most important parts of the image.

Schmid and Mohr [9] introduced the notion of interest points in image retrieval. To detect these points, they use the Harris corner detector [5]. This detector, like many others [10], was initially designed for robotics, and it is based on a mathematical model of corners. The original goal was to match corners from a pair of stereo images in order to obtain a representation of the 3D scene. Since the corner detectors were not designed to
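The two-step search procedure described above is straightforward to prototype. The sketch below is a minimal illustration only: it uses a global RGB color histogram as a stand-in feature and L1 distance as the similarity measure, which are assumptions for demonstration purposes and not the salient-point color and texture features or the similarity measure used in the paper.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Illustrative global feature: a joint RGB histogram, L1-normalized."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / max(hist.sum(), 1)

def build_feature_database(images, extract=color_histogram):
    """Step 1: compute and store a feature vector for every database image."""
    return np.stack([extract(img) for img in images])

def retrieve(query_image, feature_db, extract=color_histogram, k=5):
    """Step 2: compute the query's feature vector, compare it to the feature
    database, and return the indices of the k most similar images
    (smallest L1 distance)."""
    q = extract(query_image)
    distances = np.abs(feature_db - q).sum(axis=1)
    return np.argsort(distances)[:k]

# Usage with random stand-in images (uint8 RGB):
rng = np.random.default_rng(0)
database_images = [rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
                   for _ in range(20)]
feature_db = build_feature_database(database_images)
query = database_images[3]
print(retrieve(query, feature_db))  # image 3 should rank first (distance 0)
```

In the paper's approach, the feature extraction step would instead gather color and texture information only at the wavelet-based salient points, which reduces both the feature vector storage and the indexing time relative to this global-feature sketch.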
