<title>CAMEL: concept annotated image libraries</title>
Author(s) -
Apostol Natsev,
Atul Chadha,
Basuki Soetarman,
Jeffrey Scott Vitter
Publication year - 2001
Publication title -
Proceedings of SPIE, the International Society for Optical Engineering
Language(s) - English
Resource type - Conference proceedings
SCImago Journal Rank - 0.192
H-Index - 176
eISSN - 1996-756X
pISSN - 0277-786X
DOI - 10.1117/12.410975
Subject(s) - computer science , mainstream , usability , the internet , field (mathematics) , information retrieval , content based image retrieval , world wide web , image retrieval , quality (philosophy) , data science , image (mathematics) , multimedia , computer vision , human–computer interaction , political science , philosophy , mathematics , epistemology , pure mathematics , law
The problem of content-based image searching has received considerable attention in the last few years. Thousands of images are now available on the internet, and many important applications require searching of images in domains such as E-commerce, medical imaging, weather prediction, satellite imagery, and so on. Yet, content-based image querying is still largely unestablished as a mainstream field, and it is not widely used by search engines. We believe that two of the major hurdles behind this poor acceptance are poor retrieval quality and poor usability. In this paper, we introduce the CAMEL system—an acronym for Concept Annotated iMagE Libraries—as an effort to address both of the above problems. The CAMEL system provides an easy-to-use, and yet powerful, text-only query interface, which allows users to search for images based on visual concepts, identified by specifying relevant keywords. Conceptually, CAMEL annotates images with the visual concepts that are relevant to them. In practice, CAMEL defines visual concepts by looking at sample images off-line and extracting their relevant visual features. Once defined, such visual concepts can be used to search for relevant images on the fly, using content-based search methods. The visual concepts are stored in a Concept Library and are represented by an associated set of wavelet features, which in our implementation were extracted by the WALRUS image querying system. Even though the CAMEL framework applies independently of the underlying query engine, for our prototype we have chosen WALRUS as a back-end, due to its ability to extract and query with image region features. CAMEL improves retrieval quality because it allows experts to build very accurate representations of visual concepts that can be used even by novice users. At the same time, CAMEL improves usability by supporting the familiar text-only interface currently used by most search engines on the web.
Both improvements represent a departure from traditional approaches to improving image query systems—instead of focusing on query execution, we emphasize query specification by allowing simpler and yet more precise query specification.
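The keyword-to-concept pipeline the abstract describes can be sketched in a few lines. The code below is a minimal illustration, not the paper's implementation: it stands in for WALRUS's wavelet signatures with plain feature vectors, summarizes a concept's sample images by their centroid (a simplifying assumption; the paper does not specify this aggregation), and resolves a text query into a nearest-neighbor search over an image index. All names (`ConceptLibrary`, `define_concept`, `query`) are hypothetical.

```python
import math


class ConceptLibrary:
    """Toy stand-in for CAMEL's Concept Library: maps keywords to
    representative feature vectors (in the real system, wavelet
    features extracted by WALRUS)."""

    def __init__(self):
        self.concepts = {}  # keyword -> representative feature vector

    def define_concept(self, keyword, sample_features):
        # Offline step: an expert supplies sample images for a concept;
        # here we summarize their feature vectors by the centroid.
        dim = len(sample_features[0])
        centroid = [
            sum(vec[i] for vec in sample_features) / len(sample_features)
            for i in range(dim)
        ]
        self.concepts[keyword] = centroid

    def query(self, keyword, image_index, k=3):
        # Online step: a text-only query is resolved to the stored
        # concept features, which drive a content-based (here,
        # Euclidean nearest-neighbor) search over the image index.
        target = self.concepts[keyword]
        ranked = sorted(
            image_index.items(),
            key=lambda item: math.dist(item[1], target),
        )
        return [name for name, _ in ranked[:k]]


# Usage: define a concept from two sample images, then search by keyword.
library = ConceptLibrary()
library.define_concept("sunset", [[1.0, 0.0], [0.9, 0.1]])
images = {"img_a": [0.9, 0.0], "img_b": [0.0, 1.0], "img_c": [0.8, 0.2]}
results = library.query("sunset", images, k=2)
```

The point of the sketch is the division of labor the paper emphasizes: the expensive, expert-driven feature extraction happens once offline, while the novice user only ever supplies keywords at query time.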