Open Access
From Text to Image: Generating Visual Query for Image Retrieval
Author(s) -
Wen-Cheng Lin,
Yih-Chen Chang,
Hsin-Hsi Chen
Publication year - 2005
Publication title -
Lecture Notes in Computer Science
Language(s) - English
Resource type - Book series
SCImago Journal Rank - 0.249
H-Index - 400
eISSN - 1611-3349
pISSN - 0302-9743
ISBN - 3-540-27420-0
DOI - 10.1007/11519645_65
Subject(s) - computer science, image retrieval, information retrieval, image (mathematics), visual word, artificial intelligence, computer vision
This paper explores the use of visual features for cross-language access to an image collection. An approach that transforms textual queries into visual representations is proposed. The relationships between text and images are mined, and the mined relationships are used to construct visual queries from textual ones. The retrieval results of the textual and visual queries are then combined to generate the final ranked list. English monolingual and Chinese-English cross-language retrieval experiments were conducted, with good results: the average precision of the English monolingual textual run is 0.6304, and cross-lingual retrieval reaches about 70% of monolingual performance. By comparison, the gain from the generated visual query is not significant. If only appropriate query terms were selected to generate the visual query, retrieval performance could be improved.
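The abstract outlines a three-step pipeline: mine text-image relationships, build a visual query from the textual one, and fuse the two ranked lists. The paper itself gives no code, so the following is only a minimal Python sketch of one plausible reading of that pipeline; every function name, the term-averaging scheme, and the fusion weight `alpha` are assumptions for illustration, not the authors' method.

```python
from collections import defaultdict

import numpy as np


def mine_term_visual_associations(captions, image_features):
    """Associate each caption term with the mean visual feature
    vector of the images it co-occurs with -- a simple stand-in
    for the text-image relationship mining the paper describes."""
    dim = len(next(iter(image_features.values())))
    sums = defaultdict(lambda: np.zeros(dim))
    counts = defaultdict(int)
    for image_id, caption in captions.items():
        for term in caption.lower().split():
            sums[term] += image_features[image_id]
            counts[term] += 1
    return {term: sums[term] / counts[term] for term in sums}


def build_visual_query(query_terms, term_to_feature):
    """Average the mined visual vectors of the query terms;
    terms with no mined association are skipped. Selecting only
    'appropriate' terms here is the refinement the abstract hints at."""
    vecs = [term_to_feature[t] for t in query_terms if t in term_to_feature]
    return np.mean(vecs, axis=0) if vecs else None


def fuse_scores(text_scores, visual_scores, alpha=0.7):
    """Linearly interpolate the textual and visual runs into one
    ranked list; alpha is an assumed weight, not a value reported
    in the paper."""
    ids = set(text_scores) | set(visual_scores)
    fused = ((i, alpha * text_scores.get(i, 0.0)
                 + (1 - alpha) * visual_scores.get(i, 0.0)) for i in ids)
    return sorted(fused, key=lambda pair: pair[1], reverse=True)
```

In this sketch the visual run would score each image by similarity (e.g. cosine) between the generated visual query and the image's feature vector; that choice, like the linear fusion, is an assumption rather than something the abstract specifies.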
