Open Access
J-EDA: A workbench for tuning similarity and diversity search parameters in content-based image retrieval
Author(s) -
João V. O. Novaes,
Lúcio F. D. Santos,
Luiz Olmes Carvalho,
Daniel de Oliveira,
Marcos V. N. Bedo,
Agma J. M. Traina,
Caetano Traina
Publication year - 2021
Publication title -
Journal of Information and Data Management
Language(s) - English
Resource type - Journals
ISSN - 2178-7107
DOI - 10.5753/jidm.2021.1990
Subject(s) - computer science , image retrieval , content based image retrieval , information retrieval , similarity (geometry) , workbench , nearest neighbor search , data mining , metric (unit) , query expansion , search engine , relevance (law) , pattern recognition (psychology) , image (mathematics) , artificial intelligence , visualization , operations management , political science , law , economics
Similarity searches can be modeled by means of distances following the Metric Spaces Theory and constitute a fast and explainable query mechanism behind content-based image retrieval (CBIR) tasks. However, classical distance-based queries, e.g., Range and k-Nearest Neighbors, may be unsuitable for exploring large datasets because the retrieved elements are often similar among themselves. Although similarity searching can be enriched with the imposition of rules that foster result diversification, the fine-tuning of the diversity query is still an open issue, which is usually carried out through a non-optimal and computationally expensive inspection. This paper introduces J-EDA, a practical workbench implemented in Java that supports the tuning of similarity and diversity search parameters by enabling the automatic and parallel exploration of multiple search settings for a user-posed content-based image retrieval task. J-EDA implements a wide variety of classical and diversity-driven search queries, as well as many CBIR settings such as feature extractors for images, distance functions, and relevance feedback techniques. Accordingly, users can define multiple query settings and inspect their performances to spot the most suitable parameterization for the content-based image retrieval problem at hand. The workbench reports the experimental performances with several internal and external evaluation metrics, such as P × R and Mean Average Precision (mAP), which are computed over either incremental or batch procedures performed with or without human interaction.
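To illustrate the kind of query the abstract contrasts, the sketch below (in Java, the workbench's implementation language) shows a plain k-Nearest Neighbors search extended with a simple diversity rule: a candidate is skipped when it lies within a separation threshold of an already-selected result. This is a minimal illustration of the general idea of diversity-driven retrieval, not J-EDA's actual API; the class, method names, and the `minSep` parameter are hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.function.BiFunction;

public class DiverseKnn {

    // Returns up to k elements nearest to the query, skipping any candidate
    // closer than minSep to an already-selected result (a simple diversity rule).
    static List<double[]> diverseKnn(double[] query, List<double[]> data, int k,
                                     double minSep,
                                     BiFunction<double[], double[], Double> dist) {
        List<double[]> sorted = new ArrayList<>(data);
        sorted.sort(Comparator.comparingDouble(p -> dist.apply(query, p)));
        List<double[]> result = new ArrayList<>();
        for (double[] cand : sorted) {
            boolean tooClose = false;
            for (double[] sel : result) {
                if (dist.apply(cand, sel) < minSep) { tooClose = true; break; }
            }
            if (!tooClose) result.add(cand);
            if (result.size() == k) break;
        }
        return result;
    }

    // Euclidean distance, one of the metric distance functions such a workbench may offer.
    static double euclidean(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    public static void main(String[] args) {
        List<double[]> data = Arrays.asList(
            new double[]{0.10, 0.0}, new double[]{0.15, 0.0},
            new double[]{1.00, 0.0}, new double[]{2.00, 0.0});
        List<double[]> res =
            diverseKnn(new double[]{0, 0}, data, 2, 0.5, DiverseKnn::euclidean);
        // The second-nearest point (0.15, 0) is dropped for being too close
        // to the nearest one (0.10, 0), so the farther (1.0, 0) is taken instead.
        System.out.println(res.get(0)[0] + " " + res.get(1)[0]);
    }
}
```

With `minSep = 0`, the method degenerates to an ordinary k-NN query, which is exactly the kind of tuning trade-off (similarity versus diversity) the workbench lets users explore across many parameterizations in parallel.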
