A simple kernel co‐occurrence‐based enhancement for pseudo‐relevance feedback
Author(s) - Pan Min, Huang Jimmy Xiangji, He Tingting, Mao Zhiming, Ying Zhiwei, Tu Xinhui
Publication year - 2020
Publication title - Journal of the Association for Information Science and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.903
H-Index - 145
eISSN - 2330-1643
pISSN - 2330-1635
DOI - 10.1002/asi.24241
Subject(s) - query expansion , computer science , term (time) , relevance feedback , relevance (law) , kernel (algebra) , set (abstract data type) , data mining , term discrimination , series (stratigraphy) , information retrieval , artificial intelligence , search engine , web search query , mathematics , image retrieval , concept search , paleontology , physics , quantum mechanics , combinatorics , biology , political science , law , image (mathematics) , programming language
Pseudo‐relevance feedback is a well‐studied query expansion technique in which it is assumed that the top‐ranked documents in an initial set of retrieval results are relevant and expansion terms are then extracted from those documents. When selecting expansion terms, most traditional models do not simultaneously consider term frequency and the co‐occurrence relationships between candidate terms and query terms. Intuitively, however, a term that has a higher co‐occurrence with a query term is more likely to be related to the query topic. In this article, we propose a kernel co‐occurrence‐based framework to enhance retrieval performance by integrating term co‐occurrence information into the Rocchio model and a relevance language model (RM3). Specifically, a kernel co‐occurrence‐based Rocchio method (KRoc) and a kernel co‐occurrence‐based RM3 method (KRM3) are proposed. In our framework, co‐occurrence information is incorporated into both the factor of the term discrimination power and the factor of the within‐document term weight to boost retrieval performance. The results of a series of experiments show that our proposed methods significantly outperform the corresponding strong baselines over all data sets in terms of the mean average precision and over most data sets in terms of P@10. A direct comparison of standard Text Retrieval Conference data sets indicates that our proposed methods are at least comparable to state‐of‐the‐art approaches.
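The abstract does not give the exact kernel or weighting formulas, but the core idea of kernel co-occurrence can be illustrated with a small sketch: score each candidate expansion term by a Gaussian kernel over its positional distance to query-term occurrences in the pseudo-relevant documents, so that terms appearing nearer to query terms receive higher weight. The function names (`kernel_cooccurrence`, `rank_expansion_terms`), the Gaussian kernel choice, and the `sigma` parameter are illustrative assumptions, not the paper's actual KRoc/KRM3 formulation.

```python
import math
from collections import defaultdict


def kernel_cooccurrence(doc_tokens, query_terms, sigma=25.0):
    """Illustrative Gaussian-kernel co-occurrence score: each occurrence
    of a candidate term contributes more when it sits closer to an
    occurrence of a query term in the same document."""
    positions = defaultdict(list)
    for i, tok in enumerate(doc_tokens):
        positions[tok].append(i)

    scores = defaultdict(float)
    for q in query_terms:
        for qpos in positions.get(q, []):
            for term, plist in positions.items():
                if term in query_terms:
                    continue  # only score candidate (non-query) terms
                for p in plist:
                    # Gaussian kernel over positional distance (assumed form)
                    scores[term] += math.exp(-((p - qpos) ** 2) / (2 * sigma ** 2))
    return scores


def rank_expansion_terms(feedback_docs, query_terms, k=5):
    """Aggregate kernel co-occurrence over the top-ranked (pseudo-relevant)
    documents and return the k highest-scoring expansion candidates."""
    total = defaultdict(float)
    for doc in feedback_docs:
        for term, s in kernel_cooccurrence(doc, query_terms).items():
            total[term] += s
    return sorted(total, key=total.get, reverse=True)[:k]
```

In the paper's framework these co-occurrence scores would be folded into the Rocchio or RM3 term weights (both the term-discrimination factor and the within-document weight); here they are shown in isolation for clarity.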
