Information theory for ranking and selection
Author(s) - Saeid Delshad, Amin Khademi
Publication year - 2020
Publication title -
Naval Research Logistics (NRL)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.665
H-Index - 68
eISSN - 1520-6750
pISSN - 0894-069X
DOI - 10.1002/nav.21903
Subject(s) - mathematical optimization , ranking (information retrieval) , computer science , bayesian probability , monotonic function , entropy (arrow of time) , value of information , selection (genetic algorithm) , dynamic programming , mathematics , machine learning , artificial intelligence , mathematical analysis , physics , quantum mechanics
We study the classical ranking and selection problem, where the ultimate goal is to find the unknown best alternative in terms of the probability of correct selection or expected opportunity cost. This paper adopts an alternative sampling approach to achieve this goal: sampling decisions are made with the objective of maximizing information about the identity of the unknown best alternative, or equivalently, minimizing its Shannon entropy. This adaptive learning problem is formulated as a Bayesian stochastic dynamic program, from which several structural properties are derived, including the monotonicity of the optimal value function in an information-seeking setting. Since the state space of the stochastic dynamic program is unbounded in the Gaussian setting, a one-step look-ahead approach is used to develop a policy. The proposed policy seeks to maximize the one-step information gain about the unknown best alternative, and is therefore called the information gradient (IG) policy. It is also proved that the IG policy is consistent; that is, as the sampling budget grows to infinity, the IG policy finds the true best alternative almost surely. Finally, a computationally efficient estimate of the proposed policy, called the approximated information gradient (AIG) policy, is introduced, and its performance is tested in numerical experiments against recent benchmarks, alongside several sensitivity analyses. Results show that AIG performs competitively against other algorithms from the literature.
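The one-step look-ahead idea in the abstract can be illustrated with a small Monte Carlo sketch. This is not the paper's IG/AIG formula, only an assumed illustrative implementation: independent Gaussian posteriors over the alternatives' means with known sampling noise, the entropy of "which alternative is best" estimated by posterior sampling, and the next measurement chosen to minimize the expected posterior entropy after one hypothetical observation. All function names (`best_entropy`, `ig_choice`) and parameter choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_entropy(mu, sigma, n_mc=4000):
    """Shannon entropy (nats) of the posterior distribution over which
    alternative has the largest mean, estimated by Monte Carlo."""
    draws = rng.normal(mu, sigma, size=(n_mc, len(mu)))
    p = np.bincount(draws.argmax(axis=1), minlength=len(mu)) / n_mc
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def ig_choice(mu, sigma, noise_sd, n_outer=32):
    """One-step look-ahead sampling decision (illustrative sketch):
    for each alternative, simulate hypothetical next observations from its
    posterior predictive, apply the conjugate Gaussian update, and pick the
    alternative minimizing the expected entropy of the best-alternative
    identity -- i.e., maximizing the expected one-step information gain."""
    k = len(mu)
    expected_entropy = np.zeros(k)
    for i in range(k):
        for _ in range(n_outer):
            # hypothetical observation from alternative i's posterior predictive
            y = rng.normal(mu[i], np.sqrt(sigma[i] ** 2 + noise_sd ** 2))
            # conjugate normal-normal update of alternative i's posterior
            prec = 1.0 / sigma[i] ** 2 + 1.0 / noise_sd ** 2
            mu_new, sigma_new = mu.copy(), sigma.copy()
            mu_new[i] = (mu[i] / sigma[i] ** 2 + y / noise_sd ** 2) / prec
            sigma_new[i] = np.sqrt(1.0 / prec)
            expected_entropy[i] += best_entropy(mu_new, sigma_new, n_mc=1000)
        expected_entropy[i] /= n_outer
    return int(expected_entropy.argmin())
```

A usage example: with priors `mu = np.array([0.0, 0.1, 1.0])` and `sigma = np.ones(3)`, `ig_choice(mu, sigma, noise_sd=1.0)` returns the index whose next sample is expected to most reduce uncertainty about the best alternative. The inner Monte Carlo average over hypothetical outcomes plays the role of the computationally cheap approximation that motivates AIG in the paper, though the paper's exact estimator may differ.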
