Dynamic pricing based on asymmetric multiagent reinforcement learning
Author(s) - Könönen, Ville
Publication year - 2006
Publication title - International Journal of Intelligent Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.291
H-Index - 87
eISSN - 1098-111X
pISSN - 0884-8173
DOI - 10.1002/int.20121
Subject(s) - reinforcement learning , computer science , bellman equation , q learning , gradient descent , mathematical optimization , artificial intelligence , markov decision process , dynamic pricing , markov chain , markov process , machine learning , algorithm , mathematics , artificial neural network , economics , statistics , microeconomics
Abstract In this article, a dynamic pricing problem is solved using asymmetric multiagent reinforcement learning. In the problem, two competing brokers sell identical products to customers and compete on the basis of price. We model this dynamic pricing problem as a Markov game and solve it using two different learning methods. The first method applies modified gradient descent in the parameter space of the value function approximator, and the second method uses a direct gradient of the parameterized policy function. We present a brief literature survey of pricing models based on multiagent reinforcement learning, introduce the basic concepts of Markov games, and solve the problem using the proposed methods. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 73–98, 2006.
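The setting described in the abstract — two brokers repeatedly choosing prices for identical products, with demand flowing toward the cheaper seller — can be illustrated with a much simpler learner than the paper's asymmetric gradient-based methods. The sketch below uses independent tabular Q-learning over a stateless repeated pricing game with a logit-style demand split; the price grid, demand model, and all parameter values are illustrative assumptions, not taken from the article.

```python
import math
import random

# Hypothetical discrete price levels each broker may charge.
PRICES = [1.0, 1.5, 2.0]


def demand_share(p_own, p_other, sensitivity=2.0):
    """Logit-style market split: the cheaper broker captures a larger
    share of demand. `sensitivity` is an assumed price-elasticity knob."""
    a = math.exp(-sensitivity * p_own)
    b = math.exp(-sensitivity * p_other)
    return a / (a + b)


def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Independent Q-learning for two pricing brokers.

    Each broker keeps a stateless Q-table estimating the expected profit
    of each price level, explores epsilon-greedily, and updates its
    estimate toward the observed per-round profit (price * demand share).
    This is a simplified symmetric stand-in for the article's asymmetric
    (leader-follower) learning methods.
    """
    rng = random.Random(seed)
    q = [[0.0] * len(PRICES) for _ in range(2)]  # one Q-table per broker
    for _ in range(episodes):
        # Both brokers pick a price index (epsilon-greedy).
        acts = []
        for i in range(2):
            if rng.random() < eps:
                acts.append(rng.randrange(len(PRICES)))
            else:
                acts.append(max(range(len(PRICES)), key=lambda a: q[i][a]))
        # Each broker observes its profit and nudges its estimate.
        for i in range(2):
            p_own, p_other = PRICES[acts[i]], PRICES[acts[1 - i]]
            reward = p_own * demand_share(p_own, p_other)
            q[i][acts[i]] += alpha * (reward - q[i][acts[i]])
    return q
```

Each broker's learned values trade off margin (a high price) against volume (a large demand share); the paper's gradient-based value-function and policy-gradient methods address the same trade-off in a richer parameterized setting.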
