Open Access
Asymptotics of Reinforcement Learning with Neural Networks
Author(s) - Justin Sirignano, Konstantinos Spiliopoulos
Publication year - 2022
Publication title - Stochastic Systems
Language(s) - English
Resource type - Journals
ISSN - 1946-5238
DOI - 10.1287/stsy.2021.0072
Subject(s) - artificial neural network, initialization, limit (mathematics), independent and identically distributed random variables, reinforcement learning, convergence (economics), stochastic differential equation, ordinary differential equation, mathematics, riccati equation, differential equation, gradient descent, computer science, mathematical analysis, artificial intelligence, random variable, statistics, economics, programming language, economic growth
We prove that a single-layer neural network trained with the Q-learning algorithm converges in distribution to a random ordinary differential equation as the size of the model and the number of training steps become large. Analysis of the limit differential equation shows that it has a unique stationary solution that is the solution of the Bellman equation, thus giving the optimal control for the problem. In addition, we study the convergence of the limit differential equation to the stationary solution. As a by-product of our analysis, we obtain the limiting behavior of single-layer neural networks when trained on independent and identically distributed data with stochastic gradient descent under the widely used Xavier initialization.
