Open Access
Stochastic Gradient Descent in Continuous Time: A Central Limit Theorem
Author(s) - Justin Sirignano, Konstantinos Spiliopoulos
Publication year - 2020
Publication title - Stochastic Systems
Language(s) - English
Resource type - Journals
ISSN - 1946-5238
DOI - 10.1287/stsy.2019.0050
Subject(s) - stochastic gradient descent , gradient descent , central limit theorem , rate of convergence , limit (mathematics) , convex function , stochastic differential equation , mathematical optimization , mathematical analysis , statistics , machine learning , artificial neural network , mathematics , computer science
Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem (CLT) for strongly convex objective functions and, under slightly stronger conditions, for non-convex objective functions as well. An $L^{p}$ convergence rate is also proven for the algorithm in the strongly convex case. The mathematical analysis lies at the intersection of stochastic analysis and statistical learning.
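As a rough illustration of the algorithm the abstract describes, here is a minimal sketch (not the authors' code) of SGDCT discretized with an Euler-Maruyama scheme. It assumes a drift-estimation setting, dX_t = -θ* X_t dt + σ dW_t with parametric drift f(x; θ) = -θx, and uses the update dθ_t = α_t ∇_θ f(X_t; θ_t)(dX_t − f(X_t; θ_t) dt); the constants, step size, and learning-rate schedule are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the authors' implementation) of SGDCT
# for learning the drift parameter of an Ornstein-Uhlenbeck process,
# discretized with an Euler-Maruyama scheme.
#
# True model:        dX_t = -theta_star * X_t dt + sigma dW_t
# Parametric drift:  f(x; theta) = -theta * x, so grad_theta f(x; theta) = -x
# Assumed update:    d theta_t = alpha_t * grad_theta f(X_t; theta_t)
#                                * (dX_t - f(X_t; theta_t) dt)

rng = np.random.default_rng(0)

theta_star, sigma = 2.0, 1.0   # ground-truth drift coefficient, noise level
dt, n_steps = 1e-2, 200_000    # step size and number of Euler-Maruyama steps

x, theta = 1.0, 0.0            # initial state and initial parameter guess
for k in range(n_steps):
    t = k * dt
    alpha = 5.0 / (1.0 + 0.5 * t)                     # decaying learning rate alpha_t
    dX = -theta_star * x * dt + sigma * rng.normal(0.0, np.sqrt(dt))
    grad = -x                                         # grad_theta f(x; theta)
    theta += alpha * grad * (dX - (-theta * x) * dt)  # continuous-time SGD update
    x += dX                                           # stream the next observation

print(f"estimated theta = {theta:.2f}  (true theta = {theta_star})")
```

The learning rate here decays like 1/t, which is the regime in which the paper's central limit theorem characterizes the asymptotic fluctuations of the parameter estimate around the minimizer.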
