The training response law explains how deep neural networks learn
Author(s) -
Kenichi Nakazato
Publication year - 2022
Publication title -
Journal of Physics: Complexity
Language(s) - English
Resource type - Journals
ISSN - 2632-072X
DOI - 10.1088/2632-072x/ac68bf
Subject(s) - simple (philosophy) , artificial intelligence , artificial neural network , computer science , kernel (algebra) , iterated function , process (computing) , construct (python library) , generalization , field (mathematics) , machine learning , mathematics , pure mathematics , mathematical analysis , philosophy , epistemology , programming language , operating system
Deep neural networks are the most widely applied technology of this decade. In spite of their fruitful applications, the mechanism behind them is still to be elucidated. We study the learning process with a very simple supervised-learning encoding problem. As a result, we find a simple law in the training response, which the neural tangent kernel describes. The response consists of a power-law-like decay multiplied by a simple response kernel. With this law we can construct a simple mean-field dynamical model that explains how the network learns. During learning, the input space is split into sub-spaces through competition between the kernels. With the iterated splits and aging, the network gains complexity but finally loses its plasticity.
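The training response studied in the abstract is governed by the neural tangent kernel (NTK), i.e. the inner product of parameter gradients at two inputs. As a minimal illustration (not the paper's actual experiment), the empirical NTK of a toy one-hidden-layer tanh network can be computed directly; the network width `m`, input dimension `d`, and 1/sqrt(m) scaling are assumptions chosen for this sketch.

```python
import numpy as np

# Toy one-hidden-layer network f(x) = a . tanh(W x) / sqrt(m).
# Width m, input dimension d, and the scaling are illustrative choices.
rng = np.random.default_rng(0)
m, d = 64, 2
W = rng.normal(size=(m, d))
a = rng.normal(size=m)

def grads(x):
    """Gradient of f with respect to all parameters (a and W), flattened."""
    h = W @ x
    phi = np.tanh(h)
    da = phi / np.sqrt(m)                                  # df/da
    dW = (a * (1 - phi**2))[:, None] * x[None, :] / np.sqrt(m)  # df/dW
    return np.concatenate([da, dW.ravel()])

def ntk(x1, x2):
    """Empirical neural tangent kernel: inner product of parameter gradients."""
    return grads(x1) @ grads(x2)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(ntk(x, x), ntk(x, y))  # diagonal entry is positive; kernel is symmetric
```

Under gradient descent, the change in the output at one input caused by a training step on another is proportional to this kernel, which is the "training response" the abstract's power-law decay multiplies.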