
Learning Partial Differential Equations from Noisy Data using Neural Networks
Author(s) -
Kashvi Srivastava,
Mihir Ahlawat,
Jaskaran Singh,
Vivek Kumar
Publication year - 2020
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1655/1/012075
Subject(s) - partial differential equation, artificial neural network, noise reduction, nonlinear system, mathematics, noise (video), computer science, polynomial, algorithm, artificial intelligence, mathematical analysis, physics, quantum mechanics, image (mathematics)
The problem of learning partial differential equations (PDEs) from given data is investigated here. Several algorithms have been developed for learning PDEs from accurate data sets, including methods that use sparse optimization to approximate the coefficients of candidate terms in a general PDE model. In this work, the study is extended to spatiotemporal data sets with various noise levels. We compare the performance of conventional and novel methods for denoising the data, using different neural-network architectures to denoise the data and approximate derivatives. These methods are numerically tested on the linear convection-diffusion equation and on the nonlinear convection-diffusion (Burgers') equation. The results suggest that changing the number of hidden units and hidden layers in the network architecture significantly affects the accuracy of the approximations. This is a further improvement on the previously known denoising methods of finite differences, polynomial regression splines, and single-layer neural networks.
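The sparse-optimization step the abstract refers to can be illustrated with a minimal sketch: build a library of candidate terms, estimate derivatives by finite differences, and solve a thresholded least-squares problem for the PDE coefficients. This is not the paper's implementation; it assumes clean synthetic data from an analytic convection-diffusion solution (the paper's focus is the harder case where the data must first be denoised), and all names and the threshold value are illustrative.

```python
import numpy as np

# Analytic solution of the linear convection-diffusion equation
#   u_t = -c * u_x + nu * u_xx
# (an advected, spreading Gaussian). Clean data stands in for the
# denoised data; the noise-handling step is omitted in this sketch.
c, nu, t0 = 1.0, 0.1, 1.0
x = np.linspace(-5.0, 5.0, 400)
t = np.linspace(0.0, 1.0, 200)
dx, dt = x[1] - x[0], t[1] - t[0]
T, X = np.meshgrid(t, x, indexing="ij")  # u has shape (nt, nx)
u = np.exp(-(X - c * T) ** 2 / (4 * nu * (T + t0))) / np.sqrt(4 * np.pi * nu * (T + t0))

# Approximate derivatives with second-order finite differences.
u_t = np.gradient(u, dt, axis=0)
u_x = np.gradient(u, dx, axis=1)
u_xx = np.gradient(u_x, dx, axis=1)

# Candidate library: model u_t = Theta @ xi, columns [u, u_x, u_xx, u*u_x].
Theta = np.column_stack([u.ravel(), u_x.ravel(), u_xx.ravel(), (u * u_x).ravel()])
rhs = u_t.ravel()

# One pass of sequentially thresholded least squares:
# fit, prune small coefficients, then refit on the surviving terms.
xi, *_ = np.linalg.lstsq(Theta, rhs, rcond=None)
keep = np.abs(xi) > 0.05
sol, *_ = np.linalg.lstsq(Theta[:, keep], rhs, rcond=None)
xi_sparse = np.zeros_like(xi)
xi_sparse[keep] = sol

# The retained coefficients should be close to -c and nu.
print(dict(zip(["u", "u_x", "u_xx", "u*u_x"], np.round(xi_sparse, 3))))
```

On noisy data, the finite-difference derivatives amplify the noise, which is why the paper replaces this step with neural-network denoising before the sparse fit.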