Open Access
Estimation of Subsurface Temperatures in the Tattapani Geothermal Field, Central India, from Limited Volume of Magnetotelluric Data and Borehole Thermograms Using a Constructive Back-Propagation Neural Network
Author(s) -
Anthony E. Akpan,
Mahesh Narayanan,
T. Harinarayana
Publication year - 2014
Publication title -
Earth Interactions
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.309
H-Index - 38
ISSN - 1087-3562
DOI - 10.1175/2013ei000539.1
Subject(s) - initialization , borehole , mean squared error , geothermal gradient , artificial neural network , backpropagation , magnetotellurics , standard deviation , algorithm , statistics , computer science , geology , mathematics , artificial intelligence , electrical resistivity and conductivity , geophysics , engineering , geotechnical engineering , electrical engineering , programming language
A constructive back-propagation code designed to run as a single-hidden-layer, feed-forward neural network (SLFFNN) has been adapted and used to estimate subsurface temperatures from a small volume of magnetotelluric (MT)-derived electrical resistivity data and borehole thermograms. The code was adapted to loop over random weight initializations, searching for initialization conditions that optimally solve the nonlinear problem. Available one-dimensional (1D) MT-derived resistivity data and borehole temperature records from the Tattapani geothermal field, central India, were collated and digitized at 10-m intervals. The two datasets were paired to form a set of input–output pairs, which were then randomized, standardized, and partitioned into three mutually exclusive subsets. In the first training phase, the subsets held 52%, 30%, and 18% of the data for training, validation, and testing, respectively; in the second phase, intended to assess the influence of training-data volume on network performance, the training share was increased to 61% and the testing share reduced to 9%. Standard statistical measures, including the adjusted coefficient of determination (R²a), relative error (ɛ), absolute average deviation (AAD), root-mean-square error (RMSE), and regression analysis, were used to rate network performance quantitatively. A manually designed two-hidden-layer, feed-forward network with 20 and 15 neurons in its first and second hidden layers was also applied to the same problem. The SLFFNN scored 0.97, 3.75, 4.09, 1.41, 1.18, and 1.08 for R²a, AAD, ɛ, RMSE, slope, and intercept, respectively, compared with an ɛ of 20.33 for the manually designed network. The SLFFNN is thus a structurally flexible network that performs better despite the small volume of data used in testing it, although it needs further testing.
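The workflow the abstract describes — standardize and randomly partition the paired resistivity–temperature samples, train a single-hidden-layer feed-forward network under several random weight initializations, keep the one with the lowest validation error, and rate it on a held-out test subset — can be sketched in plain NumPy. This is a minimal illustration, not the authors' code: the data below are a synthetic stand-in (the Tattapani records are not reproduced), the hidden-layer size, learning rate, and epoch count are assumed values, and ordinary R² is computed in place of the paper's adjusted R²a.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paired MT-resistivity / borehole-temperature data;
# the real Tattapani records are not reproduced here.
x = rng.uniform(0.5, 3.0, size=(300, 1))                     # hypothetical log10 resistivity
y = 120.0 - 30.0 * x + rng.normal(0.0, 2.0, size=(300, 1))   # hypothetical temperature (deg C)

# Standardize inputs and targets (zero mean, unit variance)
xs = (x - x.mean(0)) / x.std(0)
ys = (y - y.mean(0)) / y.std(0)

# Randomize and partition into mutually exclusive subsets,
# using the second-phase 61% / 30% / 9% split from the abstract
idx = rng.permutation(len(xs))
n1, n2 = int(0.61 * len(xs)), int(0.91 * len(xs))
tr, va, te = idx[:n1], idx[n1:n2], idx[n2:]

def train_slffnn(xtr, ytr, xva, yva, hidden=10, seed=0, epochs=3000, lr=0.1):
    """Train one single-hidden-layer feed-forward net by plain back-propagation;
    return a predictor and its validation mean-squared error."""
    r = np.random.default_rng(seed)
    w1 = r.normal(0.0, 0.5, (xtr.shape[1], hidden)); b1 = np.zeros(hidden)
    w2 = r.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(xtr @ w1 + b1)          # hidden-layer activations
        p = h @ w2 + b2                     # linear output layer
        err = p - ytr
        # gradients of the mean-squared-error loss
        gw2 = h.T @ err / len(xtr); gb2 = err.mean(0)
        dh = (err @ w2.T) * (1.0 - h ** 2)  # back-propagate through tanh
        gw1 = xtr.T @ dh / len(xtr); gb1 = dh.mean(0)
        w1 -= lr * gw1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2
    predict = lambda a: np.tanh(a @ w1 + b1) @ w2 + b2
    vmse = float(np.mean((predict(xva) - yva) ** 2))
    return predict, vmse

# Loop over random weight initializations, keeping the network
# with the lowest validation error
predict, _ = min((train_slffnn(xs[tr], ys[tr], xs[va], ys[va], seed=s)
                  for s in range(5)), key=lambda t: t[1])

# Rate the retained network on the held-out test subset
yt, pt = ys[te], predict(xs[te])
rmse = float(np.sqrt(np.mean((pt - yt) ** 2)))
aad = float(np.mean(np.abs(pt - yt)))
r2 = 1.0 - float(np.sum((yt - pt) ** 2)) / float(np.sum((yt - yt.mean()) ** 2))
```

Keeping the best of several random initializations, as sketched in the `min(...)` loop, is one simple way to realize the looping-initialization idea the abstract describes for escaping poor starting weights in a nonlinear fit.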
