Open Access
Restricted-Variance Molecular Geometry Optimization Based on Gradient-Enhanced Kriging
Author(s) -
Gerardo Raggi,
Ignacio Fdez. Galván,
Christian L. Ritterhoff,
Morgane Vacher,
Roland Lindh
Publication year - 2020
Publication title -
Journal of Chemical Theory and Computation
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.001
H-Index - 185
eISSN - 1549-9626
pISSN - 1549-9618
DOI - 10.1021/acs.jctc.0c00257
Subject(s) - hessian matrix , kriging , variance , surrogate model , mathematical optimization , optimization problem , algorithm , machine learning , statistics , mathematics , computer science
Machine learning techniques, specifically gradient-enhanced Kriging (GEK), have been implemented for molecular geometry optimization. GEK-based optimization has many advantages over conventional step-restricted, second-order truncated-expansion molecular optimization methods. In particular, the surrogate model given by GEK can have multiple stationary points, converges smoothly to the exact model as the number of sample points increases, and contains an explicit expression for the expected error of the model function at an arbitrary point. Machine learning is, however, associated with an abundance of data, contrary to the situation desired for efficient geometry optimizations. In this paper, we demonstrate how the GEK procedure can be used such that, even in the presence of few data points, the surrogate surface robustly guides the optimization to a minimum of a potential energy surface. In this respect, the GEK procedure mimics the behavior of a conventional second-order scheme while retaining the flexibility of the superior machine learning approach. Moreover, the expected error is used during the optimizations to facilitate restricted-variance optimization. A procedure that relates the eigenvalues of the approximate guessed Hessian to the individual characteristic lengths used in the GEK model reduces the number of empirical parameters to two: the value of the trend function and the maximum allowed variance. These parameters are determined using the extended Baker (e-Baker) and part of the Baker transition-state (Baker-TS) test suites as a training set. The resulting optimization procedure is tested on the e-Baker, full Baker-TS, and S22 test suites, at the density functional theory and second-order Møller-Plesset levels of approximation. The results show that the new method generally performs as well as or better than a state-of-the-art conventional method, even in cases where no significant improvement was expected.
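To make the restricted-variance idea concrete, the sketch below builds a plain (not gradient-enhanced) Kriging surrogate on a toy one-dimensional potential and then picks the next geometry as the surrogate minimum among points whose predicted variance stays below a threshold. All names, the constant trend value, the characteristic length, and the `max_var` threshold are illustrative assumptions, not the paper's actual parameters or implementation.

```python
import numpy as np

def rbf(a, b, length=0.6):
    # Squared-exponential covariance with a single characteristic length
    # (illustrative; the paper ties lengths to Hessian eigenvalues).
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def kriging_predict(x_train, y_train, x_query, trend=0.0, noise=1e-10):
    """Posterior mean and variance of a simple constant-trend Kriging model."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(x_train, x_query)                     # cross-covariances
    alpha = np.linalg.solve(K, y_train - trend)
    mean = trend + k_star.T @ alpha                    # predicted energy
    # Prior variance k(x, x) = 1 minus the explained part.
    var = 1.0 - np.einsum('ij,ij->j', k_star, np.linalg.solve(K, k_star))
    return mean, np.maximum(var, 0.0)

# Toy "potential energy surface" sampled at three geometries.
f = lambda x: (x - 1.0) ** 2
x_s = np.array([-0.5, 0.0, 0.5])
y_s = f(x_s)

grid = np.linspace(-1.0, 2.0, 301)
mean, var = kriging_predict(x_s, y_s, grid, trend=y_s.mean())

# Restricted-variance step: only trust the surrogate where its own
# error estimate is small, then step to the lowest allowed point.
max_var = 0.3  # illustrative threshold
allowed = var <= max_var
x_next = grid[allowed][np.argmin(mean[allowed])]
```

Because the variance grows away from the sampled geometries, the threshold automatically limits how far each step extrapolates, playing the role that a trust radius plays in conventional step-restricted schemes.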
