An efficient neurodynamic model to solve nonconvex nonlinear optimization problems and its applications
Author(s) -
Moghaddas Mohammad,
Tohidi Ghasem
Publication year - 2020
Publication title -
Expert Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.365
H-Index - 38
eISSN - 1468-0394
pISSN - 0266-4720
DOI - 10.1111/exsy.12498
Subject(s) - karush–kuhn–tucker conditions , computer science , artificial neural network , mathematical optimization , convergence (economics) , nonlinear system , lyapunov function , optimization problem , projection (relational algebra) , mathematics , artificial intelligence , algorithm , physics , quantum mechanics , economics , economic growth
This paper presents a recurrent neural network for solving nonconvex nonlinear optimization problems subject to nonlinear inequality constraints. First, the p-power transformation is exploited to locally convexify the Lagrangian function of the nonconvex nonlinear optimization problem. Next, the proposed neural network is constructed from the Karush–Kuhn–Tucker (KKT) optimality conditions and the projection function. An important property of this neural network is that its equilibrium point corresponds to the optimal solution of the original problem. Using an appropriate Lyapunov function, it is shown that the proposed neural network is stable in the sense of Lyapunov and converges to the global optimal solution of the original problem. The sensitivity of the convergence to changes in the scaling factors is also analysed. Compared with existing neural networks for such problems, the proposed network offers higher accuracy of the obtained solutions, faster convergence, and lower complexity. Finally, simulation results are provided to show that the proposed model matches or outperforms existing models.
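To make the abstract's construction concrete, the following is a minimal sketch of the general idea behind a KKT-based projection neural network: the state follows the negative gradient of the Lagrangian while the multiplier follows a flow projected onto the nonnegative orthant, so equilibria satisfy the KKT conditions. This is an illustrative toy on a *convex* problem only; the paper's actual model additionally applies the p-power transformation to convexify nonconvex problems and uses its own scaling factors, neither of which is reproduced here. All names and the test problem are hypothetical.

```python
# Hedged sketch of a KKT-based projection dynamics (not the paper's model):
#   minimize  (x1-2)^2 + (x2-1)^2   subject to  x1^2 + x2^2 <= 1
# Dynamics:
#   dx/dt   = -( grad_f(x) + lam * grad_g(x) )        (Lagrangian descent)
#   dlam/dt = max(0, lam + g(x)) - lam                (projection onto R+)
# At an equilibrium these enforce stationarity and complementarity,
# i.e. the KKT conditions of the toy problem.

def solve(dt=0.005, steps=40000):
    """Forward-Euler integration of the projection dynamics."""
    x1, x2, lam = 0.0, 0.0, 0.0
    for _ in range(steps):
        g = x1 * x1 + x2 * x2 - 1.0                  # constraint value g(x)
        dx1 = -(2.0 * (x1 - 2.0) + lam * 2.0 * x1)   # -(df/dx1 + lam*dg/dx1)
        dx2 = -(2.0 * (x2 - 1.0) + lam * 2.0 * x2)   # -(df/dx2 + lam*dg/dx2)
        dlam = max(0.0, lam + g) - lam               # projected multiplier flow
        x1 += dt * dx1
        x2 += dt * dx2
        lam += dt * dlam
    return x1, x2, lam

x1, x2, lam = solve()
# The KKT point here is the projection of (2, 1) onto the unit disk,
# (2, 1)/sqrt(5) ~ (0.894, 0.447), with an active-constraint multiplier.
print(round(x1, 3), round(x2, 3), round(lam, 3))
```

For convex problems such flows are known to converge to the KKT point; the paper's contribution is extending this style of network to nonconvex problems via local convexification.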
