Open Access
Neural Networks under Hamiltonian Constraints: A Comprehensive Review on Structural Evolution and Applications
Author(s) -
Zhiyi Zhang,
Shuangcheng Bai,
Yongqi Liang,
Zhengyi Bao
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3638011
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
Traditional algorithms and neural networks often suffer from energy drift in long-term predictions when modeling simple and fundamental natural laws, primarily because they lack inductive biases rooted in physical structure. Models that incorporate Hamiltonian mechanics into neural networks leverage the conservation of energy and other desirable properties of Hamiltonian systems, leading to improved performance in tasks such as trajectory prediction, fundamental law learning, and image recognition. This study reviews the development and applications of neural networks constructed with Hamiltonian mechanics. We analyze two main categories, Hamiltonian Neural Networks (HNNs) and generative Hamiltonian neural networks, focusing on their architectural designs, data representations, energy conservation capabilities, loss function formulations, and choices of numerical integrators. Furthermore, we assess the advantages and challenges of these models in real-world applications. Hamiltonian-based neural networks demonstrate significant potential for preserving energy conservation and enhancing physical consistency. Future research should aim to optimize network architectures and parameter update mechanisms to improve both performance and interpretability. Additionally, exploring the integration of Hamiltonian neural networks with large language models may offer novel insights for tackling complex tasks and achieving more efficient learning.
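The energy drift the abstract refers to, and the role of the numerical integrator, can be illustrated with a toy system that is not taken from the paper: a unit harmonic oscillator with Hamiltonian H(q, p) = p²/2 + q²/2, integrated once with explicit Euler (which ignores the symplectic structure and drifts) and once with symplectic Euler (which respects it and keeps energy bounded). In an HNN the gradients ∂H/∂q and ∂H/∂p would come from a learned network via automatic differentiation; here they are hand-coded for the assumed toy Hamiltonian.

```python
import math

def hamiltonian(q, p):
    # Toy Hamiltonian (assumed for illustration): unit-mass harmonic oscillator
    return 0.5 * p * p + 0.5 * q * q

def dH_dq(q, p):
    return q  # analytic gradient; an HNN would obtain this by autodiff

def dH_dp(q, p):
    return p

def explicit_euler(q, p, dt):
    # Non-symplectic update: energy grows steadily over long rollouts
    return q + dt * dH_dp(q, p), p - dt * dH_dq(q, p)

def symplectic_euler(q, p, dt):
    # Update p first, then q with the new p: preserves the symplectic form,
    # so the energy error stays bounded instead of accumulating
    p_new = p - dt * dH_dq(q, p)
    q_new = q + dt * dH_dp(q, p_new)
    return q_new, p_new

def final_energy(step, n=10000, dt=0.01):
    q, p = 1.0, 0.0
    for _ in range(n):
        q, p = step(q, p, dt)
    return hamiltonian(q, p)

E0 = hamiltonian(1.0, 0.0)
drift_explicit = abs(final_energy(explicit_euler) - E0)
drift_symplectic = abs(final_energy(symplectic_euler) - E0)
print(drift_explicit, drift_symplectic)
```

Over 10,000 steps the explicit-Euler rollout roughly doubles the energy, while the symplectic rollout stays within O(dt) of the true value. This is the design motivation behind the integrator choices the review surveys: pairing a learned Hamiltonian with a symplectic integrator carries the conservation property into long-horizon predictions.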
