Open Access
Universal Approximation Theorem for Vector- and Hypercomplex-Valued Neural Networks
Author(s)
Marcos Eduardo Valle,
Wington L. Vital,
Guilherme Vieira
Publication year: 2024
The universal approximation theorem states that a neural network with one hidden layer can approximate continuous functions on compact sets with any desired precision. This theorem supports using neural networks for various applications, including regression and classification tasks. Furthermore, it is valid for real-valued neural networks and some hypercomplex-valued neural networks such as complex-, quaternion-, tessarine-, and Clifford-valued neural networks. However, hypercomplex-valued neural networks are a type of vector-valued neural network defined on an algebra with additional algebraic or geometric properties. This paper extends the universal approximation theorem for a wide range of vector-valued neural networks, including hypercomplex-valued models as particular instances. Precisely, we introduce the concept of non-degenerate algebra and state the universal approximation theorem for neural networks defined on such algebras.
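As an informal illustration of the class of models the abstract describes (not the paper's own construction), the sketch below builds a single-hidden-layer complex-valued network with a split-type activation, i.e. a real activation applied separately to the real and imaginary parts. Complex numbers are the simplest hypercomplex algebra, and networks of this shape are among those covered by universal approximation results for complex-valued models. All names and parameter sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_tanh(z):
    # Split-type activation: apply tanh to the real and imaginary
    # parts independently, returning a complex array.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

# Random complex-valued parameters for a network with one input,
# 16 hidden neurons, and one output (purely illustrative).
W1 = rng.standard_normal((16, 1)) + 1j * rng.standard_normal((16, 1))
b1 = rng.standard_normal((16, 1)) + 1j * rng.standard_normal((16, 1))
W2 = rng.standard_normal((1, 16)) + 1j * rng.standard_normal((1, 16))

def forward(z):
    # z: complex inputs of shape (1, n); one hidden layer, then a
    # linear (complex) output layer, matching the single-hidden-layer
    # architecture in the universal approximation theorem.
    return W2 @ split_tanh(W1 @ z + b1)

z = np.linspace(-1.0, 1.0, 5).astype(complex).reshape(1, -1)
out = forward(z)
print(out.shape)  # (1, 5)
```

The theorem guarantees that, for a non-degenerate algebra and a suitable activation, networks of this one-hidden-layer form can approximate any continuous function on a compact set arbitrarily well as the hidden width grows; the fixed random weights above are only a stand-in for trained parameters.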
Language(s): English