Neural nets – types, configurations and pitfalls
Author(s) - Schmitter, Ernst Dieter
Publication year - 1995
Publication title - steel research
Language(s) - English
Resource type - Journals
eISSN - 1869-344X
pISSN - 0177-4832
DOI - 10.1002/srin.199501152
Subject(s) - interpretability, overfitting, computer science, artificial neural network, artificial intelligence, soft computing, backpropagation, fuzzy logic, machine learning
Fuzzy logic, neural nets and genetic algorithms form the core of soft computing methods. They are useful when an exact mathematical model cannot be computed (hard computing). Neural nets have the ability to learn from examples. This advantage is exploited in many applications, and numerous software packages make neural nets quite easy to use. A stage has been reached where some critical remarks are in order to avoid disappointment. Some frequently used net types (backpropagation, LVQ, SOM) are discussed together with configuration and training problems. Important topics are the avoidance of underfitting and overfitting, and the observation that neural nets produce correct outputs only if the inputs lie in the part of the feature space the net was trained on. A detailed analysis of the training data set should therefore be made. In the context of safety-relevant applications, the lack of interpretability of neural net outputs is often criticized. Fuzzy-neuro systems attempt to improve this situation.
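Two of the pitfalls named above can be made concrete in a few lines: overfitting is monitored by holding out a validation set during backpropagation training, and out-of-range inputs are rejected because the net is only trustworthy on the part of the feature space it was trained on. The following is a minimal sketch, not the paper's method: it assumes toy noisy-sine data, a single tanh hidden layer trained by plain gradient-descent backpropagation, and the helper name `predict` is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed for illustration): y = sin(x) + noise, x in [-2, 2].
x = rng.uniform(-2.0, 2.0, size=(200, 1))
y = np.sin(x) + 0.1 * rng.standard_normal((200, 1))

# Hold out a validation set: rising validation loss while training
# loss still falls is the classic symptom of overfitting.
x_tr, y_tr = x[:150], y[:150]
x_va, y_va = x[150:], y[150:]

# One hidden layer with tanh units, small random initial weights.
W1 = 0.5 * rng.standard_normal((1, 16))
b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1))
b2 = np.zeros(1)

def forward(inp):
    h = np.tanh(inp @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
best_va = np.inf
for epoch in range(2000):
    h, out = forward(x_tr)
    err = out - y_tr                      # dLoss/dOutput for MSE (up to 2/N)
    # Backpropagation: chain rule through the output and hidden layers.
    gW2 = h.T @ err / len(x_tr)
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)      # tanh' = 1 - tanh^2
    gW1 = x_tr.T @ dh / len(x_tr)
    gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    # Track the best validation loss; in practice one would stop early
    # (or restore these weights) once it stops improving.
    va_loss = float(((forward(x_va)[1] - y_va) ** 2).mean())
    best_va = min(best_va, va_loss)

def predict(x_new):
    # Refuse inputs outside the region the net was trained on:
    # outside it, the output is extrapolation and cannot be trusted.
    lo, hi = x_tr.min(), x_tr.max()
    if np.any(x_new < lo) or np.any(x_new > hi):
        raise ValueError("input outside trained feature region")
    return forward(x_new)[1]
```

The same range check generalizes per feature (or via a density estimate) in higher-dimensional feature spaces, which is what the abstract's call for a detailed analysis of the training data amounts to in practice.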
