Image transformation with cellularly connected evolutionary neural networks
Author(s) -
Otsuka Junji,
Yata Noriko,
Nagao Tomoharu
Publication year - 2013
Publication title -
Electronics and Communications in Japan
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.131
H-Index - 13
eISSN - 1942-9541
pISSN - 1942-9533
DOI - 10.1002/ecj.11460
Subject(s) - artificial neural network , computer science , transformation (genetics) , image (mathematics) , artificial intelligence , pixel , genetic programming , cellular neural network , pattern recognition (psychology) , computer vision , biochemistry , chemistry , gene
Abstract Constructing image transformations manually requires a large amount of effort, and thus several methods of automating the construction with machine learning, such as neural networks or genetic programming, have been proposed. Most of these methods only construct image filters that calculate an output value from the values in the local area of each pixel independently. However, in several tasks, such as area detection, information on more distant areas is helpful for processing. In this paper, we introduce a new neural network model for the automatic construction of image transformations. The proposed model is composed of a regular array of identical evolutionary neural networks, the Real‐Valued Flexibly Connected Neural Networks (RFCN) that we previously proposed, and each RFCN is connected to its neighboring RFCNs. We call the proposed model the Cellular RFCN (CRFCN). Because of the local connections, each RFCN can take information on distant areas into account indirectly. We apply CRFCN to three image transformation tasks, compare it with other methods, and examine its effectiveness. © 2013 Wiley Periodicals, Inc. Electron Comm Jpn, 96(5): 17–27, 2013; Published online in Wiley Online Library (wileyonlinelibrary.com). DOI 10.1002/ecj.11460
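The key architectural idea in the abstract, a regular grid of identical units where each unit reads its own pixel and its neighbors' states so that repeated updates propagate information beyond the local neighborhood, can be illustrated with a toy sketch. This is not the paper's RFCN (whose internal structure and evolutionary training are not given here); the per-cell update, weight layout, and 4-neighborhood are illustrative assumptions.

```python
import numpy as np

def cellular_sketch(image, steps=3, rng=None):
    """Toy cellular model (hypothetical, not the paper's RFCN):
    one state value per pixel, updated by an identical shared-weight
    rule from the pixel value and the 4-neighborhood states."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Shared weights for [self-state, up, down, left, right, pixel, bias]
    w = rng.normal(scale=0.5, size=7)
    state = np.zeros_like(image, dtype=float)
    for _ in range(steps):
        up    = np.roll(state,  1, axis=0)
        down  = np.roll(state, -1, axis=0)
        left  = np.roll(state,  1, axis=1)
        right = np.roll(state, -1, axis=1)
        # The same update is applied at every cell; each extra step
        # lets information travel one neighborhood farther.
        state = np.tanh(w[0] * state + w[1] * up + w[2] * down +
                        w[3] * left + w[4] * right + w[5] * image + w[6])
    return state

img = np.arange(16, dtype=float).reshape(4, 4) / 15.0
out = cellular_sketch(img, steps=3)
print(out.shape)  # (4, 4)
```

After `steps` updates, a cell's state depends on pixels up to `steps` cells away, which is the indirect long-range influence the abstract attributes to the local RFCN connections; in the actual CRFCN the per-cell rule is an evolved neural network rather than this fixed linear-plus-tanh update.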
