Open Access
Deep neural network with FGL for small dataset classification
Author(s) -
Guo Chunsheng,
Li Ruizhe,
Yang Meng,
Tang Xianghong
Publication year - 2019
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2018.5616
Subject(s) - mnist database , computer science , artificial neural network , artificial intelligence , convergence (economics) , pattern recognition (psychology) , feature (linguistics) , machine learning , contextual image classification , training set , data mining , image (mathematics) , linguistics , philosophy , economics , economic growth
In certain applications, classification models must be trained with small datasets. This study proposes a new deep neural network with a feature generalisation layer (FGL). First, instead of using a generative network for data augmentation, the FGL is modelled with a latent variable model that diversifies features directly while sharing the other layers of the network. Then, dual-objective functions are defined to optimise the network's parameters: one minimises the generation error and the other minimises the classification error. Finally, a parallel multibranch structure is used in the FGL to improve the convergence of model training. Across various quantities of training samples, classification accuracy improved by up to 4.63% on the MNIST dataset and up to 3.00% on the CIFAR10 natural-image dataset relative to the reference model. These experimental results illustrate the effectiveness of the authors' method for training classification models with small datasets.
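The abstract's three ingredients — a latent-variable layer that perturbs shared features, a dual-objective loss combining generation and classification error, and a parallel multibranch structure — can be illustrated with a minimal NumPy sketch. This is not the authors' architecture: the Gaussian noise model, the branch averaging, the weighting `lam`, and all function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fgl(features, n_branches=3, noise_scale=0.1):
    """Hypothetical feature-generalisation layer: each parallel branch
    diversifies the shared features with latent Gaussian noise, and the
    branch outputs are averaged (the exact latent model is assumed)."""
    branches = [features + noise_scale * rng.standard_normal(features.shape)
                for _ in range(n_branches)]
    return np.mean(branches, axis=0), branches

def dual_objective(features, branches, logits, labels, lam=0.5):
    """Illustrative dual-objective loss: a generation error (how far the
    diversified features stray from the originals) plus a classification
    cross-entropy, mixed by an assumed weight lam."""
    gen_err = np.mean([(b - features) ** 2 for b in branches])
    # softmax cross-entropy for the classification term
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    cls_err = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    return lam * gen_err + (1 - lam) * cls_err

# Toy usage: a batch of 4 feature vectors and pretend classifier outputs.
x = rng.standard_normal((4, 8))
gen, branches = fgl(x)
logits = rng.standard_normal((4, 3))
labels = np.array([0, 1, 2, 0])
loss = dual_objective(x, branches, logits, labels)
print(float(loss))
```

In a real training loop both terms would be minimised jointly by backpropagation through the shared layers; the sketch only shows how the two objectives are combined into a single scalar.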
