Open Access
Multi‐sample inference network
Author(s) - Liang Daojun, Yang Feng, Wang Xiuping, Ju Xiaohui
Publication year - 2019
Publication title - IET Computer Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2018.5126
Subject(s) - inference, computer science, artificial neural network, artificial intelligence, sample (material), network architecture, process (computing), generative adversarial network, machine learning, data mining, contrast (vision), pattern recognition (psychology), deep learning, chemistry, chromatography, computer security, operating system
This study explores whether a neural network can classify multiple samples simultaneously in a single forward pass. To this end, the authors propose a multi-input, multi-prediction network architecture, which they call a multi-sample inference network (MSIN). Besides maximising the use of shared network parameters, MSIN can be trained on multiple samples at once: samples are randomly combined, which acts as data augmentation, and the corresponding random combination of labels regularises the loss, giving MSIN better generalisation performance. MSIN also addresses category expansion, a problem that is otherwise difficult to solve because a conventional neural network can only predict a fixed number of categories. The proposed network handles it by expanding the initial layers and the final layers, and is trained on samples from multiple domains at the same time, so that predictive performance on the existing categories shows no significant decline. The MSIN method can further be applied to generative adversarial networks, enabling them to generate samples from multiple sample domains simultaneously.
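The abstract describes three ideas: per-sample initial layers feeding a shared trunk with per-sample prediction heads, random sample/label pairing as augmentation and loss regularisation, and category expansion by adding new initial and final layers. The abstract does not give the exact architecture, so the following is a minimal numpy sketch under stated assumptions: the class name `TinyMSIN`, the per-branch trunk layout, all layer sizes, and the helpers `random_pairs` and `add_domain` are illustrative inventions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class TinyMSIN:
    """Hypothetical multi-sample inference net: one initial layer and one
    prediction head per input branch, with a single trunk whose weights are
    shared by every branch (one reading of 'network shared parameters')."""

    def __init__(self, in_dim=4, hidden=8, n_classes=3, n_inputs=2):
        self.hidden = hidden
        # "Initial layers": one entry projection per input branch.
        self.W_in = [rng.normal(0, 0.1, (in_dim, hidden)) for _ in range(n_inputs)]
        # Shared trunk, reused by every sample/branch.
        self.W_shared = rng.normal(0, 0.1, (hidden, hidden))
        # "Final layers": one classifier head per branch.
        self.W_out = [rng.normal(0, 0.1, (hidden, n_classes)) for _ in range(n_inputs)]

    def forward(self, xs):
        # xs: list of arrays, each (batch, in_dim); one prediction per sample,
        # all produced in a single call rather than one pass per sample.
        outs = []
        for x, Wi, Wo in zip(xs, self.W_in, self.W_out):
            h = relu(relu(x @ Wi) @ self.W_shared)  # trunk shared across branches
            outs.append(softmax(h @ Wo))
        return outs

    def add_domain(self, in_dim, n_classes):
        # Category expansion: append a new initial layer and a new head for an
        # extra domain; the shared trunk and existing branches are untouched,
        # so predictions for existing categories are unaffected.
        self.W_in.append(rng.normal(0, 0.1, (in_dim, self.hidden)))
        self.W_out.append(rng.normal(0, 0.1, (self.hidden, n_classes)))

def random_pairs(X, y, rng):
    """Random sample combination as augmentation: each sample is paired with
    another drawn at random, and labels are combined the same way."""
    idx = rng.permutation(len(X))
    return [X, X[idx]], [y, y[idx]]

X = rng.normal(size=(5, 4))
y = rng.integers(0, 3, size=5)
net = TinyMSIN()
xs, ys = random_pairs(X, y, rng)
probs = net.forward(xs)        # two probability matrices, one per head
net.add_domain(in_dim=4, n_classes=5)  # grow to a third domain with 5 classes
```

Keeping the trunk per-branch (rather than concatenating branch features) is what makes `add_domain` cheap here: new branches only add an entry projection and a head. Whether the published MSIN shares its trunk this way is an assumption of this sketch.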
