Open Access
Learning the Structure of Deep Architectures Using L1 Regularization
Author(s) -
Praveen Kulkarni,
Joaquin Zepeda,
Frédéric Jurie,
Patrick Pérez,
Louis Chevallier
Publication year - 2015
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.29.23
Subject(s) - discriminative model , regularization , computer science , artificial intelligence , deep learning , pattern recognition , row and column spaces , feature selection , algorithm , mathematics
We present a method that formulates the selection of the structure of a deep architecture as a penalized, discriminative learning problem. Up to now, the structure of deep architectures has been fixed by hand, and only the weights are learned using discriminative learning. Our work is a first attempt towards a more formal method of deep structure selection. We consider architectures consisting only of fully-connected layers, and our approach relies on diagonal matrices inserted between subsequent layers. By including an L1 norm of the diagonal entries of said matrices as a regularization penalty, we force the diagonals to be sparse, thereby selecting the effective number of rows (respectively, columns) of the corresponding layer's (next layer's) weight matrix. We carry out experiments on a standard dataset and show that our method succeeds in selecting the structure of deep architectures of multiple layers. One variant of our architecture results in a feature vector of size as little as 36, while retaining very high image classification performance.
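A minimal sketch of the idea described in the abstract, in NumPy: a diagonal gate is inserted between two fully-connected layers, an L1 penalty on its entries encourages sparsity, and zeroed entries prune the matching row of the first layer and column of the second. The layer sizes, threshold, and penalty weight here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two fully-connected layers with a learnable diagonal "gate" between them.
# Sizes are hypothetical, chosen only for illustration.
W1 = rng.standard_normal((128, 64))   # layer 1 weights: 64-dim input -> 128 units
d = rng.standard_normal(128)          # diagonal entries of the inserted matrix D
W2 = rng.standard_normal((10, 128))   # layer 2 weights: 128 -> 10 outputs

def forward(x):
    h = np.maximum(W1 @ x, 0.0)       # fully-connected layer + ReLU
    h = d * h                         # multiply by D = diag(d)
    return W2 @ h

def l1_penalty(lam=0.1):
    # L1 norm of the diagonal entries, added to the discriminative loss
    # during training; it drives many entries of d toward zero.
    return lam * np.abs(d).sum()

# After training, (near-)zero entries of d make the matching row of W1 and
# column of W2 inactive, so they can be removed, selecting the layer's
# effective width. Here we zero small entries by hand to illustrate.
d[np.abs(d) < 0.5] = 0.0              # hypothetical post-training sparsity
keep = np.flatnonzero(d)
W1_eff, W2_eff = W1[keep, :], W2[:, keep]
print(W1_eff.shape[0], "effective units kept out of", W1.shape[0])
```

Because the pruned rows and columns are multiplied by exactly zero, the pruned network computes the same function as the full gated one, which is what makes the L1-selected structure usable without retraining from scratch.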
