Open Access
Interpreting Deep Visual Representations via Network Dissection
Author(s) -
Bolei Zhou,
David Bau,
Aude Oliva,
Antonio Torralba
Publication year - 2018
Publication title -
IEEE Transactions on Pattern Analysis and Machine Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.811
H-Index - 372
eISSN - 1939-3539
pISSN - 0162-8828
DOI - 10.1109/TPAMI.2018.2858759
Subject(s) - interpretability, artificial intelligence, initialization, computer science, convolutional neural network, deep learning, machine learning, pattern recognition (psychology), artificial neural network, programming language
The success of recent deep convolutional neural networks (CNNs) depends on learning hidden representations that can summarize the important factors of variation behind the data. In this work, we describe Network Dissection, a method that interprets networks by providing meaningful labels to their individual units. The proposed method quantifies the interpretability of CNN representations by evaluating the alignment between individual hidden units and visual semantic concepts. By identifying the best alignments, units are given interpretable labels ranging across colors, materials, textures, parts, objects, and scenes. The method reveals that deep representations are more transparent and interpretable than they would be under a random but equally powerful basis. We apply our approach to interpret and compare the latent representations of several network architectures trained to solve a wide range of supervised and self-supervised tasks. We then examine factors affecting network interpretability, such as the number of training iterations, regularization, different initialization parameters, and network depth and width. Finally, we show that the interpreted units can be used to provide explicit explanations of a given CNN prediction for an image. Our results highlight that interpretability is an important property of deep neural networks, one that provides new insights into their hierarchical structure.
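The core alignment measure in Network Dissection is an intersection-over-union (IoU) score between a unit's thresholded activation map and a ground-truth concept segmentation: the unit is labeled with the concept whose IoU is highest. The sketch below is an illustrative NumPy reconstruction of that score, not the authors' released code; the array shapes, the `quantile=0.995` firing threshold (the paper activates a unit on its top 0.5% of activations), and the function name are assumptions for demonstration.

```python
import numpy as np

def unit_concept_iou(activations, concept_masks, quantile=0.995):
    """Score how well one CNN unit aligns with one visual concept.

    activations:   (N, H, W) float array -- the unit's activation maps,
                   upsampled to mask resolution, over N probe images.
    concept_masks: (N, H, W) bool array -- ground-truth segmentation of
                   the concept in the same images.
    quantile:      dataset-wide quantile above which the unit counts as
                   "firing" (0.995 keeps the top 0.5% of activations,
                   matching the threshold described in the paper).
    """
    # One threshold per unit, computed over all probe images jointly.
    threshold = np.quantile(activations, quantile)
    fired = activations > threshold
    # IoU between where the unit fires and where the concept appears.
    intersection = np.logical_and(fired, concept_masks).sum()
    union = np.logical_or(fired, concept_masks).sum()
    return intersection / union if union > 0 else 0.0
```

In use, this score would be computed for every (unit, concept) pair over a densely labeled probe set such as Broden; a unit is then reported as a detector for its best-matching concept when the IoU exceeds a small cutoff (the paper uses 0.04).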
