Sunday, 7 Oct 2018
Time: 10:00 am - 1:30 pm.
Location information: Room Kokkali

Schedule

10:00 - 10:30  Introduction
10:30 - 11:30  Methods for Explanation
11:30 - 12:00  Coffee Break
12:00 - 12:30  Validating Explanations
12:30 - 13:30  Applications

Summary

Powerful machine learning algorithms such as deep neural networks (DNNs) are now able to harvest very large amounts of training data and to convert them into highly accurate predictive models. DNN models have reached state-of-the-art accuracy in a wide range of practical applications. At the same time, DNNs are generally considered black boxes: given their nonlinearity and deeply nested structure, it is difficult to intuitively and quantitatively understand their inference, e.g. what made the trained DNN model arrive at a particular decision for a given data point. This is a major drawback for applications where interpretability of the decision is an essential prerequisite.

For instance, in medical diagnosis incorrect predictions can be lethal, so simple black-box predictions cannot be trusted by default. Instead, the predictions should be made interpretable to a human expert for verification. In the sciences, deep learning algorithms are able to extract complex relations between physical or biological observables. Interpretation methods that explain these newly inferred relations can be useful for building scientific hypotheses, or for detecting possible artifacts in the data or the model. Interpretability is also a crucial feature from an engineer's perspective, because it makes it possible to identify the most relevant features and parameters and, more generally, to understand the strengths and weaknesses of a model. This feedback can be used to improve the structure of the model or to speed up training.

Recently, the transparency problem has received increasing attention in the deep learning community. Several methods have been developed to understand what a DNN has learned. Some of this work is dedicated to visualizing particular neurons or neuron layers, while other work focuses on methods that visualize the impact of particular regions of a given input image on the prediction. An important question for the practitioner is how to objectively measure the quality of an explanation of a DNN prediction and how to use these explanations to improve the learned model.
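
As a concrete illustration of the second family of methods, the sketch below computes a simple gradient-based sensitivity map, i.e. the derivative of the predicted class score with respect to the input pixels. This is only a minimal example assuming PyTorch and a pretrained ImageNet classifier; the model choice and variable names are illustrative, and the tutorial covers a broader range of explanation techniques.

# Minimal sketch (not taken from the tutorial slides): a plain
# gradient-based sensitivity map, assuming PyTorch and torchvision.
import torch
from torchvision import models

model = models.resnet18(pretrained=True)  # illustrative model choice
model.eval()

# x: one preprocessed input image of shape (1, 3, 224, 224);
# a random tensor is used here only to keep the sketch self-contained.
x = torch.randn(1, 3, 224, 224, requires_grad=True)

logits = model(x)
target = logits.argmax(dim=1).item()   # explain the predicted class

# Gradient of the target class score with respect to the input pixels
logits[0, target].backward()

# Heatmap: maximum absolute gradient over the color channels, shape (224, 224)
saliency = x.grad.abs().max(dim=1)[0].squeeze(0)

In practice, such a saliency map would be visualized as a heatmap overlaid on the input image; more elaborate attribution techniques refine this basic idea.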

Our tutorial will present recently proposed techniques for interpreting, explaining and visualizing deep models and explore their practical usefulness in computer vision.

Slides

1 - Intro: ICIP Tutorial 1
2 - Methods: ICIP Tutorial 2
3 - Evaluation: ICIP Tutorial 3
4 - Applications: ICIP Tutorial 4

Organizers

Wojciech Samek, Fraunhofer Heinrich Hertz Institute
Grégoire Montavon, Technical University Berlin
Klaus-Robert Müller, Technical University Berlin