Monday, 18 June 2018

Time: 8.30 am - 12.00 pm.
Location information: Room 155 CDE (1360).

Summary

Machine learning techniques such as deep neural networks (DNNs) are able to convert large amounts of data into highly predictive models. Complementing their unmatched predictive capability, it is becoming increasingly important to understand, qualitatively and quantitatively, how these models reach their decisions. Our tutorial will provide a broad overview of techniques for interpreting deep models, and of how some of these techniques can be made useful on practical problems. In the first part we will lay out a taxonomy of these methods and explain how the various interpretation techniques can be characterized conceptually and mathematically. The second part of the tutorial will explain when and why we need interpretability.

For background material on the topic, see our reading list.

Outline of the tutorial

1. Definitions of interpretability
2. Techniques for understanding deep representations & explaining individual predictions of a DNN
3. Approaches to quantitatively evaluate interpretability
4. Using interpretability in practice (validating deep models, identifying biases & flaws in the dataset, understanding invariances of the model)
5. Extracting new insights in complex systems with interpretable models
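To give a flavor of item 2 above, one of the simplest techniques for explaining an individual prediction is gradient × input attribution: the gradient of the network output with respect to the input, multiplied elementwise by the input, assigns a relevance score to each input feature. The sketch below is illustrative only, not the presenters' code; the tiny one-hidden-layer ReLU network and its weights are hypothetical values chosen for reproducibility.

```python
import numpy as np

# Hypothetical fixed weights for a tiny bias-free ReLU network
# (2 inputs -> 2 hidden units -> 1 output), for illustration only.
W1 = np.array([[1.0, -1.0],
               [0.5,  2.0]])   # input -> hidden
W2 = np.array([1.0, -0.5])     # hidden -> output

def predict(x):
    """Forward pass; returns output and hidden activations."""
    h = np.maximum(0.0, W1 @ x)   # ReLU hidden layer
    return W2 @ h, h

def gradient_x_input(x):
    """Relevance of each input feature via gradient x input."""
    _, h = predict(x)
    mask = (h > 0).astype(float)     # derivative of ReLU
    grad = W1.T @ (W2 * mask)        # backpropagate through one layer
    return grad * x                  # elementwise attribution

x = np.array([2.0, 1.0])
y, _ = predict(x)
relevance = gradient_x_input(x)
print(y, relevance)  # prints -0.5 [ 1.5 -2. ]
```

For bias-free ReLU networks like this one, the attributions sum exactly to the output (here 1.5 + (-2.0) = -0.5), a conservation property that some of the more elaborate explanation methods covered in the tutorial generalize.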

Organizers

Wojciech Samek, Fraunhofer Heinrich Hertz Institute
Grégoire Montavon, Technical University Berlin
Klaus-Robert Müller, Technical University Berlin