Tuesday, 6 September 2016 at BarcelonaTech Campus Nord, Spain

Location information: The workshop will be held in the room Sala d'Actes of the building Edifici Vertex (i.e. the same building as the ICANN 2016 main conference).


Machine learning (ML) methods such as deep neural networks have demonstrated high predictive performance on a number of tasks in the sciences and industry. However, these predictive models often behave as black boxes, and interpretability must be built expressly into the system.

In practical problems such as medical diagnosis, where even a single incorrect prediction can be costly, an ML prediction cannot be trusted by default. Instead, the prediction should be made interpretable to a human expert (e.g. explainable in terms of the input variables) for careful verification.

In the sciences, ML algorithms are able to extract complex relations between physical or biological observables. The design of interpretation methods to explain these newly inferred relations is therefore instrumental for building scientific hypotheses and for detecting possible artefacts in the data or the model.

The Workshop on Machine Learning and Interpretability aims to review recent techniques for enabling the interpretability of machine learning models and to identify new fields of application for such techniques. Furthermore, it provides an opportunity for participants to initiate new interdisciplinary projects.

A satellite event of ICANN 2016.

List of speakers

Lars Kai Hansen, Technical University of Denmark

Lars Kai Hansen holds M.Sc. (1983) and Ph.D. (1986) degrees in physics from the University of Copenhagen. Since 1990 he has been with the Technical University of Denmark, where he currently heads the Section for Cognitive Systems. He has published more than 300 papers and book chapters on machine learning, signal processing, and applications in bio-medicine and digital media. His research has been generously funded by the Danish Research Councils and private foundations, the European Union, and the US National Institutes of Health. Among his contributions are neural network ensemble methods (1990) and machine learning for brain state decoding based on PET (1994) and fMRI (1997). In 2011 he was elected “Cátedra de Excelencia” at UC3M Madrid, Spain.

Matthew Zeiler, Clarifai

Matthew Zeiler is an artificial intelligence expert with a Ph.D. in machine learning from NYU. His groundbreaking research in visual recognition, alongside renowned machine learning pioneers Geoff Hinton and Yann LeCun, has propelled the image recognition industry from theory to real-world practice. As the founder of Clarifai, Matt is applying his award-winning research to create the best visual recognition solutions for businesses and developers and power the next generation of intelligent apps. Reach him @MattZeiler. Clarifai is an artificial intelligence company that excels in visual recognition, solving real-world problems for businesses and developers alike. Founded in 2013 by Matthew Zeiler, a foremost expert in machine learning, Clarifai has been a market leader since winning the top five places in image classification at the ImageNet 2013 competition. Clarifai’s powerful image and video recognition technology is built on the most advanced machine learning systems and made easily accessible by a clean API, empowering developers all over the world to build a new generation of intelligent applications.

Alexander Binder, Singapore University of Technology and Design

Alexander (Alex) Binder obtained a Ph.D. degree from the Department of Computer Science, Technical University Berlin, in 2013. Before that, he earned a Diplom degree in mathematics from Humboldt University Berlin. From 2007 he worked on the THESEUS project on semantic image retrieval at Fraunhofer FIRST, where he was the principal contributor to top-five-ranked submissions in the ImageCLEF2011 and Pascal VOC2009 challenges. From 2012 to 2015 he worked on real-time car localization topics in the Automotive Services department (ASCT) of the Fraunhofer Institute FOKUS. From 2010 to 2015 he was with the Machine Learning Group at TU Berlin. He likes to program in C++ and increasingly uses Python. His research interests include computer vision, medical applications, machine learning (kernel machines and deep learning), efficient heuristics and understanding non-linear predictions.

Alfredo Vellido, BarcelonaTech

Alfredo Vellido is Associate Professor and representative of the Computer Science Department for the Terrassa Campus at Universitat Politècnica de Catalunya (UPC) in Barcelona, Spain. He received a B.Sc. in Physics from the Universidad del País Vasco, Spain, in 1996 and a Ph.D. in Neural Computation from Liverpool John Moores University, U.K., in 2000, followed by a Ramón y Cajal research fellowship at UPC. His research interests include machine learning and data mining, as well as their application in biomedicine, bioinformatics and beyond. He is a member of the IEEE Systems, Man & Cybernetics Society Spanish Chapter, of the Task Force on Medical Data Analysis of the Data Mining Technical Committee of the IEEE Computational Intelligence Society, of the ATICA network, and of CIBER-BBN.

Schedule

09:00-09:20  Introduction
09:20-10:00  Talk 1: Matthew Zeiler - Clarifai

The talk will cover some of the most recent groundbreaking work carried out at Clarifai in the domain of neural networks and interpretability.

10:00-10:40  Talk 2: Alexander Binder - Explaining Decisions of Deep Neural Networks with Layer-wise Relevance Propagation

Deep neural networks define the state of the art in many tasks, such as detection and visual question answering; however, it is often unclear what makes them arrive at a decision for a given input sample, e.g. one image. Notably, for existing explanation methods it is often unclear what they really explain, e.g. what information backpropagation actually provides. In this talk I will present layer-wise relevance propagation, an approach to decompose the prediction of a deep neural network for one image in terms of single pixels and regions. I will give theoretical motivations, explain the differences with respect to deconvolution and backpropagation, and introduce a framework to evaluate the computed explanations based on the implied ordering of regions and pixels. You will see results of numerical evaluations of the computed visualizations on ImageNet, MIT Places and SUN397 test data. Finally, the ability of the method to identify biases in the training data will be shown.
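
For readers unfamiliar with the idea, a minimal sketch of the relevance redistribution step is given below, here using the epsilon rule on a toy fully connected ReLU network. The layer sizes, random weights and stabilizer value are illustrative assumptions, not the speaker's implementation; real LRP code also covers convolution, pooling and dedicated rules for the input layer.

```python
# Minimal sketch of layer-wise relevance propagation (LRP, epsilon rule)
# on a toy fully connected ReLU network. All concrete details (layer sizes,
# random weights, epsilon) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 8 inputs -> 6 hidden units -> 3 outputs.
weights = [rng.standard_normal((8, 6)), rng.standard_normal((6, 3))]

def forward(x):
    """Forward pass that keeps the activations of every layer."""
    activations = [x]
    for W in weights:
        x = np.maximum(0.0, x @ W)        # ReLU layer
        activations.append(x)
    return activations

def lrp(activations, eps=1e-6):
    """Redistribute the score of the strongest output back onto the inputs."""
    output = activations[-1]
    relevance = np.zeros_like(output)
    relevance[np.argmax(output)] = output.max()      # explain the top class only
    for W, a in zip(reversed(weights), reversed(activations[:-1])):
        z = a[:, None] * W                           # contributions a_j * w_jk
        denom = z.sum(axis=0)
        denom = denom + eps * np.where(denom >= 0, 1.0, -1.0)   # stabilizer
        relevance = (z / denom) @ relevance          # relevance of the layer below
    return relevance                                 # one score per input "pixel"

scores = lrp(forward(rng.standard_normal(8)))
print(scores, scores.sum())   # sums (approximately) to the explained output score
```

The key property illustrated here is conservation: the relevance assigned to the explained output neuron is redistributed layer by layer, in proportion to each unit's contribution, until it reaches the input variables.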

10:40-11:10  Coffee Break
11:10-11:50  Talk 3: Alfredo Vellido - Mind the Interpreters: Notes on Making Machine Learning Interpretable in Biomedicine and Beyond

Human societies in the digital age have abruptly become data rich. We are still in the very early stages of making those data manageable by transforming them into information, and equally early in making that information useful and usable in different domains in the form of actionable knowledge. Data modelling thus becomes a stepping stone in this process, something that statisticians, for instance, know only too well. Modern computational intelligence and machine learning play an increasingly important role in knowledge extraction in many practical and scientific applications. In this brief introductory talk on the topic of model interpretability, I will use biomedical applications as a prime example of the many challenges experts still face in making complex models acceptable in real-world problems.

11:50-12:30  Talk 4: Lars Kai Hansen - Resampling Based Design, Evaluation and Interpretation of Neuroimaging Models

Brain imaging by PET, MR, EEG, and MEG has become a cornerstone of systems-level neuroscience. Statistical analyses of neuroimaging datasets face many interesting challenges, including non-linearity and multi-scale spatial and temporal dynamics. The objectives of neuroimaging are twofold: we are interested in the most accurate, i.e., predictive, statistical model, but equally important is model interpretation and visualization, which often takes the form of brain maps. I will introduce some current machine learning strategies invoked for explorative and hypothesis-driven neuroimage modelling, and present a framework for model selection, evaluation, and interpretation based on computer-intensive data resampling. Within the framework we obtain both an unbiased estimate of the predictive performance and an estimate of the reliability of the brain map visualization.
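
To make the idea of computer-intensive resampling concrete, here is a minimal sketch on synthetic data: each split-half resample yields a held-out accuracy and a linear "brain map", and aggregating the maps across resamples gives a voxel-wise reliability estimate. The synthetic data, the logistic regression classifier and the number of resamples are illustrative assumptions, not the speaker's framework, which also addresses model selection.

```python
# Minimal sketch of resampling-based evaluation: split-half resampling of a
# synthetic "neuroimaging" dataset, held-out accuracy per resample, and a
# voxel-wise reliability estimate of the resulting weight maps.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_scans, n_voxels = 120, 500
X = rng.standard_normal((n_scans, n_voxels))      # fake "brain scans"
y = rng.integers(0, 2, size=n_scans)              # fake condition labels
X[y == 1, :20] += 0.8                             # plant a weak signal in 20 voxels

accuracies, weight_maps = [], []
for _ in range(50):                               # 50 random split-half resamples
    perm = rng.permutation(n_scans)
    train, test = perm[: n_scans // 2], perm[n_scans // 2 :]
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    accuracies.append(clf.score(X[test], y[test]))  # held-out predictive performance
    weight_maps.append(clf.coef_.ravel())           # "brain map" of this resample

weight_maps = np.array(weight_maps)
reliability = weight_maps.mean(axis=0) / (weight_maps.std(axis=0) + 1e-12)

print(f"mean held-out accuracy: {np.mean(accuracies):.2f}")
print("most reliable voxels:", np.argsort(-np.abs(reliability))[:10])
```

The two printed quantities mirror the dual objectives mentioned in the abstract: a resampling estimate of predictive performance, and a stability measure indicating which map entries can be trusted for interpretation.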

12:30-13:00  Panel Discussion + Conclusion

Note that the exact schedule is subject to change.

Organizers and Sponsors
