ICML 2020 Workshop, 18 July 2020, Vienna, Austria
Location: The workshop will be held virtually

Over the years, ML models have steadily grown in complexity, gaining predictive power often at the expense of interpretability. An active research area called explainable AI (or XAI) has emerged with the goal of producing models that are both predictive and understandable. XAI has achieved important successes, such as robust heatmap-based explanations of DNN classifiers. From an application perspective, there is now a need to engage broadly with new scenarios, such as explaining unsupervised and reinforcement learning, and to produce explanations that are optimally structured for the human user. In particular, our workshop will cover the following topics:
- Explaining beyond DNN classifiers: random forests, unsupervised learning, reinforcement learning
- Explaining beyond heatmaps: structured explanations, Q/A and dialog systems, human-in-the-loop
- Explaining beyond explaining: improving ML models and algorithms, verifying ML, getting insights

Interest in XAI has grown exponentially in the research community, and awareness of the need to explain ML models has grown in similar proportions in industry and in the sciences. With the sizable XAI research community that has formed, there is now a key opportunity to push towards successful applications. Our hope is that the proposed XXAI workshop can accelerate this process, foster a more systematic use of XAI to improve models in applications, and, finally, also serve to better identify the ways in which current XAI methods need to be improved and the kind of theory of XAI that is needed.

Invited Speakers

Bolei Zhou, Chinese University of Hong Kong

Bolei Zhou is an Assistant Professor with the Information Engineering Department at the Chinese University of Hong Kong. He received his PhD in computer science from the Massachusetts Institute of Technology. His research is on machine perception and decision making, with a focus on visual scene understanding and interpretable AI systems. He received the MIT Tech Review's Innovators under 35 in Asia-Pacific award, a Facebook Fellowship, a Microsoft Research Asia Fellowship, and an MIT Greater China Fellowship, and his research has been featured in media outlets such as TechCrunch, Quartz, and MIT News. More about his research is at http://bzhou.ie.cuhk.edu.hk.

Osbert Bastani, University of Pennsylvania

Osbert Bastani is a research assistant professor in the Department of Computer and Information Science at the University of Pennsylvania. He is a member of the PRECISE and PRiML centers. Previously, he completed his Ph.D. at Stanford advised by Alex Aiken, and spent a year as a postdoc at MIT working with Armando Solar-Lezama.

Grégoire Montavon, Technical University of Berlin

Grégoire Montavon received a Master's degree in Communication Systems from École Polytechnique Fédérale de Lausanne in 2009 and a Ph.D. degree in Machine Learning from the Technische Universität Berlin in 2013. He is currently a Research Associate in the Machine Learning Group at TU Berlin. His research interests include interpretable machine learning and deep neural networks.

Scott Lundberg, Microsoft Research

Scott Lundberg is a senior researcher at Microsoft Research. Before joining Microsoft, he did his Ph.D. at the Paul G. Allen School of Computer Science & Engineering of the University of Washington, working with Su-In Lee. His work focuses on explainable artificial intelligence and its application to problems in medicine and healthcare. This has led to the development of broadly applicable methods and tools for interpreting complex machine learning models that are now used in banking, logistics, sports, manufacturing, cloud services, economics, and many other areas.

Zeynep Akata, University of Tübingen

Zeynep Akata is a professor of Computer Science within the Cluster of Excellence Machine Learning at the University of Tübingen. After completing her PhD at INRIA Rhône-Alpes with Prof. Cordelia Schmid (2014), she worked as a post-doctoral researcher at the Max Planck Institute for Informatics with Prof. Bernt Schiele (2014-17) and at the University of California, Berkeley with Prof. Trevor Darrell (2016-17). Before moving to Tübingen in October 2019, she was an assistant professor at the University of Amsterdam with Prof. Max Welling (2017-19). She received a Lise Meitner Award for Excellent Women in Computer Science from the Max Planck Society in 2014, a young scientist honour from the Werner-von-Siemens-Ring foundation in 2019, and an ERC-2019 Starting Grant from the European Commission. Her research interests include multimodal learning and explainable AI.

Sepp Hochreiter, Johannes Kepler University

Sepp Hochreiter is director of the Institute for Machine Learning at the Johannes Kepler University Linz, after having led the Institute of Bioinformatics from 2006 to 2018. In 2017 he became head of the Linz Institute of Technology (LIT) AI Lab, which focuses on advancing research on artificial intelligence. Previously, he was at the Technical University of Berlin, at the University of Colorado at Boulder, and at the Technical University of Munich. Sepp Hochreiter has made numerous contributions in the fields of machine learning, deep learning, and bioinformatics. He developed the long short-term memory (LSTM), for which the first results were reported in his diploma thesis in 1991. In addition to his research contributions, Sepp Hochreiter is broadly active within his field: he launched the Bioinformatics Working Group at the Austrian Computer Society; he is a founding board member of several bioinformatics start-up companies; he was program chair of the conference Bioinformatics Research and Development; he is a conference chair of the conference Critical Assessment of Massive Data Analysis (CAMDA); and he is an editor, program committee member, and reviewer for international journals and conferences. As a faculty member at the Johannes Kepler University Linz, he founded the Bachelor's Program in Bioinformatics, a cross-border, double-degree study program together with the University of South Bohemia in České Budějovice (Budweis), Czech Republic. He also established the Master's Program in Bioinformatics and still serves as acting dean of both programs.

Ribana Roscher, University of Bonn

Ribana Roscher received the Dipl.-Ing. and Ph.D. degrees in geodesy from the University of Bonn, Germany, in 2008 and 2012, respectively. Until 2015, she was a postdoctoral researcher with the University of Bonn, the Julius-Kuehn Institute in Siebeldingen, Germany, the Freie Universitaet Berlin, and the Humboldt Innovation, Berlin. In 2015, she was a visiting researcher with the Fields Institute, Toronto, Canada. She is currently an Assistant Professor of remote sensing with the Institute of Geodesy and Geoinformation, University of Bonn. From 2019 to 2020, she was an interim professor of semantic technologies with the Institute of Computer Science, University of Osnabrueck, Germany. Her research interests include pattern recognition and machine learning for remote sensing, especially applications in the agricultural and environmental sciences.

Adrian Weller, University of Cambridge

Adrian Weller is a principal research fellow in machine learning at the University of Cambridge. He has broad interests across machine learning and artificial intelligence (AI), their applications, and their implications for society, including: scalability, reliability, interpretability, fairness, privacy, ethics, safety and finance. Adrian is Programme Director for AI at The Alan Turing Institute (national institute for data science and AI), where he is also a Turing Fellow leading work on safe and ethical AI. He is a principal research fellow at the Leverhulme Centre for the Future of Intelligence (CFI) leading their Trust and Transparency project; the David MacKay Newton research fellow at Darwin College; and an advisor to the Centre for Science and Policy (CSaP), and the Centre for the Study of Existential Risk (CSER). Adrian serves on the boards of several organizations, including the Centre for Data Ethics and Innovation (CDEI). Previously, Adrian held senior positions in finance. He continues to be an angel investor and advisor.

Accepted Papers

Schedule

The planned timetable alternates between invited talks and contributed talks (the four best submitted papers). The morning and afternoon sessions will mainly focus on XAI methods and applications. At the end of the afternoon session, we plan an extensive panel discussion in which all invited speakers can debate and express opinions on a variety of questions and topics prepared and moderated by the workshop organizers, as well as on questions from the audience.

08:45-09:00  Opening Remarks
09:00-09:30  Invited Talk 1: Bolei Zhou - Interpreting and Leveraging the Latent Semantics in Deep Generative Models

Recent progress in deep generative models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) has enabled the synthesis of photo-realistic images, such as faces and scenes. However, what these models have learned inside their deep representations remains much less explored. In this talk, I will present some of our recent progress in interpreting the semantics in the latent space of GANs, as well as in reversing real images back into the latent space. Identifying these semantics not only allows us to better understand the internal mechanism of generative models, but also facilitates versatile real-image editing applications.

09:30-10:00  Invited Talk 2: Osbert Bastani - Interpretable, Robust, and Verifiable Reinforcement Learning

Structured control policies such as decision trees, finite-state machines, and programs have a number of advantages over more traditional models: they are easier for humans to understand and debug, they generalize more robustly to novel environments, and they are easier to formally verify. However, learning these kinds of models has proven to be challenging. I will describe recent progress in learning structured policies, along with evidence demonstrating their benefits.

10:00-10:30  Contributed Talk 1: TBD - Title
             Contributed Talk 2: TBD - Title
10:30-11:00  Invited Talk 3: Grégoire Montavon - XAI Beyond Classifiers: Explaining Anomalies, Clustering, and More

Unsupervised models such as clustering or anomaly detection are routinely used for data discovery and summarization. To gain maximum insight from the data, we also need to explain which input features (e.g. pixels) support the cluster assignments and the anomaly detections. So far, XAI has mainly focused on supervised models. In this talk, a novel systematic approach for explaining various unsupervised models is presented. The approach is based on finding, without retraining, neural network equivalents of these models. Their predictions can then be readily explained using common XAI procedures developed for neural networks.

11:00-11:30  Invited Talk 4: Scott Lundberg - Title (Explaining Trees / Random Forests)

12:00-14:00  Virtual Poster Session
14:00-14:30  Invited Talk 5: Zeynep Akata - Modelling Conceptual Understanding Through Communication

14:30-15:00  Invited Talk 6: Sepp Hochreiter - Title (Use of XAI decomposition for improving RL)

15:00-15:30  Contributed Talk 3: TBD - Title
             Contributed Talk 4: TBD - Title
15:30-16:00  Invited Talk 7: Ribana Roscher - Use of Explainable Machine Learning in the Sciences

For some time now, machine learning methods have been indispensable in many application areas. Especially with the recent development of neural networks, these methods are increasingly used in the sciences to obtain scientific outcomes from observational or simulated data. Besides high accuracy, a desired goal is to learn explainable models. In order to reach this goal and obtain explanations, knowledge from the respective domain is necessary, which can be integrated into the model or applied post-hoc. This talk focuses on explainable machine learning approaches that are used to tackle common challenges in the sciences, such as the provision of reliable and scientifically consistent results. It will show that recent advances in machine learning that enhance transparency, interpretability, and explainability are helpful in overcoming these challenges.

16:00-16:30  Invited Talk 8: Adrian Weller - Title (Challenges and deployment of XAI)

16:30-17:15  Discussion and Closing Remarks

Note that the exact schedule is subject to change.

Call for Papers

We call for papers on the following topics: (1) explaining other types of ML models (e.g. random forests, kernel machines, k-means), (2) explanations for other ML tasks (e.g. segmentation, unsupervised learning, reinforcement learning), (3) explaining beyond heatmaps (structured explanations, Q/A and dialog systems, human-in-the-loop), and (4) explaining beyond explaining (e.g. improving ML models and algorithms, verifying ML, getting insights).

Submissions must follow the ICML format. Papers are limited to 6 pages (excluding references) and will go through a review process. Submissions do not need to be anonymized. The workshop allows submissions of papers that are under review or have been recently published in a conference or a journal. Authors should state any overlapping published work at the time of submission. Accepted papers will be posted on the website (upon agreement), but the workshop will not have any official proceedings, so it is non-archival. A selection of accepted papers will be invited to be part of a special journal issue on "Extending Explainable AI Beyond Deep Models and Classifiers".
Submission website: https://cmt3.research.microsoft.com/XXAI2020
Important dates
Submission deadline: 20 June 2020
Author notification: 1 July 2020
Camera-ready version: 10 July 2020
Workshop: 18 July 2020

Organizers

Wojciech Samek, Fraunhofer Heinrich Hertz Institute
Andreas Holzinger, Medical University Graz
Ruth Fong, University of Oxford
Taesup Moon, Sungkyunkwan University
Klaus-Robert Müller, Technical University of Berlin