Methods and issues in explainable AI

A workshop for ML practitioners who want to make their models more understandable to themselves and to decision makers.

Safety and reliability concerns are major obstacles to the adoption of AI in practice. In addition, European regulation will make explaining model decisions a requirement for so-called high-risk AI applications.

As models grow in size and complexity, they become less interpretable. Explainability in machine learning attempts to address this problem by providing insights into the inference process of a model. It can be used as a tool to make models more trustworthy, reliable, transferable, fair, and robust. However, it is not without problems of its own: different algorithms often report contradictory explanations for the same phenomenon.

Learning outcomes

We consider the machine learning pipeline through the lens of explainability. Through a series of hands-on use cases, participants will be introduced to methods for:

  • Exploratory data analysis
  • Feature selection and engineering
  • Model selection
  • Model evaluation and visualization
  • Model interpretation and explanation

These methods are chosen to improve our understanding of why a model acts as it does.

Contents

  • Goals of Explainable AI
  • Interpretable models vs. black-box explanations, and when to prefer one over the other
  • Post-hoc methods such as Shapley values and LIME (see the sketch after this list)
  • Interpretable machine learning for time series, e.g. Prophet and neural basis expansion (N-BEATS)
  • Explainable deep learning, e.g. attention and TCAV (Testing with Concept Activation Vectors)
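
To give a flavour of the post-hoc methods covered, below is a minimal sketch of a Shapley-value explanation for a scikit-learn random forest using the open-source shap package. The dataset, model, and plotting call are illustrative assumptions, not part of the workshop materials.

    # Minimal sketch: Shapley-value attributions for a tree ensemble.
    # Assumes the shap and scikit-learn packages are installed; the
    # dataset and model below are illustrative, not workshop materials.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Fit an opaque model on a toy regression dataset.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # Shapley values: how much each feature pushes each individual
    # prediction away from the average prediction.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Global summary: which features matter most across the dataset.
    shap.summary_plot(shap_values, X)

The same fitted model could instead be explained one prediction at a time with LIME, which fits a simple local surrogate model around the sample of interest; the workshop contrasts these per-sample (local) views with the global summary shown above.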
