Scientific Inference With Interpretable Machine Learning

Timo will introduce a framework for designing interpretable machine learning methods for science, termed “property descriptors”.

Abstract

To learn about real-world phenomena, scientists have traditionally used models with clearly interpretable elements. However, modern machine learning (ML) models, while powerful predictors, lack this direct elementwise interpretability (e.g., individual neural network weights). Interpretable machine learning (IML) offers a solution by analyzing models holistically to derive interpretations. Yet current IML research focuses on auditing ML models rather than leveraging them for scientific inference. Our work bridges this gap by presenting a framework for designing IML methods, termed “property descriptors”, that illuminate not just the model but also the phenomenon it represents. We demonstrate that property descriptors, grounded in statistical learning theory, can effectively reveal relevant properties of the joint probability distribution of the observational data. We identify existing IML methods suited for scientific inference and provide a guide for developing new descriptors with quantified epistemic uncertainty. Our framework empowers scientists to harness ML models for inference and provides directions for future IML research to support scientific understanding.
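As a minimal sketch of the idea (not code from the talk): permutation feature importance is one existing IML method that can act as a property descriptor, since, when computed on held-out data, it estimates a feature's relevance in the data-generating distribution rather than merely describing the fitted model, and repeated permutations give a rough spread for the estimate. The dataset and model below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Simulated data standing in for observational data from some phenomenon.
X, y = make_friedman1(n_samples=1000, noise=1.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A flexible ML model as a surrogate for the unknown functional relationship.
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# "Property descriptor" in this sketch: permutation importance on held-out
# data estimates each feature's relevance for predicting y under the joint
# distribution P(X, y); n_repeats yields a spread that quantifies part of
# the uncertainty of the estimate.
result = permutation_importance(model, X_test, y_test, n_repeats=50, random_state=0)

for j in range(X.shape[1]):
    mean, std = result.importances_mean[j], result.importances_std[j]
    print(f"feature {j}: importance {mean:.3f} +/- {2 * std:.3f}")
```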

Bio

Timo is a postdoc at the Machine Learning in Science Cluster at the University of Tübingen. His research centers on interpretable ML and XAI, the philosophy of science, ML and causality, the ethics of AI, and algorithmic fairness. In his recent work, Timo has raised awareness of issues with model-agnostic tools for explaining ML models, formalized existing concepts in the XAI literature, and introduced a new framework for understanding robustness. He is also a coauthor of the book Supervised Machine Learning for Science.

Before Tübingen, Timo was a PhD student at the Munich Center for Mathematical Philosophy (LMU Munich), working on the question of what it is that explainable AI explains.
