We investigate established and emerging techniques in machine learning to provide practitioners with the best tools for their applications.
What we do
Our one- to three-day workshops cover fundamental and advanced topics in machine learning, for beginners and for experienced practitioners on a tight schedule.
Paper implementations and code libraries developed during our research and supporting our work are open sourced.
Paper pills are summaries of publications, talks or new software, with context and an explanation of the main ideas and key results.
Our areas of interest
Safety and reliability in ML systems
Efficient Machine Learning
Trustworthy and interpretable ML
Advances and fundamentals in ML
Our latest work
Methods and issues in Explainable AI
This seminar series explores the landscape of Explainable Artificial Intelligence (XAI) beyond the content of our training on XAI, through a comprehensive selection of papers in the field. By the end of this series, participants will have gained a thorough understanding of current developments, practical challenges, and future directions in interpretable and explainable machine learning.
May 17, 2023
AutoDev: Exploring Custom LLM-Based Coding Assistance Functions
We explore the potential of custom code assistant functions based on large language models (LLMs). With our open-source software package …
Large Language Models
Sep 20, 2023
AutoDev: LLM-Based Coding Assistance Functions
AutoDev is a software package for the realisation of coding assistance functions using large language models (LLMs). It covers fine-tuning, …
Large Language Models
Sep 15, 2023
Introduction to Simulation-based Inference
Embrace the challenges of intractable likelihoods with simulation-based inference. A half-day workshop introducing the concepts …
Uncertainty quantification for neural networks
In this seminar series, we review seminal and recent papers on uncertainty quantification that didn’t make it into our training on …
Oct 22, 2022
What we are reading:
Random Sum-Product Networks: A Simple and Effective Approach to Probabilistic Deep Learning
Is structure learning even necessary? By generating overparametrised, random model structures for sum-product networks, a class of tractable probabilistic graphical models, Peharz et al. obtain models that are competitive with far more complex generative learning approaches, as well as with discriminative ones, whilst yielding well-calibrated probabilities and remaining robust to missing features.
Bayesian ML and Probabilistic Programming
Sep 13, 2023
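The tractability that makes sum-product networks attractive can be seen in a toy example. The sketch below is not the randomised structure generation of Peharz et al., just a hand-built, hypothetical SPN over two binary variables: leaves are Bernoulli distributions, product nodes combine disjoint variable scopes, and the root is a weighted sum.

```python
# A hand-built toy sum-product network over two binary variables X1, X2
# (illustrative only; structure and weights are made up).

def bernoulli(p, x):
    """Leaf node: probability of a Bernoulli(p) variable taking value x."""
    return p if x == 1 else 1.0 - p

def spn(x1, x2):
    """Evaluate the joint probability P(X1=x1, X2=x2) in one bottom-up pass."""
    # Product nodes combine leaves with disjoint scopes {X1} and {X2}.
    prod1 = bernoulli(0.8, x1) * bernoulli(0.3, x2)
    prod2 = bernoulli(0.2, x1) * bernoulli(0.7, x2)
    # Root sum node: a normalised mixture of the two products.
    return 0.6 * prod1 + 0.4 * prod2

def spn_marginal_x1(x1):
    """Marginalise out X2 by replacing its leaves with 1 -- still one pass."""
    return 0.6 * bernoulli(0.8, x1) * 1.0 + 0.4 * bernoulli(0.2, x1) * 1.0

# The network encodes a valid distribution: probabilities sum to one.
total = sum(spn(a, b) for a in (0, 1) for b in (0, 1))
```

The robustness to missing features mentioned above falls out of the same structure: marginalising a variable only requires setting its leaves to 1, as in `spn_marginal_x1`.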
Automatic Posterior Transformation for Likelihood-free Inference
A sequential neural posterior estimation method that refines the posterior approximation using arbitrary, dynamically updated proposals. It is also compatible with any choice of prior and proposal, and with powerful flow-based density estimators.
Sep 6, 2023
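Simulation-based inference replaces an intractable likelihood with the ability to simulate. APT itself trains a neural density estimator, which is beyond a short snippet; as a hypothetical, minimal illustration of the general setting, the sketch below uses plain rejection ABC (a much simpler technique) to recover the mean of a Gaussian from simulations alone. All names and numbers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=50):
    """Black-box simulator: we can sample data but not evaluate a likelihood."""
    return rng.normal(theta, 1.0, size=n)

x_obs = simulator(2.0)  # "observed" data, generated with true mean 2.0

def rejection_abc(x_obs, n_draws=20000, eps=0.05):
    """Keep prior draws whose simulated summary statistic lands close
    to the observed one; the kept draws approximate the posterior."""
    s_obs = x_obs.mean()
    thetas = rng.uniform(-5.0, 5.0, size=n_draws)  # flat prior on [-5, 5]
    return np.array([t for t in thetas
                     if abs(simulator(t).mean() - s_obs) < eps])

posterior_samples = rejection_abc(x_obs)
```

The wastefulness of rejecting most draws is precisely what sequential methods like APT address, by steering new simulations towards regions of high posterior mass.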
Jump-Start Reinforcement Learning
Training a reinforcement learning agent from scratch is often challenging, as reward sparsity and exploration issues can impede learning. Leveraging offline data to warm-start the policy can significantly enhance learning; however, naive initialisation tends to yield poor results. A recent paper investigates methods to transition from offline initialisation to online fine-tuning, providing both experimental and theoretical insights on sample complexity.
Aug 21, 2023
Implicit Q Learning
A core challenge when applying dynamic-programming-based approaches to offline reinforcement learning is bootstrapping error. This pill presents a paper proposing an algorithm called Implicit Q-Learning (IQL), which mitigates bootstrapping issues by replacing the explicit maximisation in the Bellman backup with an expectile of the Q-function over in-distribution actions.
Aug 9, 2023
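The expectile trick at the heart of IQL can be shown in isolation: as τ → 1, the τ-expectile of a set of values approaches their maximum, which lets a value function approximate the Bellman max without ever evaluating out-of-distribution actions. The snippet below is a hypothetical, minimal gradient-descent estimate of an expectile in plain NumPy, not the paper's implementation.

```python
import numpy as np

def expectile_grad(v, samples, tau):
    """Gradient w.r.t. v of the asymmetric squared loss
    L_tau(u) = |tau - 1(u < 0)| * u^2, averaged over u = samples - v."""
    u = samples - v
    w = np.where(u < 0, 1.0 - tau, tau)
    return -2.0 * np.mean(w * u)

def expectile(samples, tau, lr=0.1, steps=5000):
    """Estimate the tau-expectile of `samples` by gradient descent."""
    v = float(np.mean(samples))
    for _ in range(steps):
        v -= lr * expectile_grad(v, samples, tau)
    return v

q_values = np.array([0.0, 1.0, 2.0, 10.0])  # made-up Q-values for one state
v_mean = expectile(q_values, 0.5)   # tau = 0.5 recovers the mean
v_max = expectile(q_values, 0.99)   # tau near 1 approaches the max
```

In IQL this asymmetric loss is used to fit a state-value network against sampled Q-values, so the "max over actions" is approximated using only actions present in the offline dataset.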