We identify, test and disseminate established and emerging techniques in machine learning in order to provide practitioners with the best tools for their applications.
What we do
Trainings
Our hands-on, one- to three-day workshops cover fundamental and advanced topics in machine learning.
Software
Paper implementations and code libraries, developed during our research and in support of our work, released as open source.
Our blog
We publish introductory posts for beginners and paper digests for experienced people on a tight schedule.
Paper pills
Paper pills are summaries of publications, talks or new software, with context and an explanation of the main ideas and key results.
Our areas of interest
Safety and reliability in ML systems
Efficient machine learning
Trustworthy and interpretable ML
Advances and fundamentals in ML
Our latest work
Block-Seminar
Methods and issues in Explainable AI
This seminar series explores the landscape of Explainable Artificial Intelligence (XAI) beyond the content of our training on XAI, through a broad selection of papers in the field. By the end of the series, participants will have gained a comprehensive understanding of current developments, practical challenges, and future directions in interpretable and explainable machine learning.
Blog
AutoDev: Exploring Custom LLM-Based Coding Assistance Functions
We explore the potential of custom code assistant functions based on large language models (LLMs). With our open-source software package …
Software
AutoDev: LLM-Based Coding Assistance Functions
AutoDev is a software package for the realisation of coding assistance functions using large language models (LLMs). It covers fine-tuning, …
Training
Introduction to Simulation-based Inference
Embrace the challenges of intractable likelihoods with simulation-based inference. A half-day workshop introducing the concepts …
Block-Seminar
Uncertainty quantification for neural networks
In this seminar series, we review seminal and recent papers on uncertainty quantification that didn’t make it into our training on …
What we are reading:
Paper pills
Random Sum-Product Networks: A Simple and Effective Approach to Probabilistic Deep Learning
Is structure learning even necessary? By generating overparametrised, random model structures for sum-product networks, a class of tractable probabilistic graphical models, Peharz et al. obtain models that are competitive with far more complex generative learning approaches as well as with discriminative approaches, while yielding well-calibrated probabilities and remaining robust to missing features.
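To make the construction concrete, here is a minimal sketch of the core idea: an overparametrised structure is obtained by recursively splitting the feature scope at random, several times over, instead of learning the structure from data. This is our own simplified illustration in plain Python; the function and parameter names are not taken from the paper's code.

    import random

    def random_partition(scope, depth):
        """Recursively split a variable scope into two random halves.

        Returns a nested structure of (left, right) splits; leaves are the
        remaining index sets. This mirrors the random region graphs used to
        build random sum-product networks, where each region later holds
        several sum nodes and each split several product nodes.
        """
        if depth == 0 or len(scope) <= 1:
            return sorted(scope)
        shuffled = random.sample(sorted(scope), len(scope))
        mid = len(shuffled) // 2
        left, right = set(shuffled[:mid]), set(shuffled[mid:])
        return (random_partition(left, depth - 1),
                random_partition(right, depth - 1))

    def random_region_graph(num_vars, depth=2, repetitions=3):
        """Union of several independent random partitions of the full scope."""
        scope = set(range(num_vars))
        return [random_partition(scope, depth) for _ in range(repetitions)]

    # Example: 8 features, split to depth 2, with 3 random repetitions.
    print(random_region_graph(num_vars=8, depth=2, repetitions=3))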
Automatic Posterior Transformation for Likelihood-free Inference
A sequential neural posterior estimation method that refines the posterior approximation using dynamically updated proposals. It is compatible with arbitrary choices of priors and proposals, as well as with powerful flow-based density estimators.
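The central identity behind the method, written here in our own notation as a sketch rather than the paper's full derivation, relates the true posterior to the "proposal posterior" obtained when parameters are drawn from a proposal \(\tilde p(\theta)\) instead of the prior \(p(\theta)\):

    \[
        \tilde p(\theta \mid x) \;=\; p(\theta \mid x)\,
        \frac{\tilde p(\theta)}{p(\theta)}\,
        \frac{p(x)}{\tilde p(x)}
    \]

Because this reweighting can be applied to the density estimator itself, a conditional density estimator can be trained on samples from an arbitrary, round-dependent proposal and still yield an approximation of the true posterior \(p(\theta \mid x)\) directly, without any post-hoc correction.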
Jump-Start Reinforcement Learning
Training a reinforcement learning agent from scratch is often challenging, as reward sparsity and exploration issues can impede learning. Leveraging offline data to warm-start the policy can significantly enhance learning; however, naive initialisation tends to yield poor results. A recent paper investigates various methods to transition from offline initialisation to online fine-tuning, providing both experimental and theoretical insights on sample complexity.
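As a rough illustration of the warm-starting idea (our own simplification, not the paper's implementation: the environment interface, function names and the linear hand-over schedule are assumptions), a guide policy derived from offline data can control the first part of each episode while the learning policy takes over for the rest, with the guide's share of the horizon shrinking over the course of training:

    def jump_start_rollout(env, guide_policy, explore_policy, guide_steps):
        """One episode in the jump-start style: a guide policy (e.g. trained
        on offline data) acts for the first `guide_steps` steps, after which
        the learning policy takes over from the states the guide has reached.

        `env` is assumed to expose reset() -> state and
        step(action) -> (state, reward, done), a simplified interface.
        """
        state, trajectory, done, t = env.reset(), [], False, 0
        while not done:
            policy = guide_policy if t < guide_steps else explore_policy
            action = policy(state)
            next_state, reward, done = env.step(action)
            trajectory.append((state, action, reward, next_state, done))
            state, t = next_state, t + 1
        return trajectory

    def guide_schedule(horizon, num_iterations):
        """Linearly shrink the guide's share of the horizon towards zero,
        handing ever larger parts of the episode to the learning policy."""
        return [int(horizon * (1 - i / max(1, num_iterations - 1)))
                for i in range(num_iterations)]

    print(guide_schedule(horizon=100, num_iterations=5))  # [100, 75, 50, 25, 0]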
Implicit Q Learning
A core challenge when applying dynamic-programming-based approaches to offline reinforcement learning is bootstrapping error. This pill presents a paper proposing an algorithm called implicit Q-learning, which mitigates bootstrapping issues by modifying the argmax in the Bellman equation.
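The core ingredient is an asymmetric (expectile) regression loss that lets the value function approximate a maximum over in-dataset actions without ever querying the Q-function on out-of-distribution actions. A minimal PyTorch sketch of such a loss follows; the function name and the toy numbers are our own illustration, not the paper's code.

    import torch

    def expectile_loss(q_values, v_values, tau=0.7):
        """Asymmetric (expectile) regression loss: errors where Q exceeds V
        are weighted by tau, the rest by 1 - tau. For tau close to 1 the
        value network is pushed towards the upper expectile of the Q-values
        seen in the dataset, approximating a max over actions using only
        actions that actually occur in the data.
        """
        diff = q_values - v_values
        weight = torch.abs(tau - (diff < 0).float())
        return (weight * diff.pow(2)).mean()

    # Toy example: V is regressed towards the upper end of observed Q-values.
    q = torch.tensor([1.0, 2.0, 3.0])
    v = torch.tensor([1.5, 1.5, 1.5])
    print(expectile_loss(q, v, tau=0.9))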
See more...