There exist a number of methods for calibrating models and for measuring and visualizing calibration, but little mature software is available to practitioners for these tasks. Accompanying kyle, our open-source library of calibration tools, appliedAI will offer comprehensive notebooks and tutorials demonstrating its different aspects. We explain in detail how to measure miscalibration, how to visualize it with the help of (extended) reliability curves, and how to recalibrate neural networks and other models.
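As a flavor of what measuring miscalibration involves, the following is a minimal, library-agnostic sketch of the Expected Calibration Error (ECE): predicted confidences are binned, and each bin's mean confidence is compared against its empirical accuracy. The function name and setup here are illustrative assumptions, not kyle's actual API.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average gap between mean confidence and accuracy per bin."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap  # weight bin by its share of samples
    return ece

# Toy example: every prediction has confidence 0.8 and 8 of 10 are correct,
# i.e. the model is perfectly calibrated on this bin, so the ECE is 0.
conf = np.full(10, 0.8)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0], dtype=float)
print(expected_calibration_error(conf, corr))  # → 0.0
```

The per-bin (confidence, accuracy) pairs computed inside the loop are exactly the points plotted in a reliability curve, so the same binning underlies both the metric and the visualization.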
Practical (re)calibration of classifiers
A set of comprehensive notebooks explaining different aspects of calibration in detail, based on kyle, appliedAI's open-source toolkit for calibration.