Monte Carlo Dropout for Uncertainty Quantification in Deep Learning

Deep learning models are commonly used for regression tasks, producing a single point estimate for a given set of inputs. However, a point estimate alone does not convey how uncertain the model is about that prediction. In safety-critical applications, knowing this uncertainty can be extremely valuable: in a medical setting, for example, a highly uncertain prediction could trigger an additional check by a doctor, whereas a confident prediction might not require one. Nonetheless, models designed from the ground up for uncertainty estimation form a distinct class of architectures, and they typically carry a higher computational cost than the standard pre-trained models used as backbones in many solutions.

The method described here, Monte Carlo Dropout, enables uncertainty quantification in pre-trained models, as long as dropout layers are part of the model's architecture. By keeping dropout active at inference time, each forward pass samples a slightly different sub-network, effectively drawing from an implicit ensemble of models; the spread of these samples then yields a Monte Carlo estimate of the predictive uncertainty. As such, the method is straightforward to use and can be applied to already trained models (see the sketch after the list below). This offers two significant benefits:

  1. Uncertainty quantification can be added to models after training, so it does not have to be an upfront design decision.
  2. The computational cost is lower than that of fully Bayesian methods.
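
Below is a minimal sketch of the procedure, assuming a PyTorch regression model that already contains `nn.Dropout` layers (the original text does not name a framework). The helper names `enable_mc_dropout` and `mc_dropout_predict`, the toy model, and the number of samples are illustrative choices, not part of any library:

```python
import torch
import torch.nn as nn


def enable_mc_dropout(model: nn.Module) -> None:
    """Put the model in eval mode, but keep all Dropout layers stochastic."""
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()  # dropout stays active during inference


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Run n_samples stochastic forward passes; return predictive mean and std."""
    enable_mc_dropout(model)
    preds = torch.stack([model(x) for _ in range(n_samples)])  # (n_samples, batch, out)
    return preds.mean(dim=0), preds.std(dim=0)


# Toy example: a small regression model with a dropout layer.
model = nn.Sequential(
    nn.Linear(8, 64),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

x = torch.randn(16, 8)
mean, std = mc_dropout_predict(model, x, n_samples=100)
print(mean.shape, std.shape)  # torch.Size([16, 1]) torch.Size([16, 1])
```

The returned standard deviation acts as a per-output uncertainty estimate: inputs with a large spread across the stochastic passes can be flagged, for instance for manual review in a safety-critical pipeline.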

💡 This talk is designed more as a how-to, rather than a deep discussion on MC-dropout and its variants.
