Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

In this presentation, we revisit landmark research that casts dropout training in deep neural networks as approximate Bayesian inference in deep Gaussian processes. A direct consequence of this theoretical view is a set of techniques for modelling uncertainty with dropout neural networks, in effect extracting information that our existing models would otherwise throw away. This offers a solution to the problem of representing uncertainty in deep learning without sacrificing either computational efficiency or test accuracy.
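In practice, this idea is often called Monte Carlo dropout: keep the dropout layers active at test time, run several stochastic forward passes, and use the sample mean and variance as predictive estimates. The sketch below illustrates the mechanics in PyTorch; the network architecture, dropout rate, and number of samples are illustrative assumptions on our part, not the paper's exact setup.

```python
import torch
import torch.nn as nn

# A small regression network with dropout; sizes and rate are illustrative.
model = nn.Sequential(
    nn.Linear(10, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(128, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    """Run n_samples stochastic forward passes with dropout kept active,
    then summarize the predictive distribution by its mean and variance."""
    model.train()  # keeps dropout sampling at test time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)

x = torch.randn(4, 10)           # a batch of 4 illustrative inputs
mean, var = mc_dropout_predict(model, x)
print(mean.shape, var.shape)     # torch.Size([4, 1]) torch.Size([4, 1])
```

The variance across passes serves as a measure of the model's predictive uncertainty, obtained from the very same dropout mechanism used during training.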

References

Gal, Y. and Ghahramani, Z. (2016). Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML 2016).
