All sources cited or reviewed
This is a list of all the sources we have used in the TransferLab, with links to the referencing content and metadata such as accompanying code, videos, etc. If you think we should look at something, drop us a line.
References
[Rag21W] Worst-Case Robustness in Machine Learning.
[Sal20C] A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks.
[Fle71M] A modified Marquardt subroutine for nonlinear least squares.
[Mar63A] An Algorithm for Least-Squares Estimation of Nonlinear Parameters.
[Nes04N] Nonlinear Optimization.
[Sut99P] Policy Gradient Methods for Reinforcement Learning with Function Approximation.
[Bat21C] Cross-validation: what does it estimate and how well does it do it?
[Ben04N] No Unbiased Estimator of the Variance of K-Fold Cross-Validation.
[Gua18T] Test Error Estimation after Model Selection Using Validation Error.
[Tib09B] A bias correction for the minimum error rate in cross-validation.
[Var06B] Bias in error estimation when using cross-validation for model selection.
[Yoo20D] Data Valuation using Reinforcement Learning.
[Bro07I] Increasing the Reliability of Reliability Diagrams.
[Din20R] Revisiting the Evaluation of Uncertainty Estimation and Its Application to Explore Model Complexity-Uncertainty Trade-Off.
[Guo17C] On Calibration of Modern Neural Networks.
[Kul19T] Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with Dirichlet calibration.
[Lak17S] Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles.
[Muk20C] Calibrating deep neural networks using focal loss.
[Nix20M] Measuring Calibration in Deep Learning.
[Pan22C] Class-wise and reduced calibration methods.
[Pan21K] kyle: A toolkit for classifier calibration.
[Wid19C] Calibration tests in multi-class classification: A unifying framework.
[Bin19P] Pyro: Deep Universal Probabilistic Programming.