All sources cited or reviewed
This is a list of all the sources we have used in the TransferLab, with links to the referencing content and metadata such as accompanying code, videos, etc. If you think we should look at something, drop us a line.
References
[Lue21B]
Benchmarking Simulation-Based Inference
[Sha22S]
Sequential Neural Score Estimation: Likelihood-Free Inference with Conditional Score Based Diffusion Models
[Son19G]
Generative Modeling by Estimating Gradients of the Data Distribution
[Son21S]
Score-Based Generative Modeling through Stochastic Differential Equations
[Cra20F]
The frontier of simulation-based inference
[Tej20S]
sbi: A toolkit for simulation-based inference
[Kov23N]
Neural Operator: Learning Maps Between Function Spaces With Applications to PDEs
[Lu21L]
Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators
[Zha23B]
BelNet: basis enhanced learning, a mesh-free neural operator
[Fis23I]
Influence Diagnostics under Self-concordance
[Fis23S]
Statistical and Computational Guarantees for Influence Diagnostics
[Geo18F]
Fast Approximate Natural Gradient Descent in a Kronecker Factored Eigenbasis
[Geo23N]
NNGeometry
[Gro23S]
Studying Large Language Model Generalization with Influence Functions
[Ham05R]
Robust Statistics: The Approach Based on Influence Functions
[Koh17U]
Understanding Black-box Predictions via Influence Functions
[Mar15O]
Optimizing Neural Networks with Kronecker-factored Approximate Curvature
[Pru20E]
Estimating Training Data Influence by Tracing Gradient Descent
[Tra22P]
pyDVL: The Python Data Valuation Library
[Bar23R]
Representation Equivalent Neural Operators: a Framework for Alias-free Operator Learning
[Rao23C]
Convolutional Neural Operators for robust and accurate learning of PDEs
[Che95U]
Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems
[Kim20S]
SoftFlow: Probabilistic Framework for Normalizing Flow on Manifolds
[Mcd22C]
COMET flows: Towards generative modeling of multivariate extremes and tail dependence
[Liu23S]
Scaling Up Probabilistic Circuits by Latent Variable Distillation
[Lip22F]
Flow Matching for Generative Modeling
[Glo23A]
Adversarial robustness of amortized Bayesian inference
[Sze14I]
Intriguing properties of neural networks