The Frontier of Simulation-based Inference

An overview and schematic comparison of recent developments in simulation-based inference and their enabling factors. Advancements in machine learning, active learning, and augmentation are identified as the three driving forces in the field.

Computer simulations provide high-fidelity models and make it possible to generate insights without running real experiments. They are especially useful in areas where real experiments would be too costly or outright impossible; nuclear engineering is a prominent example. However, such simulations are not directly suited for inference over their parameters and pose challenging inverse problems.

The challenge arises because the likelihood, which is central to inference in both frequentist and Bayesian settings, is usually intractable. This may be because the system under observation is a black box (e.g. a proprietary simulator), because the simulation is so high-dimensional that the likelihood cannot be computed numerically, or both. Approaches that avoid computing the likelihood directly have therefore been developed under the umbrella term of likelihood-free inference. These methods circumvent the likelihood by approximating either the likelihood itself, the likelihood ratio, or the posterior directly. Traditional methods such as approximate Bayesian computation (ABC) do not scale well to higher dimensions, as the number of required simulations grows rapidly. In addition, ABC-based algorithms have to be re-run from scratch once new data becomes available.
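To make this baseline concrete, here is a minimal sketch of rejection ABC for a toy Gaussian simulator. The simulator, the uniform prior, the summary statistic, and the tolerance `eps` are illustrative assumptions for this example and are not taken from [Cra20F].

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=100):
    """Toy black-box simulator: n observations drawn from N(theta, 1)."""
    return rng.normal(loc=theta, scale=1.0, size=n)

def summary(x):
    """Low-dimensional summary statistic of the simulated data."""
    return np.array([x.mean(), x.std()])

# Observed data (generated here with a "true" theta = 1.5 for illustration).
x_obs = simulator(1.5)
s_obs = summary(x_obs)

def rejection_abc(n_draws=50_000, eps=0.2):
    """Rejection ABC: sample from the prior, simulate, and keep parameters
    whose simulated summaries land within a tolerance of the observed ones."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)           # draw from the prior
        s_sim = summary(simulator(theta))        # run the simulator
        if np.linalg.norm(s_sim - s_obs) < eps:  # accept if close enough
            accepted.append(theta)
    return np.array(accepted)

posterior_samples = rejection_abc()
print(f"accepted {posterior_samples.size} of 50000 draws, "
      f"posterior mean ≈ {posterior_samples.mean():.2f}")
```

Even in this one-dimensional toy problem only a small fraction of draws is accepted; as the dimensionality of parameters and summaries grows, the acceptance region shrinks rapidly, and every new observation requires re-running the entire loop, which is exactly the limitation described above.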

According to [Cra20F], development in the field of simulation-based inference has received a boost from three forces: first, the growth of machine learning capabilities, which enables new approaches; second, the recognition that active learning can use already acquired knowledge to guide the simulator and improve sample efficiency; and third, augmentation and integration, which connect simulation and inference more tightly.

Advancements in machine learning make it possible to tackle higher-dimensional problems, e.g. by using normalizing flows for density estimation. Neural networks have also proven to be efficient surrogate models for the costly simulation. In a sequential setting, these surrogates are iteratively refined via active learning, further improving the sample efficiency of the algorithms (a minimal sketch of such a learned estimator follows below). Besides describing these enabling factors, the authors provide a schematic comparison of different workflows for simulation-based inference. The workflows differ mainly in their access to the data and to the simulator during inference, as well as in the term that is being approximated.
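As a minimal illustration of an amortized, learning-based workflow, the sketch below trains a conditional density estimator q(θ | x) on simulated pairs. To keep it short, a conditional Gaussian parameterized by a small network stands in for the normalizing flows mentioned above; the toy simulator, the uniform prior, and the architecture are assumptions made for this example.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulator(theta):
    """Toy simulator: 50 draws from N(theta, 1), reduced to (mean, std) summaries."""
    x = theta + torch.randn(theta.shape[0], 50)
    return torch.stack([x.mean(dim=1), x.std(dim=1)], dim=1)

# Training set: parameters from the prior, data from the simulator.
theta_train = torch.rand(10_000, 1) * 10.0 - 5.0   # Uniform(-5, 5) prior
x_train = simulator(theta_train)

# Conditional density estimator q(theta | x): a small network mapping data
# summaries to the mean and log-std of a Gaussian over theta. (A normalizing
# flow would be used instead for complex, multi-modal posteriors.)
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(200):
    mean, log_std = net(x_train).chunk(2, dim=1)
    # Maximize log q(theta | x) over the simulated training pairs.
    loss = -torch.distributions.Normal(mean, log_std.exp()).log_prob(theta_train).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Amortized inference: for a new observation, the posterior is a single forward pass.
x_obs = simulator(torch.tensor([[1.5]]))
mean, log_std = net(x_obs).chunk(2, dim=1)
print(f"posterior ≈ N({mean.item():.2f}, {log_std.exp().item():.2f}²)")
```

In a sequential variant, new parameters would be drawn from the current posterior estimate around the observation of interest, simulated, and used to retrain the estimator; this active-learning loop concentrates simulation effort where it matters, at the price of giving up full amortization.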

[Cra20F] provide an overview of different algorithms used for simulation-based inference. The top row contains non-amortized algorithms, which require access to the simulator during inference. Amortized algorithms do not need the simulator at inference time: once trained, they can be applied to new observations at little additional cost.

Workflows that do not require access to the simulator during inference are called amortized; their significant advantage is that they do not have to be re-run when new data becomes available. The figure above schematically separates the described approaches into non-amortized workflows in the top row and amortized workflows in the second row, where inference is based on density estimation. The bottom row further distinguishes between approximations of the posterior, the likelihood, and the likelihood ratio.
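As an example of the likelihood-ratio branch, the sketch below uses the classifier-based ratio trick: a classifier trained to distinguish data simulated under two parameter values recovers their likelihood ratio from its output, without ever evaluating the likelihood. The toy simulator and the parameter values θ₀, θ₁ are illustrative assumptions; in the amortized variant of this idea, θ is passed to the classifier as an additional input so that a single network covers all parameter values.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulator(theta, n):
    """Toy simulator: 2-D summaries of data drawn around theta."""
    x = theta + torch.randn(n, 20)
    return torch.stack([x.mean(dim=1), x.std(dim=1)], dim=1)

theta_0, theta_1 = 0.0, 1.0
n = 20_000

# Label 1 for samples simulated under theta_0, label 0 for theta_1.
x = torch.cat([simulator(theta_0, n), simulator(theta_1, n)])
y = torch.cat([torch.ones(n, 1), torch.zeros(n, 1)])

clf = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(clf.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(300):
    loss = loss_fn(clf(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The optimal classifier output d(x) satisfies
#   d(x) / (1 - d(x)) = p(x | theta_0) / p(x | theta_1),
# so the trained classifier yields an estimate of the likelihood ratio.
x_obs = simulator(theta_0, 5)
d = torch.sigmoid(clf(x_obs))
print("estimated likelihood ratios:", (d / (1 - d)).squeeze().tolist())
```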

Notably, the recent approaches enabled by the factors discussed above fall into the lower rows of the figure, i.e. they are amortized and scale better to higher dimensions.

References

[Cra20F] Kyle Cranmer, Johann Brehmer, Gilles Louppe: The frontier of simulation-based inference. Proceedings of the National Academy of Sciences, 2020.
