1 change: 1 addition & 0 deletions docs/how_to_guide.rst
@@ -61,6 +61,7 @@ Sampling
- :doc:`how_to_guide/12_mcmc_diagnostics_with_arviz`
- :doc:`how_to_guide/23_using_pyro_with_sbi`
- :doc:`how_to_guide/19_posterior_parameters`
+- :doc:`how_to_guide/24_variational_inference`


Diagnostics
6 changes: 3 additions & 3 deletions docs/how_to_guide/06_choosing_inference_method.ipynb
@@ -51,11 +51,11 @@
"id": "f91955d8",
"metadata": {},
"source": [
-"**`NPE`** is the only method which estimates the posterior directly. This means that it will not require further sampling steps (such as MCMC) _after_ training. This makes NPE the fastest to `.sample()` from the posterior estimate after training. In addition, it can use an embedding network to learn summary statistics (just like NRE, but unlike NLE), see the tutorial [here]().\n",
+"**`NPE`** is the only method which estimates the posterior directly. This means that it will not require further sampling steps (such as MCMC) _after_ training. This makes NPE the fastest to `.sample()` from the posterior estimate after training. In addition, it can use an embedding network to learn summary statistics (just like NRE, but unlike NLE), see the tutorial [here](../tutorials/00_getting_started.ipynb).\n",
"\n",
-"**`NLE`** learns the likelihood. This allows NLE to emulate the simulator after training. To sample from the posterior, NLE is combined with MCMC or variational inference (see [here]()), which makes it slower to `.sample()`. An advantage of NLE is that, if you have multiple iid observations, then it can be much more simulation efficient than NPE (see tutorial [here]()).\n",
+"**`NLE`** learns the likelihood. This allows NLE to emulate the simulator after training. To sample from the posterior, NLE is combined with MCMC or [variational inference](24_variational_inference.ipynb), which makes it slower to `.sample()`. An advantage of NLE is that, if you have multiple iid observations, then it can be much more simulation efficient than NPE.\n",
"\n",
-"**`NRE`** learns the likelihood-to-evidence ratio. It trains only a classifier (unlike NPE and NLE, which train a generative model), which makes it fastest to train. Like NLE, NRE has to be combined MCMC or variational inference to run `.sample()`. Like NPE, NRE can use an embedding network to learn summary statistics."
+"**`NRE`** learns the likelihood-to-evidence ratio. It trains only a classifier (unlike NPE and NLE, which train a generative model), which makes it fastest to train. Like NLE, NRE has to be combined with MCMC or [variational inference](24_variational_inference.ipynb) to run `.sample()`. Like NPE, NRE can use an embedding network to learn summary statistics."
]
},
{
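The NLE/NRE workflow described in the cell above (train a model of the likelihood, then run MCMC to draw posterior samples) can be sketched in plain Python. This is a minimal illustration, not the sbi API: `learned_log_likelihood` is a hypothetical stand-in (a fixed Gaussian) for a trained density estimator, and the sampler is a bare Metropolis-Hastings loop.

```python
import math
import random

random.seed(0)

def learned_log_likelihood(theta, x_o=1.0):
    # Stand-in for a trained NLE emulator: a Gaussian likelihood
    # N(x_o | theta, 1). A real NLE network would replace this.
    return -0.5 * (x_o - theta) ** 2 - 0.5 * math.log(2 * math.pi)

def log_prior(theta):
    # Uniform prior on [-5, 5] (log-density up to a constant).
    return 0.0 if -5.0 <= theta <= 5.0 else -math.inf

def metropolis_hastings(n_samples, step=0.5, theta0=0.0):
    samples, theta = [], theta0
    log_p = learned_log_likelihood(theta) + log_prior(theta)
    for _ in range(n_samples):
        proposal = theta + random.gauss(0.0, step)
        log_p_new = learned_log_likelihood(proposal) + log_prior(proposal)
        # Accept with probability min(1, p_new / p_old).
        if log_p_new - log_p > math.log(random.random()):
            theta, log_p = proposal, log_p_new
        samples.append(theta)
    return samples

samples = metropolis_hastings(5_000)
# Discard burn-in before summarizing; posterior mean should be near x_o = 1.
posterior_mean = sum(samples[1_000:]) / len(samples[1_000:])
```

This extra sampling loop after training is exactly the cost that NPE avoids by estimating the posterior directly.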
6 changes: 3 additions & 3 deletions docs/how_to_guide/09_sampler_interface.ipynb
@@ -31,7 +31,7 @@
"For NLE or NRE:\n",
"- If you have very few parameters (<3), use `RejectionPosterior` or `MCMCPosterior`\n",
"- If you have a medium number of parameters (3-10), use `MCMCPosterior`\n",
-"- If you have many parameters (>10) and the `MCMCPosterior` is too slow, use the `VIPosterior`. Optionally combine the `VIPosterior` with an `ImportanceSamplingPosterior` to improve its accuracy.\n",
+"- If you have many parameters (>10) and the `MCMCPosterior` is too slow, use the `VIPosterior` (see the [VI guide](24_variational_inference.ipynb)). Optionally combine the `VIPosterior` with an `ImportanceSamplingPosterior` to improve its accuracy.\n",
"\n",
"## Overview\n",
"\n",
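The rule of thumb in the bullet list above can be encoded in a small helper. The function name is hypothetical (illustration only, not part of the sbi API); it simply maps the parameter-count thresholds from the list to the recommended posterior class.

```python
def recommend_posterior(n_params: int, mcmc_too_slow: bool = False) -> str:
    """Return the recommended sbi posterior class for NLE/NRE.

    Hypothetical helper encoding the rule of thumb; not part of sbi.
    """
    if n_params < 3:
        # Very few parameters: rejection sampling is feasible.
        return "RejectionPosterior or MCMCPosterior"
    if n_params <= 10:
        return "MCMCPosterior"
    # Many parameters: fall back to variational inference only if MCMC
    # is too slow; optionally refine with importance sampling.
    return "VIPosterior" if mcmc_too_slow else "MCMCPosterior"
```

For example, `recommend_posterior(20, mcmc_too_slow=True)` returns `"VIPosterior"`, matching the last bullet.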
@@ -243,7 +243,7 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Python 3 (ipykernel)",
+"display_name": "sbi-1",
"language": "python",
"name": "python3"
},
@@ -257,7 +257,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.12.4"
+"version": "3.12.12"
},
"toc": {
"base_numbering": 1,