basics_gwinf [2022/08/19 08:55] (current) by theoastro
=== The General Idea ===
[[https://en.wikipedia.org/wiki/Statistical_inference|Statistical inference]] refers to any method that tries to infer knowledge about an underlying distribution, e.g. neutron star masses, from the analysis of a limited dataset, e.g. observed neutron stars. In the specific case of [[https://en.wikipedia.org/wiki/Bayesian_inference|Bayesian Inference]] this is done by applying Bayes' theorem.
The sought-after //posterior// distribution results from reweighting a certain //prior// distribution by how well it satisfies certain data constraints, expressed by a likelihood function.
For compact binary coalescences, the likelihood could evaluate how well a waveform generated from certain binary parameters matches an observed waveform.
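As a toy illustration of this reweighting (a sketch with made-up numbers, not part of the original page), the posterior for a single parameter can be approximated on a grid by multiplying prior and likelihood point-wise and renormalizing:

```python
import numpy as np

# Hypothetical example: infer a neutron star mass (in solar masses) from a
# single noisy measurement, with a Gaussian likelihood and a flat prior.
masses = np.linspace(1.0, 2.5, 1501)            # discretized parameter grid
prior = np.ones_like(masses)                    # flat prior on [1.0, 2.5]
prior /= prior.sum()                            # normalize on the grid

measured, sigma = 1.4, 0.1                      # assumed datum and its error
likelihood = np.exp(-0.5 * ((masses - measured) / sigma) ** 2)

# Bayes' theorem on the grid: posterior ~ prior * likelihood, renormalized.
posterior = prior * likelihood
posterior /= posterior.sum()

print(masses[np.argmax(posterior)])             # posterior mode sits at 1.4
```

The curse of dimensionality mentioned below is already visible here: a grid of 1501 points per parameter would require $1501^{15}$ evaluations for a typical 15-parameter binary.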
  
=== Bilby ===
In the context of astrophysical parameter estimation, the prior and posterior may refer to the full parameter space of compact systems, comprising masses, kinetic and angular quantities, and many more. These distributions are usually continuous quantities and thus need to be discretized for numerical treatment. As the number of parameters increases, the numerical cost to cover it quickly becomes prohibitive, even for a very coarse discretization of the parameter space. **Bilby**, the **B**ayesian **i**nference **l**i**b**rar**y**, provides convenient routines to evade this problem by implementing the ideas of [[https://dynesty.readthedocs.io/en/latest/overview.html|Nested Sampling]], tailored to the needs of compact binary coalescence research.

The fundamental idea is that //for any meaningful inference, the posterior should peak in a much narrower region of parameter space than the prior//. Instead of drawing samples that cover prior space uniformly, nested sampling algorithms draw samples from nested shells of increasing likelihood that naturally contract around the posterior distribution's [[https://en.wikipedia.org/wiki/Mode_(statistics)|modes]].
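The contraction of live points toward a mode can be sketched with a deliberately naive nested-sampling loop. This toy uses rejection sampling from the prior rather than the efficient proposal schemes dynesty and Bilby actually use, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_likelihood(x):
    """Toy Gaussian likelihood peaked at x = 0.5 on the unit interval."""
    return -0.5 * ((x - 0.5) / 0.05) ** 2

# Draw the initial live points uniformly from the prior, here U(0, 1).
n_live = 50
live = rng.uniform(0.0, 1.0, n_live)
logl = log_likelihood(live)

for _ in range(300):
    worst = np.argmin(logl)           # live point with the lowest likelihood
    threshold = logl[worst]
    # Replace it by a new prior draw with strictly higher likelihood
    # (brute-force rejection; real samplers propose far more cleverly).
    while True:
        candidate = rng.uniform(0.0, 1.0)
        if log_likelihood(candidate) > threshold:
            break
    live[worst] = candidate
    logl[worst] = log_likelihood(candidate)

# The shell of surviving live points has contracted around the mode at 0.5.
print(live.min(), live.max())
```

Each replacement shrinks the enclosed prior volume by roughly a factor $e^{-1/n_{live}}$, which is why the rejection step becomes ever more expensive here and why practical samplers need smarter proposals.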
  
Instead, 'worker' or 'slave' cores try to simultaneously replace the currently worst live point, i.e. the one with the lowest likelihood, by doing a usual bilby step.
A 'head' or 'master' core uses the suitable candidates to then update as many live points as possible. This procedure results in a [[https://arxiv.org/pdf/1506.00171.pdf#subsection.5.4|speedup]] of about $n_{live} \ln(1 + n_{cores}/n_{live})$. In other words, the scaling is almost linear if the number of cores does not significantly exceed the number of live points. This gain comes at the cost of clustering effects, though, so strongly multi-modal distributions tend to be washed out quickly.
In most astrophysical applications, however, one should expect $n_{modes} \ll n_{live}$, so parallel bilby provides acceptable approximations.
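To get a feel for this scaling, one can evaluate the speedup estimate for a few core counts (a quick illustrative computation; the chosen values of $n_{live}$ and $n_{cores}$ are arbitrary):

```python
import math

def speedup(n_cores, n_live):
    """Approximate parallel speedup: n_live * ln(1 + n_cores / n_live)."""
    return n_live * math.log(1.0 + n_cores / n_live)

n_live = 1000
for n_cores in (10, 100, 1000, 4000):
    # Efficiency is speedup per core; it stays near 1 while
    # n_cores is well below n_live, then degrades.
    eff = speedup(n_cores, n_live) / n_cores
    print(f"{n_cores:5d} cores: speedup {speedup(n_cores, n_live):7.1f}, "
          f"efficiency {eff:.2f}")
```

With 1000 live points, 100 cores are used at near-perfect efficiency, whereas 4000 cores yield a speedup of only about $1000 \ln 5 \approx 1600$, i.e. an efficiency around 0.4.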

The following pages will show how to [[bilbyfw_install|install]] the Bilby Family and prepare, run and post-process parameter estimation runs in parallel bilby.
  
  
Last modified: 2022/08/18 15:56