===== Multi-messenger inference =====
  
A joint inference on gravitational-wave and electromagnetic signals requires NMMA to run on a supercomputer cluster because a large amount of memory is required and needs to be shared across many CPU cores. Here, we consider a full joint inference on the binary neutron star merger observed on 17th August 2017. For an example installation of NMMA on the supercomputer cluster HAWK, consult [[https://enlil.gw.physik.uni-potsdam.de/dokuwiki/doku.php?id=installation_nmma|this guide]].
  
=== Example: Binary Neutron Star Merger observed in 2017 ===
  
Moreover, a prior on all observed messengers is required and needs to be tailored to the models used in the inference. Here, we use the GRB afterglow light curve model ''TrPi2018'' from afterglowpy and the kilonova model ''Bu2019lm''. A prior for the joint inference can be found [[https://github.com/nuclear-multimessenger-astronomy/nmma/blob/main/example_files/prior/GW170817_AT2017gfo_GRB170817A.prior|here]], called ''GW170817_AT2017gfo_GRB170817A.prior''.
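As a rough illustration of the format only, the prior file contains bilby-style prior lines such as the ones sketched below. The parameter names and ranges here are placeholders; the actual entries for all gravitational-wave, kilonova and GRB parameters are given in the linked file.

  chirp_mass = Uniform(name='chirp_mass', minimum=1.18, maximum=1.21)
  mass_ratio = Uniform(name='mass_ratio', minimum=0.5, maximum=1.0)
  luminosity_distance = Uniform(name='luminosity_distance', minimum=10, maximum=100)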

**Electromagnetic data and models**

In order to sample not only over gravitational-wave data, we provide further flags related to the electromagnetic signals. The flag ''with-grb=True'' turns on the sampling of GRB data. As NMMA currently only includes one GRB model, this model does not need to be specified further. If ''with-grb=False'', a joint inference on GW+KN data is possible, excluding the GRB part. With regard to the kilonova model, we need to provide a specific model under ''kilonova-model'', its respective reduced model grid (if applicable) under ''kilonova-model-svd'', and a ''kilonova-interpolation-type'', which can be either ''sklearn_gp'' or ''tensorflow''. The ''light-curve-data'' flag should point to a file containing both GRB and kilonova data if a joint inference on GW-GRB-KN is desired (i.e. ''with-grb=True''), or to the kilonova data alone if a GW-KN inference is targeted (i.e. ''with-grb=False'').
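As a rough sketch (not a complete ''config.ini''), the electromagnetic part of the configuration could then contain lines like the following; the values in angle brackets are placeholders that need to be adapted, and only the options named above are shown:

  with-grb=True
  kilonova-model=Bu2019lm
  kilonova-model-svd=<path to the reduced (SVD) model grid for Bu2019lm>
  kilonova-interpolation-type=sklearn_gp
  light-curve-data=<path to the light curve data file with GRB and kilonova data>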

**Including EOS information**

NMMA makes it possible to include nuclear physics information through equations of state (EOS) and to sample over the EOS during the inference. In order to include a set of EOSs, each EOS .dat file needs to contain the mass, radius and tidal deformability. For the example shown in the config.ini file below, ''Neos = 5000'' means that we include 5000 EOS .dat files, each containing this information. We also see that a constraint from NICER measurements has been folded in, and the ''eos-weight'' reflects this in a weighting. The EOS set should be sorted according to this weighting in order to reduce the runtime of the sampling over the EOSs.
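Sketched in the same style, the EOS-related options named above could appear in the configuration as follows, with the path again being a placeholder:

  Neos=5000
  eos-weight=<path to the file with the NICER-informed EOS weighting>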
  
** config.ini preparation **
  nmma_generation config.ini
  
This will generate a ''GW170817-AT2017gfo-GRB170817A_data_dump.pickle'' file under outdir/data/, which needs to be provided to the joint inference function ''nmma_analysis''. An example script for job submission on HAWK, called ''jointinf.pbs'', is given below:

** Run joint inference **
  #!/bin/bash
  # job settings: 16 Rome nodes with 128 MPI slots each and 24h walltime
  #PBS -N <name of simulation>
  #PBS -l select=16:node_type=rome:mpiprocs=128
  #PBS -l walltime=24:00:00
  #PBS -e ./outdir/log_data_analysis/err.txt
  #PBS -o ./outdir/log_data_analysis/out.txt
  #PBS -m abe
  #PBS -M <email address>
  # load the required modules and activate the NMMA virtual environment
  module load python
  module load mpt
  module load mpi4py
  source <provide path to venv>
  export MPI_UNBUFFERED_STDIO=true
  export MPI_LAUNCH_TIMEOUT=240
  # start the joint inference with 512 MPI processes from the submission directory
  cd $PBS_O_WORKDIR
  mpirun -np 512 omplace -c 0-127:st=4 nmma_analysis <absolute path to folder>/outdir/data/GW170817-AT2017gfo-GRB170817A_data_dump.pickle --nlive 1024 --nact 10 --maxmcmc 10000 --sampling-seed 20210213 --no-plot --outdir <absolute path to outdir/result folder>

The ''omplace -c 0-127:st=4'' option ensures that only every 4th core (''st=4'') of the 128 cores per node is used on HAWK. On HAWK, each CPU (AMD EPYC) has 128 cores (labelled 0, 1, 2, ..., 127). Since HAWK only provides 2 GB of RAM per core, but NMMA requires roughly 7 GB per core, we need to request 16 nodes with 128 cores each (of which every 4th core is used) in order to run the joint inference on ''np 512'' cores.
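As a quick sanity check of this core and memory budget (simply repeating the numbers from above):

  # requested cores:   16 nodes x 128 cores/node = 2048 cores
  # MPI processes:     every 4th core used, 2048 / 4 = 512 (matching -np 512)
  # available memory:  16 nodes x 128 cores x 2 GB = 4096 GB
  # required memory:   512 processes x ~7 GB = ~3584 GB, which fits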

By running
  qsub jointinf.pbs
the joint inference will be queued on HAWK to start the joint parameter estimation for the Binary Neutron Star Merger observed in 2017.
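The status of the queued job can then be checked with the usual PBS command, e.g.:

  qstat -u <username>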