Multi-messenger inference

A joint inference on gravitational-wave and electromagnetic signals requires NMMA to run on a supercomputer cluster, because large amounts of memory are required and need to be shared across many CPU cores. Here, we consider a full joint inference on the binary neutron star merger observed on 17 August 2017. For an example installation of NMMA on the supercomputer cluster HAWK, consult this guide.

Example: Binary Neutron Star Merger observed in 2017

Observational data

First, the observational data of all individually observed events are required: the gravitational-wave strain data of GW170817, the kilonova light curves of AT2017gfo, and the GRB afterglow data of GRB170817A.

These observational data need to be provided in the config.ini file for the joint inference.

Prior

Moreover, a prior on the parameters of all observed messengers is required and needs to be tailored to the models used in the inference. Here, we use the GRB afterglow light-curve model TrPi2018 from afterglowpy and the kilonova model Bu2019lm. A prior for the joint inference, called GW170817_AT2017gfo_GRB170817A.prior, can be found here.
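For orientation, such a prior file uses the bilby prior syntax. The excerpt below is a minimal sketch: the bounds are illustrative rather than the tuned values, and the kilonova and afterglow parameter names are assumptions based on common NMMA/afterglowpy conventions; the linked prior file is the authoritative reference.

# excerpt of a bilby-style prior file (all bounds illustrative)
chirp_mass = bilby.gw.prior.UniformInComponentsChirpMass(name='chirp_mass', minimum=1.18, maximum=1.21)
luminosity_distance = Uniform(name='luminosity_distance', minimum=1, maximum=75, unit='Mpc')
# kilonova (Bu2019lm): dynamical and wind ejecta masses
log10_mej_dyn = Uniform(name='log10_mej_dyn', minimum=-3, maximum=-1.7)
log10_mej_wind = Uniform(name='log10_mej_wind', minimum=-3, maximum=-0.5)
# GRB afterglow (TrPi2018): isotropic-equivalent energy and jet core opening angle
log10_E0 = Uniform(name='log10_E0', minimum=47, maximum=57)
thetaCore = Uniform(name='thetaCore', minimum=0.01, maximum=0.1)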

config.ini preparation

In order to prepare the joint inference, a config.ini file is required which specifies all models, the observational data, and the inference settings. An example adjusted to the observed BNS merger can be found here. The data generation for the joint inference can then be performed by running:

nmma_generation config.ini
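For orientation, such a config.ini combines the standard parallel-bilby keys with NMMA-specific settings. The sketch below is illustrative only: the paths are placeholders, and the NMMA-specific key names (e.g. kilonova-model, light-curve-data, with-grb) are assumptions that should be checked against the linked example file.

label = GW170817-AT2017gfo-GRB170817A
outdir = outdir
trigger-time = 1187008882.43
detectors = [H1, L1, V1]
data-dict = {H1: <path to H1 strain>, L1: <path to L1 strain>, V1: <path to V1 strain>}
prior-file = GW170817_AT2017gfo_GRB170817A.prior
kilonova-model = Bu2019lm
light-curve-data = <path to AT2017gfo light-curve data>
with-grb = True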

This will generate a GW170817-AT2017gfo-GRB170817A_data_dump.pickle file under outdir/data/, which needs to be provided to the joint-inference function nmma_analysis. An example job submission script for HAWK, called jointinf.pbs, is given below:

Run joint inference

#!/bin/bash
#PBS -N <name of simulation>
# request 16 Rome nodes with 128 MPI processes each
#PBS -l select=16:node_type=rome:mpiprocs=128
#PBS -l walltime=24:00:00
#PBS -e ./outdir/log_data_analysis/err.txt
#PBS -o ./outdir/log_data_analysis/out.txt
# send mail when the job aborts (a), begins (b) or ends (e)
#PBS -m abe
#PBS -M <email address>

# load the required modules and activate the NMMA virtual environment
module load python
module load mpt
module load mpi4py
source <provide path to venv>
export MPI_UNBUFFERED_STDIO=true
export MPI_LAUNCH_TIMEOUT=240

# run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# run nmma_analysis on 512 MPI processes, with one process on every 4th core
mpirun -np 512 omplace -c 0-127:st=4 nmma_analysis <absolute path to folder>/outdir/data/GW170817-AT2017gfo-GRB170817A_data_dump.pickle --nlive 1024 --nact 10 --maxmcmc 10000 --sampling-seed 20210213 --no-plot --outdir <absolute path to outdir/result folder>

The omplace -c 0-127:st=4 option pins the MPI processes such that only every fourth core (st=4) of the cores 0-127 on each node is used. On HAWK, each node provides 128 AMD EPYC cores (labelled 0, 1, 2, … 127), but only 2 GB of RAM per core, while NMMA requires about 7 GB per process. Placing the processes four cores apart therefore gives each process access to about 8 GB, and we request 16 nodes with 128 cores each (of which every fourth core is used) in order to run the joint inference on 512 MPI processes.
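The core and memory bookkeeping behind these settings can be summarized as follows:

RAM per core:          2 GB
required per process: ~7 GB  ->  place one process on every 4th core (st=4)
available per process: 4 cores x 2 GB = 8 GB
processes per node:    128 cores / 4 = 32
total processes:       16 nodes x 32 = 512  (matches mpirun -np 512)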

By running

qsub jointinf.pbs

the joint inference will be queued on HAWK, starting the joint parameter estimation for the binary neutron star merger observed in 2017.
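The status of the submitted job can then be monitored with the standard PBS commands, for example:

qstat -u $USER    # list your queued and running jobs
qdel <job id>     # remove the job from the queue if necessary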
