MPI job
MPI programs can run on a single node or on multiple nodes. Typically the number of nodes and the number of processes (tasks) per node are specified via sbatch options. To pick the right mpirun command, it is necessary to switch to the env module in which the MPI program was built. mpirun inherits the resource specifications (--nodes and --ntasks-per-node) from the SLURM environment.
On Hummel, MPI programs should always be started with mpirun (do not use srun for MPI programs)!
mpi-job.sh:

    #!/bin/bash
    #SBATCH --partition=std
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=16
    #SBATCH --time=00:02:00
    #SBATCH --export=NONE

    source /sw/batch/init.sh
    module switch env env/that-was-used-at-compile-time
    mpirun ./a.out
    exit
Process binding
With the following settings, the process bindings are reported at program start-up.

Intel MPI:

    export I_MPI_DEBUG=5
    mpirun ./a.out

Open-MPI:

    mpirun --report-bindings ./a.out