MPI job
MPI programs can run on a single node or on multiple nodes. Typically, the number of nodes and the number of processes (tasks) per node are specified via sbatch options. In order to pick the right mpirun command, it is necessary to switch to the env module in which the MPI program was built. mpirun inherits the resource specifications (--nodes and --ntasks-per-node) from the SLURM environment. On Hummel, MPI programs should always be started with mpirun (do not use srun for MPI programs)!
mpi-job.sh:

```shell
#!/bin/bash
#SBATCH --partition=std
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --time=00:02:00
#SBATCH --export=NONE

source /sw/batch/init.sh
module switch env env/that-was-used-at-compile-time

mpirun ./a.out

exit
```
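The job script can then be handed to the scheduler with the standard SLURM commands. A minimal sketch (the file name mpi-job.sh follows the listing above; sbatch and squeue are standard SLURM commands):

```shell
# Submit the job script; on success, sbatch prints the assigned job ID.
sbatch mpi-job.sh

# List your own pending and running jobs to check the job's state.
squeue -u "$USER"
```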
Process binding
With the following settings, process bindings are reported.

Intel MPI:

```shell
export I_MPI_DEBUG=5
mpirun ./a.out
```
Open MPI:

```shell
mpirun --report-bindings ./a.out
```
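In practice, these settings go into the job script itself, so the binding report appears in the job's output file. A minimal sketch for the Intel MPI case, reusing the job template from above (the partition, resource values, and env module name are taken from that example and may need adjusting):

```shell
#!/bin/bash
#SBATCH --partition=std
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --time=00:02:00
#SBATCH --export=NONE

source /sw/batch/init.sh
module switch env env/that-was-used-at-compile-time

# Intel MPI: debug level 5 makes mpirun report process pinning at startup.
# For Open MPI, drop this line and use: mpirun --report-bindings ./a.out
export I_MPI_DEBUG=5
mpirun ./a.out

exit
```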