Production (= parallel) jobs
On Hummel all production jobs are supposed to be parallel jobs. The smallest scheduling unit is a whole node, which has 16 CPU cores (or 40 cores in the spc partition). Every job is expected to use all CPU cores it requested (or at least half of the node's main memory). This can only be achieved by employing parallel computing techniques. Example jobs using classic parallelization techniques can be found on these pages:
- OpenMP job shows how to run OpenMP parallel programs.
- MPI job shows how to run MPI parallel programs.
- Running independent tasks with srun shows how independent tasks can be processed in parallel with the standard srun command from the SLURM batch system. This is a suitable method for running a few tasks in parallel.
- Running independent tasks with jobber shows how the RRZ tool jobber can be used to process many tasks in parallel.