GPU test node (front3)
The GPU front-end node
front3 is provided for
- interactive GPU access for development and testing,
- testing GPU batch jobs.
For interactive work, front3 is the equivalent
of the CPU front-end nodes front1 and
front2.
In addition, small batch jobs can be run on
front3 by employing --partition=gputest (the
default time limit is 10 minutes). The idea is to test that a job
functions correctly. Since all resources of the node are shared, all test
runs should use as little memory (RAM and VRAM) as possible.
Recall that
- RAM allocation can be checked with the free, top and ps -l commands,
- VRAM allocation can be checked with the nvidia-smi command.
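A minimal sketch of applying these checks on front3 (the nvidia-smi query options are standard, and the guard makes the snippet safe on CPU-only nodes):

```shell
# RAM checks with the commands listed above
free -h                        # node-wide RAM: total, used, free
top -b -n 1 | head -n 15       # one non-interactive snapshot of the busiest processes
ps -lu "${USER:-$(id -un)}"    # long-format listing of your own processes

# VRAM check; nvidia-smi is present only on GPU nodes such as front3
if command -v nvidia-smi >/dev/null; then
    nvidia-smi --query-gpu=memory.used,memory.total --format=csv
fi
```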
Running batch jobs
in the gputest partition
A batch job running in the gputest partition can
use the GPU but must not request GPU resources explicitly (--gpus must not
be set; see the example job below).
Differences to the
gpu partition
- --gpus (or an equivalent option) must not be specified.
- /home is writable.
- Resources, including the GPU, are shared with other batch or interactive jobs, hence time measurements are not reliable.
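The first difference can be illustrated by contrasting job headers. A sketch of the directive lines that differ; the gpu-partition directives shown for comparison are an assumption based on the section heading:

```shell
# Sketch: #SBATCH header lines that differ between the two partitions.
# (The gpu-partition lines below are an assumption, for comparison only.)

# Regular gpu partition -- GPU resources are requested explicitly:
#SBATCH --partition=gpu
#SBATCH --gpus=1

# gputest partition -- --gpus must NOT appear:
#SBATCH --partition=gputest
```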
Example job
The example script is installed at /sw/batch/examples/gputest-partition/gputest-job.sh:

```shell
#!/bin/bash
#SBATCH --account=WorkingGroupName_gpu
#SBATCH --partition=gputest
#SBATCH --export=NONE

source /sw/batch/init.sh

module load cuda/12.5.1

$CUDA_HOME/extras/demo_suite/busGrind

exit
```
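The example job can then be submitted and monitored as usual. A sketch, guarded so it is a no-op off the cluster (squeue --me requires a reasonably recent Slurm):

```shell
# Submit the example job and watch it; sbatch/squeue exist only on the cluster.
if command -v sbatch >/dev/null; then
    sbatch /sw/batch/examples/gputest-partition/gputest-job.sh
    squeue --me --partition=gputest   # the job is limited to 10 minutes by default
else
    echo "Slurm not available on this machine"
fi
```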