Hummel → Hummel-2 Migration Guide
This guide describes steps for migrating from Hummel to Hummel-2.
Big changes
We tried to keep changes to Hummel concepts minimal. The big changes on Hummel-2 compared with Hummel are:
- Batch jobs
  - All compute nodes are fat nodes. As a consequence, most batch jobs will share a node with other jobs.
  - The `sbatch` option `--mail-user` is ignored. If `--mail-type` is set, e-mail(s) will be sent to the e-mail address registered on the Hummel-2 mailing list. (See the job script sketch after this list.)
- Disks
  - The compute nodes have no local/scratch disks. `/tmp`, `/dev/shm` and `$RRZ_LOCAL_TMPDIR` are RAM disks.
  - There is an SSD pool in addition to the BeeGFS parallel file system, which is based on spinning disks. A consequence of the SSD pool approach is that adequate usage of the disk systems can only be planned on an individual basis. Please contact the HPC team if you have questions concerning disk usage. At the beginning every user has 100 GB of disk space in `$SSD`.
  - `$BEEGFS` replaces `$WORK`. Technically both underlying file systems are very similar, but on Hummel-2 an SSD file system can become the working file system (and `$BEEGFS` the backup file system).
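The following is a minimal job script sketch that reflects the two points above, assuming a small single-task job: the resource values, directory names and program name are placeholders, and any partition or account options your jobs may need are omitted because they are not covered by this guide.

```bash
#!/bin/bash
#SBATCH --job-name=migration-test   # placeholder name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=00:30:00
#SBATCH --mail-type=END,FAIL        # --mail-user is ignored on Hummel-2; mail goes to
                                    # the address registered on the Hummel-2 mailing list

# $RRZ_LOCAL_TMPDIR is a RAM disk: fast, but it consumes the job's memory
# and disappears when the job ends.
export TMPDIR="$RRZ_LOCAL_TMPDIR"

# Keep persistent input and output on the shared file systems, e.g. $SSD
# (the project directory below is a placeholder).
cd "$SSD/my_project"

srun ./my_program
```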
Usernames and Unix groups
On Hummel-2:
- All usernames have the B-Kennung format: `bxy1234`.
- For each username there exists a Unix group with the same name, which is the primary Unix group.
- Every user is also a member of a Unix group that represents the working group.
- That group can be used for sharing files (see the sketch after this list).
- The `id` command prints Unix group memberships.
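As a sketch of how the working group can be used for sharing, the commands below change the group of a directory and make it group-readable; the group name `my_workgroup` and the directory path are placeholders, and your actual group names are shown by `id`.

```bash
# Print username, primary group (same name as the username) and the
# working-group membership.
id

# Share a directory with the working group (placeholder name and path):
chgrp -R my_workgroup "$SSD/shared_data"
chmod -R g+rX "$SSD/shared_data"
chmod g+s "$SSD/shared_data"   # new files inherit the working group
```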
Software
Software installed by RRZ
- With the software installed it should be possible to compile all kinds of code.
- Recall that switching to a pkgsrc module with `module switch env env/2023Q4-gcc-openmpi` provides many tools and the R software (see the example after this list).
- Application software packages will be installed soon.
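For example, the following shell session sketches the switch to the pkgsrc environment named above and a quick check that R is now available; the checks themselves are generic and not specific to Hummel-2.

```bash
# Switch to the 2023Q4 pkgsrc environment (module name taken from this guide)
module switch env env/2023Q4-gcc-openmpi

# Verify that the environment provides R and see which modules are loaded
which R
R --version
module list
```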
Software installed by yourself
- All software needs to be reinstalled.
- Please install large software packages downloaded from the internet in `$USW` (rather than in `$HOME`). See also: Which file system to use when? (A sketch of an installation under `$USW` follows below.)
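Here is a minimal sketch of installing a self-built package under `$USW`, assuming an Autotools-based package; the package name, version and build steps are placeholders and should be adapted to the package's own build system.

```bash
# Unpack and build a downloaded package under $USW instead of $HOME
# (mytool-1.0 is a placeholder).
cd "$USW"
tar xf ~/mytool-1.0.tar.gz
cd mytool-1.0
./configure --prefix="$USW/mytool-1.0"
make -j 4
make install

# Make the installed binaries available in the shell
export PATH="$USW/mytool-1.0/bin:$PATH"
```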
Data
Please also read: Working with data.