Usage of the Linux Clusters at DESY Zeuthen

1. Introduction
2. Hardware
     2.1. Nodes
3. Software Environment
4. Building Applications
     4.1 Interactive tests
     4.1.1 OpenMPI
     4.1.2 Mvapich2
     4.1.3 Intel MPI
5. Batch System Access
     5.1 Slurm Commands
     5.2 Allocation
     5.3 Parallel Execution
     5.4 MPI Support
     5.5 Job scripts
     5.5.1 Time format
     5.5.2 Examples
     5.6 Accounting
     5.7 Local Disk Space
     5.8 Partitions and backfilling
6. SL7 changes
     6.1 Running EL6 software using Singularity
7. Additional Software
8. AFS Access
9. Monitoring
10. Known Issues
11. Further documentation

1. Introduction

There is one dedicated cluster available for running parallel applications, but you can also run parallel MPI jobs in the HTCondor farm; the documentation in Batch_System_Usage applies there.
For discussions and information regarding the usage of the PAX cluster there is a mailing list: <zn-cluster AT desy DOT de>. To subscribe, send an email to <sympa AT desy DOT de> with the subject: subscribe zn-cluster

2. Hardware

The batch system consists of one partition: pax12 (Rome), with 16 nodes connected by HDR InfiniBand.

2.1. Nodes

All machines have one socket.

Name           CPU                                        Code Name  Cores  Memory
pax12-[00-15]  AMD EPYC 7702P 64-Core Processor @ 2 GHz   Rome       64     256 GB


3. Software Environment

The pax machines have a software environment that differs slightly from the normal installation: it includes the OpenHPC software stack and a different version of the module command. To build on any machine in the right environment, run the /project/apptainer/images/pax.img image.
You can submit your jobs if you run the apptainer container on an EL7 WGS like this:
apptainer run -B /etc/passwd /project/apptainer/images/pax.img

4. Building Applications

Use the 'module' command to first add a compiler implementation and then a version of MPI to your path, e.g.:
module add gnu mvapich2

OpenHPC provides the module command from the lmod project. It supports more features than the old environment-modules, including dependent modules that are shown only after loading their prerequisites; e.g. for openmpi you'll have to load the gnu or intel module first.

module name    version   depends on
gnu            5.4.0
gnu7           7.3.0
gnu8           8.3.0
gnu9           9.3.0
gnu12          12.2.0
intel          2021.4
hdf5           1.10.1    gnu
openmpi        1.10.7    gnu/intel
openmpi3       3.1.0     gnu7
openmpi3       3.1.4     gnu8/intel
openmpi4       4.0.5     gnu8/gnu9/gnu12/intel
mvapich2       2.2       gnu/gnu7
mvapich2       2.3.2     gnu8/intel
impi           2021.4    gnu/gnu8/intel
opencoarrays   1.8.11
opencoarrays   2.3.1     gnu7 openmpi3
opencoarrays   2.8.0     gnu8 openmpi3
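
For example, loading a compiler toolchain makes its dependent MPI modules visible (a sketch using the gnu8 toolchain from the table above):

module avail              # MPI modules built for gnu8 are hidden at this point
module add gnu8           # load the compiler toolchain
module avail              # now also lists openmpi4, mvapich2, impi for gnu8
module add openmpi4
module list               # shows the loaded module chain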

4.1 Interactive tests

You can run interactive jobs in Slurm after allocating nodes with salloc, e.g.:
salloc -N 2 -c 2
To get an interactive shell on the allocated machines, use the command:
srun --pty bash
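
Putting it together, a short test session might look like this (a sketch; the srun hostname step just verifies the allocation):

salloc -N 2 -c 2          # reserve two complete nodes
srun hostname             # prints the name of each allocated node
srun --pty bash           # interactive shell on the allocation
exit                      # leave the shell; exit the salloc shell to free the nodes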

4.1.1 OpenMPI

To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:
pax8a slots=8
pax8b slots=8
pax8c slots=8
pax8d slots=8

The command line would look like this:
/opt/ohpc/pub/mpi/openmpi-gnu/1.10.7/bin/mpirun -np 32 -machinefile ./machinefile ./program

More information on openmpi is in the openmpi FAQ.

4.1.2 Mvapich2

To use mvapich2, add one of those versions to your path and compile your application with that MPI compiler. Applications built with mvapich2 can use only InfiniBand network hardware, so they will work on the pax machines, but not across more than one farm machine or WGS.
The machinefile format differs from the one for openmpi: you must list the host name once for every core you want to use, e.g. to run four processes, two on each of pax88 and pax89:
pax88
pax89
pax88
pax89

The preferred way to run an application with mvapich2 is mpiexec, e.g.:
/opt/ohpc/pub/mpi/mvapich2-intel/2.2/bin/mpiexec -n 4 -machinefile ./machinefile /opt/ohpc/pub/libs/intel/mvapich2/imb/2018.1/bin/IMB-MPI1

4.1.3 Intel MPI

To use Intel MPI, add a compiler module followed by impi. Use the compiler wrappers like 'mpicc' and 'mpif90' for GNU, or 'mpiicc' and 'mpiifort' for the Intel compiler. To run the resulting application, set the FI_PROVIDER environment variable like this:
export FI_PROVIDER=verbs

In a Slurm job, please use the prun wrapper to start your application.
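
A minimal build-and-run sketch, assuming a hypothetical source file hello.c and the gnu toolchain:

module add gnu impi          # or: module add intel impi, then use mpiicc/mpiifort
mpicc hello.c -o hello       # build with the Intel MPI compiler wrapper
export FI_PROVIDER=verbs
# inside a Slurm job script (prun module loaded, see section 5.3):
prun ./hello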



5. Batch System Access

Attention:

The PAX cluster is now based on the Slurm scheduling system.


5.1 Slurm Commands

The most important commands:

sinfo     Information about the cluster
squeue    Show current job list
srun      Parallel command execution
sbatch    Submit a batch job
salloc    Reserve resources for interactive commands
scancel   Abort a job
sview     Graphical user interface to view and modify Slurm state
sacct     Show accounting information

5.2 Allocation

Slurm is configured to always schedule complete nodes to each job. The pax machines have hyperthreading enabled, and each hardware thread is seen as a CPU core by Slurm; so by default, on a 64-core pax12 node with hyperthreading, 128 MPI processes are assigned. To prevent that, use the option -c 2 for sbatch, salloc or srun.
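
For example, to request two complete nodes with one MPI rank per physical core (a sketch; myjob.sh is a placeholder for your job script):

sbatch -N 2 -c 2 myjob.sh    # 2 nodes, one rank per physical core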

5.3 Parallel Execution

Slurm has integrated execution support for parallel programs, replacing mpirun. To work around slight differences in the options required by the various MPI implementations, use prun instead of srun to start MPI applications. You'll have to load the prun module first.
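
For example (myprogram is a placeholder):

module add gnu8 openmpi4 prun
prun ./myprogram             # prun picks the right startup mechanism for the loaded MPI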

5.4 MPI Support

Before running MPI programs, the LD_LIBRARY_PATH variable must be set; this is done by loading the right environment module, e.g.:
module add intel openmpi.

5.5 Job scripts

Parameters to Slurm can be set on the sbatch command line or in lines starting with #SBATCH in the script. The most important parameters are:

-J              job name
--get-user-env  copy environment variables
-n              number of cores
-N              number of nodes
-t              run time of the job, default is 30 minutes
-A              account, default is the same as the UNIX group
-p              partition of the cluster
--mail-type     configure email notifications, e.g. use --mail-type=ALL

Be careful with --get-user-env: it will also copy loaded modules into the job.

5.5.1 Time format

The runtime of a job is given as minutes, as hours, minutes and seconds (HH:MM:SS), or as days and hours (DD-HH). The maximum run time is set to 48 hours.

5.5.2 Examples

An example job script is in slurm-mpi.job.
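
The file slurm-mpi.job is not reproduced here; a minimal script along the same lines might look like this (module choice and program name are placeholders):

#!/bin/bash
#SBATCH -J mpi-test          # job name
#SBATCH -N 2                 # number of nodes
#SBATCH -c 2                 # avoid placing ranks on hyperthreads
#SBATCH -t 01:00:00          # run time, maximum is 48 hours
#SBATCH --mail-type=ALL      # email notifications

module add gnu8 openmpi4 prun
prun ./myprogram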

5.6 Accounting

Jobs and their resource usage are stored in a database that is used for the fair-share part of the scheduler. You can view your account's jobs with the command sacct. With no parameters, only today's jobs are shown; to view all jobs since May 1st, use the command sacct -S 2014-05-01. To view jobs from other accounts as well, use the --allusers option.

5.7 Local Disk Space

Each node has a local directory /scratch with up to 770GB of space. It is cleared automatically at the end of the job.
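
A typical pattern in a job script (file names are placeholders):

cp $HOME/data/input.dat /scratch/        # stage input to fast node-local disk
./myprogram /scratch/input.dat /scratch/output.dat
cp /scratch/output.dat $HOME/results/    # copy results back before the job ends;
                                         # /scratch is cleared automatically afterwards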

5.8 Partitions and backfilling

The cluster consists of one regular partition: rome. The special partition backfill is used for filling up otherwise idle nodes. Jobs running there are automatically terminated by Slurm if a job in the main partition needs the nodes.
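
For example, to submit a preemptible job to the backfill partition (myjob.sh is a placeholder):

sbatch -p backfill -N 2 myjob.sh    # may be terminated when the rome partition needs the nodes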


6. SL7 changes

As the versions and paths of the MPI implementations have changed, programs are not compatible between SL6 and SL7. You should rebuild your application on SL7, but you can also try Singularity.
The 'module' command was replaced by a different, more powerful implementation called lmod. It doesn't list all available modules; instead it supports dependent modules, e.g. the MPI implementations built with 'gnu7' are shown only after module add gnu7.

6.1 Running EL6 software using Singularity

It is possible to run software built on EL6 in an Apptainer container. This works with mvapich2 binaries by calling Apptainer in the batch script like this:
mpiexec apptainer exec /project/singularity/images/SL6.img yourbinary

However, Mvapich2 2.2 isn't optimized for Singularity yet, so this is slower than running native programs.
For Openmpi, Singularity is supported in Openmpi >= 2.1, which is why you'll have to rebuild your program with openmpi3 as installed in the SL6 singularity container:
singularity exec /project/singularity/images/SL6.img /usr/lib64/openmpi-3.0/bin/mpicc yourprog.c -o yourprog.sl6

and in the job script:
module add gnu7 openmpi3 prun
prun singularity exec -B /scratch /project/singularity/images/SL6.img yourprog.sl6

7. Additional Software

The software installation is based on the OpenHPC project. We provide only a subset of the available software. If you need any of the other available components, send a request to zn-cluster@desy.de.

8. AFS Access

The application binary must be available to all nodes, which is why it should be placed in an AFS or Lustre directory.

9. Monitoring

Ganglia provides a web monitoring interface for the parallel batch machines. The page is only available from the internal network.


10. Known Issues

  1. Openmpi3 has a bug that makes programs hang in certain situations: https://www.mail-archive.com/users@lists.open-mpi.org//msg31839.html. Use openmpi instead.
  2. You need to acquire an addressless Kerberos ticket for Slurm to work. This is the default on supported DESY machines. On self-maintained machines like notebooks, simply set noaddresses=true in the file /etc/krb5.conf (see the fragment after this list). To check whether your ticket is addressless, call klist -v (Heimdal klist only).
  3. The command sbcast cannot be used to copy a file to /scratch, as that is a bind-mounted directory. Use /batch/job.${SLURM_JOB_ID}.0/scratch as the target.
  4. The module command might be unavailable for users with tcsh as their login shell. As a workaround, they can run bash -l and use the --get-user-env option in the job.
  5. There are some compatibility problems between third-party module files (e.g. Intel 2021) and the module command.
  6. In the pax apptainer image, squeue shows all users as nobody. To work around this, run: apptainer run -B /etc/passwd /project/apptainer/images/pax.img
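
For issue 2, the relevant fragment of /etc/krb5.conf looks like this:

[libdefaults]
    noaddresses = true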

11. Further documentation


Paralleles Rechnen in Zeuthen - die neuen Cluster (Parallel Computing in Zeuthen - the new clusters), technical seminar, 04/27/10
HPC-Clusters at DESY Zeuthen, technical seminar, 11/22/06