SLURM scheduler

When users log in to an HPC system, they first connect to the system's head node. To run their application or code on the compute nodes, each user must then submit a job through the scheduler.
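A minimal sketch of such a job submission is shown below. The partition name, resource values, and application name are placeholders for illustration only and will differ per system:

```bash
#!/bin/bash
#SBATCH --job-name=example        # job name shown by squeue
#SBATCH --partition=cpu           # placeholder partition name; check your system's partitions
#SBATCH --nodes=1                 # number of compute nodes
#SBATCH --ntasks=1                # number of tasks (processes)
#SBATCH --cpus-per-task=4         # CPU cores per task
#SBATCH --time=00:10:00           # wall-clock time limit (HH:MM:SS)

# run the application on the allocated compute node
./my_application
```

Assuming the script is saved as `job.sh`, it can be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`.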

All our HPC systems use the SLURM scheduler for job submission. The Simple Linux Utility for Resource Management (SLURM) is an open-source workload manager designed for Linux clusters of all sizes, and it is used by many of the world's supercomputers and computer clusters.

By default, all compute nodes are shared, i.e. more than one job can run on a node if resources are available.

For example, on a Cyclone node one user can run a CPU job using 20 cores, and another user can run a second job on the same node using the remaining 20 cores, as in the sketch below.
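The following batch script is a sketch of a job that requests only part of a node, assuming a 40-core Cyclone node as described above; the job name and application path are placeholders:

```bash
#!/bin/bash
#SBATCH --job-name=half-node
#SBATCH --nodes=1                 # a single compute node
#SBATCH --ntasks=20               # request 20 of the node's cores
#SBATCH --time=01:00:00           # wall-clock time limit

# the job is confined to the 20 cores granted by SLURM;
# the node's remaining cores stay free for other jobs
srun ./my_cpu_application
```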