Requesting Resources on the Supercomputer

Requesting CPUs and memory

CPUs and memory can be requested independently. When memory is left unspecified, each CPU core is accompanied by 2 GB of system memory.

salloc and interactive are interchangeable.
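For instance, the following two commands request the same interactive session, assuming the interactive wrapper simply forwards its flags to salloc:

salloc -c 4 --mem=16G
interactive -c 4 --mem=16G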

Command | Resources
salloc -c 1 | 1 CPU, 2 GB
salloc -c 64 | 64 CPUs, 128 GB shared between CPUs
salloc -c 32 --mem=80GB | 32 CPUs, 80 GB shared between CPUs
salloc -c 128 --mem=0 | 128 CPUs, 100% of node memory shared between CPUs
salloc -c 16 --mem-per-cpu=4G | 16 CPUs, 4 GB dedicated to each CPU, 64 GB total

Memory can either be allocated as a single amount drawn from a node's available memory (--mem), or allocated per CPU (--mem-per-cpu). In most cases, --mem is recommended, unless you are specifically working with OpenMP/multithreading.
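As a sketch of the two approaches in batch form (the values are arbitrary examples), either of the following requests 16 cores and 64 GB in total:

# 64 GB shared by all 16 cores
#SBATCH -c 16
#SBATCH --mem=64G

# 4 GB reserved per core (also 64 GB total), useful with OpenMP/multithreading
#SBATCH -c 16
#SBATCH --mem-per-cpu=4G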

Requesting Resources from Multiple Nodes

MPI (Message Passing Interface) is a means for jobs to be spread across multiple physical nodes, each with its own independent memory. Not all workloads support MPI, and software must be compiled specifically for this purpose.

warning

As a general rule, -N only benefits MPI jobs. If you are not using MPI-enabled software, -N will not speed up your workload.

To request a given number of CPUs spread across multiple nodes, you can use -N.

Command | Resources
salloc -c 1 | 1 CPU, 2 GB, on one node
salloc -c 64 -N 2 | 64 CPUs, 128 GB total, shared between CPUs across 2 nodes

Even without -N, a request for 50 cores as 5 cores per task will still be allocated, on any number of available nodes (see the sketch below).
Note that unless you are using MPI-aware software, you will likely prefer to always add -N 1, to ensure that each job worker has sufficient connectivity to the others.
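A minimal sketch of that request, assuming the 50 cores are split into 10 tasks of 5 cores each:

#SBATCH -n 10
#SBATCH -c 5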

The -c and -n flags have similar effects in Slurm in allocating cores, but -n is the number of tasks, and -c is the number of cores per task. MPI processes bind to a task, so the general rule of thumb is for MPI jobs to allocate tasks, while serial jobs allocate cores, and hybrid jobs allocate both.
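As an illustrative sketch with arbitrary counts, a hybrid job could combine the two flags, here requesting 8 MPI tasks of 4 cores each across 2 nodes:

#SBATCH -N 2
#SBATCH -n 8
#SBATCH -c 4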

See the official Slurm documentation for more information: Slurm Workload Manager - sbatch

Requesting GPUs

To request a GPU, you can specify the -G option within your job request:

This will allocate the first available GPU that fits your job request. Since there are GPUs in the public, general, and htc partitions, be sure to specify the partition that matches your requirements. Not all combinations are listed below.

#SBATCH -p htc
#SBATCH -q public
#SBATCH -t 0-4
#SBATCH -G 1
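If a job needs more than one GPU, -G also accepts a count. For example, a variation of the request above (assuming the chosen partition permits multi-GPU jobs):

#SBATCH -p htc
#SBATCH -q public
#SBATCH -t 0-4
#SBATCH -G 2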

CPU Micro-Architectures

The Sol Supercomputer is composed mostly of AMD EPYC processors, and all nodes within the public and general partitions are uniformly AMD EPYC.

The Phoenix Supercomputer, on the other hand, includes CPUs of different micro-architectures, such as Cascade Lake and Broadwell. These micro-architectures represent different generations of Intel processors, with variations in performance, instruction sets, and optimization capabilities. Software may perform differently depending on the CPU architecture it was compiled for or is optimized to run on.

To specify a particular CPU architecture for your job, use the --constraint flag (-C).

To request an Intel Cascade Lake CPU:

#SBATCH -C cascadelake
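To see which constraint names the nodes actually advertise, one option is to list each node's feature tags with sinfo (a sketch; adjust the output format as needed):

sinfo -o "%N %f"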