CXFEL Compute Resources
Access Eligibility
Users in the grp_cxfel group have exclusive access to lab-owned hardware resources. Jobs using this QoS do not impact public fairshare calculations.
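As a quick sanity check before submitting (a generic Linux sketch, not a Slurm-specific command), you can confirm that your account is actually in the group:

```shell
# Check current membership in grp_cxfel (the group named above).
# id -nG lists the groups of the current user by name.
if id -nG | grep -qw grp_cxfel; then
    echo "member of grp_cxfel: the grp_cxfel QoS is available to you"
else
    echo "not in grp_cxfel: request access before using this QoS"
fi
```

If you are not a member, submitting with -q grp_cxfel will be rejected by the scheduler.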
Partition and QoS Setup
Use the grp_cxfel QoS in combination with the appropriate partition (highmem for high-memory CPU jobs, general for GPU jobs).
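The same partition/QoS pairing can also be given directly on the command line rather than inside a script; the script name below is a placeholder, and these commands only work on the cluster itself:

```shell
# Pair the partition (-p) with the lab QoS (-q) at submission time.
# my_job.sh is a placeholder for your own batch script.
sbatch -p highmem -q grp_cxfel my_job.sh

# Inspect the limits attached to the QoS (fields are standard sacctmgr columns):
sacctmgr show qos grp_cxfel format=Name,Priority,MaxWall
```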
Submitting Jobs
Examples are provided for four job types:
- highmem-sbatch
- highmem-interactive
- gpu-sbatch
- gpu-interactive
High-memory batch example (sbatch). These nodes are located in the highmem partition.
#!/bin/bash
#SBATCH -N 1 # number of nodes
#SBATCH -c 4 # number of cores to allocate
#SBATCH -p highmem # Partition
#SBATCH -q grp_cxfel # QoS
#SBATCH --mem=1000G # Request 1000 GB memory
#SBATCH -t 2-00:00:00 # Walltime: 2 days

# Your job commands go here, after the #SBATCH directives
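Once the script above is saved to a file (the name highmem_job.sh is a placeholder), it is submitted and monitored with the standard Slurm commands; these run only on the cluster login nodes:

```shell
# Submit the batch script; Slurm prints "Submitted batch job <jobid>"
sbatch highmem_job.sh

# List your own pending and running jobs
squeue -u $USER

# Cancel a job by its id if needed (replace <jobid> with the real number)
scancel <jobid>
```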
High-memory interactive example (salloc). These nodes are located in the highmem partition.
salloc -p highmem -q grp_cxfel --mem=500G -t 2-00:00:00
GPU batch example (sbatch). These nodes are located in the general partition.
#!/bin/bash
#SBATCH -N 1 # number of nodes
#SBATCH -c 4 # number of cores to allocate
#SBATCH -p general # Partition
#SBATCH -q grp_cxfel # QoS
#SBATCH --gres=gpu:2 # Request 2 GPUs
#SBATCH -t 10-00:00:00 # Walltime: 10 days

# Your job commands go here, after the #SBATCH directives
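Inside the batch script, it can be useful to confirm which GPUs Slurm actually granted before launching work; CUDA_VISIBLE_DEVICES is set by Slurm on GPU nodes, and nvidia-smi is standard on NVIDIA systems (these lines only produce output on a GPU node):

```shell
# Lines that could follow the #SBATCH directives in the GPU script above.
# CUDA_VISIBLE_DEVICES holds the GPU indices allocated to this job.
echo "Allocated GPUs: ${CUDA_VISIBLE_DEVICES:-none}"

# List the visible GPUs and their memory (only works where nvidia-smi exists)
nvidia-smi --query-gpu=name,memory.total --format=csv
```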
GPU interactive example (salloc). These nodes are located in the general partition.
salloc -N 1 -c 4 -p general -q grp_cxfel --gres=gpu:2 -t 10-00:00:00