Job Arrays

An alternative to the "srun --exclusive" approach is to use a job array. Consider the following Slurm script:

#!/bin/bash
#SBATCH --job-name=job_array_test
#SBATCH --partition=shared
#SBATCH --output=slurm_%x_%j.out
#SBATCH --error=slurm_%x_%j.err
#SBATCH --time=00:10:00
#SBATCH --ntasks=1
#SBATCH --mem=1gb
#SBATCH --array=1-5

module load anaconda/2020.07-p3.8
cd (path to working directory)
srun python3 square_number.py $SLURM_ARRAY_TASK_ID

Slurm will execute 5 copies of the job. In each instance, Slurm sets the environment variable SLURM_ARRAY_TASK_ID to one value in the range 1-5 specified by the line #SBATCH --array=1-5. The output of each array task will appear in a file named slurm_job_array_test_(jobid).out, where (jobid) is the job ID assigned to that task.
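The script above passes the task ID to square_number.py as a command-line argument. A minimal sketch of what such a script might contain (the filename comes from the job script above, but its contents here are an assumption, not the site's actual code):

```python
import sys

def square(n: int) -> int:
    """Return the square of n."""
    return n * n

if __name__ == "__main__":
    # SLURM_ARRAY_TASK_ID arrives as the first command-line argument,
    # e.g. "python3 square_number.py 3" inside array task 3.
    task_id = int(sys.argv[1])
    print(square(task_id))
```

Each of the 5 array tasks runs this script with a different argument, so each output file contains one result.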

If the parameter you would like to vary across the array job is not an integer, you may need to construct an array of the parameters of interest and use the SLURM_ARRAY_TASK_ID as an index into the array, like so:

#!/bin/bash
#SBATCH --job-name=array_job_test
#SBATCH --partition=shared
#SBATCH --time=00:10:00
#SBATCH --output=slurm_%x_%j.out
#SBATCH --error=slurm_%x_%j.err
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=1gb
#SBATCH --array=1-5

module load anaconda/2020.07-p3.8
cd (path to working directory)
VALUES=(0.0 2.5 5.0 7.5 10)
srun python3 square_number.py ${VALUES[$SLURM_ARRAY_TASK_ID-1]}
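Bash evaluates the array subscript arithmetically, so $SLURM_ARRAY_TASK_ID-1 maps task IDs 1-5 onto indices 0-4 of VALUES. The mapping can be checked locally (outside Slurm) with a quick Python sketch; the values mirror the VALUES array in the job script:

```python
# Mirror the bash array from the job script.
values = [0.0, 2.5, 5.0, 7.5, 10.0]

# Each task ID 1..5 selects index 0..4, just like
# ${VALUES[$SLURM_ARRAY_TASK_ID-1]} in the job script.
for task_id in range(1, 6):
    print(f"task {task_id} -> {values[task_id - 1]}")
```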
