Job Arrays

An alternative to the `srun --exclusive` approach is to use a job array. Consider the following Slurm script:

job.sl

#!/bin/bash
#SBATCH --job-name=job_array_test
#SBATCH --partition=short
#SBATCH --output=slurm_%x_%j.out
#SBATCH --error=slurm_%x_%j.err
#SBATCH --time=00:10:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1gb
#SBATCH --array=1-5

cd (path to working directory)
python square_number.py $SLURM_ARRAY_TASK_ID

Slurm will execute 5 copies of the job, where each instance of the job sets the environment variable SLURM_ARRAY_TASK_ID to a distinct value in the range 1-5 specified by the line #SBATCH --array=1-5. Each array task receives its own job ID, so with the --output pattern above the output of each task will appear in a file named slurm_job_array_test_(jobid).out.
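The script square_number.py itself is not shown above; a minimal sketch of what it might look like, assuming it simply squares the integer passed on the command line (here, the array task ID), is:

```python
# square_number.py (hypothetical sketch): square the integer passed as
# the first command-line argument -- in the job array, the task ID.
import sys


def square(n: int) -> int:
    """Return the square of n."""
    return n * n


if __name__ == "__main__" and len(sys.argv) > 1:
    task_id = int(sys.argv[1])
    print(f"Task {task_id}: {task_id}**2 = {square(task_id)}")
```

Each of the 5 array tasks runs this same script, differing only in the value of SLURM_ARRAY_TASK_ID it receives.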

If the parameter you would like to vary across the array job is not an integer, you may need to construct an array of the parameters of interest and use the SLURM_ARRAY_TASK_ID as an index into the array, like so:

job.sl

#!/bin/bash
#SBATCH --job-name=array_job_test
#SBATCH --partition=short
#SBATCH --time=00:10:00
#SBATCH --output=slurm_%x_%j.out
#SBATCH --error=slurm_%x_%j.err
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1gb
#SBATCH --array=1-5

cd (path to working directory)
VALUES=(0.0 2.5 5.0 7.5 10)
python square_number.py ${VALUES[$SLURM_ARRAY_TASK_ID-1]}
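Note that bash arrays are zero-indexed while the task IDs run from 1 to 5, which is why the script subtracts 1 inside the index. The resulting task-ID-to-parameter mapping can be sketched in Python (using the same values as the script above):

```python
# Reproduce the bash script's indexing: task ID i selects VALUES[i-1]
# because bash arrays (like Python lists) are zero-indexed.
VALUES = [0.0, 2.5, 5.0, 7.5, 10.0]

for task_id in range(1, 6):  # SLURM_ARRAY_TASK_ID takes values 1..5
    param = VALUES[task_id - 1]
    print(f"task {task_id} -> parameter {param}")
```

So task 1 receives 0.0, task 2 receives 2.5, and so on through task 5, which receives 10.0.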
