Slurm reads the submitted script and looks for directives beginning with #SBATCH. The shell interprets these lines as comments, but Slurm uses them to determine what to allocate and for how long.
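A job script is submitted with the sbatch command (the file name job.slurm here is just a placeholder):

sbatch job.slurm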

Example 1: Simple job

The simplest job: one core and less than 6GB of memory. You can pass additional arguments on the sbatch command line as well, as shown below.


#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=1

# "$@" forwards any arguments given after the script name on the sbatch command line.
./single_job "$@"
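For example, if the script above is saved as simple.slurm, anything after the script name on the sbatch command line is forwarded to ./single_job through "$@" (the file names here are placeholders):

sbatch simple.slurm input.dat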

Example 2: Simple job, requiring more than 6GB of memory

The simplest job using one core and 24GB of memory. We increase the number of processors requested to limit the number of these jobs run per node; at roughly 6GB per core, four cores match the 24GB memory request.


#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=24G

./single_24gb_job
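If you are not sure how much memory a job really needs, one way to find out (assuming Slurm accounting is enabled on your cluster) is to check the peak memory of a completed job with sacct:

sacct -j <jobid> --format=JobID,MaxRSS,Elapsed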

Example 3: MPI Example

An MPI job using more than one node (the script below requests two):


#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=2
#SBATCH --exclusive

# mpiexec picks up the node list and processor count from the Slurm environment.
# In most cases you should not pass -np or any other parameters to mpiexec.
mpiexec ./mpijob
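On many clusters srun can serve as the MPI launcher instead of mpiexec; whether this works depends on how your MPI library was built, so treat it as an alternative to try:

srun ./mpijob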

Example 4: MATLAB Example

Running a MATLAB script using just one node. Note that -batch takes a command, not a file name. Also, some toolboxes may require the JVM, so you may not want the -nojvm option, but try it first.


#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --exclusive

module load matlab/R2019b

matlab -nojvm -batch mymatlabscript
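If one of the toolboxes your script uses does require the JVM, simply drop the -nojvm flag:

matlab -batch mymatlabscript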

Example 5: Array Example

When submitting a job array, each instance of the array runs independently. Use the environment variable SLURM_ARRAY_TASK_ID inside the script to determine which instance is running. The script below requests array indices 0 through 9.
Array.slurm


#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH -c 1
#SBATCH --array=0-9

module load Python/3.7.4-GCCcore-8.3.0

python mypythonscript.py "$SLURM_ARRAY_TASK_ID"

mypythonscript.py


#!/usr/bin/env python3
import sys

# The array task ID is passed in as the first command-line argument.
offset = int(sys.argv[1])

# input_list and process are placeholders for your own data and code.
input_list = [....]

process(input_list[offset])
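Submit the script once and Slurm starts one instance per index; the range can also be set (or overridden) on the sbatch command line:

sbatch --array=0-9 Array.slurm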