Simple job
Simplest job, using one core and less than 6GB of memory. Any additional arguments given on the sbatch command line after the script name are passed through to the script, as shown in the example after the script.
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
./single_job "$@"
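For example, if the script above is saved as simple_job.sh (a hypothetical filename), anything given after the script name on the sbatch command line is forwarded to the script and picked up by "$@":
sbatch simple_job.sh input1.dat input2.dat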
Simple job, requiring more than 6GB of memory
Simplest job, using one core and 24GB of memory. We request more CPUs than the job actually uses so that fewer of these memory-hungry jobs are scheduled onto each node.
#!/bin/bash
#SBATCH --time 1:00:00
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=24G
./single_24gb_job
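The #SBATCH directives are only defaults; options given on the sbatch command line override them. For instance, a particularly large input could be run with more memory without editing the script (single_24gb_job.sh is a hypothetical filename):
sbatch --mem=48G --cpus-per-task=8 single_24gb_job.sh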
MPI Example
An MPI job using more than one node:
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=2
#SBATCH --exclusive
# Note that mpiexec will pick up the node list and processor count from the environment.
# In most cases you should not be passing -np or any other parameters to mpiexec
mpiexec ./mpijob
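Because mpiexec takes its node list and process count from the Slurm environment, the same script scales without modification; the node count can simply be changed at submission time (mpi_job.sh is a hypothetical filename):
sbatch --nodes=4 mpi_job.sh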
Matlab Example
Running a MATLAB script using just one node. Note that -batch takes a command, not a file; also, some toolboxes may require the JVM, so you may not want the -nojvm option, but try it first.
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --exclusive
module load matlab/R2019b
matlab -nojvm -batch mymatlabscript
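Since -batch takes a MATLAB command rather than a filename, a function can also be called with arguments directly; for example (myanalysis is a hypothetical function name):
matlab -nojvm -batch "myanalysis(1, 100)"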
Array Example
When submitting a job array, each instance of the array runs independently. Use the environment variable SLURM_ARRAY_TASK_ID inside the job to determine which instance is running.
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH -c 1
#SBATCH --array=0-9
# The array range above is an example; adjust it to match the size of your input list.
module load Python/3.7.4-GCCcore-8.3.0
python mypythonscript.py $SLURM_ARRAY_TASK_ID
The corresponding Python script, mypythonscript.py, uses the task ID it is given as an offset into its list of inputs:
#!/usr/bin/env python3
import sys

# The array task ID is passed in as the first command-line argument.
offset = int(sys.argv[1])
# input_list and process() are placeholders for your own inputs and processing code.
input_list = [....]
process(input_list[offset])
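The array range can also be given (or overridden) when the job is submitted; each task then runs the script with its own value of SLURM_ARRAY_TASK_ID (array_job.sh is a hypothetical filename for the script above):
sbatch --array=0-99 array_job.sh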
