  
**srun**

This is the simplest way to run a job on a cluster.
Initiate parallel job steps within a job or start an interactive job (with --pty).
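
For example, an interactive shell can be started like this (the resource values are only placeholders, adjust them to your needs):
<code>
# Start an interactive bash shell on one node with 4 tasks
srun -N 1 --ntasks-per-node=4 --pty bash
</code>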
  
**salloc**

Request interactive jobs/allocations.
When the job starts, a shell (or another program specified on the command line) is started on the submission host (Frontend).
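
For example (resource values are placeholders):
<code>
# Request an allocation of 2 nodes for one hour; a shell is started on the frontend
salloc -N 2 --time=01:00:00
# Inside that shell, job steps run on the allocated nodes via srun
srun hostname
# Leaving the shell releases the allocation
exit
</code>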
  
**sbatch**

Submit a batch script.
The script will be executed on the first node of the allocation.
The working directory coincides with the directory from which sbatch was invoked.
Within the script, one or more srun commands can be used to create job steps and execute parallel applications.
  
**Examples**
<code>
# General:
sbatch --job-name=<name of job shown in squeue> -N <num nodes> --ntasks-per-node=<spawned processes per node> /path/to/sbatch.script.sh
  
# A start date/time can be set via the --begin parameter:
--begin=now+60 (seconds by default)
--begin=2010-01-20T12:34:00
</code>

An sbatch script for the command above would look like:
<code>
#!/bin/bash
#SBATCH --job-name=<name of job shown in squeue>
#SBATCH -N <num nodes>
#SBATCH --ntasks-per-node=<spawned processes per node>
#SBATCH --begin=2010-01-20T12:34:00

/path/to/sbatch.script.sh
</code>
  
All parameters used there can also be specified in the job script itself using #SBATCH.
  
Also, more examples can be found [[hpc:tutorials:sbatch_examples|here]].

=== Check the status of your job submissions ===
<code>
squeue --me
</code>
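
squeue can also be limited to a single job or a specific user; the job ID and user name below are placeholders:
<code>
# Status of one specific job
squeue -j <jobid>

# Jobs of a specific user
squeue -u <username>

# Detailed information about a job
scontrol show job <jobid>
</code>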
  
=== Check the status of nodes ===
<code>
sinfo
</code>
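
Useful variants (the partition name is a placeholder):
<code>
# One line per node with additional details
sinfo -N -l

# Limit the output to one partition
sinfo -p <partition>
</code>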
  
=== Slurm Cheat Sheet ===

A summary of the most common commands can be found [[https://slurm.schedmd.com/pdfs/summary.pdf|here]].