SLURM Migration

SCARF is migrating to the SLURM batch system. Instructions on this page that refer to LSF commands, parameters or configuration do not apply to SLURM. Please see our SLURM page for more information.

Submitting Jobs

Jobs are submitted using the LSF scheduler, which allocates cluster resources as they become available. Scheduler software organises the workload on the cluster so that all users get a fair share of the resources.

General principles

Before learning how to use LSF, it is worthwhile becoming familiar with the basic principles of scheduler operation in order to get the best use out of the SCARF cluster. Scheduler software exists simply because the number of jobs that users wish to run on the cluster at any given time usually greatly exceeds the resources available, typically by a factor of 2 or 3.

Hints and tips

Several factors are taken into account during scheduling, such as job length and size, but the basic principle remains the same throughout - every user gets a fair share of the cluster based on the jobs that they have submitted. This leads to a small number of important points:

  • Do not try to second-guess the scheduler! Submit all of your jobs when you want to run them and let it do the work for you. You will get a fair share, and if you don't then we need to adjust the scheduler.
  • Give the scheduler as much information as possible. There are a number of optional parameters (see later) such as job length, and supplying these gives your jobs an even better chance of being run.
  • It is very difficult for one user to monopolise the cluster, even if they submit thousands of jobs. The scheduler will still aim to give everyone else a fair share, so long as there are other jobs waiting to be run.

Example of scheduling

[Figure: scheduling example]

Three users (left column) have jobs in the queue (middle column) which are waiting to run on the cluster (right column). As the blue user's job finishes (middle row), all three users could potentially use the two job slots that become available. However, the orange and purple users already have jobs running, whereas the blue user does not, and as such it is the blue user's jobs that are run (bottom row).

Testing the batch system

To submit a test job on SCARF, run the command bsub sleep 60. You can check the status of your job with the bjobs command.
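
For example, a minimal test session consists of the two commands below; the sleep command simply waits for 60 seconds before exiting, giving you time to see the job in the queue:

bsub sleep 60   # submit a job that does nothing but sleep for 60 seconds
bjobs           # list your jobs and their current status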

Important job submission flags

There are many flags available for the bsub command, some of which should be included in every job, if possible. These are summarised below:

-q scarf (which queue to run on)
-n 36 (number of processors to run on)
-W 00:30 (predicted time in hours and minutes)
-x (exclusive node use, to avoid sharing with other jobs) ONLY FOR PARALLEL JOBS!
-o %J.log (name of file for output of job)
-e %J.err (name of file for error log)
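
Put together, a typical parallel submission using these flags might look like the sketch below; my_parallel_app is just a placeholder for your own executable:

bsub -q scarf -n 36 -W 00:30 -x -o %J.log -e %J.err mpirun -lsf ./my_parallel_app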

Advanced resource requests

Sometimes you may need to specify more precisely how you want your job to run. This is done by adding additional options to your bsub command.

Choosing a sub-section of the cluster

There are a small number of resources which you may wish to request explicitly. This is done using the "-m" option, which follows the format:

-m scarf_intel_hosts_6

This allows you to explicitly select which part of the cluster you wish to run on, should you want to use one specific set of hardware. The various options are listed next to the name of the sub-sections of the cluster in the SCARF status monitor.
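
As a sketch, a serial job restricted to that group of hosts could be submitted like this; my_app is a placeholder for your own executable:

bsub -q scarf -m scarf_intel_hosts_6 -o %J.log -e %J.err ./my_app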

Spanning multiple hosts for additional memory per process

Use this to restrict the number of processes run per host. For example, to run only one process per host use:

-R "span[ptile=1]"

Running parallel jobs

Parallel jobs are submitted using a simple script, such as the one given below for the Linpack benchmarking suite:

# Linpack parallel processing benchmark script
#BSUB -q scarf
#BSUB -n 36
#BSUB -W 00:30
#BSUB -o %J.log
#BSUB -e %J.err
mpirun -lsf /home/scarf009/hpl/bin/Linux_OPTERON/xhpl-gm

Please note that everything after the last #BSUB option must be on a single line. -n and -np refer to the number of processors you wish to run on. The rest of the #BSUB input options, and many more besides, can be found in the bsub manual page.

To submit the job, do not run the script, but rather use it as the standard input to bsub, like so:

bsub < my_script_name

Monitoring your jobs

Once you have submitted your jobs, there are several command line tools for monitoring their status and the overall performance of the cluster. The most useful are:

  • bjobs
  • bpeek
  • bqueues
  • busers
  • bhosts
  • lsload
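
Typical usage of these commands looks like the following; replace <jobid> with the id reported by bsub or bjobs:

bjobs           # list your jobs and their status
bpeek <jobid>   # view the output of a job while it is still running
bqueues         # show the queues and how busy they are
busers          # show your current job counts and limits
bhosts          # show the state of the batch hosts
lsload          # show the current load on each host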

Killing your jobs

If you would like to kill a job, use the bkill command with the id of the job you want to kill:

bkill <jobid>
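
In standard LSF, bkill also accepts a job id of 0, which kills all of your own jobs at once:

bkill 0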

These all have really good man pages, but if you have any problems then please contact the helpdesk.