Batch System

From HLRS Dgrid

The only way to start a parallel job on the compute nodes of this cluster is to use the Portable Batch System (Torque).

Policies for usage of the queues for the batch system

Submit a batch job

You will get each requested node for your exclusive usage. There are two ways to use the batch system:

  • interactive batch jobs:
    if the requested resources are available, the job starts an interactive shell immediately. For interactive access the qsub command has the option -I, for example:
    qsub -I ...
  • normal batch jobs:
    jobs will be started by the Moab scheduler after passing some rules configured by the administrator (fairshare, backfilling, ...).

Command for submitting a batch job request

A short explanation follows here. For detailed information, see the man pages or the documentation in the WWW.

man qsub
man pbs_resources
man pbs

Command to submit a batch job

qsub <option>

On success, the qsub command returns a request ID.

You have to specify the resources you need for your batch job. These resources are specified either in the argument of the -l option of the qsub command or inside the PBS job script. There are two important resources you need to specify:

  1. nodes=<number of nodes>:<feature>
    • To distinguish between different nodes, features are assigned to each node. These features describe the properties of each node. Please use exactly one feature for each node type.
      Available node features:
      feature   describes                   notes
      bwgrid    2 quad-core CPUs per node
      Multiple node types can be specified using a +, e.g. nodes=2:cell+3:bwgrid.
      This example will allocate 2 nodes with feature cell and 3 nodes with feature bwgrid.
  2. walltime=<time>
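Both resources are typically combined in a single -l argument, either on the command line or in a #PBS directive. A minimal sketch of a job script carrying such a request (the node count and walltime are only illustrative):

```shell
#!/bin/bash
# Illustrative resource request: 2 bwgrid nodes for 20 minutes.
# The #PBS line is a comment to the shell; Torque reads it at submit time.
#PBS -l nodes=2:bwgrid,walltime=00:20:00

echo "job running on $(hostname)"
```

Submitted with qsub <script>, the directive has the same effect as passing -l nodes=2:bwgrid,walltime=00:20:00 on the qsub command line.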

Usage of multi-socket nodes and multi-core CPUs

The batch system takes into account the number of CPUs and nodes (summarized as PE, processing elements) for each node when assigning resources.

Resource request feature nodes: if you do not request a specific number of PE per node, the system assumes that the requested number of nodes is the number of PE you want.


 qsub -l nodes=2:bwgrid ./myscript

One node with Harpertown processors (quad-core) will be assigned. The file ${PBS_NODEFILE} contains the node list:


Resource request feature nodes, option ppn: using the option ppn you can request multiple PE on each node explicitly. This option in particular allows Open MPI, which is integrated in the batch system, to place processes such that neighbouring ranks execute on the same node or at least on neighbouring nodes. Furthermore, if you have an application which needs a large amount of memory, you can place fewer MPI processes on each node, so that the memory of the whole node is available to one or two MPI tasks.


 qsub -l nodes=1:bwgrid:ppn=2+1:bwgrid:ppn=3 ./myscript

One node with Harpertown processors (quad-core) with two PE and one node with three PE will be allocated. The file ${PBS_NODEFILE} will therefore contain:


In the latest version of the cluster manager you can also request several nodes with fewer processes per node:

qsub -l nodes=5:bwgrid:ppn=2 ./myscript

will put two processes on each of the 5 nodes, i.e. 10 processes in total.
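Since ppn determines how many times each node appears in ${PBS_NODEFILE}, the resulting layout can be checked directly. A small sketch with a simulated nodefile (the host names n01..n05 are invented; inside a real job you would read $PBS_NODEFILE instead):

```shell
# Simulated ${PBS_NODEFILE} for nodes=5:bwgrid:ppn=2: each of the five
# (invented) hosts appears twice, once per requested PE.
nodefile=$(mktemp)
printf '%s\n' n01 n01 n02 n02 n03 n03 n04 n04 n05 n05 > "$nodefile"

echo "total PE:       $(wc -l < "$nodefile")"
echo "distinct nodes: $(sort -u "$nodefile" | wc -l)"
```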

Resource request feature nodes, option pmem: for specific applications it might be useful that the batch system assigns only one PE per node. The software used on this cluster (Torque/Moab) needs a small trick to manage this.

For the request option ppn=1 the same assumptions are made as if there were no request option at all (as in the first example), so

qsub -l nodes=4:bwgrid:ppn=1 ./myscript

will put all four tasks on the first node. You can either write the request in the long form

qsub -l nodes=1:bwgrid:ppn=1+1:bwgrid:ppn=1+1:bwgrid:ppn=1+1:bwgrid:ppn=1 ./myscript

or you use additional resources to create a situation where only one PE per node can be assigned. The easiest way is to request memory per process (pmem), more than half of the memory available on a node.


 qsub -l nodes=2:bwgrid,pmem=15gb ./myscript

Two nodes will be reserved. Each provides 16 GByte of RAM, therefore only one PE can be placed on each node. The file ${PBS_NODEFILE} contains:


Default values for resource requirements

If you don't set the resources for your job request, you will get the following default resource limits for your job.

feature    value      notes
walltime   00:10:00
nodes      1
ppn        8

Please select your resource requests carefully. The higher your specified resource limits, the lower the job priority. See also QueuePolicies.

To have the same environment settings (exported environment) of your current session in your batch job, the qsub command needs the option argument -V.
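The effect of -V can be pictured with ordinary shells: an exported variable reaches child processes, just as -V carries the submitting session's exported environment into the job. The variable name MY_SETTING and the script name below are invented for this sketch:

```shell
# An exported variable is visible in a child shell -- this is what -V
# reproduces for the batch job (MY_SETTING is an invented example name).
export MY_SETTING=fast
child=$(sh -c 'echo "$MY_SETTING"')
echo "child sees: $child"

# The submission itself would then look like:
#   qsub -V -l nodes=1:bwgrid ./myscript
```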

Examples for PBS options in job scripts

You can submit batch jobs using qsub. A very simple qsub script for an MPI job with PBS (Torque) directives (#PBS ...) for the options of qsub looks like this:

#!/bin/bash
# Simple PBS batch script that reserves two CPUs and runs an
# MPI program on one node
# The default walltime is 10 min!
#PBS -l nodes=2:bwgrid
cd $HOME/testdir
mpirun -np 2 -hostfile $PBS_NODEFILE ./mpitest

It is VERY important that you specify a shell in the first line of your batch script.

It is VERY important, in case you use the openmpi module, to omit the -hostfile option. Otherwise an error like the following will occur:

[n110402:02618] pls:tm: failed to poll for a spawned proc, return status = 17002
[n110402:02618] [0,0,0] ORTE_ERROR_LOG: In errno in file ../../../../../orte/mca/rmgr/urm/rmgr_urm.c at line 462
[n110402:02618] mpirun: spawn failed with errno=-11

If you want to use two MPI processes on each node this can be done like this:

#!/bin/bash
# Simple PBS batch script that reserves two nodes and runs an
# MPI program on four processors (two on each node)
# The default walltime is 10 min!
#PBS -l nodes=2:bwgrid:ppn=2
cd $HOME/testdir
mpirun -np 4 -hostfile $PBS_NODEFILE ./mpitest

Here is an example script that automatically adjusts the number of MPI tasks:

#!/bin/bash
#PBS -l nodes=4:bwgrid:ppn=2,walltime=1:00:00

module load mpi/openmpi/1.4-gnu-4.1
cd $HOME/testdir

# find out the number of entries in the nodefile:
nodes=$(cat $PBS_NODEFILE | wc -l)

mpirun -np $nodes ./mpitest

If you need 2h wall time and one node you can use the following script:

#!/bin/bash
# Simple PBS batch script that runs a scalar job using 2h
#PBS -l nodes=1:bwgrid,walltime=2:00:00
cd $HOME/jobdir

or a more advanced script making use of workspaces:

#!/bin/bash
#PBS -l nodes=3:bwgrid:ppn=8,walltime=04:00:00
#PBS -m abe                     # if you want to receive email
#PBS -M                         # you also have to put your address here
#PBS -N Jobname_if_you_like
#PBS -o $HOME/logs/$PBS_JOBID.out
#PBS -e $HOME/logs/$PBS_JOBID.err

module load mpi/openmpi

cd $(ws_find Workspacename)

# find out the number of entries in the nodefile:
nodes=$(cat $PBS_NODEFILE | wc -l)

mpirun -np $nodes $HOME/bin/mympiprog 

For more information about the available options and variables, see the man page of qsub.

Examples for starting batch jobs:

  • Starting a script with all options specified inside the script file
    qsub <script>
  • Starting a script using 3 nodes and a real time of 2 hours:
    qsub -l nodes=3:bwgrid,walltime=2:00:00 <script>
  • Starting a script using 5 cluster nodes using 4 processors on each node with PBS Feature bwgrid:
    qsub -l nodes=5:bwgrid:ppn=4,walltime=2:00:00 <script>
  • Starting a script using 1 cluster node with PBS Feature bwvis and 5 processors with PBS Feature bwgrid and real job time of 1.5 hours:
qsub -l nodes=1:bwvis+5:bwgrid,walltime=1:30:00 <script>
  • Starting an interactive batch job using 5 processors with a job real time of 300 seconds:
    qsub -I -l nodes=5:bwgrid,walltime=300

    For interactive batch jobs, you don't need a script file. If the requested resources are available, you will get an interactive shell on one of the allocated compute nodes. Which nodes are allocated can be shown with the command cat $PBS_NODEFILE in the batch job shell, or with the PBS status command qstat -n on the master node.

    You can log in from the frontend or any assigned node to all other assigned nodes by
    ssh <nodename>

    If you exit the automatically established interactive shell on the node, it will be assumed that you have finished your job, and all other connections to the nodes will be terminated.

  • If you want to use the visualization nodes, submit your job to the bw-vis queue, e.g. as follows:
    qsub -l nodes=1:bwvis,walltime=3600 -I -q bw-vis

    There are also prepared scripts to start a vnc session or for using virtual GL, see the section Graphic environment.

  • Possibilities to request 4 cluster nodes (with feature 'bwgrid', i.e. 32 processors):
    qsub -l nodes=4:bwgrid:ppn=8 <script>
    qsub -l nodes=4:bwgrid:pmem=15gb <script>

    The difference between the two kinds of job submission lies in the content of PBS_NODEFILE.

  • Starting a script which should run under another Account ID: first you have to know which Account IDs (group names) are valid for your login.

    Choose a valid group name for your job:

    qsub -l nodes=5:bwgrid,walltime=300 -W group_list=<groupname>
  • You can also tell the batch system to execute the job after another job or a list of jobs have finished:
    qsub -W depend=afterany:jobid:anotherjobid:furtherjobid... -l ... jobscript
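Since qsub prints the new request ID on stdout (see above), the ID of one submission can be captured in a shell variable and fed into the dependency list of the next. A sketch with an invented job ID and invented script names, as qsub itself is only available on the cluster:

```shell
# In practice: first=$(qsub step1.pbs) -- here the ID is hard-coded,
# because qsub only exists on the cluster (12345.master is invented).
first="12345.master"
echo "qsub -W depend=afterany:$first step2.pbs"
```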

Get the batch job status

  • Available commands
      qstat [options]
      showq [options] (showq -h for details)

      For detailed information, see the man pages:

      man qstat
      man pbsnodes
  • Examples
      list all batch jobs:
      qstat -a

      lists all batch queues with resource limit settings:

      qstat -q

      lists node information of a batch job ID:

      qstat -n <JOB_ID>

      lists detailed information of a batch job ID:

      qstat -f <JOB_ID>

      lists information of the PBS node status:

      pbsnodes -a
      pbsnodes -l

      gives information on the PBS node and job status:


DISPLAY: X11 applications in interactive batch jobs

For X11 applications you need SSH X11 forwarding to be enabled. This is usually activated by default, but to be sure you can set 'ForwardX11 yes' in your $HOME/.ssh/config. To have the same DISPLAY of your current session in your batch job, the qsub command needs the option argument -X.

frontend> qsub -l nodes=2:bwgrid,walltime=300 -X -I