Multi-node (distributed memory) parallel jobs are typically written using MPI, a C/C++ and Fortran Application Programming Interface that implements job parallelism by passing (network) messages between coordinated processes.
The HEC currently supports the OpenMPI implementation of MPI, available through one of several modules described below. The implementation offers compiler wrappers to assist with compiling MPI codes. The wrappers call an underlying compiler, and automatically take care of locating the correct version of MPI libraries and include files. The compiler wrappers are:
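The wrapper names below are the standard ones shipped with OpenMPI; the exact set installed on the HEC may differ slightly:

```shell
mpicc    # C compiler wrapper
mpicxx   # C++ compiler wrapper (also commonly installed as mpiCC or mpic++)
mpif77   # Fortran 77 compiler wrapper
mpif90   # Fortran 90 compiler wrapper
```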
These wrappers can be called once the relevant module has been added to your environment, and are used in exactly the same way as a standard compiler.
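For example, compiling a C source file (the file and program names here are hypothetical) looks just like a normal compiler invocation:

```shell
# Compile an MPI C program exactly as you would with gcc or icc;
# the wrapper supplies the MPI include and library paths automatically.
mpicc -O2 -o hello-mpi hello-mpi.c
```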
More complex MPI applications are typically built with a Makefile and/or a configure script. These applications can be built by specifying the compiler wrappers in place of the general compilers. It isn't necessary to specify the location or name of the MPI libraries or header files - the wrappers will handle these details automatically - so these fields can typically be left blank in a Makefile.
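As a sketch, a Makefile for an MPI application might simply name the wrapper in place of the usual compiler (file and target names here are hypothetical):

```makefile
# Use the MPI wrapper in place of the usual compiler.
# No MPI include or library paths are needed; the wrapper supplies them.
CC      = mpicc
CFLAGS  = -O2
MPI_LIB =          # can be left blank; mpicc locates the MPI libraries

myapp: myapp.c
	$(CC) $(CFLAGS) -o myapp myapp.c $(MPI_LIB)
```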
Each version of OpenMPI has been built in one of three flavours, one for each of the three supported compilers on the HEC: gcc, intel and pgi. For example, the version of OpenMPI 1.4.3 built for the Intel compilers has the module name openmpi/1.4.3-intel. When submitting MPI jobs, care should be taken to load the same module version and flavour as was used to build the application.
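For example, an application built with the Intel flavour should load the same module again in the job script (module name as given above):

```shell
# Build phase: load the Intel flavour of OpenMPI, then compile.
module add openmpi/1.4.3-intel
mpicc -O2 -o myapp myapp.c

# Job script: load the same version and flavour before calling mpirun.
module add openmpi/1.4.3-intel
```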
Here is a sample job to run the IMB benchmark across 2 nodes, and 16 cores per node:
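A job script along these lines, using the queue, node-count and node-type directives described below (the exact directive flags and the module name are assumptions for illustration), might look like:

```shell
#!/bin/bash
#$ -q parallel              # run on the parallel queue
#$ -l nodes=2               # request 2 whole compute nodes
#$ -l node_type=10Geth*     # any 10 Gbit ethernet node type

# Load the same MPI module flavour that was used to build the benchmark
# (module name assumed here for illustration):
module add openmpi/1.4.3-gcc

mpirun IMB-MPI1
```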
In the above job script example, many of the SGE job directives are explained under the basic batch job submission page. Below is a description of the additional entries required to launch MPI jobs:
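Queue selection (standard SGE syntax, assumed here):

```shell
#$ -q parallel
```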
This signals to the job scheduler that the job should be run on the parallel queue, which offers full support for parallel jobs.
Node type selection
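Assuming the scheduler's resource-request syntax, the directive takes the form:

```shell
#$ -l node_type=10Geth*
```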
This specifies which type of compute node the parallel job will run on. Different nodes have different architectures and numbers of cores; to ensure optimum placement all nodes used in the same parallel job must be of the same type. The different types of node are described in full in Requesting specific node types for jobs on the HEC. The pattern 10Geth* allows jobs to run on both 10Geth64G and 10Geth128G which are compute nodes with 10 Gbit ethernet network connectivity and either 64G or 128G memory respectively. It is recommended to use 10Geth* for most parallel jobs, and specify 10Geth128G only when more memory is required.
Specifying job size (number of cores used)
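Assuming the same resource-request syntax, the node-count directive takes the form:

```shell
#$ -l nodes=2
```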
This specifies that the job requires two whole nodes. The nodes will be used exclusively for the parallel job - no other serial or parallel job may co-exist on the selected compute nodes.
Currently, all compute nodes offer 16 cores per node, but this may change as we refresh the cluster’s compute nodes. The job will run using a number of cores equal to the number of nodes selected multiplied by that node type's core count. The example above uses nodes=2 along with node_type=10Geth*, a 16-core node type, meaning that the job will run on 2 nodes with 16 cores per node, i.e. a 32-core job. The same would be true had the node type selected been node_type=10Geth64G (also a 16-core node type).
Advanced use of nodes= syntax
On rare occasions, it may be required to run fewer than the default number of processes per node. The nodes request can be supplemented by a processes-per-node value. For example, to run the above IMB benchmark using 2 nodes with 1 process per node (effectively measuring point-to-point performance between two nodes), you can use the following instead:
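Assuming the same resource syntax with a processes-per-node (ppn) value appended, the request might read:

```shell
#$ -l nodes=2:ppn=1
```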
As MPI jobs exclusively book whole compute nodes, specifying fewer processes per node than the number of available cores will result in the remaining cores going unused - they cannot be used by other users or other jobs. Only specify a ppn value when absolutely necessary.
Calling the MPI application:
The final line in the job script (mpirun IMB-MPI1) is the call to the parallel application (in this case IMB-MPI1) wrapped in a call to the mpirun application, which handles the parallel startup of the user application. Note that mpirun does not need to be told the number of processes to run; OpenMPI automatically picks this value up from the job scheduler.
The name of the MPI application should typically be the last argument to mpirun. For MPI applications that take their own additional arguments, place them after the application name, as any arguments appearing before it are interpreted as options to the mpirun command itself.
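For example (application name and arguments hypothetical):

```shell
# Options before the application name belong to mpirun;
# everything after it is passed to the application.
mpirun ./myapp --input data.txt --steps 1000
```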
Testing suggests OpenMPI supports basic input redirection on the assumption that standard input is read by rank zero of the application.
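For example, a hypothetical application that reads its input on standard input at rank zero could be launched as:

```shell
# Standard input is forwarded to rank 0 of the application.
mpirun ./myapp < input.dat
```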
A note on memory resource requests for MPI jobs
As parallel jobs reserve whole nodes, the memory reservation is automatically set to the selected compute node's full memory. Larger-memory compute nodes are a scarce resource and in high demand, so parallel jobs requiring them may wait a very long time to be scheduled. It may be preferable to split your job over a larger number of standard-memory nodes.
As not all parallel jobs scale efficiently to the size of even a single node, an alternative syntax can be used to request parallel jobs with a small core count. The nodes= syntax can be replaced by a request for a specific number of cores using the np= syntax. For example:
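A small-core-count request (directive syntax assumed; core count and memory values match the example discussed below) might be:

```shell
#$ -l np=4          # request 4 cores on a single node
#$ -l h_vmem=1G     # 1G per core, i.e. 4G in total for this job
```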
Note that smaller MPI jobs must specify their memory requirements - as they don't reserve a whole node, they cannot assume all the node's memory will be available. The memory resource request uses the same syntax as for serial jobs (e.g. #$ -l h_vmem=1G). This memory request is per core, so the job in the above example is requesting 4G of memory (4 cores, 1G per core).
Note also that the np= syntax ensures that all job slots for the job are on the same node. As a result, the value should never be greater than the maximum number of cores on the largest compute node – this is currently 16.
Tips on parallel jobs