While compute nodes are all broadly similar, there are some circumstances where an application may perform better on one type of node than another. Jobs can be restricted to a specific type of node with the node_type resource request, using the following syntax in a job script:


#$ -l node_type=type


Where type is one of the following:

Type         # cores   Processor                                Memory        Interconnect
10Geth64G    16        Intel Ivy Bridge, Haswell or Broadwell   64G           10Gb low latency
10Geth128G   16        Intel Ivy Bridge                         128G          10Gb low latency
10Geth*      16        Intel Ivy Bridge, Haswell or Broadwell   64G or 128G   10Gb low latency
40core192G   40        Skylake or Cascade Lake                  192G          10Gb high latency
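
For example, a minimal serial job script restricted to the 128G Ivy Bridge nodes might look like the sketch below; the runtime limit and program name are placeholders, and the -cwd, -V and h_rt directives are standard Grid Engine options assumed here:


#!/bin/bash
# Run in the current working directory with the current environment exported
#$ -cwd -V
# Runtime limit (placeholder value)
#$ -l h_rt=1:00:00
# Restrict the job to the 128G Ivy Bridge nodes
#$ -l node_type=10Geth128G
# Run the program (placeholder name)
./my_serial_program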


The scheduler handles node_type requests differently depending on the job type:


For serial jobs:

  • If no node_type is specified, the job is free to launch on any type of compute node.


For parallel jobs using the nodes= syntax (i.e. requesting whole compute nodes; see the example script after this list):

  • If no node_type is specified, the job will be run as though node_type=10Geth* had been specified.
  • If node_type=10Geth* is specified, the job will be eligible to run on compute nodes of type 10Geth64G or 10Geth128G, and the memory resource request will be set to 3.95G per core (for a total of 63.2G per node).
  • If node_type=40core192G is specified, the job will be eligible to run on compute nodes of type 40core192G with a memory resource request of 4.75G per core (for a total of 190G per node).
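
A whole-node parallel job targeting the 40-core nodes might then look like the sketch below; the runtime, node count and the mpirun launch line are placeholder assumptions rather than site-specific recommendations:


#!/bin/bash
#$ -cwd -V
#$ -l h_rt=1:00:00
# Request two whole compute nodes (placeholder count)
#$ -l nodes=2
# Restrict the job to the Skylake/Cascade Lake nodes
#$ -l node_type=40core192G
# Launch the MPI program (placeholder command)
mpirun ./my_mpi_program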


For parallel jobs using the np= syntax (i.e. requesting a number of cores, all on the same compute node; see the sketch after this list):

  • If no node_type is specified, the job will be eligible to run on any node with at least as many cores as requested.
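
For instance, a shared-memory job requesting eight cores on a single node could be written as the sketch below (the core count, runtime and program name are placeholders); with no node_type request, any node with at least eight cores is eligible:


#!/bin/bash
#$ -cwd -V
#$ -l h_rt=1:00:00
# Request 8 cores, all on the same compute node (placeholder count)
#$ -l np=8
# No node_type request, so any node with at least 8 cores is eligible
./my_threaded_program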


