This solution is intended for jobs that will use more than 500 megabytes (500M) of memory. For jobs that will use less than 500M, you should follow the standard guidance for submitting jobs on the HEC.
The job scheduler cannot predict how much memory your jobs will consume, so it is important that you declare the amount required; this allows the scheduler to send each job to a compute node with enough memory to support it. Jobs using over 500M are classed as large memory jobs and must specify their memory requirements. Declare memory usage as accurately as possible: if you request far more memory than a job needs, the scheduler reserves that memory for you whether you use it or not, making it unavailable to other users. Bear in mind that, to guard against jobs with run-away memory leaks which might adversely affect other users, the scheduler will terminate any job which exceeds its requested memory.
As a rule of thumb: request slightly more memory than your job requires, but not too much.
For jobs requiring more than 500 megabytes of memory, estimate the amount of memory your job requires as closely as possible and submit the job with:
qsub -l h_vmem=X myjob.bsub
Where X is the amount of memory to be reserved along with the unit, either M (megabytes) or G (gigabytes); e.g. 700M means 700 megabytes, 1.5G means 1.5 gigabytes.
Not specifying a unit may cause your job to crash.
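As an illustration, suppose a job is estimated to use around 600 megabytes (the script name myjob.bsub is a placeholder). Following the rule of thumb above, you would request slightly more than the estimate:

```shell
# Request 700M - slightly above the estimated 600M usage,
# leaving headroom without over-reserving memory.
qsub -l h_vmem=700M myjob.bsub
```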
Alternatively, you can add a similar directive directly into the job script by adding the line:
#$ -l h_vmem=X
So a job which requires 1.5 gigabytes would have the following in the job script:
#$ -l h_vmem=1.5G
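Putting this together, a minimal job script might look like the following sketch (the program name and input file are placeholders; only the h_vmem directive comes from this guide):

```shell
#!/bin/bash
# Reserve 1.5 gigabytes of memory for this job. The scheduler
# will terminate the job if it exceeds this amount.
#$ -l h_vmem=1.5G

# Placeholder command - replace with your actual program.
./my_analysis input.dat
```

With the directive embedded in the script, the job can be submitted simply as `qsub myjob.bsub`, with no -l option on the command line.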