User accounts on the HEC have three separate filestore areas (home, storage and scratch) which are available across the cluster. Each job also has access to a unique temp directory. These areas differ in size, backup and file retention policy.


The home area is subject to a small (3G) quota. It is backed up nightly, and should be used for essential files (e.g. job submission scripts or source code).

The storage area has a larger quota (30G) but is not backed up. It should typically be used to store job outputs, or other research data that can be regenerated if lost.

The scratch area is a large (10T) shared area. It is not subject to quota, and is intended as a place to hold very large job output files and other files for a limited period: files are automatically deleted from scratch two weeks after they were last written to.

Finally, the temp area is a unique directory available to each job. The directory exists on the local disk of the compute node running the job, and is intended to provide fast file I/O for jobs which create temporary files during their run. The contents of temp are automatically deleted at the end of each job, so any essential files created here should be copied into one of the home, storage or scratch areas before the job terminates.

A summary of the different filestore areas is given below:

File Area    Quota       Backup Policy    File Retention Policy
home         3G          Nightly          Permanent
storage      30G         None             Permanent
scratch      Unlimited   None             Files automatically deleted after 2 weeks
temp         Unlimited   None             Files automatically deleted at the end of the job


Home, storage, scratch and temp can be quickly referenced via the environment variables HOME, global_storage, global_scratch and TMPDIR respectively. For example, to quickly change directories to your scratch area, type: cd $global_scratch.


Extracting tar and zip files in scratch

Files in the scratch area are deleted if their last-modified timestamps are older than two weeks. Both tar and unzip preserve the original timestamps of extracted files, which can result in older unpacked files being deleted overnight. To avoid this, use the -DD option with unzip or the -m option with tar; both cause extracted files to be given the current time as their last-modified timestamp.
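A quick demonstration of the tar case, run in a throwaway directory rather than scratch itself (the file names here are illustrative): a file with an old timestamp is archived, then extracted with -m so the copy's last-modified time is reset to "now", restarting the two-week deletion clock.

```shell
# Demo: create a file with an old timestamp, archive it, then extract
# with `tar -m` so the extracted copy gets the current time.
workdir=$(mktemp -d)
cd "$workdir"
mkdir src
touch -t 202001010000 src/data.txt      # pretend this file is years old
tar -cf archive.tar src
rm -r src
tar -xmf archive.tar                    # -m: reset mtimes to "now"
# the extracted file is now younger than one day:
find src/data.txt -mtime -1 | grep -q data.txt && echo "timestamp refreshed"
```

The equivalent for zip archives is `unzip -DD archive.zip`, which skips restoring any stored timestamps.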



More information

Viewing filestore quota and usage

You may view your home and storage quota details and scratch usage at any time using the command panquota. Typical output from panquota looks like this:

Filesystem          Quota     Used     Avail     Use%     # files 
home                 3.30G     2.78G    0.52G    84.24      28547 
storage             33.00G    19.02G   13.98G    57.64      96935 
scratch                 -      0.00G       -        -           1


For your home and storage areas, the output shows the following info: your quota limit; how much you've used; how much is available; a percentage usage; and the total number of files you have in that area. Your scratch area isn't subject to quota, so the output will show only how much of the shared scratch area you're using and a count of the number of files you have in scratch.




Filestore best practice

The main HEC filestores are shared across the login node and all compute nodes. With several thousand user jobs accessing the filestore simultaneously, we offer some best practice guidelines to avoid overloading the filesystem. When running many jobs at once, particular care should be taken to follow the guidelines below:

Avoid making multiple jobs write to the same file.

Multiple processes or jobs writing to the same file at the same time in an uncoordinated manner will result in missing or corrupted data. It will also slow the filesystem down as the write requests to a single file are queued up to run.
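A common way to avoid this is to give each job its own output file, named by the job ID the scheduler provides. The sketch below simulates three such "jobs" in a throwaway directory; JOB_ID stands in for whatever job-ID environment variable your scheduler actually sets, and the paths are illustrative.

```shell
# Demo: three simulated "jobs" each write to their own file instead of
# all appending to one shared file. JOB_ID is a stand-in for the
# scheduler-provided job ID variable.
outdir=$(mktemp -d)
for JOB_ID in 1 2 3; do
    echo "result from job $JOB_ID" > "$outdir/run_${JOB_ID}.out"
done
ls "$outdir"     # run_1.out  run_2.out  run_3.out
```

In a real submission script the loop disappears: each job simply writes to its own `run_${JOB_ID}.out`, and no two jobs ever contend for the same file.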

Avoid creating large directories.

Directories become increasingly slower to manage (e.g. to view, add or delete files) the larger they become. Avoid placing more than a few thousand files in the same directory. Large directories should be deleted after use rather than reused: the directory size never shrinks, even when files within it are deleted, making it slower to use next time.
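One simple way to keep directories small is to bucket files into subdirectories, for example by a prefix of the file name. This is a sketch of that idea in a throwaway directory; the file names and two-character bucketing scheme are illustrative, not a prescribed layout.

```shell
# Demo: spread files across subdirectories keyed on the first two
# characters of the name, instead of one huge flat directory.
base=$(mktemp -d)
for name in alpha.dat amber.dat beta.dat bravo.dat; do
    sub="$base/$(printf '%s' "$name" | cut -c1-2)"   # bucket: al, am, be, br
    mkdir -p "$sub"
    touch "$sub/$name"
done
find "$base" -type f | sort
```

With thousands of files, each bucket stays small enough to list and delete quickly.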

Avoid excessive file copying within jobs.

Copying data from one location on the filesystem to another is an expensive operation; consider ways to minimise it, especially when running hundreds of jobs at the same time. For applications which need a local copy of an input file, consider using the ln -s command to create a soft link (shortcut) rather than a full copy of the file. If output files need to be placed elsewhere on the filesystem, consider using the mv (file move) command.
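The soft-link approach looks like this in practice, again demonstrated in a throwaway directory with illustrative file names: the job's working directory gets a local name for the input file, but no data is duplicated.

```shell
# Demo: a soft link gives a job directory a local name for an input
# file without copying the data.
workdir=$(mktemp -d)
echo "input data" > "$workdir/big_input.dat"
mkdir "$workdir/jobdir"
ln -s "$workdir/big_input.dat" "$workdir/jobdir/input.dat"   # link, not copy
cat "$workdir/jobdir/input.dat"     # reads through the link: "input data"
```

The application reads `input.dat` exactly as if it were a real file, while the filesystem stores only one copy of the data.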

Avoid directory recursion/file searching within jobs.

Having multiple jobs search for specific files through a large directory hierarchy can place a high load on the filesystem. Avoid use of recursive commands such as find within jobs.

Use the temp directory for temporary files.

The temp directory (referenced by the $TMPDIR variable) is located on the compute node running your job, and offers much higher file I/O bandwidth and lower latency than network-based storage. Wherever possible, create any temporary/transient files in temp.
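A typical job follows the pattern: work in the temp directory, then copy anything worth keeping out before the job ends (since temp is wiped at job end, as described above). The sketch below simulates that pattern locally, with mktemp directories standing in for $TMPDIR and the storage area, and seq standing in for a real program's output.

```shell
# Demo of the temp-directory workflow. $tmp stands in for the per-job
# $TMPDIR and $keep for the storage area ($global_storage).
tmp=$(mktemp -d)
keep=$(mktemp -d)
cd "$tmp"
seq 1 5 > intermediate.dat               # pretend heavy temporary output
cp intermediate.dat "$keep/result.dat"   # save the result: temp is wiped
cat "$keep/result.dat"
```

In a real job script the same shape applies: `cd "$TMPDIR"`, run the program there, then `cp` or `mv` the results into home, storage or scratch before the job terminates.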

Clean up your filestore regularly.

With the ability to run hundreds of jobs simultaneously, user filestore can quickly become cluttered with files (e.g. job stdout and stderr files). Small files in particular have a relatively high space overhead: even a small file will consume 32k of quota due to metadata and RAID overheads. An area filled with large numbers of small files can therefore exhaust quota more quickly than expected.

Offload valuable data to a permanent location.

The HEC filestore is intended to enable high performance access to files for job runs, and should not be used for long term archiving of research data. Important research results should be offloaded from the HEC as soon as convenient to a more suitable location. Please contact your local departmental IT liaison to discuss options for long-term research data archiving.

Monitor your scratch usage. 

As scratch is a shared area, care should be taken not to overuse it, as doing so leaves too little space in scratch for other users. Monitor scratch usage regularly while running jobs which send their output to scratch, and offload any research results as soon as possible to free up space for others.

Use checkpointing options with care.

Some applications offer checkpointing: the ability to save their running state to file so that a calculation can be restarted partway through should some error occur. Before enabling checkpointing, consider carefully how much checkpointing you need; having several hundred jobs checkpointing regularly will overload the system.



