Large memory workloads on Shaheen and Neser

Some of your workloads may require more memory than the 128GB available on a typical Shaheen compute node. For such jobs, you have two options:

1. Four Shaheen nodes are equipped with 256GB of memory. If that is sufficient, all you need to do is add this line to your SLURM job script:

#SBATCH --mem=240G
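For context, a minimal job script using this directive might look like the following sketch. The job name, walltime, and executable are placeholders to adapt to your own workload; the memory request is kept slightly below 256GB to leave headroom for the operating system:

```shell
#!/bin/bash
#SBATCH --job-name=bigmem-job      # placeholder job name
#SBATCH --nodes=1                  # a single large-memory node
#SBATCH --time=02:00:00            # placeholder walltime (hh:mm:ss)
#SBATCH --mem=240G                 # steers the job onto a 256GB node

srun ./my_application              # placeholder: replace with your executable
```

Because only four such nodes exist, jobs with this request may queue longer than ordinary jobs.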

2. Shaheen is augmented with a pre- and post-processing cluster called Neser, which is equipped with several nodes with 192GB of memory and two nodes with 768GB. The main advantage of using Neser is that it natively mounts the parallel file systems /scratch and /project, giving fast access to your Shaheen files. More details are available on the Neser webpage.
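As a rough sketch only, a job targeting one of the two 768GB Neser nodes might look like the script below. The partition name and executable are assumptions for illustration, not Neser's actual configuration; check `sinfo` on Neser for the real partition names and limits:

```shell
#!/bin/bash
#SBATCH --partition=neser          # assumed partition name -- verify with sinfo
#SBATCH --nodes=1
#SBATCH --mem=730G                 # target a 768GB node, leaving OS headroom
#SBATCH --time=04:00:00            # placeholder walltime

srun ./my_postprocessing_tool      # placeholder: replace with your executable
```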

Please feel free to contact us if you have large-memory workloads that cannot be satisfied with the options above.