In this newsletter:
- System Maintenance
- Application License Server Maintenance by IT
- Tip of the Week: Large memory workloads on Shaheen and Neser
- RCAC meeting
- Follow us on Twitter
- Previous Announcements
- Previous Tips
System Maintenance
The next maintenance session will take place from 08:00 on Monday 11th May until 17:00 on Wednesday 13th May. There will be no access to the system during this period.
Application License Server Maintenance by IT on Thursday, 2nd April 2020, 8:00 PM to 4:00 AM 3rd April
Due to scheduled maintenance of the Application License Server by IT on Thursday, 2nd April 2020, from 8:00 PM to 4:00 AM the next day, access to the following applications will be impacted on Shaheen and Neser:
Ansys, AtomistixToolKit (ATK), Eclipse, Intel Compilers, Material Studio, Mathematica, MATLAB, Tecplot and Totalview.
During this maintenance window, you may encounter license-related errors when compiling with the Intel compilers and when running the applications listed above.
RCAC meeting
The project submission deadline for the next RCAC meeting is 30th April 2020. Please note that the RCAC meetings are held once per month. Projects received on or before the submission deadline will be included in the agenda for the subsequent RCAC meeting. The detailed procedures, updated templates and forms are available here:
Tip of the Week: Large memory workloads on Shaheen and Neser
Some of your workloads may require more memory than the typical 128GB per compute node on Shaheen. For such jobs, you have two options:
1. Four Shaheen nodes are equipped with 256GB of memory. If that is sufficient, all you need to do is add the appropriate directive to your SLURM job script.
2. Shaheen was augmented with a pre- and post-processing cluster called Neser, which is equipped with several nodes with 192GB of memory and two nodes with 768GB. The main advantage of using Neser is that it natively mounts the parallel file systems /scratch and /project, giving fast access to your Shaheen files. More details are available on the Neser webpage here.
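As an illustration of option 1, a minimal SLURM job script requesting a large-memory node might look like the sketch below. The `--mem` directive is standard SLURM; the job name and application name are placeholders, and the exact directive Shaheen expects for its 256GB nodes (e.g. a partition or constraint) should be confirmed in the Shaheen documentation:

```shell
#!/bin/bash
# Sketch of a SLURM batch script requesting a node with 256GB of memory.
# Directive names are standard SLURM; values are assumptions for illustration.
#SBATCH --job-name=bigmem-job
#SBATCH --nodes=1
#SBATCH --time=01:00:00
#SBATCH --mem=256G        # ask the scheduler for a 256GB node

srun ./my_application     # my_application is a placeholder executable
```

Submit it with `sbatch`, and SLURM will place the job only on a node that can satisfy the memory request.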
Please feel free to contact us if you have large-memory workloads that cannot be satisfied with the above-mentioned options.
Follow us on Twitter
Follow all the latest HPC news from the Supercomputing Laboratory and KAUST on Twitter: @KAUST_HPC.