In this newsletter:
- RCAC meeting
- KAUST supercomputer Shaheen II joins the fight against COVID-19
- Tip of the Week: Do a scalability test before launching a series of calculations
- Follow us on Twitter
- Previous Announcements
- Previous Tips
RCAC meeting
The project submission deadline for the next RCAC meeting is 31 August 2020. Please note that RCAC meetings are held once per month. Projects received on or before the submission deadline will be included in the agenda of the subsequent RCAC meeting. The detailed procedures, updated templates and forms are available here: https://www.hpc.kaust.edu.sa/account-applications
KAUST supercomputer Shaheen II joins the fight against COVID-19
King Abdullah University of Science and Technology (KAUST) invites researchers from across the Kingdom to submit proposals for COVID-19-related research. Recognizing the urgency to address global challenges related to the COVID-19 pandemic through scientific discovery and innovation, the University’s Supercomputing Core Laboratory (KSL) is making computing resources—including the flagship Shaheen II supercomputer and its expert scientists—available to support research projects.
Topics may include but are not limited to: understanding the virus on a molecular level; understanding its fluid-dynamical transport; evaluating the repurposing of existing drugs; forecasting how the disease spreads; and finding ways to stop or slow down the pandemic.
Accepted proposals can access the following resources: (1) Shaheen II, a Cray XC-40 supercomputer based on Intel Haswell processors with nearly 200,000 compute cores tightly connected with Aries high-speed interconnect; (2) Ibex cluster, a high throughput computer system with about 500 computing nodes using Intel Skylake and Cascade Lake CPUs and Nvidia V100 GPUs; and (3) KSL staff scientists, who will provide support, training and consultancy to maximize impact. Through 30 June 2020, up to 15% of these resources will be reserved for fast-tracking competitive COVID-19 proposals through the KAUST Research Computing Allocation Committee. Thereafter, such proposals remain welcome and will be considered in the standard process.
Please contact firstname.lastname@example.org with any inquiries.
Tip of the Week: Do a scalability test before launching a series of calculations
For HPC software, scalability (also called parallel efficiency) describes how much speedup you gain by assigning more computing resources to a job.
How well a given application scales depends on the nature of your calculation (system size, algorithms used, etc.). Blindly increasing the number of nodes does not necessarily deliver the speedup you expect: as the node count grows, the overhead of inter-node communication becomes increasingly dominant, and the computing power of each node is used less and less efficiently.
Take a Quantum ESPRESSO job (5 atoms, 15 k-points) as an example: it completes in 286 seconds on 1 node (32 cores). Linear scaling would suggest a time-to-solution of ~143 seconds on 2 nodes (64 cores), but the run actually takes 246 seconds.
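Putting numbers on this: speedup is the baseline time divided by the parallel time, and parallel efficiency is that speedup divided by the resource multiplier. A minimal sketch (the helper names are illustrative; the timings are the Quantum ESPRESSO figures above):

```python
# Speedup and parallel efficiency for the Quantum ESPRESSO example above.

def speedup(t_base, t_parallel):
    """Speedup relative to the baseline run."""
    return t_base / t_parallel

def efficiency(t_base, t_parallel, scale):
    """Parallel efficiency: speedup divided by the resource multiplier."""
    return speedup(t_base, t_parallel) / scale

# 1 node (32 cores): 286 s; 2 nodes (64 cores): 246 s
s = speedup(286, 246)        # ~1.16x instead of the ideal 2x
e = efficiency(286, 246, 2)  # ~0.58, i.e. only 58% parallel efficiency
print(f"speedup: {s:.2f}x, efficiency: {e:.0%}")
```

So doubling the resources bought only a 1.16x speedup here; the second node's cycles are mostly spent on communication rather than computation.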
Therefore, before launching a series of calculations of similar size, please run a scalability test to find the optimal number of nodes, so that the computing resources are used more efficiently.
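One way to act on such a test: run the same job on a few node counts, record the wall times, and keep the largest node count whose efficiency still clears a threshold you are comfortable with. A sketch under those assumptions (the 70% cutoff and the timings are illustrative, not a KSL policy):

```python
# Sketch: pick the largest node count from a scalability test that still
# meets an efficiency threshold (70% here is an arbitrary example cutoff).

def optimal_nodes(timings, min_efficiency=0.70):
    """timings: dict mapping node count -> measured wall time in seconds."""
    base_nodes = min(timings)          # smallest run is the baseline
    base_time = timings[base_nodes]
    best = base_nodes
    for nodes in sorted(timings):
        scale = nodes / base_nodes
        eff = (base_time / timings[nodes]) / scale
        if eff >= min_efficiency:
            best = nodes               # still efficient at this size
    return best

# Made-up timings for a poorly scaling case, seeded with the numbers above:
measured = {1: 286, 2: 246, 4: 230}
print(optimal_nodes(measured))  # -> 1: extra nodes never reach 70% efficiency
```

For the example timings, staying on a single node is the efficient choice; a well-scaling application would instead justify the larger counts.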
Follow us on Twitter
Follow all the latest HPC news from the Supercomputing Core Laboratory and KAUST on Twitter: @KAUST_HPC.