KAUST Supercomputing Laboratory Newsletter 18th March

In this newsletter:

  • RCAC meeting
  • Shaheen Maintenance: 23rd of March and end of April
  • Tip of the week: MPI 3.0 Neighborhood Collectives
  • Follow us on Twitter
  • Previous Announcements
  • Previous Tips

RCAC meeting

The project submission deadline for the next RCAC meeting is 31 March 2021. Please note that the RCAC meetings are held once per month. Projects received on or before the submission deadline will be included in the agenda for the subsequent RCAC meeting. The detailed procedures, updated templates, and forms are available here: https://www.hpc.kaust.edu.sa/account-applications

Shaheen Maintenance: 23rd of March and end of April

We would like to announce our next maintenance session on Shaheen on the 23rd of March 2021, between 8am and 5pm. We plan to apply the latest patches and security updates, reboot the system, and fix hardware issues. Access to files and the login nodes should remain possible during the outage.

We would also like to give you advance notice of a longer Shaheen outage towards the end of April. The datacentre team will be performing their annual PPM on the power supply equipment. At the same time, we will upgrade Shaheen's existing project and scratch filesystems. This is an essential step before bringing our newly acquired filesystem online and providing more project storage space. We estimate that the combined Shaheen outage will take around 4-6 days. We will communicate the details closer to the date. As always, please contact us at help@hpc.kaust.edu.sa should you have any concerns or questions.

Tip of the week: MPI 3.0 Neighborhood Collectives

Several scientific computing applications, especially those using explicit numerical schemes, require halo-exchange MPI communications. These exchanges of so-called "ghost cells" take place between neighboring MPI ranks within a given topology (typically Cartesian). MPI 3.0 introduces neighborhood collective operations such as MPI_Neighbor_allgather and MPI_Neighbor_alltoall.

Traditional implementations based on point-to-point communication routines are tricky and require special handling of non-contiguous data. Using the neighborhood collectives makes your code simpler and clearer, and potentially faster.
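
As an illustration, here is a minimal sketch (not taken from any particular application) of a 2D halo exchange with MPI_Neighbor_alltoall on a Cartesian communicator. The grid dimensions, periodic boundaries, and one-value-per-neighbor buffers are illustrative assumptions:

  /* Sketch: 2D halo exchange with MPI_Neighbor_alltoall.
   * Compile with: cc halo.c (using the Cray/MPI compiler wrappers). */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      int nprocs, rank;
      MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* Build a 2D periodic Cartesian topology over all ranks. */
      int dims[2]    = {0, 0};
      int periods[2] = {1, 1};
      MPI_Dims_create(nprocs, 2, dims);

      MPI_Comm cart_comm;
      MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart_comm);

      /* For a Cartesian communicator, the neighborhood buffers are
       * ordered by dimension, negative direction first: -x, +x, -y, +y.
       * Here each rank sends one ghost value to each of its 4 neighbors. */
      double sendbuf[4], recvbuf[4];
      for (int i = 0; i < 4; i++)
          sendbuf[i] = (double)rank;

      /* One call replaces the usual pattern of 4 sends and 4 receives. */
      MPI_Neighbor_alltoall(sendbuf, 1, MPI_DOUBLE,
                            recvbuf, 1, MPI_DOUBLE, cart_comm);

      printf("Rank %d received: %.0f %.0f %.0f %.0f\n",
             rank, recvbuf[0], recvbuf[1], recvbuf[2], recvbuf[3]);

      MPI_Comm_free(&cart_comm);
      MPI_Finalize();
      return 0;
  }

In a real stencil code you would typically send entire boundary faces rather than a single value; MPI_Neighbor_alltoallv or MPI_Neighbor_alltoallw can handle neighbors with different message sizes or datatypes.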

Remember that most MPI collective operations also have a non-blocking variant, such as MPI_Ineighbor_alltoall, which may allow you to overlap MPI communication with other computation in your code.
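
For example, a hypothetical overlap of the halo exchange above with interior work might look like the following, where compute_interior and update_boundaries are placeholder routines for work that does not and does need the ghost cells, respectively:

  /* Non-blocking variant: start the exchange, compute, then wait. */
  MPI_Request req;
  MPI_Ineighbor_alltoall(sendbuf, 1, MPI_DOUBLE,
                         recvbuf, 1, MPI_DOUBLE, cart_comm, &req);

  compute_interior();          /* work that needs no ghost cells */

  MPI_Wait(&req, MPI_STATUS_IGNORE);
  update_boundaries(recvbuf);  /* work that needs the received ghost cells */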

Follow us on Twitter

Follow all the latest news on HPC within the Supercomputing Laboratory and at KAUST on Twitter: @KAUST_HPC.

Previous Announcements

http://www.hpc.kaust.edu.sa/announcements/

Previous Tips

http://www.hpc.kaust.edu.sa/tip/