KAUST Supercomputing Laboratory Newsletter 26th October 2016

Data Centre Firewall Upgrade

On 27th October between 17:00 and 21:00, KAUST IT will be upgrading the SCC firewall. As Shaheen is behind this firewall, access to Shaheen may be intermittent during the upgrade period.

System Maintenance

There will be an extended downtime of all systems from 15:00 on 30th November until 17:00 on 6th December. This is for an upgrade to the power and cooling infrastructure that will allow Shaheen to run at full capacity without power capping. During this period we will not be able to read or respond to any emails sent to help@hpc.kaust.edu.sa.

System Reservation

Shaheen has been reserved for a strategic user to perform scalability testing of their code for 48 hours, commencing at 08:00 Friday 28th October. Login access to the CDLs will remain available, and jobs can be queued to run from Sunday 30th October at 08:00 onwards.

RCAC Meeting

The project submission deadline for the next RCAC meeting is 31st October 2016. Please note that RCAC meetings are held once per month; the next meeting is scheduled for November 2016. Projects received on or before the submission deadline will be included in the agenda for that meeting. The detailed procedure and the updated forms are available here:

https://www.hpc.kaust.edu.sa/account-applications

Any new project allocation will be considered only if submitted using the new project proposal template. Proposals should include an up-to-date C.V. for the PI, and may optionally suggest and/or exclude reviewers.

Neser Last Day of Operation

Please note this system will be decommissioned on 30th November 2016.

After this date all data in /project and /home will be deleted. Please ensure that you have transferred any data you wish to retain.

Tip of the Week: Thread affinity with OpenMP 4.0

Thread affinity settings prevent an MPI process or OpenMP thread from migrating to a different hardware resource. Such process/thread migration can significantly degrade a code's performance. The OpenMP 4.0 standard introduced affinity settings controlled by the OMP_PLACES and OMP_PROC_BIND environment variables.

  • OMP_PLACES: specifies the hardware resources to which threads may be bound. The value can be either an abstract name describing a list of places or an explicit list of places. The abstract names are threads, cores, and sockets:
    • threads: each place corresponds to a single hardware thread on the target machine.
    • cores: each place corresponds to a single core (having one or more hardware threads) on the target machine.
    • sockets: each place corresponds to a single socket (consisting of one or more cores) on the target machine.
    • An explicit list of places is given as a comma-separated list of sets, such as "{0,1,2,3},{4,5,6,7},{8,9,10,11},{12,13,14,15}".
  • OMP_PROC_BIND: controls how OpenMP threads are bound to places. Values for OMP_PROC_BIND include close, spread and master:
    • spread: bind threads as evenly distributed (spread) across the places as possible.
    • close: bind threads to places close to the place of the master thread, wrapping around once each place has received one thread.
    • master: bind threads to the same place as the master thread.
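As a minimal sketch of how these variables are used in practice, the following sets one OpenMP thread per core, spread evenly across the node, before launching an application (the thread count, core count, and application name are illustrative, not taken from an actual Shaheen job script):

```shell
# Sketch: pin 8 OpenMP threads, one per core, spread evenly
# across the cores of the node.
export OMP_NUM_THREADS=8
export OMP_PLACES=cores
export OMP_PROC_BIND=spread

# On a SLURM system such as Shaheen, the application would then
# typically be launched with srun, e.g.:
#   srun -n 1 -c 8 ./my_app

# Confirm the affinity settings that the OpenMP runtime will see:
echo "OMP_PLACES=$OMP_PLACES OMP_PROC_BIND=$OMP_PROC_BIND"
```

With OMP_PLACES=cores and OMP_PROC_BIND=spread, each thread is restricted to its own core for the lifetime of the run, so the operating system cannot migrate it and invalidate its caches.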
Follow us on Twitter

Follow all the latest news on HPC within the Supercomputing Lab and at KAUST, on Twitter @KAUST_HPC.

Previous Announcements

http://www.hpc.kaust.edu.sa/announcements/

Previous Tips

http://www.hpc.kaust.edu.sa/tip/