KAUST Supercomputing Laboratory Newsletter 10th February 2016

Annual Power Maintenance

Due to the annual power maintenance in the data centre, all of the systems will be unavailable from 08:00 on Thursday 11th February until approximately 10:00 on Monday 15th February.

Please note that the system will not be available to run jobs from 15:00 today, as it has been reserved for large-scale jobs that were deferred to avoid disruption to other users.

Shaheen Storage

A reminder of the policies in place for Shaheen storage:

  • /scratch/<username>: files not modified AND not accessed in the last 60 days will be deleted.
  • /scratch/tmp: temporary folder - files not modified AND not accessed in the last 3 days will be deleted.
  • /project/<projectname>: 20 TB limit per project. Once a project has used 20 TB of disk storage, files will be automatically deleted from disk, weighted by date of last access. A stub file linking to the tape copy remains on disk, so from a user's perspective the file is still visible with normal commands such as ls, but it will take time to recover the file from tape back to disk if it needs to be read.
  • /scratch is designated as temporary storage; the data is NOT copied to tape.

The Third Annual Workshop on "Accelerating Scientific Applications Using GPUs"

The KAUST Supercomputing Laboratory is co-organizing, with NVIDIA, a leader in accelerated computing, a one-day workshop on accelerating scientific applications using GPUs on Tuesday, February 23rd, 2016, in the auditorium between Buildings 2 and 3. To register for the event, please click here

The event will be followed by a two-day GPU hack-a-thon in which selected teams of developers will be guided by OpenACC and CUDA mentors from NVIDIA and KAUST to port and accelerate their domain science applications to GPU accelerators. Space is limited to 4-5 teams. Please click here to submit your hack-a-thon proposal.

Please contact us at training@hpc.kaust.edu.sa if you need further information. We are looking forward to seeing you there.

Saber Feki, Workshop Chair

Bilel Hadri and Hatem Ltaief, Workshop Co-Chairs

Tip of the Week: Multi-threaded MPI

MPI defines four “levels” of thread safety, and they are supported in all three programming environments (Intel, Cray and GNU) on Shaheen. Cray-MPICH offers improved support for multi-threaded applications that perform MPI operations within threaded regions. Currently, this feature is available as a separate version of the Cray-MPICH library that is invoked with a new compiler driver option, -craympich-mt. It is used when the MPI code calls MPI_Init_thread() instead of MPI_Init(). The maximum supported thread level is returned by MPI_Init_thread() in its "provided" argument.
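As a minimal sketch (illustrative only, not a Shaheen-specific code), the following shows how an application requests a thread level with MPI_Init_thread() and checks the level actually provided:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int requested = MPI_THREAD_MULTIPLE;  /* level the application would like */
        int provided;                         /* level the library actually grants */

        MPI_Init_thread(&argc, &argv, requested, &provided);

        if (provided < requested) {
            /* The library could not grant the requested level; fall back or abort. */
            fprintf(stderr, "Requested MPI_THREAD_MULTIPLE, but only level %d is provided\n",
                    provided);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* ... multi-threaded MPI work goes here ... */

        MPI_Finalize();
        return 0;
    }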

On Shaheen II, the steps for using it are as follows (a complete example combining them is sketched after the table below):

  • Compile your code as follows:
             cc -craympich-mt -o mpi_mt_code.x mpi_mt_test.c
  • In your job script, before the srun command, add the following:
              export MPICH_MAX_THREAD_SAFETY=multiple
  • Please note that MPICH_MAX_THREAD_SAFETY specifies the maximum allowable thread-safety level that is returned by MPI_Init_thread() in the provided argument. This allows the user to control the maximum level of threading allowed. The four legal values are:
    • MPI_THREAD_SINGLE: only one thread will execute.
    • MPI_THREAD_FUNNELED: the process may be multi-threaded, but only the main thread will make MPI calls (all MPI calls are funneled to the main thread).
    • MPI_THREAD_SERIALIZED: the process may be multi-threaded, and multiple threads may make MPI calls, but only one at a time: MPI calls are not made concurrently from two distinct threads (all MPI calls are serialized).
    • MPI_THREAD_MULTIPLE: multiple threads may call MPI, with no restrictions.
    MPICH_MAX_THREAD_SAFETY value    Level returned by MPI_Init_thread()
    single                           MPI_THREAD_SINGLE
    funneled                         MPI_THREAD_FUNNELED
    serialized                       MPI_THREAD_SERIALIZED
    multiple                         MPI_THREAD_MULTIPLE
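Putting the steps together, here is an illustrative hybrid MPI/OpenMP sketch; the file and executable names are borrowed from the compile line above, it is not an official Shaheen test code, and the OpenMP flag to add depends on the programming environment (e.g. -fopenmp under PrgEnv-gnu). Every OpenMP thread issues its own point-to-point call, which requires MPI_THREAD_MULTIPLE:

    /* mpi_mt_test.c - each OpenMP thread makes its own MPI call.
     *
     * Compile on Shaheen II:   cc -craympich-mt -fopenmp -o mpi_mt_code.x mpi_mt_test.c
     * In the job script:       export MPICH_MAX_THREAD_SAFETY=multiple
     *                          srun ./mpi_mt_code.x
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, size;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (provided < MPI_THREAD_MULTIPLE) {
            if (rank == 0)
                printf("MPI_THREAD_MULTIPLE not provided (got level %d)\n", provided);
            MPI_Finalize();
            return 1;
        }

        /* Each OpenMP thread exchanges its thread id with the neighbouring ranks
         * in a ring; the thread id is used as the message tag so the concurrent
         * point-to-point calls match per thread.  This assumes every rank runs
         * the same number of OpenMP threads. */
        #pragma omp parallel
        {
            int tid   = omp_get_thread_num();
            int right = (rank + 1) % size;
            int left  = (rank - 1 + size) % size;
            int recvd = -1;

            MPI_Sendrecv(&tid, 1, MPI_INT, right, tid,
                         &recvd, 1, MPI_INT, left, tid,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            printf("rank %d thread %d received %d from rank %d\n",
                   rank, tid, recvd, left);
        }

        MPI_Finalize();
        return 0;
    }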

More information is available in the man page for intro_mpi.

 

Follow us on Twitter

Follow all the latest news on HPC within the Supercomputing Lab and at KAUST, on Twitter @KAUST_HPC.

Previous Announcements

Previous Tips