KAUST Supercomputing Laboratory Newsletter 25th June 2020

In this newsletter:

  • Programming Environment update on Shaheen in June 2020
  • RCAC Meeting
  • KAUST supercomputer Shaheen II joins the fight against COVID-19
  • Tip of the Week: Submitting Many Jobs At Once
  • Follow us on Twitter
  • Previous Announcements
  • Previous Tips

 

Programming Environment update on Shaheen in June 2020

The new Cray Programming Environment software release, CDT/20.06, has been installed. However, the default software versions will not change this time: the default remains CDT/19.12.

To pick up the whole new release, and to ensure compatibility between its libraries, we recommend using the command module load cdt/20.06 rather than loading each library version manually.
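
For example, in a jobscript or an interactive session (a minimal sketch; module list simply prints the currently loaded modules so the switch can be verified):

module load cdt/20.06   # switch the whole toolchain to the 20.06 release
module list             # confirm the loaded library versions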

Below is the detailed list of libraries in each CDT.

CDT 20.06 (new)                     CDT 19.12 (default)
cce/10.0.1                          cce/9.1.1
gcc/9.3.0                           gcc/8.3.0
atp/3.6.4                           atp/2.1.3
cray-hdf5-parallel/1.10.6.1         cray-hdf5-parallel/1.10.5.2
cray-hdf5/1.10.6.1                  cray-hdf5/1.10.5.2
cray-libsci/20.6.1                  cray-libsci/19.06.1
cray-fftw/3.3.8.6                   cray-fftw/3.3.8.4
cray-mpich-abi/7.7.14               cray-mpich-abi/7.7.11
cray-mpich/7.7.14                   cray-mpich/7.7.11
cray-netcdf-hdf5parallel/4.7.3.3    cray-netcdf-hdf5parallel/4.6.3.2
cray-netcdf/4.7.3.3                 cray-netcdf/4.6.3.2
cray-parallel-netcdf/1.12.0.1       cray-parallel-netcdf/1.11.1.1
cray-petsc-64/3.12.4.1              cray-petsc-64/3.11.2.0
cray-petsc-complex-64/3.12.4.1      cray-petsc-complex-64/3.11.2.0
cray-petsc-complex/3.12.4.1         cray-petsc-complex/3.11.2.0
cray-petsc/3.12.4.1                 cray-petsc/3.11.2.0
cray-python/3.8.2.1                 cray-python/3.7.3.2
cray-R/3.6.1.1                      cray-R/3.6.1
cray-shmem/7.7.14                   cray-shmem/7.7.11
cray-tpsl-64/20.03.2                cray-tpsl-64/19.06.1
cray-tpsl/20.03.2                   cray-tpsl/19.06.1
cray-trilinos/12.18.1.0             cray-trilinos/12.14.1.0
iobuf/2.0.10                        iobuf/2.0.9
papi/6.0.0.1                        papi/5.7.0.2
perftools/20.06.0                   perftools/6.3.0
PrgEnv-cray/6.0.7                   PrgEnv-cray/6.0.5
PrgEnv-gnu/6.0.7                    PrgEnv-gnu/6.0.5
PrgEnv-intel/6.0.7                  PrgEnv-intel/6.0.5
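
The versions installed on the system can also be queried directly with the standard module command (a small example; any package name from the table can be substituted):

module avail cray-fftw   # list the available cray-fftw versions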

 

RCAC Meeting

The project submission deadline for the next RCAC meeting is 30th June 2020. Please note that the RCAC meetings are held once per month. Projects received on or before the submission deadline will be included in the agenda for the subsequent RCAC meeting. The detailed procedures, updated templates and forms are available here: https://www.hpc.kaust.edu.sa/account-applications

 

KAUST supercomputer Shaheen II joins the fight against COVID-19

King Abdullah University of Science and Technology (KAUST) invites researchers from across the Kingdom to submit proposals for COVID-19-related research. Recognizing the urgency of addressing the global challenges posed by the COVID-19 pandemic through scientific discovery and innovation, the University's Supercomputing Core Laboratory (KSL) is making its computing resources, including the flagship Shaheen II supercomputer, and its expert scientists available to support research projects.

Topics may include but are not limited to: understanding the virus on a molecular level; understanding its fluid-dynamical transport; evaluating the repurposing of existing drugs; forecasting how the disease spreads; and finding ways to stop or slow down the pandemic.

Accepted proposals can access the following resources: (1) Shaheen II, a Cray XC40 supercomputer based on Intel Haswell processors, with nearly 200,000 compute cores tightly coupled by the Aries high-speed interconnect; (2) the Ibex cluster, a high-throughput computing system with about 500 compute nodes using Intel Skylake and Cascade Lake CPUs and NVIDIA V100 GPUs; and (3) KSL staff scientists, who will provide support, training, and consultancy to maximize impact. Through 30 June 2020, up to 15% of these resources will be reserved for fast-tracking competitive COVID-19 proposals through the KAUST Research Computing Allocation Committee. Thereafter, such proposals remain welcome and will be considered through the standard process.

Researchers can apply for computing allocations using the COVID-19 Project Proposal form. Please submit the completed form to projects@hpc.kaust.edu.sa; proposals will be fast-tracked for processing.

Please contact help@hpc.kaust.edu.sa with any inquiries.

 

Tip of the Week: Submitting Many Jobs At Once

Sometimes we have a series of calculations with different input files but the same (or similar) commands. The number of such calculations can be very large (dozens or even hundreds). Instead of submitting them one by one by hand, we can write a script to automate the job submission.

1) Use a bash script:
Suppose we will run 10 such calculations in './my_project', where each calculation runs inside one of the 10 subfolders './my_calc_*' with its own input files and its own Slurm jobscript 'script.slurm'. We can then place the following script (script.sh) in './my_project' and run it (./script.sh):

#!/bin/bash
# Collect all calculation subfolders and submit one job from each.
all_folders=($(ls -d my_calc_*))
for i in "${all_folders[@]}"
do
  # Enter the subfolder in a subshell so the loop's working directory is unchanged.
  (cd "$i" && echo "Submitting in $i" && sbatch script.slurm)
  sleep 1   # brief pause so the scheduler is not flooded with requests
done
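
After the loop finishes, the state of the submitted jobs can be checked with the usual Slurm query, for example:

squeue -u $USER   # list your pending and running jobs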

2) Use a job array in the Slurm jobscript:
Suppose we will run 100 such calculations in './my_project', where each calculation runs inside one of the 100 subfolders './my_calc_*' with its own input files but the same command 'srun --ntasks=32 --hint=nomultithread ./my_app'. We can then place the following Slurm jobscript (script.slurm) in './my_project' and submit it (sbatch script.slurm):

#!/bin/bash
#SBATCH --partition=workq
#SBATCH --job-name="test"
#SBATCH --nodes=1
#SBATCH --array=0-99
#SBATCH --time=24:00:00
#SBATCH --error=std-%A_%a.err
#SBATCH --output=std-%A_%a.out
#----------------------------------------------------------#
# Each array task picks the subfolder matching its task ID and runs the command there.
all_folders=($(ls -d my_calc_*))
cd "${all_folders[$SLURM_ARRAY_TASK_ID]}" && srun --ntasks=32 --hint=nomultithread ./my_app
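
If running all array tasks at once would consume too much of your allocation, Slurm's %N throttle on the array range caps how many tasks run concurrently (a minimal sketch of the modified directive):

#SBATCH --array=0-99%10   # 100 tasks in total, at most 10 running at any time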

Note 1: The scripts above are for demonstration only; please adapt them to your own needs.
Note 2: The scripts above will not work if the subfolder names contain whitespace.
Note 3: Before using the scripts to submit all of your jobs, run a small-scale test to make sure everything works as expected, as shown below.
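
For the job-array variant, one convenient way to run such a small-scale test is to override the array range on the command line, since options passed to sbatch there take precedence over the #SBATCH directives in the script:

sbatch --array=0-1 script.slurm   # trial run with only the first two tasks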

 

Follow us on Twitter

Follow all the latest HPC news from the Supercomputing Laboratory and KAUST on Twitter: @KAUST_HPC.

Previous Announcements

http://www.hpc.kaust.edu.sa/announcements/

Previous Tips

http://www.hpc.kaust.edu.sa/tip/