KAUST Supercomputing Laboratory Newsletter 4th May 2016
XSEDE HPC Workshop: OpenMP
Registration is now open for the XSEDE HPC Monthly Workshop Series OpenMP session on 10th May. The portal registration page can be found here:
https://portal.xsede.org/course-calendar/-/training-user/class/488/sessi...
If there is enough interest, KSL will investigate the possibility of streaming this evening course live in the Library computer room, starting at 6pm.
RCAC Meeting
The project submission deadline for the next RCAC meeting is 31st May 2016. Please note that RCAC meetings are held once per month. Projects received on or before this deadline will be included in the agenda for the next RCAC meeting, scheduled for June 2016. The detailed procedure and the forms are available on the following webpage.
https://www.hpc.kaust.edu.sa/account-applications
Known issue with Shaheen
Submitting jobs with large numbers of MPI tasks from /project might produce error messages such as the one shown below. KSL and Cray are working on a fix. In the meantime, copy your executable to your /scratch/<username> directory (and, if your application uses shared libraries, rebuild it there); the job will then run without error.
slurmstepd: error: couldn't chdir to `/lustre/project/kXX/....': Permission denied: going to /tmp instead
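As a minimal sketch of the workaround, assuming a statically linked executable named exe under a kXX project directory and the usual /scratch/<username> layout (the directory names below are placeholders), the steps might look like:

# Copy the executable (and any input files it needs) into /scratch
cp /lustre/project/kXX/mydir/exe /scratch/<username>/rundir/
# Submit the job from the /scratch copy so slurmstepd can chdir there
cd /scratch/<username>/rundir
sbatch jobscript.slurm

If the application depends on shared libraries built under /project, rebuilding it in the /scratch directory avoids hitting the same permission issue at load time.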
Tip of the week: Controlling MPI tasks per node
We recommend that you specify the number of nodes, tasks per node, and tasks per socket in your SLURM job script. Because hyperthreading is enabled on Shaheen, SLURM will by default place 64 tasks per node, but only a few applications benefit from hyperthreading. Here is an example that controls the number of tasks on each node to avoid hyperthreading:
#!/bin/bash
#SBATCH --account=k##
#SBATCH --job-name=job_name
#SBATCH --output=job_name.out
#SBATCH --error=job_name.err
#SBATCH --nodes=4
#SBATCH --time=00:30:00
srun --ntasks=128 --hint=nomultithread --ntasks-per-node=32 --ntasks-per-socket=16 ./exe
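Conversely, for an application that does benefit from hyperthreading, a minimal sketch of the srun line (keeping the same 4-node allocation from the script above and using all 64 hardware threads per node) might be:

# Use both hardware threads on each core: 64 tasks per node x 4 nodes = 256 tasks
srun --ntasks=256 --hint=multithread --ntasks-per-node=64 ./exe

Submit the script with sbatch (e.g. sbatch jobscript.slurm) and compare timings with and without hyperthreading to see which setting your application prefers.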
Data Pre/Post-Processing on Shaheen II for Analysis and Visualization
The Supercomputing Core Lab and the Visualization Core Lab are conducting the following short survey to help us determine which visualization software tools are needed on Shaheen II. This will help us provide appropriate assistance, such as building and maintaining the required tools, creating mailing lists for user groups, and organizing seminars and hands-on workshops:
https://www.surveymonkey.com/r/95NXRD8
Follow us on Twitter
Follow all the latest HPC news from the Supercomputing Lab and KAUST on Twitter @KAUST_HPC.