Introduced in March 2015 and managed by the KSL team, Shaheen II is a Cray XC40 delivering over 7.2 Pflop/s of theoretical peak performance. With 5.536 Pflop/s of sustained LINPACK performance, Shaheen II was the seventh-fastest supercomputer in the world according to the TOP500 list of July 2015. The September 2018 TOP500 list places Shaheen II at #32.
The system has 6,174 dual-socket compute nodes based on 16-core Intel Haswell processors running at 2.3 GHz. Each node has 128 GB of DDR4 memory running at 2,300 MHz. Overall, the system has a total of 197,568 processor cores and 790 TB of aggregate memory. Fig. 1 summarizes the specifications of the Shaheen II system.
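The aggregate figures follow directly from the per-node specification. The sketch below is a back-of-the-envelope check, assuming Haswell's 16 double-precision flops per core per cycle (two AVX2 FMA units, each 4 doubles wide, 2 flops per FMA); the node counts and clock rate are from the text.

```python
# Sanity check of Shaheen II aggregate figures from per-node specs.
nodes = 6174
cores_per_node = 2 * 16        # dual-socket nodes, 16-core Haswell per socket
clock_ghz = 2.3
flops_per_cycle = 16           # assumed: AVX2 FMA peak on Haswell
mem_per_node_gb = 128

total_cores = nodes * cores_per_node
peak_pflops = total_cores * clock_ghz * flops_per_cycle / 1e6  # Gflop/s -> Pflop/s
mem_tb = nodes * mem_per_node_gb / 1000

print(total_cores)             # 197568 cores
print(round(peak_pflops, 2))   # 7.27 Pflop/s theoretical peak
print(round(mem_tb))           # 790 TB aggregate memory
```

The computed 7.27 Pflop/s is consistent with the "over 7.2 Pflop/s" theoretical peak quoted above.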
Figure 1. Specifications of the Cray XC40 Shaheen II.
The compute nodes are housed in 36 water-cooled XC40 cabinets and connected via the Aries High Speed Network (HSN). The HSN is configured with 8 optical network connections between every pair of cabinets, therefore achieving 57% of the maximum global bandwidth between the 18 groups of two cabinets. This allows a future upgrade to add cabinets while preserving the same level of connectivity, i.e., 8 optical network connections between every pair of cabinets.
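With 18 two-cabinet groups and 8 optical connections between every pair of groups, the number of global optical links can be counted directly. This is only a link count under the stated configuration; the 57% global-bandwidth figure is as quoted in the text, not derived here.

```python
import math

groups = 18           # 18 groups of two cabinets each
links_per_pair = 8    # optical connections between every pair of groups

group_pairs = math.comb(groups, 2)          # all-to-all pairs of groups
global_links = group_pairs * links_per_pair

print(group_pairs)    # 153 group pairs
print(global_links)   # 1224 optical links in total
```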
KAUST's system includes a richly layered data storage architecture. The main data storage solution is a Lustre parallel file system based on Cray Sonexion 2000, with a usable storage capacity of 17.2 PB delivering around 500 GB/s of I/O throughput. The Cray Sonexion 2000 installation is configured with 72 high-performance Scalable Storage Units (SSUs) and 144 Object Storage Servers (OSSes) with 4 TB drives, connected to the XC40 via 72 LNET router service nodes evenly distributed across the 36 cabinets.
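The storage component counts are internally consistent: each Sonexion SSU hosts two object storage servers, and dividing the aggregate throughput evenly gives rough per-unit figures. This is a back-of-the-envelope sketch assuming an even spread of the 500 GB/s, not measured per-component performance.

```python
ssus = 72
oss_per_ssu = 2                # assumed: each Sonexion SSU hosts two OSSes
lnet_routers = 72
total_throughput_gbs = 500     # aggregate I/O throughput in GB/s, from the text

oss_total = ssus * oss_per_ssu
print(oss_total)                                       # 144 OSSes
print(round(total_throughput_gbs / lnet_routers, 1))   # ~6.9 GB/s per LNET router
print(round(total_throughput_gbs / oss_total, 1))      # ~3.5 GB/s per OSS
```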
Backup and archiving are enabled by a Cray Tiered Adaptive Storage (TAS) system, which consists of a tape library with a total capacity of 20 PB, upgradable to 100 PB. The TAS solution is paired with the TAS Connector for Lustre to provide a tightly integrated solution for tiered data management directly from the Lustre file system. A 200 TB disk cache acts as an automatic buffer between the Lustre file system and the TAS solution, providing higher I/O throughput to the tape library.