System Architecture
Compute Nodes
Model: | 3rd Gen AMD EPYC™ CPUs (AMD EPYC 7763) |
---|---|
Number of nodes: | 1,000 |
Sockets per node: | 2 |
Cores per socket: | 64 |
Cores per node: | 128 |
Hardware threads per core: | 1 |
Hardware threads per node: | 128 |
Clock rate: | 2.45 GHz (3.5 GHz max boost) |
RAM: | Regular compute node: 256 GB DDR4-3200; large-memory node: 1 TB DDR4-3200 (32 nodes) |
Cache: | L1d: 32 KB/core; L1i: 32 KB/core; L2: 512 KB/core; L3: 32 MB |
Local storage: | 480 GB local disk |
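To put the per-node figures in system-wide terms, the table above implies the aggregate totals below. This is a minimal arithmetic sketch, assuming the 32 large-memory nodes are counted within the 1,000 nodes; all constant names are illustrative, and the values come directly from the table.

```python
# Aggregate capacity implied by the compute-node table (illustrative names).
# Assumption: the 32 large-memory nodes are included in the 1,000-node count.
NODES = 1000
CORES_PER_NODE = 2 * 64          # 2 sockets x 64 cores/socket
REGULAR_RAM_GB = 256             # regular compute node
LARGE_MEM_NODES = 32
LARGE_RAM_GB = 1024              # 1 TB large-memory node

total_cores = NODES * CORES_PER_NODE
total_ram_tb = (
    (NODES - LARGE_MEM_NODES) * REGULAR_RAM_GB
    + LARGE_MEM_NODES * LARGE_RAM_GB
) / 1024

print(total_cores)   # 128000 CPU cores system-wide
print(total_ram_tb)  # 274.0 TB of DDR4 RAM under the stated assumption
```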
Login Nodes
Number of Nodes | Processors per Node | Cores per Node | Memory per Node |
---|---|---|---|
8 | 3rd Gen AMD EPYC™ 7543 CPU | 32 | 512 GB |
Specialized Nodes
Sub-Cluster | Number of Nodes | Processors per Node | Cores per Node | Memory per Node |
---|---|---|---|---|
B | 32 | Two 3rd Gen AMD EPYC™ 7763 CPUs | 128 | 1 TB |
G | 16 | Two 3rd Gen AMD EPYC™ 7763 CPUs + Four NVIDIA A100 GPUs | 128 | 512 GB |
Network
All nodes, as well as the scratch storage system, are interconnected by an oversubscribed (3:1) HDR InfiniBand fabric implemented as a two-stage fat tree. The nominal per-node bandwidth is 100 Gbps, with message latency as low as 0.90 microseconds. Nodes connect directly to Mellanox QM8790 leaf switches, each with 60 HDR100 links down to nodes and 10 links up to the spine switches.
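The 3:1 oversubscription figure can be checked from the per-switch link counts. This is a sketch under one assumption not stated above: that the 10 spine uplinks run at the full HDR rate of 200 Gbps (the 60 down-links are HDR100, i.e. 100 Gbps each, per the text).

```python
# Checking the 3:1 oversubscription ratio quoted for the fabric.
DOWN_LINKS = 60
DOWN_GBPS = 100   # HDR100 links toward nodes (stated in the text)
UP_LINKS = 10
UP_GBPS = 200     # ASSUMPTION: full-rate HDR uplinks to the spine

down_bw = DOWN_LINKS * DOWN_GBPS  # total bandwidth toward nodes (Gbps)
up_bw = UP_LINKS * UP_GBPS        # total bandwidth toward the spine (Gbps)
ratio = down_bw / up_bw

print(f"{ratio:.0f}:1 oversubscription")  # 3:1
```

If the uplinks were instead HDR100, the ratio would be 6:1, so the 200 Gbps uplink rate is what makes the quoted 3:1 figure consistent.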