Bluemoon

Bluemoon has 105 nodes providing 8552 compute cores; 6720 of these cores are connected via HDR InfiniBand. The cluster supports large-scale computation, offers low-latency networking for MPI workloads and large-memory nodes, and provides high-performance parallel filesystems.
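Because the cluster is aimed at MPI workloads over the low-latency InfiniBand fabric, the minimal MPI program below (a sketch in C) illustrates the kind of job these nodes run: each rank reports which node it landed on. The compiler wrapper and launch commands are assumptions for illustration only; consult the VACC documentation for the actual modules and scheduler setup on Bluemoon.

    /* hello_mpi.c - each MPI rank reports its rank, the total rank count,
       and the node it is running on. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank = 0, size = 0, name_len = 0;
        char node_name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of ranks */
        MPI_Get_processor_name(node_name, &name_len);

        printf("rank %d of %d on %s\n", rank, size, node_name);

        MPI_Finalize();
        return 0;
    }

Compiled with an MPI wrapper such as mpicc hello_mpi.c -o hello_mpi and launched across nodes with mpirun (or the site scheduler), each rank prints a line identifying the node it ran on, a quick check that the InfiniBand-connected nodes were allocated as expected.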

Hardware

  • 9 dual-processor, 96-core AMD EPYC 9654 Dell PowerEdge R6625 nodes, with 1.5TB RAM each. 4x HDR / 200Gb/s InfiniBand for MPI communication and GPFS access. 25Gb/s Ethernet
  • 39 dual-processor, 64-core AMD EPYC 7763 Dell PowerEdge R6525 nodes, with 1TB RAM each. 2x HDR / 100Gb/s InfiniBand for MPI communication and GPFS access. 25Gb/s Ethernet
  • 32 dual-processor, 12-core Intel E5-2650 v4, Dell PowerEdge R430 nodes, with 64GB RAM each. 10Gb Ethernet
  • 8 dual-processor, 12-core Intel E5-2650 v4, Dell PowerEdge R430 nodes, with 256GB RAM each. 10Gb Ethernet
  • 9 dual-processor, 20-core Intel 6230, Dell PowerEdge R440 nodes, with 100GB RAM each. 10Gb Ethernet
  • 1 dual-processor, 64-core AMD EPYC 7543 Dell PowerEdge R7525 node, with 4TB RAM. 2x HDR / 100Gb/s InfiniBand for MPI communication and GPFS access. 25Gb/s Ethernet
  • 1 dual-processor, 8-core Intel E7-8837, IBM x3690 node, with 512GB RAM
  • 2 dual-processor, 12-core Intel E5-2650 v4, Dell PowerEdge R730 nodes, with 1TB RAM each
A few nodes are restricted and only available by request:
  • 2 dual-processor, 16-core AMD EPYC 7F52 Dell PowerEdge R6525 high-clock-speed nodes, with 1TB RAM each
  • 2 dual-processor, 64-core AMD EPYC 7763 Dell PowerEdge R7525 GPU testing nodes, with 1TB RAM and 2 NVIDIA A100 GPUs each
The VACC GPFS file systems are provided by:
  • 2 I/O nodes (Dell R740xd, with 40Gb Ethernet and 200Gb HDR InfiniBand) and 2 I/O nodes (Dell R430, with 10Gb Ethernet), connected to:
    • 1 Dell MD3460 providing 287TB of storage to GPFS
    • 1 Dell ME4084 providing 751TB of spinning disk storage
    • 1 IBM FS7200 providing 187TB of NVMe-attached FlashCore Module storage
Updated on April 25, 2024
