Our Resources

Our National Open-Access Systems

These are the primary systems available to the national research and innovation community through our unified access policy.

Aphroditi

AI-training-focused servers that power development, fine-tuning, and production-scale model training for the national community.

Cluster: B200
  • Architecture: 2 HGX B200 nodes
  • GPU Accelerators: 8× NVIDIA B200 per node, each with 192 GB VRAM
  • CPU Sockets: 2× Intel Xeon Platinum per node delivering 64 cores total
  • Memory: 2064 GB RAM per node
Cluster: RTX6000
  • Architecture: HGX RTX6000 Blackwell
  • GPU Accelerators: 8× NVIDIA RTX6000 Blackwell per node, each with 96 GB VRAM
  • CPU Sockets: 2× AMD EPYC 9333 per node delivering 64 cores total
  • Memory: 1032 GB RAM per node
Apply for Access

Cyclone

A powerful hybrid system featuring both GPU-accelerated and CPU-only compute nodes, designed to support a wide range of scientific domains.

Cluster: CPU
  • Architecture: 17 CPU-only nodes
  • CPU Sockets: 2× 20-core Intel Xeon Gold 6248 per node
  • Memory: 192 GB per node
Cluster: GPU
  • Architecture: 16 GPU-accelerated nodes
  • CPU Sockets: 2× 20-core Intel Xeon Gold 6248 per node
  • GPU Accelerators: 4× NVIDIA V100 per node
  • Memory: 192 GB per node
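
Jobs on systems like Cyclone's GPU cluster are typically submitted through a batch scheduler. The following is a minimal sketch assuming a Slurm scheduler; the partition name is a hypothetical placeholder, not confirmed by this page.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch for a 4-GPU job on a GPU node.
# The partition name below is a hypothetical placeholder.
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpu          # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:4             # request all 4 V100 GPUs on one node
#SBATCH --time=01:00:00

nvidia-smi                       # report the GPUs allocated to the job
```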
Apply for Access

Research Platforms

These powerful systems are dedicated to specific large-scale research projects or partnerships and are not available through the general open-access call.

SimEA

Features state-of-the-art NVIDIA A100 GPUs for advanced AI research.

  • Architecture: 6 compute nodes
  • CPU Sockets: 2× 24-core Intel Xeon Gold 6330 per node
  • GPU Accelerators: 4× NVIDIA A100 per node
  • Memory: 500 GB per node

Nepheli

High-density AMD EPYC CPUs for large-scale parallel computing.

  • Architecture: 36 compute nodes
  • CPU Sockets: 2× 64-core AMD EPYC 7713 per node
  • Memory: 256 GB per node

Other Partner & Legacy Systems

Cyclamen

  • Architecture: 8 compute nodes
  • CPU Sockets: 2× 16-core Intel Xeon Gold 6130 per node
  • GPU Accelerators: 2× NVIDIA P100 per node
  • Memory: 128 GB per node

Core Infrastructure

These foundational resources are shared across the facility and support all compute systems.

Shared Storage

A multi-petabyte shared disk storage system serves as the central data repository for all users and projects, ensuring high availability and performance.

High-Speed Interconnect

All compute nodes are linked via a high-speed, low-latency HDR 100 InfiniBand network, crucial for large-scale parallel applications.
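
A large-scale parallel application would typically be launched across several nodes over this fabric. The sketch below assumes a Slurm scheduler and an MPI application; the partition name and binary (`my_mpi_app`) are hypothetical placeholders.

```shell
#!/bin/bash
# Minimal multi-node MPI launch sketch over the InfiniBand fabric.
# Partition name and application binary are hypothetical placeholders.
#SBATCH --job-name=mpi-test
#SBATCH --partition=cpu          # hypothetical partition name
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=40     # matches 2× 20-core Xeon Gold 6248 per CPU node
#SBATCH --time=02:00:00

srun ./my_mpi_app                # srun launches the MPI ranks across the nodes
```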

Apply for access to services and resources.

Find out more