Our Resources

Core Infrastructure

The following foundational resources are shared across the facility to support all compute systems.

Shared Storage

A multi-petabyte shared disk storage system serves as the central data repository for all users and projects, providing high availability and performance.

High-Speed Interconnect

All compute nodes are linked via a high-speed, low-latency HDR 100 InfiniBand network, which is essential for large-scale parallel applications.


Our National Open-Access System

This is the primary system available to the national research and innovation community through our unified access policy.

Cyclone

A powerful hybrid system featuring both GPU-accelerated and CPU-only compute nodes, designed to support a wide range of scientific domains.

  • Architecture: 17 CPU-only nodes & 16 GPU-accelerated nodes
  • CPU Sockets: 2x 20-core Intel Xeon Gold 6248 per node
  • GPU Accelerators: 4x NVIDIA V100 per GPU node
  • Memory: 192 GB per GPU node
Apply for Access
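
As a rough illustration of scale, the per-node figures above translate into the following aggregate capacity. This is back-of-the-envelope arithmetic based solely on the list above; the variable names are ours, not official designations.

```python
# Illustrative totals for Cyclone, derived from the spec list above:
# 17 CPU-only + 16 GPU-accelerated nodes, two 20-core Xeon Gold 6248
# sockets per node, and four NVIDIA V100 GPUs per GPU node.
cpu_nodes, gpu_nodes = 17, 16
cores_per_node = 2 * 20            # two 20-core sockets
gpus_per_gpu_node = 4

total_cores = (cpu_nodes + gpu_nodes) * cores_per_node
total_gpus = gpu_nodes * gpus_per_gpu_node

print(total_cores, total_gpus)     # 1320 64
```

In other words, the system offers on the order of 1,320 CPU cores and 64 V100 GPUs in total.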

Advanced AI & Research Platforms

These powerful systems are dedicated to specific large-scale research projects or partnerships and are not available through the general open-access call.

SimEA

Features state-of-the-art NVIDIA A100 GPUs for advanced AI research.

  • Architecture: 6 compute nodes
  • CPU Sockets: 2x 24-core Intel Xeon Gold 6330 per node
  • GPU Accelerators: 4x NVIDIA A100 per node
  • Memory: 500 GB per node

Nepheli

High-density AMD EPYC CPUs for large-scale parallel computing.

  • Architecture: 36 compute nodes
  • CPU Sockets: 2x 64-core AMD EPYC 7713 per node
  • Memory: 256 GB per node
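
For a sense of scale, the spec lists above imply the following aggregate counts for the two platforms. This is illustrative arithmetic only, derived from the node and socket figures listed above.

```python
# Aggregate capacity implied by the spec lists above (illustrative only):
simea_cores = 6 * (2 * 24)         # SimEA: 6 nodes, two 24-core sockets each
simea_gpus = 6 * 4                 # SimEA: four NVIDIA A100 GPUs per node
nepheli_cores = 36 * (2 * 64)      # Nepheli: 36 nodes, two 64-core sockets each

print(simea_cores, simea_gpus, nepheli_cores)  # 288 24 4608
```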

Other Partner & Legacy Systems

Cyclamen

  • Architecture: 8 compute nodes
  • CPU Sockets: 2x 16-core Intel Xeon Gold 6130 per node
  • GPU Accelerators: 2x NVIDIA P100 per node
  • Memory: 128 GB per node

Apply for access to services and resources.
