Resources

The Clusters

Cyclone

The cluster used for all projects accepted through the production and preparatory application process.

  • 17 40-core compute nodes
  • 16 40-core compute nodes with 4 NVIDIA V100 GPUs each
  • 2 20-core sockets with Intel Xeon Gold 6248
  • 192 GB memory per node
  • 135 TB NVMe Storage
  • 3.2 PB Shared Disk Storage
  • HDR 100 Node-to-Node interconnect
  • Rocky Linux 8.6

Nepheli

Bespoke system created for large projects. A minimal MPI usage sketch follows the hardware list below.

  • 36 128-core compute nodes
  • 2 64-core sockets with AMD EPYC 7713
  • 256 GB memory per node
  • 3.2 PB Shared Disk Storage
  • HDR Infiniband network for MPI Node-to-node interconnect
  • Rocky Linux 8.6
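
For orientation, the sketch below shows the kind of distributed-memory MPI program that the node-to-node interconnect is provisioned for. It is a minimal illustration, not site-specific documentation: it assumes only a standard MPI installation (compiled with mpicc and launched with mpirun or the local scheduler), and the file name hello_mpi.c is a placeholder.

```c
/* hello_mpi.c - minimal MPI sketch: each rank reports where it runs.
 * Assumes a standard MPI installation; compile with `mpicc hello_mpi.c -o hello_mpi`
 * and launch across nodes with `mpirun -np <ranks> ./hello_mpi` (or via the site's scheduler).
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size, name_len;
    char node_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks in the job */
    MPI_Get_processor_name(node_name, &name_len);

    printf("Rank %d of %d running on %s\n", rank, size, node_name);

    MPI_Finalize();
    return 0;
}
```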

SimEA

Bespoke system created for large projects.

  • 6 48-core compute nodes with 4 NVIDIA A100 GPUs each
  • 2 24-core sockets with Intel Xeon Gold 6330
  • 500 GB memory per node
  • 3.2 PB Shared Disk Storage
  • HDR Infiniband network for MPI Node-to-node interconnect
  • Rocky Linux 8.6

Cyclamen

  • 8 32-core compute nodes with 2 NVIDIA P100 GPUs each
  • 2 16-core sockets with Intel Xeon Gold 6130
  • 128 GB memory per node
  • 3.2 PB Shared Disk Storage
  • HDR Infiniband network for MPI Node-to-node interconnect
  • Rocky Linux 8.6

Cy-Tera
(legacy 2012)

Legacy system, no longer fully available.

  • 98 12-core compute nodes
  • 18 12-core compute nodes with 2 NVIDIA M2070 GPUs each
  • 2 6-core sockets with Intel Westmere X5650
  • 48 GB memory per node
  • 360 TB GPFS
  • QDR Infiniband network for MPI Node-to-Node interconnect
  • CentOS

Planck
(legacy 2009)

Legacy system now offline.

  • 24 8-core compute nodes
  • 2 4-core sockets with Intel Westmere X5650
  • 32 GB memory per node
  • 40 TB shared storage
  • Infiniband network for MPI Node-to-Node interconnect
  • CentOS 5

Apply for access to services and resources.
