Resources

The Clusters

Cyclone

The cluster used for all projects accepted through the production and preparatory access application processes.

  • 17 40-core compute nodes
  • 16 40-core compute nodes with 4 NVIDIA V100 GPUs each
  • 2 20-core Intel Xeon Gold 6248 sockets per node
  • 192 GB memory per node
  • 135 TB NVMe Storage
  • 3.2 PB Disk Storage
  • HDR 100 Node-to-Node interconnect
  • CentOS
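For sizing jobs, the per-node figures above can be aggregated into cluster totals. A minimal sketch of the arithmetic, using only the numbers listed on this page (the variable names are illustrative, not part of any cluster tooling):

```python
# Aggregate Cyclone's listed per-node specifications into cluster totals.
cpu_nodes = 17        # 40-core compute nodes
gpu_nodes = 16        # 40-core compute nodes with 4 V100 GPUs each
cores_per_node = 40
gpus_per_node = 4
mem_per_node_gb = 192  # 192 GB memory per node

total_cores = (cpu_nodes + gpu_nodes) * cores_per_node   # 1320 cores
total_gpus = gpu_nodes * gpus_per_node                   # 64 GPUs
total_mem_gb = (cpu_nodes + gpu_nodes) * mem_per_node_gb # 6336 GB

print(total_cores, total_gpus, total_mem_gb)
```

The same arithmetic applies to the other clusters below by swapping in their node counts and per-node figures.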

AMD Epyc

Bespoke system created for large projects.

  • 8 128-core compute nodes
  • 2 64-core AMD EPYC 7702 sockets per node
  • 256 GB memory per node
  • 4.5 PB Disk Storage (shared)
  • HDR InfiniBand network for MPI Node-to-Node interconnect
  • CentOS 7.7

Cyclamen

  • 8 32-core compute nodes with 2 NVIDIA P100 GPUs each
  • 2 16-core Intel Xeon Gold 6130 sockets per node
  • 128 GB memory per node
  • 4.5 PB Disk Storage (shared)
  • HDR InfiniBand network for MPI Node-to-Node interconnect
  • CentOS 7.6

Cy-Tera
(legacy 2012)

Legacy system, no longer fully available.

  • 98 12-core compute nodes
  • 18 12-core compute nodes with 2 NVIDIA M2070 GPUs each
  • 2 6-core Intel Xeon X5650 (Westmere) sockets per node
  • 48 GB memory per node
  • 360 TB GPFS
  • QDR InfiniBand network for MPI Node-to-Node interconnect
  • CentOS

Planck
(legacy 2009)

Legacy system now offline.

  • 24 8-core compute nodes
  • 2 4-core Intel Xeon X5650 (Westmere) sockets per node
  • 32 GB memory per node
  • 40 TB Storage (shared)
  • InfiniBand network for MPI Node-to-Node interconnect
  • CentOS 5

Euclid
(legacy 2011)

Legacy system, no longer fully available.

  • 6 8-core compute nodes
  • 2 6-core Intel Xeon X5650 (Westmere) sockets per node
  • 16 GB memory per node
  • 35 TB Storage (shared)
  • InfiniBand network for MPI Node-to-Node interconnect
  • CentOS 6.5

Apply for access to services and resources.

Find out more