Hardware Infrastructure

This is a short overview of some of the interesting architectures and machines available at the Research Group Scientific Computing, used for both education and research.

EXA: Heterogeneous Many-Core Cluster

EXA is a compute cluster comprising a frontend and four heterogeneous computational nodes, all based on Supermicro SuperServer systems.

Node 1 features four Intel Xeon Gold 6138 20-core CPUs. Node 2 has two AMD Epyc 7501 32-core CPUs. Node 3 has two Intel Xeon Gold 6130 16-core CPUs and two Nvidia Tesla V100 GPUs. Node 4 has two Intel Xeon Gold 6130 16-core CPUs and an AMD Radeon Instinct MI25 GPU.

Network capabilities include 10 Gigabit Ethernet and QDR InfiniBand. The cluster runs Linux (CentOS), with SLURM used for resource management.
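Since SLURM handles resource management on EXA, a job targeting one of the GPU-equipped nodes might be submitted with a batch script along these lines (a minimal sketch; the partition name, GPU resource string, and application name are assumptions for illustration, not taken from the cluster's actual configuration):

```shell
#!/bin/bash
#SBATCH --job-name=v100-test
#SBATCH --nodes=1              # one of the heterogeneous nodes
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --gres=gpu:1           # request one GPU (e.g. a Tesla V100 on node 3)
#SBATCH --time=00:30:00
#SBATCH --partition=gpu        # hypothetical partition name

srun ./my_gpu_application      # hypothetical application binary
```

The script would be submitted with `sbatch job.sh`; `squeue` and `sinfo` show the queue and partition status.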

EXA was put into service in October 2018 and is currently used for internal research.

PHIA: Xeon Phi and GPU Cluster

PHIA is a compute cluster comprising a frontend and 8 computational nodes. The nodes are Supermicro 2027GR-TRF systems with two Intel Xeon E5-2650 2.0 GHz (Sandy Bridge) CPUs and 128 GB RAM.

Six homogeneous nodes each have two Intel Xeon Phi 5110P cards (60 cores, 1.053 GHz, 8 GB) and two Nvidia Tesla K20m cards. One node is configured with four Xeon Phi cards; another has four Nvidia Tesla cards.

Network capabilities include Gigabit Ethernet and QDR InfiniBand. The cluster runs Linux (RHEL).

PHIA was put into service in April 2013 and is currently used for internal research.

CORA: GPU Cluster

The CORA GPU cluster comprises one frontend and 6+1 computational nodes, each with two Intel Xeon X5550 quad-core 2.66 GHz CPUs and 24 GB DDR3-1333 system memory. Each computational node is additionally equipped with two Nvidia Tesla C2050 GPUs and one Nvidia Tesla C1060 GPU.

A single Tesla C2050 (Fermi architecture) comprises 448 streaming cores, 3 GB GDDR5 device memory, and a peak single-precision floating-point performance of 1.03 TFlop/s. A Tesla C1060 provides 240 streaming cores, 4 GB GDDR3 device memory, and a single-precision peak performance of 933 GFlop/s. The interconnection capabilities of CORA include Gigabit Ethernet and high-performance QDR InfiniBand.

In total, the CORA GPU cluster comprises 64 CPU cores, 7952 GPU cores, and an aggregated main memory of 262 GB. The system offers approx. 64 TB of available disk space. CORA was put into service in February 2010 and was upgraded with Fermi GPUs in November 2010.
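The quoted totals can be cross-checked with a little arithmetic. The core counts come from the text above; the shader clock rates and flops-per-cycle factors are assumptions taken from Nvidia's public spec sheets, not from this document:

```python
# Sanity check of the CORA totals quoted above.
C2050_CORES = 448
C1060_CORES = 240
NODES = 7                      # the "6+1" computational nodes

gpu_cores_per_node = 2 * C2050_CORES + C1060_CORES   # two C2050s + one C1060
total_gpu_cores = NODES * gpu_cores_per_node
print(total_gpu_cores)         # 7952, matching the figure above

# Peak single precision, assuming (from Nvidia spec sheets, not this text):
# C2050 (Fermi): 1.15 GHz shader clock, 2 flops/cycle/core (FMA)
# C1060 (GT200): 1.296 GHz shader clock, 3 flops/cycle/core (MAD + MUL)
c2050_gflops = C2050_CORES * 1.15 * 2    # ~1030 GFlop/s ~ 1.03 TFlop/s
c1060_gflops = C1060_CORES * 1.296 * 3   # ~933 GFlop/s
print(round(c2050_gflops), round(c1060_gflops))
```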

Since September 2013, it has been used for teaching purposes.

NOVA: AMD Opteron™ 6172

The NOVA mini-cluster comprises 4 computational nodes, each with two 12-core AMD Opteron™ 6172 2.1 GHz processors and 48 GB of DDR3-1333 system memory. Networking interconnects include QDR InfiniBand and Gigabit Ethernet.

In total, the system offers 96 CPU cores with 192 GB of DDR3 memory and 8 TB of HDD storage. The system runs Red Hat Enterprise Linux 6 (64-bit) and was put into service in March 2011.

MERET: Intel® Nehalem-EX

MERET is an Intel® "Nehalem-EX" based system comprising four Intel® Xeon® X7560 8-core processors (2.26 GHz, 24 MB L3) and 64 GB of RAM. Each core provides 2-way SMT, bringing the total number of hardware threads supported by MERET to 64. The system runs Red Hat Enterprise Linux 5 (64-bit).

MERET was put into service in October 2010.

DAWN: SPARC T3-2

The SPARC T3-2 server is powered by the SPARC T3 processor in a two-socket, 3-rack-unit system. With two 1.65 GHz 16-core processors, each core capable of 8 simultaneous threads, it delivers up to 256 parallel threads. Memory size is 128 GB.

The system runs Solaris 10 SPARC and was put into service in May 2011.
