EDDY
Contact
- Please contact us preferably by e-mail at one of the addresses given below, primarily our support address hpcsupport@uol.de.
- If you would rather talk to us on the phone, feel free to call us at the numbers listed below, or arrange an appointment by e-mail.
- Our typical office hours are between 9:00 and 17:30. If we happen to be unavailable, you can always reach us by e-mail.
EDDY
EDDY is one of the largest HPC clusters in Europe devoted solely to wind energy research and is operated by the IT Services of the University of Oldenburg. It is funded by the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (BMU) through a program that aims at increasing the contribution of renewables to the total electricity production in Germany. The users of the cluster are
- the Complex Fluid and System Dynamics project group of the Fraunhofer IWES Institute.
- Energy meteorology (EnMet)
- Turbulence, wind energy and stochastics (TWIST)
- Wind energy systems (WeSys)
The cluster is used for challenging computational fluid dynamics (CFD) calculations on a wide range of spatial and temporal scales, from individual blades and wind turbines, to the simulation of entire wind farms, to mesoscale and weather models.
Overview of Hardware
- 244 Compute Nodes (5,856 CPU cores, 21 TB of main memory (RAM), 201 TFlop/s theoretical peak); a short consistency check of these figures follows the hardware overview
- 160 "Low-memory" compute nodes
- LENOVO NeXtScale nx360 M5 HPC Node (Modell 5465-FT1)
- 2x Intel Xeon CPU E5-2650 v4 12C with 2.2GHz
- 64 GB main memory (RAM) @ 2400MHz
- 81 "High-memory" compute nodes
- LENOVO NeXtScale nx360 M5 HPC Node (Modell 5465-FT1)
- 2x Intel Xeon CPU E5-2650 v4 12C with 2.2GHz
- 128 GB main memory (RAM) @ 2400MHz
- 3 "GPU" compute nodes
- LENOVO NeXtScale nx360 M5 HPC Node (Modell 5465-FT1)
- 2x Intel Xeon CPU E5-2650 v4 12C with 2.2GHz
- 56 GB main memory (RAM) @ 2400MHz
- 1TB 7.2k 6Gbps HDD (used for local storage)
- NVIDIA GPU
- 160 "Low-memory" compute nodes
- Management and Login Nodes
  - 2 administration nodes in an active/passive high-availability (HA) configuration
    - LENOVO System x3550 M5 Node (Model 8869-FT1)
    - 2x Intel Xeon CPU E5-2650 v4 12C @ 2.2 GHz
    - 256 GB main memory (RAM) @ 2400 MHz
    - 4x 1.2 TB 7.2k 12 Gbps SAS HDD (set up as a hardware RAID 10)
    - Connect-IB single-port card
    The master nodes are shared between EDDY and its sister cluster CARL and run all vital cluster services (node provisioning, DHCP, DNS, LDAP, NFS, Job Management System, etc.). They also provide monitoring functions for both clusters (with automated alerting). Monitoring covers hardware components (health states of all servers, temperature, power consumption, etc.) as well as basic cluster services (with automated restart if a service has died).
  - 2 login nodes for user access to the system, software development (programming environment), and job submission and control. These nodes have the same specifications as the administration nodes described above.
- Internal networks
  - InfiniBand network consisting of 9 spine and 15 leaf switches with a blocking factor of 1:1 (non-blocking); each leaf switch is therefore connected to 18 MPC nodes. The maximum data transfer rate is 56.25 Gb/s (4x FDR).
  - A second, physically separated Gigabit Ethernet network ("base network") for vital cluster services (node provisioning, DHCP, DNS, LDAP, NFS, Job Management System, etc.)
  - 10 Gb Ethernet backbone network connecting the management and login nodes, the storage system, and the Gigabit Ethernet (MPI and base network) leaf switches
  - IPMI network for hardware monitoring and control, including access to the VGA console (KVM functionality), allowing full remote management of the cluster
- Storage System
  - General Parallel File System (GPFS) with 1.392 PB of total storage space, of which about 926 TB are usable. Four declustered arrays are used to secure the availability of the data: the failure of a single hard drive is not noticeable, and even if two hard drives fail, the resulting "critical rebuild" takes only about 45 minutes. The high-memory and pre-/post-processing nodes have additional local storage space (up to 1 TB).
  - Enterprise-class scalable NAS cluster (manufacturer: EMC Isilon). This is where the home directories are stored. All data saved on the Isilon is backed up, and it is possible to work with snapshots. The data is accessible via a 10 GbE connection.
    The Isilon is the central storage system of the IT Services, which is why it is also used for HPC. Disk space is allocated to the two clusters depending on how much of the storage hardware was paid for out of the FLOW and HERO project funds, respectively.
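As a quick consistency check, the headline figures quoted at the top of the hardware overview follow directly from the node breakdown (this covers only the simple arithmetic; the 201 TFlop/s peak additionally depends on the per-core floating-point throughput of the E5-2650 v4):

$$ (160 + 81 + 3)\ \text{nodes} \times 2\ \text{CPUs} \times 12\ \text{cores} = 5{,}856\ \text{CPU cores} $$
$$ 160 \times 64\ \text{GB} + 81 \times 128\ \text{GB} + 3 \times 56\ \text{GB} = 20{,}776\ \text{GB} \approx 21\ \text{TB of RAM} $$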
SYSTEM SOFTWARE AND MIDDLEWARE
- All cluster nodes are running Red Hat Enterprise Linux (RHEL)
- Cluster management: Bright Cluster Manager
- Commercial compilers, debuggers and profilers, and performance libraries: Intel Cluster Studio, PGI
- Workload management: SLURM
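To illustrate how these pieces fit together (the Intel toolchain, SLURM, and MPI traffic running over the InfiniBand fabric), here is a minimal sketch of an MPI program and how it might be built and submitted. The compiler wrapper, task count, and especially the partition name are assumptions for a generic Intel MPI + SLURM setup, not EDDY-specific values; please check the local user documentation for the exact module and partition names.

```c
/*
 * Minimal MPI "hello" sketch. The build and launch commands below are
 * assumptions for an Intel MPI + SLURM environment, not verified
 * EDDY-specific commands:
 *
 *   mpiicc -O2 hello_mpi.c -o hello_mpi        # Intel MPI compiler wrapper
 *   sbatch --ntasks=48 --partition=<partition> --wrap "srun ./hello_mpi"
 *
 * Replace <partition> with an EDDY partition you are entitled to use.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0, hostlen = 0;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this task            */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of MPI tasks    */
    MPI_Get_processor_name(host, &hostlen);  /* node this task is running on */

    /* Each task reports where it runs; with 24 cores per node, 48 tasks
       would span two compute nodes and communicate over the FDR
       InfiniBand fabric described in the hardware overview. */
    printf("Hello from rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```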
SELECTED APPLICATIONS RUNNING ON EDDY
(Due to licensing, some applications are only accessible for specific users or research groups.)
- PALM (an advanced and modern meteorological model system for atmospheric and oceanic boundary-layer flows)
- OpenFOAM (a toolbox for the development of customized numerical solvers for computational fluid dynamics)
- MATLAB, including the Parallel Computing Toolbox
- WRF (a mesoscale numerical weather prediction system designed for atmospheric research and operational forecasting applications)
PICTURES
More pictures can be found here.