Contact


  • Please contact us preferably by e-mail at one of the addresses given below, primarily via our support address.
  • If you would like to speak with us by phone, feel free to call us at the numbers listed below, or arrange an appointment by e-mail.
  • Our typical office hours are between 9:00 and 17:30. If we happen to be out, you can always reach us by e-mail.

Dr. Stefan Harfst, +49 (0)441 798-3147, JJW 2-214
Fynn Schwietzer, +49 (0)441 798-3287, JJW 2-217
HPC-Support

Address

Carl von Ossietzky Universität Oldenburg
Wissenschaftliches Rechnen
Johann-Justus-Weg 147a
26127 Oldenburg 

FLOW

FLOW (Facility for Large-scale COmputations in Wind Energy Research)

FLOW is one of the largest HPC clusters in Europe devoted solely to wind energy research and is operated by the IT Services of the University of Oldenburg. It is funded by the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (BMU) through a programme that aims at increasing the contribution of renewables to the total electricity production in Germany. The users of the cluster are

The cluster is used for challenging computational fluid dynamics (CFD) calculations on a wide range of spatiotemporal scales, from individual blades and wind turbines, to the simulation of wind farms, to mesoscale and weather models.

Overview of Hardware

  • 193 Compute Nodes (2288 CPU cores, 24.3 TFlop/s theoretical peak, 6.2 TB main memory; a worked check of these figures follows this list)
    • 122 "small-memory" nodes (diskless)
      • IBM System x iDataPlex dx360 M3 server (12 cores, 24 GB DDR3 RAM)
      • Intel Xeon Processor X5650 ("Westmere-EP", 6 cores, 2.66 GHz, 12 MB Cache, Max. Mem. Speed 1333 MHz, QPI 6.4 GT/s, TDP 95 W)
    • 64 "large-memory" nodes (diskless)
    • 7 "special" nodes (e.g., for data pre- and post-processing, and other purposes)
      • IBM System x3550 M2 server (8 cores, 32 GB DDR3 RAM, SAS 147GB HDD)
      • Intel Xeon Processor E5520 ("Nehalem-EP", 4 cores, 2.26 GHz, 8 MB Cache, Max. Mem. Speed 1066 MHz, QPI 5.86 GT/s, TDP 80 W)
  • Management and Login Nodes
    • 2 master nodes in an active/passive high-availability (HA) configuration
      • IBM System x3550 M3 server (8 cores, 24 GB DDR3 RAM, disks: 4 SAS 300GB HDDs, 10k RPM, 6Gbps, configured as RAID-10)
      • Intel Xeon Processor E5520 ("Westmere-EP", 4 cores, 2.4 GHz, 12 MB Cache, Max. Mem. Speed 1066 MHz, QPI 5.86 GT/s, TDP 80 W)
      • The master nodes are shared between FLOW and its sister cluster, HERO, and run all vital cluster services (node provisioning, DHCP, DNS, LDAP, NFS, Job Management System, etc.). They also provide monitoring functions for both clusters (with automated alerting). Monitoring includes hardware components (health states of all servers, temperature, power consumption, etc.) as well as basic cluster services (with automated restart if a service has died).
    • 2 login nodes for user access to the system, software development (programming environment), and job submission and control
      • IBM System x3550 M3 server (8 cores, 24 GB DDR3 RAM, disks: 2 SAS 146GB HDDs, 10k RPM, 6Gbps, configured as RAID-1)
      • CPU is the same as in master nodes
  • Internal networks
    • Node Interconnect: fully non-blocking QDR InfiniBand (Mellanox ConnectX-2 HCAs, Mellanox IS5200 216-Port Switch)
    • 10Gb Ethernet backbone network connecting the management and login nodes, the storage system, and the Gigabit Ethernet (see below) leaf switches
    • Gigabit Ethernet for vital cluster services (node provisioning, DHCP, DNS, LDAP, NFS, Job Management System, etc.)
    • Dedicated IPMI network for hardware monitoring and control, including access to VGA console (KVM functionality), allowing full remote management of the cluster
  • Storage System
    • Enterprise-class scalable NAS cluster (manufacturer: EMC Isilon), 180 TB raw capacity, 130 TB net capacity (for the redundancy level chosen), 18,075 / 32,279 IOPS (NFS / CIFS, SPEC SFS 2008), InfiniBand backend network, two Gigabit Ethernet and two 10Gb Ethernet frontend ports per storage node.
    • The storage system is shared between FLOW and its sister cluster, HERO. Disk space is allocated to the two clusters depending on how much of the storage system's hardware was paid for out of the FLOW and HERO project funds, respectively.
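
The aggregate figures quoted at the top of the hardware list (2288 CPU cores, 24.3 TFlop/s theoretical peak, 6.2 TB main memory) can be reproduced from the per-node data above. The short Python sketch below does exactly that, under two assumptions that are not stated on this page: the "large-memory" nodes use the same 12-core Xeon X5650 CPUs as the "small-memory" nodes and carry 48 GB of RAM each, and the peak counts 4 double-precision floating-point operations per core per cycle for both CPU generations.

    # Back-of-the-envelope check of the aggregate figures quoted above.
    # Per node type: (node count, cores per node, clock in GHz, RAM in GB).
    nodes = [
        (122, 12, 2.66, 24),  # "small-memory" dx360 M3, Xeon X5650
        (64,  12, 2.66, 48),  # "large-memory" nodes (CPU and RAM per node assumed)
        (7,    8, 2.26, 32),  # "special" x3550 M2, Xeon E5520
    ]
    FLOPS_PER_CYCLE = 4  # double precision via SSE, assumed for both CPU generations

    cores = sum(n * c for n, c, _, _ in nodes)
    mem_tb = sum(n * m for n, _, _, m in nodes) / 1000
    peak_tflops = sum(n * c * ghz * FLOPS_PER_CYCLE for n, c, ghz, _ in nodes) / 1000

    print(cores)                  # 2288 CPU cores
    print(round(mem_tb, 1))       # ~6.2 TB main memory
    print(round(peak_tflops, 1))  # ~24.3 TFlop/s theoretical peak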

System Software and Middleware

Selected applications running on FLOW

  • ANSYS CFX - a high-performance, general purpose fluid dynamics program
  • NekTar - a Navier-Stokes solver
  • OpenFOAM - a free, open-source CFD software package that incorporates a variety of methods and features to solve a wide range of problems, from chemical reactions, turbulence, and heat transfer to solid dynamics and electromagnetics
  • PALM - A PArallelized Large-Eddy Simulation Model for Atmospheric and Oceanic Flows
  • WRF - The Weather Research & Forecasting Model


(Last updated: 02.08.2024)