CARL
Contact
- Please contact us preferably by e-mail; the primary contact is our support address hpcsupport@uol.de.
- If you would like to talk to us on the phone, you can e-mail us to arrange an appointment for a telephone call.
- Our usual office hours are from 9 am to 5:30 pm. If we are not available, you can always reach us by e-mail.
CARL (named after Carl von Ossietzky) is a multi-purpose cluster funded by the Deutsche Forschungsgemeinschaft (DFG) and the Ministry of Science and Culture (MWK) of the State of Lower Saxony. It is designed to meet the needs of compute-intensive and data-driven research projects in the main areas of
- Quantum Chemistry and Quantum Dynamics,
- Theoretical Physics,
- The Neurosciences (including Hearing Research),
- Oceanic and Marine Research, and
- Biodiversity.
Like its sister cluster EDDY, CARL is operated by the IT Services of the University of Oldenburg. The system is used by more than 20 research groups from the Faculty of Mathematics and Science as well as by several research groups from the Department of Computing Science (School of Computing Science, Business Administration, Economics and Law).
Overview of Hardware
- 327 Compute Nodes (7,640 CPU cores, 77 TB of main memory (RAM), 271 TFlop/s theoretical peak; see the sketch at the end of this hardware overview for how these totals add up)
- 158 "standard" nodes
- LENOVO NeXtScale nx360 M5 HPC Node (Modell 5465-FT1)
- 2x Intel Xeon CPU E5-2650 v4 12C with 2.2GHz
- 256 GB main memory (RAM)
- 1 TB 7.2K 6Gbps HDD (used for local storage)
- 128 "low-memory" nodes
- LENOVO NeXtScale nx360 M5 HPC Node (Modell 5465-FT1)
- 2x Intel Xeon CPU E5-2650 v4 12C with 2.2GHz
- 128 GB main memory (RAM)
- 1 TB 7.2K 6Gbps HDD (used for local storage)
- 30 "high-memory" nodes
- LENOVO System x3650 M5 Node (Modell 8871-FT)
- 2x Intel Xeon CPU E5-2667 v4 8C with 3.2GHz
- 512 GB main memory (RAM)
- 2 "Pre- and postprocessing" nodes
- LENOVO System x3850 X6 Node (Modell 6241-FT1)
- 4x Intel Xeon CPU E7-8891 v4 10c with 2.8GHz
- 2048 GB main memory (RAM)
- 9 "GPU" nodes
- LENOVO NeXtScale nx360 M5 HPC Node (Modell 5465-FT1)
- 2x Intel Xeon CPU E5-2650 v4 12C with 2.2GHz
- 256 GB main memory (RAM)
- 1 TB 7.2K 6Gbps HDD (used for local storage)
- NVIDIA GPU
- Management and Login Nodes
  - 2 administration nodes in an active/passive high-availability (HA) configuration
    - LENOVO System x3550 M5 Node (Model 8869-FT1)
    - 2x Intel Xeon CPU E5-2650 v4 12C at 2.2 GHz
    - 256 GB main memory (RAM) @ 2400 MHz
    - 4x 1.2 TB 7.2K 12 Gbps SAS HDD (set up as a hardware RAID 10)
    - Connect-IB single-port card
    - The master nodes are shared between CARL and its sister cluster EDDY and run all vital cluster services (node provisioning, DHCP, DNS, LDAP, NFS, Job Management System, etc.). They also provide monitoring functions for both clusters (with automated alerting). Monitoring includes hardware components (health states of all servers, temperature, power consumption, etc.) as well as basic cluster services (with automated restart if a service has died).
  - 2 login nodes for user access to the system, software development (programming environment), and job submission and control
    - These nodes have the same specifications as the administration nodes
- Internal networks
  - InfiniBand network consisting of 2 spine and 11 leaf switches with an 8:1 blocking factor: each leaf switch has 32 downlinks to compute (MPC) nodes and, correspondingly, 4 uplinks to the spine switches (32:4 = 8:1). The maximum data transfer rate is 56.25 Gb/s (4x FDR).
  - A second, physically separate Gigabit Ethernet ("base network") for vital cluster services (node provisioning, DHCP, DNS, LDAP, NFS, Job Management System, etc.)
  - 10 Gb Ethernet backbone network connecting the management and login nodes, the storage system, and the Gigabit Ethernet (MPI and base network) leaf switches
  - IPMI network for hardware monitoring and control, including access to the VGA console (KVM functionality), allowing full remote management of the cluster
- Storage System
  - Enterprise-class scalable NAS cluster (manufacturer: EMC Isilon). This is where the home directories are stored. All data saved on the Isilon is backed up, and it is possible to work with snapshots. The data is accessible via a 10 GbE connection.
  - The Isilon system is the central storage of the IT Services, which is why it is also used for the HPC clusters. Disk space is allocated to the two clusters depending on how much of the storage hardware was paid for out of the FLOW and HERO project funds, respectively.
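The totals quoted at the top of this overview can be cross-checked from the per-node figures. The short Python sketch below adds them up: the node counts, cores, and memory are taken directly from the list above, while the peak-performance estimate additionally assumes 16 double-precision FLOPs per core and cycle (AVX2 with FMA) at the nominal clock rates, which is why it only approximates the official 271 TFlop/s figure.

```python
# Rough sanity check of the CARL hardware totals listed above.
# Assumption: 16 double-precision FLOPs per core and cycle (AVX2 + FMA);
# the official peak figure may be based on slightly different clock rates.

node_types = [
    # (count, cores per node, nominal GHz, GB of RAM per node)
    (158, 24, 2.2, 256),    # "standard" nodes, 2x E5-2650 v4 (12C each)
    (128, 24, 2.2, 128),    # "low-memory" nodes
    (30,  16, 3.2, 512),    # "high-memory" nodes, 2x E5-2667 v4 (8C each)
    (2,   40, 2.8, 2048),   # pre-/postprocessing nodes, 4x E7-8891 v4 (10C each)
    (9,   24, 2.2, 256),    # GPU nodes (CPU portion only)
]

FLOPS_PER_CYCLE = 16  # assumed: AVX2 fused multiply-add, double precision

nodes = sum(n for n, _, _, _ in node_types)
cores = sum(n * c for n, c, _, _ in node_types)
ram_tb = sum(n * gb for n, _, _, gb in node_types) / 1024
peak_tflops = sum(n * c * ghz * FLOPS_PER_CYCLE / 1000
                  for n, c, ghz, _ in node_types)

print(f"nodes:     {nodes}")              # 327
print(f"CPU cores: {cores}")              # 7640
print(f"RAM:       {ram_tb:.0f} TB")      # ~77 TB
print(f"CPU peak:  {peak_tflops:.0f} TFlop/s (approx.)")
```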
System Software and Middleware
- All cluster nodes are running Red Hat Enterprise Linux (RHEL)
- Cluster management: Bright Cluster Manager
- Commercial compilers, debuggers and profilers, and performance libraries: Intel Cluster Studio, PGI
- Workload management: SLURM
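Jobs are submitted to SLURM from the login nodes. As a minimal illustration (a sketch, not taken from the CARL user documentation), the following Python snippet wraps a single command into a batch job via sbatch; the partition name and the resource limits are placeholder assumptions and have to be adapted to the partitions actually configured on the cluster.

```python
import subprocess

# Minimal sketch: submit a one-task SLURM batch job from a login node.
# The partition name "carl.p" and the resource limits are placeholders
# (assumptions), not values taken from the cluster configuration.
cmd = [
    "sbatch",
    "--partition=carl.p",    # placeholder partition name
    "--ntasks=1",            # a single task
    "--mem=2G",              # 2 GB of memory
    "--time=00:10:00",       # 10 minutes of wall time
    "--wrap=hostname",       # command to run inside the job
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout.strip())  # e.g. "Submitted batch job <jobid>"
```

The same options can equally be placed in a batch script as #SBATCH directives and submitted with sbatch.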
Selected applications running on CARL
(Due to licensing, some applications are only accessible to specific users or research groups.)
- Quantum Chemistry Packages: Gaussian 09, MOLCAS, MOLPRO, VASP
- LEDA - a C++ class library for efficient data types and algorithms
- MATLAB, including the Parallel Computing Toolbox
- FVCOM - The Unstructured Grid Finite Volume Coastal Ocean Model