Contact

  • Please contact us by e-mail via one of our two support addresses, depending on your request.
  • If you wish to contact us by phone, please call the numbers listed below. Alternatively, you can e-mail us to arrange an appointment for a telephone call.
  • Our usual office hours are from 9 am to 5:30 pm. If we are not available, you can always reach us by e-mail.

HPC Support / Research Data Management

Dr. Stefan Harfst, +49 (0)441 798-3147, W3 1-139
Dr. Johannes Vosskuhl, +49 (0)441 798-3576, W3 1-139
Fynn Schwietzer, +49 (0)441 798-3287, W3 1-139

Address:

Carl von Ossietzky Universität Oldenburg
Fakultät V - Geschäftsstelle
Ammerländer Heerstr. 114-118
26129 Oldenburg


Newsletter April 2020

Newsletter of the university's Scientific Computing team

1. Very High Cluster Utilization: The HPC clusters CARL and EDDY are very busy, as many of you have probably noticed. On average, only 8% of the available resources were idle in the first four months of 2020 (and only 5% in April). Some users have reported very long queueing times, in particular for parallel jobs requesting multiple cores on a single node. In principle, the job scheduler should take care of fair sharing of the HPC resources, but in the end there are only so many nodes. The following guidelines might help to improve the situation for everyone:

    • If your jobs need more than the default memory and run for many days, limit the number of jobs you run at the same time, if possible. For job arrays, use the maximum-tasks limit.
    • Parallel (MPI) jobs are best submitted with the options --nodes and --ntasks-per-node to minimize the number of nodes used (see the example script after this list).
    • Many single-core jobs can be organized to run on a limited number of nodes using the parallel command, as explained in this HowTo (Section 4).
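
As an illustration of the first two points, here is a minimal sketch of a job script. All values (partition, node count, tasks per node, memory, time limit, modules, and the program name) are assumptions and need to be adapted to your own job:

    #!/bin/bash
    #SBATCH --partition=carl.p          # example partition (also mentioned in item 2 below)
    #SBATCH --nodes=2                   # request whole nodes ...
    #SBATCH --ntasks-per-node=24        # ... and fill them with tasks (check the core count of your node type)
    #SBATCH --time=23:55:00             # fits the short-job limit mentioned in item 2
    #SBATCH --mem-per-cpu=2G            # adjust to what your application really needs

    module load hpc-env/6.4             # example environment (see item 3 below)
    # module load ...                   # load the MPI toolchain your program was built with

    srun ./my_mpi_program               # hypothetical program name

For job arrays, the maximum-tasks limit is set with a percent sign in the array specification, e.g. --array=1-200%10 runs at most 10 of the 200 array tasks at the same time.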

If you have questions, please contact us.

2. Changes to the Job Scheduler: About two weeks ago, we changed some of the settings of the job scheduler that determine job priorities. Mainly, we increased the time period over which past usage is considered, so that less active users get higher priorities for their jobs. In addition, we reserved a total of 28 compute nodes in the partition carl.p. This should decrease the wait times for (parallel) jobs requesting a time limit of 23:55 h or less.
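
If you would like to see how these settings affect your own jobs, Slurm provides commands to inspect job priorities and fair-share usage. A minimal sketch (the exact output columns depend on the scheduler configuration):

    sprio -u $USER      # priority factors (age, fair-share, ...) of your pending jobs
    sshare -u $USER     # your current fair-share usage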

3. HPC Software Environments: Software on the cluster is organized in modules, but to make sure different modules are compatible with each other, we also use so-called HPC software environments. These environments can be activated by loading one of the hpc-env modules. Most software is installed in two environments, hpc-uniol-env and hpc-env/6.4, and we have now started installing software in hpc-env/8.3 (see the software news below). The new environment includes more recent compilers and toolchains. If you are looking for a certain software package, you can use module spider to find out which environment you need to load (see the example below). If you are missing software in one of the environments, please contact us.
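
A minimal sketch of that workflow; the package name GROMACS is only a placeholder, use the exact module name and version reported by module spider:

    module spider GROMACS       # shows in which environments the package is available
    module load hpc-env/8.3     # load the environment reported by module spider ...
    module load GROMACS         # ... then load the package module itself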

4. Dates: Please take note of the following upcoming events:

5. Software News: The following modules are now available in hpc-env/8.3:

The list above might not be complete; you can always search for software with the command “module spider <softwarename>”.
