Dear HPC users,
In this month’s newsletter we are announcing an upcoming and important change for the storage systems of the HPC cluster. Please read the following carefully:
- Limit on the Number of Files: To ensure optimal performance of the central file systems, we will have to enforce a limit on the number of files each user can store in their directories. These limits affect the directories $HOME, $DATA, and $OFFSITE; they are documented in the HPC wiki and will be activated on May 11th. Users currently above a limit will be informed in a separate e-mail beforehand.
Once the file-number quota is active, users with too many files (more than the hard quota) in a given directory will no longer be able to create new files in that directory. In addition, a soft quota is in place which triggers a 30-day grace period that allows you to store additional files temporarily (the same principle as the soft and hard quotas for storage capacity, which you already know and which of course remain active as well).
If you have any questions regarding this change, or if you want to request higher quotas, please contact .
- Command lastquota Updated: The command lastquota now reflects the upcoming change in quotas. For most directories it now prints two lines: one for the used capacity and one for the number of files used. If you are above a soft limit, the remaining grace period is also shown.
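Independently of lastquota, you can estimate how many files a directory tree currently holds with a standard shell one-liner (a sketch; replace $HOME with $DATA or $OFFSITE as needed):

```shell
# Count regular files under a directory tree; useful to see how close
# you are to the file-number quota before it is enforced.
find "$HOME" -type f | wc -l
```

Note that directories and symbolic links are not counted here; drop the `-type f` filter if you want every entry.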
- Reducing File System Usage: If you need to reduce the number and size of files you are storing, consider creating compressed tar-files. The command
$ tar cf - project_xyz/ | zstd -T0 > project_xyz.tar.zst
combines all the files from the directory project_xyz/ into a single file project_xyz.tar.zst, using Zstandard for compression. More details about this can be found in the HPC wiki.
- $WORK is Nearly Full: The file system for $WORK was more than 90% full last week; only about 50 TB remained free (we are now back above 100 TB). All users are kindly asked to check whether they can move files that are not actively needed on the cluster to e.g. $DATA or $OFFSITE. See the HPC wiki for instructions.
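As a companion to the compression tip above: an archive created with the tar-and-zstd pipeline can be unpacked again with the reverse pipeline (a sketch, assuming zstd is available on the node where you extract):

```shell
# Decompress the Zstandard archive and restore the original
# directory tree project_xyz/ in the current working directory.
zstd -dc project_xyz.tar.zst | tar xf -
```

If you only want to inspect the contents without extracting, replace `tar xf -` with `tar tf -` to list the archived file names.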
- User Survey: The BI (of which Scientific Computing is a division) has started a user survey to evaluate our services. A PDF form is provided at https://uol.de/bi/nutzerumfragen (go to the bottom of the page to find the version for Scientific Computing). An online version is also in preparation. Your feedback is highly appreciated.
- Dates: Please take note of the following upcoming events:
- HPC Status Conference (Gauß Alliance): April 27th, 10:00 – 17:30 (online, registration needed)
- Meeting of the HPC user representatives: June 25th, 10:00 – 12:00 (probably online)
- Deadline for HLRN proposals: April 28th
- Software News: The following modules are now available on hpc-env/8.3:
- SNAP-HMM/20190603-GCC-8.3.0
- BioPerl/1.7.8-GCCcore-8.3.0
- dmipy/1.0.5-foss-2019b-Python-3.7.4
- Nilearn/0.7.1-foss-2019b-Python-3.7.4
- Exonerate/2.4.0-GCC-8.3.0
- RStudio/1.4.1106-foss-2019b-Java-11-R-4.0.2
- exciting/nitrogen-14-intel-2019b-Python-2.7.16
- lxml/4.5.2-GCCcore-8.3.0 (Python package working with Python/2.7.16 and Python/3.7.4)
- … and many more packages! So you might want to take a look at our software page if you need one or two dependency updates.
The list above might not be complete; you can always search for software with the command “module spider <softwarename>”.
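A typical session using the new modules might look as follows (a sketch assuming the Lmod-style module system implied by “module spider”; load the environment first, then the package):

```shell
# Search for a package, then load the environment and the module.
module spider BioPerl
module load hpc-env/8.3
module load BioPerl/1.7.8-GCCcore-8.3.0
```

Running `module spider <softwarename>` first is useful because it also tells you which environment module (here hpc-env/8.3) must be loaded beforehand.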
Best wishes and happy computing
Your HPC support team