National High Performance Computing (NHR) Alliance Launches Scientific Conference Series

The first conference will take place on September 18-19 at the Zuse Institute Berlin. The scientific conference series, with annually changing focal topics, is intended to promote exchange among scientists on the use of HPC. This year's inaugural NHR conference will focus on atomistic simulation, life sciences, and agent-based simulation.

With top-class keynote speakers and further talks on this year's scientific topics, the conference will provide a basis for scientific exchange across disciplinary boundaries.

Keynote Speakers

  • Mohammed AlQuraishi, Columbia University New York
  • Rob Axtell, George Mason University College of Science
  • Helmut Grubmüller, MPI, Göttingen
  • Karissa Sanbonmatsu, Los Alamos National Laboratory

You are invited to present your research in a talk or poster and to initiate discussions with other scientists. NHR is made for science, so you are encouraged to provide input on how NHR can best meet your needs. Consulting and operations staff will participate in the program, and there will be plenty of opportunities for one-on-one meetings as well as panel discussions.

Since the end of 2021, JGU has been part of the cross-state consortium NHR South-West within the National High-Performance Computing (NHR) alliance. Aside from Mainz, its members include the University of Kaiserslautern-Landau (RPTU), Goethe University Frankfurt, and Saarland University. Scientists throughout Germany working in high-energy physics, condensed matter physics, and the life sciences can use MOGON NHR South-West for their research.

New High-Performance Computer inaugurated: MOGON NHR South-West

On March 13, the new MOGON NHR South-West was inaugurated by Clemens Hoch, Minister of Science and Health of the German state of Rhineland-Palatinate, and Prof. Dr. Müller-Stach, Vice President for Research and Early Career Academics at Johannes Gutenberg University Mainz (JGU). The NHR South-West consortium consists of Johannes Gutenberg University Mainz, the University of Kaiserslautern-Landau (RPTU), Goethe University Frankfurt, and Saarland University, and has been one of nine centers for National High-Performance Computing (NHR) in Germany since the end of 2021. Besides representatives of JGU, guests from the three partner universities were also present.

The new HPC system MOGON NHR South-West, located in Mainz, expands the computing capacity of the consortium and is available to research groups from all over Germany. For the setup of the new cluster, NHR South-West received 7.5 million euros from the joint federal and state funding program "National High-Performance Computing". With these financial resources, an efficient system could be built in Mainz. Carsten Allendörfer, technical director of the Data Center (ZDV), was responsible for the setup together with the ZDV's HPC group: "After MOGON I and MOGON II, we are happy to provide a cluster that scientists throughout Germany can use, with a focus on high-energy physics, condensed matter physics, and the life sciences."


Markus Tacke, technical director of the HPC group, adds: "The new MOGON NHR South-West consists of 590 computing nodes with 75,000 CPU cores and 186 TB of main memory in total. Each node has two AMD EPYC 7713 processors, with 64 cores per processor."

MOGON NHR South-West | HPC cluster specifications

590 compute nodes
75,000 CPU cores
186 TB RAM
AMD EPYC 7713 processors
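The quoted figures are mutually consistent, as a quick sanity check shows (a sketch; the 75,000-core figure in the press material is the rounded total):

```python
# Sanity check of the MOGON NHR South-West figures quoted above.
nodes = 590
sockets_per_node = 2   # two AMD EPYC 7713 processors per node
cores_per_socket = 64  # 64 cores per processor

total_cores = nodes * sockets_per_node * cores_per_socket
print(total_cores)  # 75520, quoted as "75,000 CPU cores"

ram_total_tb = 186
ram_per_node_gb = ram_total_tb * 1000 / nodes
print(round(ram_per_node_gb))  # roughly 315 GB of RAM per node on average
```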


Learn More

NHR South-West at JGU
Complete press release of the Ministry of Science and Health (German only)
Website NHR South-West 
Website HPC-Team of the ZDV 


Halfway through the cluster setup for National High-Performance Computing (NHR)

With the admission of NHR South-West to the National High-Performance Computing (NHR) alliance, preparations for the installation of the new cluster at Johannes Gutenberg University Mainz (JGU) also began. Over the past few months, the High-Performance Computing (HPC) group at the Data Center (ZDV) has been working intensively on conception and planning, has ordered the required hardware, and has had parts of the central server room converted for the cluster.

Preparation is everything

Construction initially focused on the future cooling of the cluster. The existing piping in the raised floor had to be extended so that the planned direct liquid cooling system can dissipate the waste heat from the computing nodes. Side coolers located between the server racks remove the heat from the remaining hardware via air circulation. The server racks and side coolers were delivered at the beginning of August and installed on site by our colleagues.


The power distribution units that supply the hardware in the server racks are also mounted and ready for use. Furthermore, the various networks must be prepared for the upcoming deployment of the cluster. With the help of the ZDV network group, the HPC staff laid the first foundations by installing the switches of the management network and the first parts of the HDR InfiniBand network. HDR InfiniBand, a high-speed interconnect standard, offers above all a very high transmission rate (up to 200 Gbit/s per link).


Further hardware deliveries are scheduled by the end of the year: Components for the Direct Liquid Cooling system and the computing nodes themselves. As soon as the cluster is ready for operation, it will be available to researchers throughout Germany via the NHR South-West. Main topics in Mainz are: High Energy Physics, Condensed Matter Physics and Life Science.

About the NHR South-West

NHR South-West is a cross-state consortium that includes JGU, the Technical University of Kaiserslautern, Goethe University Frankfurt, and Saarland University. The goal is to benefit mutually from each other's methodological expertise and from the computing capacity being developed.

Researchers from the four universities of NHR South-West can network more closely with other high-performance computing centers within the alliance and develop their technical and methodological competencies in high-performance computing in a targeted and coordinated manner. In this way, the advantages of high-performance computing can be made available to research and science in the long term.

In October 2021, the Joint Science Conference (GWK) of the federal and state governments decided to include JGU in the National High-Performance Computing alliance.


MOGON II supports BigBlueButton temporarily

To provide BigBlueButton (BBB), which is available to all schools and universities in Rhineland-Palatinate, the ZDV temporarily used up to 500 nodes from MOGON II. Schools closed due to the pandemic, which rely on video conferencing for distance learning, benefit especially from the additional resources. The computing power temporarily provided for BBB will be returned to scientific computing through future hardware purchases.

In cooperation with the Ministry of Science, Education and Culture (MWWK) and the Ministry of Education of Rhineland-Palatinate, the ZDV offers the open-source web conferencing system BigBlueButton to all universities/colleges and schools in Rhineland-Palatinate.




Support Chat

Dear user,

with our new support chat channel on Mattermost, we would like to offer an easier way to get quick answers to your questions. While the main objective of the channel is to offer more direct contact, we also hope to bring users from different disciplines into touch with each other.

Project proposals, installation requests, and major problems should be reported to our Ticket-Hotline.

If you are not quite sure whether your issue is a major problem, whether your case justifies an installation request, or if you simply have a quick question, try our new Support-Chat.

Your HPC-Team

Course: Parallelization with MPI and OpenMP, Spring 2020


Zentrum für Datenverarbeitung (ZDV) of Johannes Gutenberg University Mainz, Anselm-Franz-von-Bentzel-Weg 12, 55128 Mainz, Room N33 (signposted in the building)


Monday, March 30 to Thursday, April 2, 2020; 9:00 a.m. to 6:00 p.m. each day


The course covers the programming models MPI and OpenMP. In hands-on exercises (in C and Fortran), participants can directly try out the basic constructs of the Message Passing Interface (MPI) and the shared-memory directives of OpenMP. This course is organized by Johannes Gutenberg University Mainz in cooperation with the HLRS. About 70% of the course content is aimed at programming beginners and about 20% at advanced programmers.


The agenda can be found here.




Dr. Rolf Rabenseifner of the HLRS


Online registration here.


Members of German universities or other public research institutions: none.
Others: EUR 780.
(Includes food and drink during the coffee breaks. Fees must be paid in cash on the first day.)


Shell knowledge and C or Fortran

Social Events

For one evening we are planning a guided city tour (free of charge) and a joint dinner, with each participant covering their own costs. We look forward to a pleasant evening.

Local Organizers

Christian Meesters, phone 06131 - 39 - 26397, email: hpc-courses[at]uni-mainz.de (Zentrum für Datenverarbeitung, HPC group)


Please notify the organizers of cancellations as early as possible by email so that other participants can move up from the waiting list. No cancellation fees are charged.
Persons who do not show up and do not cancel will be barred from all of our courses for one year.


Each participant receives a copy of all course slides.
The MPI-1 part of the course is based on a course by the EPCC Education and Training Centre, Edinburgh Parallel Computing Centre.
Copies of the MPI-3.1 standard (hardcover, EUR 17) and of the OpenMP specification (about EUR 14) are available for purchase.

Further Courses

http://www.hlrs.de/training/course-list (external link)


Rolf Rabenseifner, phone 0711 685 65530, email: rabenseifner[at]hlrs.de
Christian Meesters, phone 06131 - 39 - 26397, email: meesters[at]uni-mainz.de (Zentrum für Datenverarbeitung)

Call for proposals for HPC compute resources 2019

The AHRP offers access to the HPC resources operated at Johannes Gutenberg University Mainz to scientists at German universities and research institutions located in Germany. Projects may apply for multi-million core-hour allocations. Up to 20% of the compute resources available on MOGON II will be allocated through this call.

Important Dates

Opening date: 05 December 2019
Application deadline: 21 January 2020, 18:00 CET (extended from 07 January 2020)
Review of proposals: until 16 February 2020
Announcement of decisions: 17 February 2020
Allocation period for awarded proposals: March 2020 until February 2021

Available resources

MOGON II offers 836 compute nodes, each with 20 cores (2× Intel Xeon E5-2630 v4), and 1,136 compute nodes, each with 32 cores (2× Intel Xeon Gold 6130), with memory between 64 GB and 1.5 TB per node, connected via 100 Gbps Omni-Path. Each node offers local SSD storage of 150 GB or 350 GB.
Additionally, 29 GPU nodes, each with 6 NVIDIA GTX 1080 Ti cards, are available.
A parallel file system with 2 PB of project space and 1 PB of scratch space is available.

Application process

Project sizes are:
1) < 1.2 million core-hours (Project Class M, 100 NE per month)
2) < 6.0 million core-hours (Project Class L, 500 NE per month)
3) > 6.0 million core-hours (Project Class XL, > 500 NE per month)
(1 NE = 1,000 core-hours)
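The class thresholds and the monthly NE rates line up over a twelve-month allocation period, as this quick sketch shows:

```python
# Relation between project classes, core-hours, and NE (1 NE = 1,000 core-hours).
CORE_HOURS_PER_NE = 1_000
MONTHS = 12  # a typical one-year allocation period

# Class M: up to 1.2 million core-hours per year
class_m_ne_per_month = 1_200_000 / CORE_HOURS_PER_NE / MONTHS
print(class_m_ne_per_month)  # 100.0 NE per month

# Class L: up to 6.0 million core-hours per year
class_l_ne_per_month = 6_000_000 / CORE_HOURS_PER_NE / MONTHS
print(class_l_ne_per_month)  # 500.0 NE per month
```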

Large projects (L/XL) should provide a well-defined work plan and ideally include scaling curves in the proposal. The required amount of project storage should be outlined as well. Additional information about I/O (bandwidth, IOPS, access patterns) is welcome.

If you have not used MOGON II yet, we recommend that you first apply for a test project. You will get quick access to the cluster and will be able to test your code and perform the scaling tests recommended for an application for larger projects.
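Scaling curves of the kind requested above are usually derived from wall-clock timings of the same problem on increasing node counts. A minimal sketch (the timing values are purely illustrative, not measurements from MOGON II):

```python
# Parallel speedup and efficiency from hypothetical timing data,
# as one might report in a scaling curve for an L/XL proposal.
# Wall-clock times in seconds for the same problem on 1, 2, 4, and 8 nodes.
timings = {1: 1000.0, 2: 520.0, 4: 275.0, 8: 150.0}

t_serial = timings[1]
for nodes, t in sorted(timings.items()):
    speedup = t_serial / t          # how much faster than the 1-node run
    efficiency = speedup / nodes    # fraction of ideal linear scaling
    print(f"{nodes:2d} nodes: speedup {speedup:5.2f}, efficiency {efficiency:.0%}")
```

Efficiency well below 100% at the target node count usually means the requested core-hour budget should be justified more carefully.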

Approved DFG proposals may be attached to your proposal.

Please submit your proposal for MOGON II via AHRP-Antrag. 

The steering committee will approve or reject projects according to the results of a technical review at the JGU and a scientific peer review.


Please acknowledge the resources and the support provided by JGU Mainz, e.g.:

The authors gratefully acknowledge the computing time granted on the supercomputer Mogon II at Johannes Gutenberg University Mainz (hpc.uni-mainz.de).

A final report should be submitted within three months after the end of a project. The final reports will be published on berichte.ahrp.info by the AHRP every two years.

The rules of the AHRP apply.


If you have any questions regarding the application you may reach us at hpc@uni-mainz.de.


Talk: Towards a unified theory of variant calling

Dear users,

we are pleased to welcome Johannes Köster (University Hospital Essen) as our guest on December 18. He will give a talk entitled

Towards a unified theory of variant calling

The abstract:

The field of variant calling has so far been split into small vs. structural and germline vs. somatic variants, with many specialized algorithms but no unified solution. We present a novel statistical model that handles all of the above scenarios in a unified way.

Moreover, the model reaches beyond such standard questions by allowing arbitrary calling scenarios to be described and assessed via Boolean logic formulas over allele frequency ranges in arbitrary numbers of samples. It thereby not only unifies across all variant size ranges but also integrates traditional post-hoc filtering steps into a single statistical assessment. In turn, this enables, for the first time, true Bayesian false discovery rate control for variant calling.

Johannes Köster will speak on December 18 at 1:30 p.m. in meeting room 3 of the ZDV.

Please send questions and suggestions to hpc-colloqium@uni-mainz.de.

Your HPC Group

New Courses 2020

Dear users,

the dates for 2020 have been set, and the first registrations are now possible. Due to a migration of our e-learning system, enrollment is for the time being only possible after logging in. In addition, registration for further dates will only be opened in November.

As an introduction to working with high-performance computers, we offer our HPC introductory course (also called "HPC-Intro"). Since this requires basic knowledge of the command line (Bash), we offer Bash introductions (also called "Bash-Intro") about one week before each HPC introduction.


The dates are as follows:

  • Still this year: a Bash-Intro on November 27 and 28 and the HPC introduction on December 3 and 4. (Each course lasts two days.)
  • Next year:
    • An introduction to programming in C++ from March 9 to March 13, 2020
    • A parallelization course on MPI and OpenMP, offered at our site by HLRS Stuttgart, from March 30 to April 2, 2020. Registration is not yet possible for technical reasons; we will announce as soon as it opens.
    • Another Bash-Intro on April 22 and 23, 2020 and an HPC introduction on April 29 and 30, 2020
    • A workshop on using the SeqAn sequence analysis library for bioinformaticians (C++ knowledge required) on May 6, 2020 (registration not yet possible).
    • Further Bash-Intros on August 26 and 27, 2020 and on November 25 and 26, 2020, as well as HPC introductions on September 2 and 3, 2020 and on December 2 and 3, 2020.

An overview and links to the registrations can be found here on our website.

Please note:

  • Due to the outdated e-learning system, the overview and registration are only available after logging in on the e-learning page. The ZDV is working hard on a solution. Once the new system is established, we will promptly update our page and open further registrations.
  • Next year the HPC group will offer only three pairs of Bash and HPC introductions. The reason is that we also offer further courses, including the two C++ courses, and would like to add even more. Should a long waiting list show that there is demand for additional courses, we will offer them.

Your HPC Group

NVIDIA DGX-1 available

Dear users,

we are happy to announce that we can offer access to two NVIDIA DGX-1 systems. NVIDIA promotes these as "the most powerful AI system in the world for the most complex challenges". The NVIDIA DGX-1 is an AI supercomputer in a box, built for superior performance in the fields of artificial intelligence, machine learning, and deep learning. Read more on the NVIDIA website.

The machines are equipped with 8 NVIDIA Tesla V100 GPUs (with 16 GB or 32 GB of second-generation High Bandwidth Memory per GPU), two Intel Xeon E5-2698 v4 2.2 GHz CPUs, and 512 GB of system memory, yielding a peak performance of 1 PFLOPS. The GPUs are interconnected via NVIDIA NVLink, and the network connection consists of 4× InfiniBand EDR (100 Gbps) and 2× 10 GbE.
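The 1 PFLOPS headline figure is the aggregate deep-learning (Tensor Core) peak of the eight GPUs; the 125 TFLOPS per-GPU value used below is NVIDIA's quoted Tensor Core peak for the V100 SXM2:

```python
# Where the DGX-1's "1 PFLOPS" peak performance comes from.
gpus = 8
tensor_tflops_per_v100 = 125  # NVIDIA's quoted Tensor Core peak per V100 (SXM2)

peak_tflops = gpus * tensor_tflops_per_v100
print(peak_tflops)  # 1000 TFLOPS = 1 PFLOPS
```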

If you are interested in using these machines, send us an email at hpc@uni-mainz.de.