Industrial Optimal Design using Adjoint CFD

Course Feedback

A three-day training course on high-performance computing, entitled "Parallelization with MPI and OpenMP", was held at the Paderborn Center for Parallel Computing, Paderborn University, on Feb. 5-7, 2018.

MPI (Message Passing Interface) is a standard that describes how to perform communication on parallel computing architectures. It is used mainly on clusters and distributed-memory systems, although the latest MPI standard also supports shared-memory systems. OpenMP (Open Multi-Processing), on the other hand, is an API that targets only multi-core shared-memory systems. Both programming models were explained during the training, together with comprehensive hands-on sessions in which participants could choose between exercises written in C or Fortran. The content was aimed mostly at beginners, but advanced parallel programming topics were also tackled. One of these, algorithmic differentiation with MPI and OpenMP, is especially beneficial for IODA participants: the technique computes the derivatives of a computer program, e.g. a parallel code such as a computational fluid dynamics (CFD) solver. The lectures and exercises on MPI and OpenMP were given by Dr. Rolf Rabenseifner (High-Performance Computing Center Stuttgart, member of the MPI Forum); the lecture on algorithmic differentiation with parallel programming was given by Dr. Kshitij Kulshreshtha (Paderborn University).

Mladen Banovic
