Industrial Optimal Design using Adjoint CFD

Parallel Programming (PAR): MPI and OpenMP Training

University of Paderborn

On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives a full introduction to the basic and intermediate features of MPI, such as blocking and nonblocking point-to-point communication, collective communication, subcommunicators, virtual topologies, and derived datatypes. Modern methods such as one-sided communication and the MPI shared memory model are also taught.
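
For readers new to MPI, the following minimal C sketch (illustrative only, not part of the course material) shows the flavour of blocking point-to-point and collective communication:

/* Illustrative sketch: blocking point-to-point and collective MPI calls. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token = 0, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Blocking point-to-point: rank 0 sends a token to rank 1. */
    if (rank == 0 && size > 1) {
        token = 42;
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received token %d\n", token);
    }

    /* Collective communication: sum the ranks onto rank 0. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks = %d\n", sum);

    MPI_Finalize();
    return 0;
}

Such a program is typically compiled with an MPI compiler wrapper (e.g. mpicc) and launched with mpirun or mpiexec.
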
Additionally, this course teaches shared-memory OpenMP parallelization, a key concept on multi-core shared memory and ccNUMA platforms. A race-condition debugging tool is also presented. The course is based on OpenMP 3.1, but also includes new features of OpenMP 4.0 and 4.5, such as pinning of threads, vectorization, and taskloops.
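
As a taste of the shared-memory model, here is a small C sketch (again illustrative only, not course material) of an OpenMP loop; the reduction clause avoids the data race that would occur if all threads updated the shared sum without synchronization:

/* Illustrative sketch: an OpenMP parallel loop with a reduction. */
#include <stdio.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;

    /* Each thread accumulates a private partial sum; the partial sums
     * are combined at the end of the loop, avoiding a race on "sum". */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; ++i)
        sum += 1.0 / (i + 1);

    printf("harmonic sum with %d terms: %f\n", n, sum);
    return 0;
}

This compiles with any OpenMP-enabled compiler, for example gcc with the -fopenmp flag.
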
The course is rounded off with a talk on hybrid MPI+X programming of clusters of shared memory nodes, and it ends with Algorithmic Differentiation as an additional topic, especially for the participants of the Marie Curie ITN.
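
The hybrid idea can be sketched in a few lines of C (illustrative only): one MPI process per node, with several OpenMP threads inside each process, where MPI_Init_thread requests the required level of thread support:

/* Illustrative sketch: hybrid MPI+OpenMP "MPI_THREAD_FUNNELED" setup,
 * i.e. only the master thread of each process makes MPI calls.        */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        #pragma omp single
        printf("rank %d runs %d OpenMP threads\n",
               rank, omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
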

Hands-on sessions are included on all days. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants.

The focus is on the programming models MPI and OpenMP. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. This course is organized by PC2 and the Marie Curie ITN at Paderborn University, in cooperation with HLRS. (Content level: 70% for beginners, 30% advanced)
