Parallel programming


The course first covers the established programming systems OpenMP and MPI in greater depth than in the bachelor's program. Emphasis is placed on advanced language constructs such as OpenMP tasks, MPI communicators, and hybrid MPI/OpenMP programming. The use of these constructs is illustrated with example applications, with a focus on the design goals of performance and scalability. In addition, some more complex synchronization problems are discussed.

In the second part of the course, current parallel programming systems are presented and compared with the established ones. Typical approaches in these systems are Partitioned Global Address Space (PGAS) and task-based parallel programming (TaPP). Selected systems are studied and tried out, e.g. Chapel, TBB, HPX, and Charm++. In addition, cross-cutting topics such as design patterns, fault tolerance, and elasticity are discussed.

Prior knowledge of Parallel Processing 1 and 2 is an advantage for attending the course, but not mandatory. The course consists of a lecture and a practical part. In the practical part, you develop programs with the programming systems covered, typically working in teams of two. The practical part, together with a final discussion, forms the basis for the course assessment.


Class times:

  • Monday, 12:15 - 13:45, Room -1418

  • Friday, 10:15 - 11:45, Room 2307A

The first class will be held on Friday, October 19, 2018.

Lecturers: Prof. Dr. Claudia Fohry and M.Sc. Jonas Posner


All further information about the course can be found in Moodle.