Parallel programming

The content on this page was translated automatically.

The course initially covers the established programming systems OpenMP and MPI in greater depth than in the Bachelor's program, focusing on advanced language constructs such as OpenMP tasks, MPI communicators, and hybrid MPI/OpenMP programming. We illustrate the use of these constructs with example applications, with particular attention to the design goals of performance and scalability. Some more complex synchronization problems are also discussed.

In the second part of the course, current parallel programming systems are presented and compared with the established ones. Typical approaches in these systems include Partitioned Global Address Space (PGAS) and task-based parallel programming. Individual systems, e.g. Chapel, TBB, HPX, and Charm++, will be introduced and tried out. We will also discuss cross-cutting topics such as design patterns, fault tolerance, and elasticity.

Previous knowledge of Parallel Processing 1 and 2 is an advantage, but not essential. The course is divided into a lecture and a practical part. The practical part involves developing programs with the programming systems covered; you will typically work in teams of two. The practical work, together with a final discussion, forms the basis for grading the course.

 

Course times:

  • Monday, 12:15 - 13:45, Room -1418

  • Friday, 10:15 - 11:45, Room 2307A

The first session takes place on Friday, 19.10.18.

Lecturers: Prof. Dr. Claudia Fohry and M.Sc. Jonas Posner

 

All further information about the course can be found in Moodle.