Summer semester 2024

The content on this page was translated automatically.

All organizational matters for the courses are handled via the Moodle learning platform. If you would like to take part in one of the courses listed below, please register for the corresponding Moodle course.

Students learn about basic abstract data types in computer science, efficient data structures for implementing them, and efficient graph and optimization algorithms. They familiarize themselves with algorithmic design techniques (e.g. divide-and-conquer, branch-and-bound), deepen their skills in runtime analysis, and acquire further techniques for evaluating algorithms. They also expand their programming experience with Java, now using advanced language features such as generic programming.

In the accompanying exercises, participants apply what they have learned to the design and implementation of their own algorithms and data structures, among other things.
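As a taste of what such an exercise might look like, here is a minimal sketch of a classic abstract data type, a stack, implemented generically in Java over a singly linked list. The class and method names are illustrative, not taken from the course materials.

```java
// A generic stack ADT backed by a singly linked list (illustrative sketch).
import java.util.NoSuchElementException;

public class Stack<T> {
    // Nested records are implicitly static, so Node declares its own type parameter.
    private record Node<T>(T value, Node<T> next) {}

    private Node<T> top;  // head of the linked list
    private int size;

    public void push(T value) {          // O(1): prepend a new node
        top = new Node<>(value, top);
        size++;
    }

    public T pop() {                     // O(1): remove and return the head
        if (top == null) throw new NoSuchElementException("empty stack");
        T value = top.value();
        top = top.next();
        size--;
        return value;
    }

    public int size() { return size; }

    public static void main(String[] args) {
        Stack<String> s = new Stack<>();
        s.push("a");
        s.push("b");
        System.out.println(s.pop());   // b
        System.out.println(s.size());  // 1
    }
}
```

Thanks to the type parameter `T`, the same class works for stacks of strings, integers, or any other reference type, with the compiler ruling out type errors at push/pop boundaries.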
 

Lecturer
Prof. Dr. Claudia Fohry

 

Further information, including the lecture and exercise dates, can be found in the Moodle course and in the HIS entries for the lecture and the exercises.

If you have any questions, Prof. Dr. Claudia Fohry and M.Sc. Rüdiger Nather will be happy to answer them.

The course covers a selection of parallel algorithms for various problems and architecture classes. Techniques for the design and analysis of parallel algorithms are taught and practiced. First, two models for computers with shared memory are introduced and their advantages and disadvantages discussed: the Parallel Random Access Machine (PRAM) and the Binary Forking (BF) model. Then, basic design techniques are taught using simple problems (e.g. convex hull, maximum computation).
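To illustrate the kind of basic design technique meant here, the following is a minimal sketch of divide-and-conquer maximum computation using Java's fork/join framework. This mirrors the binary-forking style of parallel recursion rather than the PRAM model itself, and all names (class, cutoff constant) are hypothetical.

```java
// Divide-and-conquer parallel maximum with Java's fork/join framework (sketch).
// Assumes a non-empty input array.
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ParallelMax extends RecursiveTask<Integer> {
    private static final int CUTOFF = 1024;  // switch to sequential below this size
    private final int[] a;
    private final int lo, hi;                // half-open range [lo, hi)

    public ParallelMax(int[] a, int lo, int hi) {
        this.a = a; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Integer compute() {
        if (hi - lo <= CUTOFF) {             // base case: sequential scan
            int max = a[lo];
            for (int i = lo + 1; i < hi; i++) max = Math.max(max, a[i]);
            return max;
        }
        int mid = (lo + hi) >>> 1;           // split the range in half
        ParallelMax left = new ParallelMax(a, lo, mid);
        ParallelMax right = new ParallelMax(a, mid, hi);
        left.fork();                         // left half runs in parallel
        int r = right.compute();             // right half in the current thread
        return Math.max(left.join(), r);     // combine the two partial maxima
    }

    public static int max(int[] a) {
        return ForkJoinPool.commonPool().invoke(new ParallelMax(a, 0, a.length));
    }

    public static void main(String[] args) {
        int[] a = new int[100_000];
        for (int i = 0; i < a.length; i++) a[i] = i * 31 % 99_991;
        System.out.println(max(a));
    }
}
```

With enough processors, the recursion tree gives O(log n) span for the combine steps, in contrast to the O(n) time of the sequential scan; the cutoff keeps task-creation overhead in check.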

The second part concentrates on clusters and on algorithms that can be implemented using MPI or node-internal OpenMP. Depending on the prior knowledge and interests of the participants, the course will include not only analysis but also implementation. The algorithms covered range from regular algorithms for matrix computations to parallel optimization methods and parallel graph algorithms. Finally, algorithms for special scenarios may be addressed: fault-tolerant parallel algorithms and efficient algorithms for memory hierarchies.

The material is taught in lectures with integrated exercises and discussions. Active participation is expected. Grades are awarded on the basis of project work with a final examination discussion. The focus of the project is the development of efficient algorithms, which should also be analyzed with regard to their runtime and, if the participants have the relevant prior knowledge, implemented and evaluated experimentally. The course complements the courses "Introduction to Parallel Processing" (Bachelor) and "Parallel Programming" (Master), but can also be taken without prior knowledge of parallel programming.
 

Lecturer
Prof. Dr. Claudia Fohry
M.Sc. Rüdiger Nather

 

Further information, including the lecture dates, can be found in the Moodle course and in the HIS.

If you have any questions, Prof. Dr. Claudia Fohry and M.Sc. Rüdiger Nather will be happy to answer them.

In this course, students learn functional programming using Haskell as an example language. The language constructs covered range from basics such as functions and lists, data types and evaluation strategies to advanced aspects such as monads and parallelization. The constructs are explained and their use is discussed. In addition to Haskell, the course gives a brief look at other functional languages.

The course is held in the form of a lecture with integrated exercises. In the first few weeks, homework is also compulsory. Grades are awarded on the basis of project work, which is carried out in teams of two in the final weeks. The project work is concluded with a defense in which the developed programs are presented and further topics of the lecture are addressed.
 

Lecturer
Prof. Dr. Claudia Fohry
M.Sc. Rüdiger Nather

 

Further information, including the lecture dates, can be found in the Moodle course and in the HIS.

If you have any questions, Prof. Dr. Claudia Fohry and M.Sc. Rüdiger Nather will be happy to answer them.

The laboratory practical course "Building a miniature supercomputer" is a practical introduction to the world of supercomputing and is designed to give students an understanding of the concepts and technologies of high-performance computing (HPC).

At the beginning of the lab practical, students are taught the basics, such as how a supercomputer works, benchmarking, job scheduling, distributed file systems, resource management, user management, fault tolerance and elasticity. The aim is to provide students with the concepts and skills required to build a supercomputer.

In the main part of the lab practical, students work in groups and build their own miniature supercomputer using virtualization technologies. Students apply their acquired skills and knowledge in practice by evaluating the performance of their miniature supercomputer in various practical scenarios.

The results of the group work are presented in examination talks and form the basis for the assessment of the course.

 

Lecturer
Dr. Jonas Posner

 

Further information, including the course dates, can be found in the Moodle course and in the course catalog.

If you have any questions, Dr. Jonas Posner will be happy to answer them.

For more than half a century, High Performance Computing (HPC) systems, better known as supercomputers, have been driving groundbreaking advances in science, research and industry. This seminar explores the history and development of supercomputing and highlights key milestones in hardware and software that have significantly shaped the field. From the emergence of the first supercomputers, through the revolution brought about by parallel architectures, to today's exascale systems, we trace a path that has not only increased computing power exponentially but also permanently changed science, research and industry. By analyzing major supercomputers, pioneering programming systems such as MPI and OpenMP, and modern technologies such as GPUs and FPGAs, the seminar provides a comprehensive overview of the past, present and future of supercomputing.


Participants will each present a topic that they must research independently from the technical literature. The topics cover the different eras of supercomputing development: from vector processors, multiprocessor systems and microprocessors to clusters and accelerators. Students may suggest their own topics by arrangement.

 

Lecturer
Dr. Jonas Posner

 

Further information, including the course dates, can be found in the Moodle course and in the HIS for Bachelor and Master.

If you have any questions, Dr. Jonas Posner will be happy to answer them.