OpenMP is an application programming interface designed to facilitate parallelism. It is the model of choice for shared-memory programming, enabling serial programs to be parallelised using compiler directives. Since its introduction in 1997, OpenMP has tracked the evolution of hardware used in high-performance computing, including the increasing use of accelerators such as GPUs.
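For illustration only (this example is not part of the course material), the sketch below shows how a single compiler directive parallelises a serial loop; it assumes a C compiler with OpenMP support, e.g. `gcc -fopenmp`.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    /* One directive turns the serial loop into a parallel one;
       the reduction clause combines each thread's partial sum. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        sum += 1.0 / (i + 1);
    }

    printf("Harmonic sum computed with up to %d threads: %f\n",
           omp_get_max_threads(), sum);
    return 0;
}
```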
In this course, students will learn a wide range of OpenMP topics, starting from the basics and moving on to advanced material.
Day 1 will cover the "OpenMP Common Core", while Day 2 will focus on how to get the best performance out of OpenMP by exploring the implications of different OpenMP parallelisation strategies. This includes tasking as well as data and thread locality on NUMA architectures. Each day will include a hands-on session and Q&A.
The material presented in this course, in particular the knowledge of tasking, is a prerequisite for the "OpenMP offloading" course.
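Since tasking is singled out as the key prerequisite, here is a minimal sketch of OpenMP task dependencies, one of the Day 1 and Day 2 topics; the variable names and the trivial computation are purely illustrative and not drawn from the course.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    int x = 0, y = 0;

    #pragma omp parallel
    #pragma omp single
    {
        /* Producer task: writes x. */
        #pragma omp task depend(out: x)
        x = 42;

        /* Consumer task: scheduled only after the producer completes. */
        #pragma omp task depend(in: x) depend(out: y)
        y = x + 1;

        /* Wait for both tasks before reading the results. */
        #pragma omp taskwait
        printf("x = %d, y = %d\n", x, y);
    }
    return 0;
}
```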
Agenda
| Time | 02.11. | 03.11. |
| --- | --- | --- |
| Session 1, 10:00-12:00 CET (15 min. break) | Welcome, Overview, Parallel Region, Worksharing, Scoping, Tasks & Compilers | Tasking introduction and motivation, Task loop, Dependencies, Cut-off |
| Lunch break, 12:00-13:00 CET | | |
| Session 2, 13:00-16:00 CET (15 min. break) | Dependencies, Hands-on and Q&A | NUMA, Task Affinity, Hands-on and Q&A |