Hands-on: Static typing in a simple extension

In this exercise you can practice using static type declarations in Cython modules. The code for this exercise is located under cython/static-typing. Continue with the simple Cython module for subtracting …
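
As a refresher on the technique itself (a sketch, not the exercise's model solution), static type declarations in Cython might look like the following; the file name and function signature are illustrative assumptions:

    # subtract.pyx -- illustrative sketch, names are assumptions
    def subtract(int x, int y):
        # With statically typed arguments and a typed local variable,
        # Cython can compile this to plain C integer arithmetic
        # instead of generic Python object operations.
        cdef int result
        result = x - y
        return result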

Hands-on: Using C-functions

In this exercise you can practice using C-functions in Cython modules. The code for this exercise is located under cython/c-functions. Fibonacci numbers are a sequence of integers defined by the …
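
As a reminder of the mechanics (a sketch, not the exercise solution): a cdef function is a C-level function that other Cython code can call cheaply, while a small def wrapper keeps it usable from Python. The names below are assumptions:

    # fibonacci.pyx -- illustrative sketch, names are assumptions
    cdef int fib_c(int n):
        # Pure C function: cheap to call recursively, but not
        # directly importable from Python code.
        if n < 2:
            return n
        return fib_c(n - 1) + fib_c(n - 2)

    def fib(n):
        # Thin Python-callable wrapper around the C implementation.
        return fib_c(n)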

Hands-on: Collective operations

In this exercise we test different routines for collective communication. Source code for this exercise is located in mpi/collectives/. First, write a program where rank 0 sends an array containing …
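
One plausible starting point for the first task, under the assumption that sending an array from rank 0 to all other tasks means a broadcast (the array size and contents here are made up):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Rank 0 fills the array; Bcast copies it to every other rank.
    if rank == 0:
        data = np.arange(8, dtype=float)
    else:
        data = np.zeros(8, dtype=float)
    comm.Bcast(data, root=0)
    print(rank, data)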

Week 4 summary

We hope that you have enjoyed the fourth, and final, week of Python in High Performance Computing! This week, we have looked into parallel computing using MPI for Python. With …

Bonus hands-on: Parallel heat equation solver

If you would like to practice your parallel programming skills further, the final parallel programming hands-on exercise is parallelizing the heat equation solver with MPI. Source code …

Collective communication: many to one

The next step is to look into how to use collective communication to collect data from all tasks into a single task, i.e. how to move data from many to one. …
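
A minimal sketch of the many-to-one routines: gather collects a value from every task onto the root, and reduce combines the values on the way:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # gather: collect one value from every task onto the root task
    values = comm.gather(rank, root=0)   # a list on root, None elsewhere

    # reduce: combine the values (here: sum them) while collecting
    total = comm.reduce(rank, op=MPI.SUM, root=0)

    if rank == 0:
        print("gathered:", values)
        print("sum of ranks:", total)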

Collective communication: many to many

Collective communication routines in MPI also include routines for global communication between all the processes. Global collective communication is extremely costly in terms of performance, so if possible one should …
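
A minimal sketch of the many-to-many routines (the values are chosen only for illustration):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # allreduce: like reduce, but every task obtains the combined result
    total = comm.allreduce(rank, op=MPI.SUM)

    # alltoall: every task hands one item to every task (itself included)
    sendbuf = [rank * 10 + i for i in range(size)]
    recvbuf = comm.alltoall(sendbuf)
    print(rank, total, recvbuf)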

Communicators

In the MPI context, a communicator is a special object representing a group of processes that participate in communication. When an MPI routine is called, the communication will involve some or …
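
A minimal sketch of working with the default communicator, MPI.COMM_WORLD, which contains all the processes started by the MPI launcher:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD    # default communicator: all started processes
    size = comm.Get_size()   # how many processes participate
    rank = comm.Get_rank()   # unique id of this process, 0 .. size-1

    print("I am rank {} of {}".format(rank, size))

Launched with e.g. mpiexec -n 4 (the script name is up to you), all four processes run the same code, but each one sees its own rank.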

Collective communication: one to many

Collective communication transfers data between all the processes in a communicator. MPI includes collective communication routines not only for data movement, but also for collective computation and synchronisation. For example, …
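
A minimal sketch of the one-to-many routines, broadcast and scatter (the payloads are made up):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # bcast: the root sends an identical copy to every task
    params = comm.bcast({"dt": 0.1}, root=0)

    # scatter: the root hands each task its own piece of a list
    piece = comm.scatter(list(range(size)) if rank == 0 else None, root=0)
    print(rank, params, piece)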

Non-blocking communication

So far, we have been looking at communication routines that are blocking, i.e. the program waits for as long as the communication is taking place. Blocking routines will exit only …
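
A minimal sketch of the non-blocking pattern: start the communication, do something useful in the meantime, and only then wait for completion:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        req = comm.isend("ping", dest=1)  # returns immediately
        # ... computation could overlap with the transfer here ...
        req.wait()                        # block only once we must be done
    elif rank == 1:
        req = comm.irecv(source=0)
        msg = req.wait()                  # wait() returns the received object
        print(msg)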

Hands-on: Non-blocking communication

In this exercise we explore non-blocking communication. Source code for this exercise is located in mpi/non-blocking/. Go back to the Message chain exercise and implement it using non-blocking communication.
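
One possible shape for the non-blocking chain (a sketch, not the model solution; the array size is arbitrary):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Acyclic chain: MPI.PROC_NULL turns the ends into no-ops
    dst = rank + 1 if rank < size - 1 else MPI.PROC_NULL
    src = rank - 1 if rank > 0 else MPI.PROC_NULL

    data = np.full(10, rank, dtype=int)
    buffer = np.zeros(10, dtype=int)

    # Start both transfers at once, then wait for the pair to finish
    requests = [comm.Isend(data, dest=dst),
                comm.Irecv(buffer, source=src)]
    MPI.Request.Waitall(requests)
    print(rank, buffer)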

Hands-on: Message chain

In this exercise we explore a typical communication pattern, a one-dimensional acyclic chain. Source code for this exercise is located in mpi/message-chain/. Write a simple program where every MPI task sends …
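
One possible shape for such a program (a sketch, not the model solution): each task sends to rank + 1 and receives from rank - 1, with MPI.PROC_NULL closing the ends of the acyclic chain:

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    dst = rank + 1 if rank < size - 1 else MPI.PROC_NULL
    src = rank - 1 if rank > 0 else MPI.PROC_NULL

    data = np.full(10, rank, dtype=int)
    buffer = np.zeros(10, dtype=int)

    # Combined send-receive avoids the deadlock that a naive
    # Send-then-Recv ordering can cause in a chain
    comm.Sendrecv(data, dest=dst, recvbuf=buffer, source=src)
    print(rank, buffer)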

Hands-on: Message exchange

In this exercise we study sending and receiving data between two MPI processes. Source code for this exercise is located in mpi/message-exchange/. Communicating general Python objects: write a simple program …
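
A minimal sketch of exchanging general Python objects between two processes with the lower-case send and recv, which pickle the object behind the scenes (the message contents are made up):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Any picklable object can travel: dicts, lists, strings, ...
    if rank == 0:
        comm.send({"greeting": "hello from 0"}, dest=1)
        reply = comm.recv(source=1)
    elif rank == 1:
        msg = comm.recv(source=0)
        comm.send({"greeting": "hello from 1"}, dest=0)

Note the ordering: rank 0 sends first while rank 1 receives first, so neither process blocks forever waiting for the other.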

MPI communication

Since MPI processes are independent, in order to coordinate work, they need to communicate by explicitly sending and receiving messages. There are two types of communication in MPI: point-to-point communication …
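
A minimal sketch contrasting the two types (the payloads are illustrative):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Point-to-point: one explicit sender, one explicit receiver
    if rank == 0:
        comm.send("hello", dest=1)
    elif rank == 1:
        print(comm.recv(source=0))

    # Collective: every process in the communicator participates
    value = comm.bcast("same everywhere", root=0)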

Fast communication of large arrays

MPI for Python offers very convenient and flexible routines for sending and receiving general Python objects. Unfortunately, this flexibility comes at a cost in performance. In practice, what happens under …
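
A minimal sketch of the buffer-based, upper-case Send and Recv, which avoid the pickling overhead for NumPy arrays (the array size is arbitrary):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        data = np.arange(1_000_000, dtype=float)
        comm.Send(data, dest=1)        # no pickling: raw buffer goes out
    elif rank == 1:
        data = np.empty(1_000_000, dtype=float)  # receiver pre-allocates
        comm.Recv(data, source=0)

Unlike the lower-case routines, the receiver must allocate a buffer of the right size and dtype in advance, which is exactly what lets the data move without intermediate copies.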