
Week 4 summary

Short summary of week four.

We hope that you have enjoyed the fourth, and final, week of Python in High Performance Computing!

This week, we have looked into parallel computing using MPI for Python. MPI communication makes it possible to distribute work across multiple CPU cores and to handle the data synchronisation needed while executing a parallel algorithm.
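As a reminder of what this looks like in practice, here is a minimal mpi4py sketch (not taken from the course material) that scatters an array from the root rank, lets every rank work on its own chunk, and then synchronises the partial results with a reduction. The chunk size `n_local` and the sum operation are illustrative choices.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n_local = 4  # illustrative chunk size per rank

# Root rank creates the full data set; the others start empty-handed
if rank == 0:
    data = np.arange(size * n_local, dtype=float)
else:
    data = None

# Distribute an equal chunk of the data to every rank
chunk = np.empty(n_local, dtype=float)
comm.Scatter(data, chunk, root=0)

# Each rank works on its own chunk independently...
partial = chunk.sum()

# ...and a reduction synchronises the partial results on root
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("total =", total)
```

Running it with, for example, `mpiexec -n 4 python scatter_reduce.py` launches four processes that each handle a quarter of the data.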

By now, you should know how to send and receive MPI messages, how to use collective communication, and how to create your own custom communicators. You should also be familiar with the key concepts of parallel computing and understand the execution and data model of MPI.
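If you want to check those skills against concrete code, the following sketch touches each of them in turn: point-to-point send and receive, a collective broadcast, and a custom communicator created with `Split`. It assumes at least two MPI processes; the tag, colour, and message contents are arbitrary examples.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Point-to-point: rank 0 sends a Python object to rank 1
if rank == 0:
    comm.send({'greeting': 'hello from rank 0'}, dest=1, tag=7)
elif rank == 1:
    message = comm.recv(source=0, tag=7)

# Collective communication: broadcast a value from rank 0 to all ranks
value = 42 if rank == 0 else None
value = comm.bcast(value, root=0)

# Custom communicator: split COMM_WORLD into even and odd ranks
color = rank % 2
subcomm = comm.Split(color=color, key=rank)
print(f"world rank {rank} -> subcomm rank {subcomm.Get_rank()}")
```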

To learn more about MPI or parallel programming in general, take a look at the other training courses offered by PRACE: http://www.training.prace-ri.eu/

© CC-BY-NC-SA 4.0 by CSC - IT Center for Science Ltd.