This content is taken from the Partnership for Advanced Computing in Europe (PRACE), The University of Edinburgh & SURFsara's online course, Supercomputing. Join the course to learn more.

Message-Passing Model

The Message-Passing Model is closely associated with the distributed-memory architecture of parallel computers.

Remember that a distributed-memory computer is effectively a collection of separate computers, each called a node, connected by a network. One node cannot directly read or write the memory of another node, so there is no concept of shared memory. Using the office analogy, each worker is in a separate office with their own personal whiteboard that no-one else can see. In this sense, all variables in the Message-Passing Model are private: there are no shared variables.

In this model, the only way to communicate information with another worker is to send data over the network. We say that workers are passing messages between each other, where the message contains the data that is to be transferred (e.g. the values of some variables). A very good analogy is making a phone call.

The fundamental points of message passing are:

  • the sender decides what data to communicate and sends it to a specific destination (i.e. you make a phone call to another office);

  • the data is only fully communicated after the destination worker decides to receive the data (i.e. the worker in the other office picks up the phone);

  • there are no time-outs: if a worker decides they need to receive data, they wait by the phone for it to ring; if it never rings, they wait forever!
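The send/receive handshake described above can be sketched in Python, using the multiprocessing module's pipes as a stand-in for the network between two nodes (a real message-passing library such as MPI would provide its own send and receive calls; the function names here are just for illustration):

```python
# A minimal send/receive sketch: two separate processes (no shared memory)
# communicate only by passing a message over a pipe ("the phone line").
from multiprocessing import Process, Pipe

def sender(conn):
    data = [1.0, 2.5, 3.7]   # the variables we decide to communicate
    conn.send(data)          # make the phone call to a specific destination
    conn.close()

def receiver(conn):
    data = conn.recv()       # wait by the phone: blocks until data arrives
    print("received:", data)
    conn.close()

if __name__ == "__main__":
    send_end, recv_end = Pipe()
    p_send = Process(target=sender, args=(send_end,))
    p_recv = Process(target=receiver, args=(recv_end,))
    p_send.start()
    p_recv.start()
    p_send.join()
    p_recv.join()
```

Note that `recv()` blocks: if the sender never calls `send()`, the receiver waits forever, exactly as in the phone-call analogy.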

The message-passing model requires participation at both ends: for it to be successful, the sender has to actively send the message and the receiver has to actively receive it. It is very easy to get this wrong and write a program where the sends and receives do not match up properly, resulting in someone waiting for a phone call that never arrives. This situation is called deadlock and typically results in your program grinding to a halt.

In this model, each worker is called a process rather than a thread, as it is in the shared-variables model, and each worker is given a number that uniquely identifies it.
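The idea of uniquely numbered processes can be sketched as follows. In a real message-passing library the number (often called a rank) is assigned for you when the program starts; here we pass it in by hand for illustration:

```python
# Every process runs the same code, but each is given a unique number
# ("rank") and can behave differently depending on it. In a real
# message-passing library the rank is assigned automatically.
from multiprocessing import Process, Queue

def worker(rank, num_procs, queue):
    # Each process identifies itself by its unique number.
    queue.put(f"process {rank} of {num_procs} reporting in")

if __name__ == "__main__":
    num_procs = 4
    queue = Queue()
    procs = [Process(target=worker, args=(r, num_procs, queue))
             for r in range(num_procs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    for _ in range(num_procs):
        print(queue.get())
```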

Things to consider

When parallelising a calculation in the message-passing model, the most important questions are:

  • how are the variables (e.g. the old and new roads) divided up among workers?

  • when do workers need to send messages to each other?

  • how do we minimise the number of messages that are sent?

Because there are no shared variables (i.e. no shared whiteboard), you do not usually have to consider how the work is divided up. Since workers can only see the data on their own whiteboards, the distribution of the work is normally determined automatically from the distribution of the data: you work on the variables you have in front of you on your whiteboard.
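The way the distribution of data determines the distribution of work can be sketched for the traffic-model road mentioned above: each process is handed one contiguous chunk of road cells and only ever updates the cells it owns. The chunking scheme here is just one simple possibility:

```python
# Sketch: dividing the road (a list of cells) among processes. Each
# process owns one contiguous chunk, and the work it does follows
# directly from the data on its own "whiteboard".
def distribute(road, num_procs):
    """Split the road into one contiguous chunk per process."""
    n = len(road)
    chunk = (n + num_procs - 1) // num_procs   # ceiling division
    return [road[r * chunk:(r + 1) * chunk] for r in range(num_procs)]

road = list(range(10))            # a road of 10 cells
for rank, local in enumerate(distribute(road, 4)):
    print(f"process {rank} owns cells {local}")
```

With 10 cells and 4 processes, the first three processes get 3 cells each and the last gets the single remaining cell; a real code would usually try to balance the chunks more evenly.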

To communicate a lot of data, we can send one big message or many small ones. Which do you think is more efficient, and why?

Share and discuss your opinion with your fellow learners!
