
Message-Passing Model

This article describes the message-passing model of parallelisation for distributed-memory parallel computers.
© EPCC at The University of Edinburgh
The Message-Passing Model is closely associated with the distributed-memory architecture of parallel computers.
Remember that a distributed-memory computer is effectively a collection of separate computers, each called a node, connected by a network. It is not possible for one node to directly read or write the memory of another node, so there is no concept of shared memory. Using the office analogy, each worker is in a separate office with their own personal whiteboard that no-one else can see. In this sense, all variables in the Message-Passing Model are private – there are no shared variables.
In this model, the only way to communicate information with another worker is to send data over the network. We say that workers are passing messages between each other, where the message contains the data that is to be transferred (e.g. the values of some variables). A very good analogy is making a phone call.
The fundamental points of message passing are:
  • the sender decides what data to communicate and sends it to a specific destination (i.e. you make a phone call to another office);
  • the data is only fully communicated after the destination worker decides to receive the data (i.e. the worker in the other office picks up the phone);
  • there are no time-outs: if a worker decides they need to receive data, they wait by the phone for it to ring; if it never rings, they wait forever!
The message-passing model requires participation at both ends: for it to be successful, the sender has to actively send the message and the receiver has to actively receive it. It is very easy to get this wrong and write a program where the sends and receives do not match up properly, resulting in someone waiting for a phone call that never arrives. This situation is called deadlock and typically results in your program grinding to a halt.
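The "waiting for a phone call that never arrives" situation can be sketched in code. In this illustration (not a real MPI program), Python's multiprocessing Pipe stands in for the network; no one ever sends on the connection, so the receiver's "phone" never rings. A true blocking receive would hang forever; a timeout is used here purely so the sketch terminates.

```python
# Sketch of a mismatched receive. Python's multiprocessing Pipe stands in
# for the network; a real distributed-memory code would use an MPI receive.
from multiprocessing import Pipe

def wait_for_call(timeout=0.5):
    sender_end, receiver_end = Pipe()
    # No process ever calls sender_end.send(...), so nothing will arrive.
    # A blocking recv() here would wait forever -- deadlock. We use poll()
    # with a timeout only so this illustration finishes.
    return receiver_end.poll(timeout)

print(wait_for_call())  # False: the message never arrived
```

With a plain `recv()` instead of `poll()`, this program would never terminate, which is exactly how a deadlocked message-passing program behaves.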
In this model, each worker is called a process rather than a thread as in the shared-variables model, and each worker is given a unique number (in MPI terminology, its rank) to identify it.
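A minimal matched send and receive might look like the following sketch, again using Python's multiprocessing module as a stand-in for a real message-passing library such as MPI. The two workers share no variables: rank 0 decides what to send, rank 1 blocks until the message arrives and then replies. The function names, the data values, and the two-process setup are all illustrative choices, not part of any standard API.

```python
from multiprocessing import Process, Pipe

def worker(rank, conn):
    # Each process is identified by a unique number (in MPI, its "rank").
    if rank == 0:
        # The sender decides what to send and to whom (makes the call).
        conn.send([1, 0, 1, 1])          # e.g. this worker's road cells
    else:
        # The receiver blocks until the message arrives (picks up the phone).
        cells = conn.recv()
        conn.send(sum(cells))            # reply with a local result

def exchange():
    end0, end1 = Pipe()
    p = Process(target=worker, args=(1, end1))
    p.start()
    worker(0, end0)       # rank 0 runs in the parent process
    reply = end0.recv()   # receive rank 1's reply
    p.join()
    return reply

if __name__ == "__main__":
    print(exchange())     # 3: rank 1 counted the occupied cells
```

Note that both sides participate actively: if the `conn.recv()` in rank 1 were missing, rank 0's message would go unanswered and the final `end0.recv()` would wait forever.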

Things to consider

When parallelising a calculation in the message-passing model, the most important questions are:
  • how are the variables (e.g. the old and new roads) divided up among workers?
  • when do workers need to send messages to each other?
  • how do we minimise the number of messages that are sent?
Because there are no shared variables (i.e. no shared whiteboard), you do not usually have to consider how the work is divided up. Since workers can only see the data on their own whiteboards, the distribution of the work is normally determined automatically from the distribution of the data: you work on the variables you have in front of you on your whiteboard.
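For instance, a block decomposition gives each process a contiguous chunk of the road cells, and the work then follows the data automatically: each process updates only the cells it owns. This is a sketch; the function name and the near-equal-chunk scheme are illustrative choices, not a prescribed method.

```python
def local_slice(n_cells, n_procs, rank):
    """Return the [start, stop) range of road cells owned by this rank.

    Block decomposition: contiguous chunks whose sizes differ by at most
    one cell, so the work is spread as evenly as possible.
    """
    base, extra = divmod(n_cells, n_procs)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# Ten cells over four processes: ranks 0 and 1 get 3 cells, ranks 2 and 3 get 2.
print([local_slice(10, 4, r) for r in range(4)])
# [(0, 3), (3, 6), (6, 8), (8, 10)]
```

Each process then works only on its own slice, and messages are needed only where slices meet (e.g. the cells at the boundary between two neighbouring chunks).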
To communicate a lot of data, we can send one big message or lots of small ones. Which do you think is more efficient, and why?
Share and discuss your opinion with your fellow learners!
This article is from the free online course Supercomputing on FutureLearn.