
Messages and communications

So far we have learnt about rank and size; now we will learn how to use them to exchange information between processes.

Until now, we have introduced MPI and used some simple routines, such as those returning the rank and size, to distinguish between processes and to assign them numbers that we can recognise and use later. But so far we haven’t done anything useful with this knowledge, i.e., we haven’t sent any information between the processes. This is where we need to gain an understanding of messages in MPI.

When developing advanced applications, at some point we will need to exchange information between processes. Usually this information is an integer, some other value, or an array. This is where messages are used. Messages are packets of data moving between sub-programs. If we pack the information to be shared into a message, we can send it over the communication network so that another process can receive it. This is how data and information are shared between processes. Of course, there is some important information that we always need to specify in order for messages to be sent and received correctly.

[Figure: Communication network. Image courtesy of Rolf Rabenseifner (HLRS)]

As we can see in this example, we are trying to send a message from the process with rank 0 to the process with rank 2. For this to work, we have to specify some information.

  • Data size and type

The sender needs to specify what kind of data is being sent. For example, if we are sending an array of, let’s say, 100 numbers, we need to specify that the size equals 100. And, as you probably already guessed, we also need to state the type of the data: is it a character? Is it an integer? A double? And so on.

  • Pointer to sent or received data

For this data exchange we need two pointers. The sender points to the location in its own memory where the data to be sent resides, as if to say: the data I’m trying to send is here. The receiver, in turn, specifies the location in memory where it would like the data to be stored.

  • Sending process and receiving process, i.e., the ranks

The MPI environment needs to know who the sender is and who the receiver is. This is where the ranks come in. For our previous example, we would specify that rank 0 is the sender and rank 2 is the receiver.

  • Tag of the message

The next piece of information we need to specify is the tag of the message. A tag is simply a number, with any value we choose, by which the receiver can identify a message. For instance, if we send two messages, we can assign one the tag 0 and the other the tag 1. This helps the receiver identify and differentiate between messages. If there is only one message, we can usually just use tag 0.

  • The communicator, i.e., MPI_COMM_WORLD

The last argument we need to specify is the communicator in which we are sending the messages. In our case this is MPI_COMM_WORLD. We will get to know these functions better as we do more exercises and hands-on practice.

MPI Datatypes

The MPI environment defines its own basic data types. However, if you’re familiar with C, they’re really simple: you just prefix the C type name with MPI_ and write everything in upper case.

C Datatype   MPI Datatype
char         MPI_CHAR
int          MPI_INT
long         MPI_LONG
float        MPI_FLOAT
double       MPI_DOUBLE

So simply put, if you’re trying to send an integer, then the type is

MPI_INT

However, as we get more involved with MPI, we will see that there is also a way for the user to define their own derived data types. For instance, if we are using a struct in C, we can register that struct as a new MPI data type. This proves very useful because we can then send everything in one message, rather than sending portions of the struct in separate messages. We will delve deeper into derived data types in the coming weeks. For today’s section we’re only using simple data types.

This article is from the free online course Introduction to Parallel Programming.

Created by
FutureLearn - Learning For Life
