
Different types of communication in MPI

In this subsection we will learn how the types of communication in MPI are classified, and we will learn about the point-to-point communication paradigms.

There are two criteria by which we can divide the types of communication in MPI.

The first way to classify communication is by the number of processes involved. If only two processes take part in the communication, meaning one sender and one receiver, it is referred to as point-to-point communication. This is the simplest form of message passing, where one process sends a message to another. The other type of communication in this category is collective communication, in which more than two processes are involved. This covers communication between one process and many other processes, or even between many processes and several other processes, so there are different ways it can be executed. This is the first criterion for distinguishing the types of communication, i.e., the distinction by the number of processes involved.
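
As a small illustration of the collective case, here is a minimal sketch using MPI_Bcast, a standard MPI routine in which one root process sends the same data to every process in the communicator. The routine itself is standard MPI; the surrounding program and the value 42 are just assumptions for illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42;   /* only the root has the data initially */

    /* One-to-many collective: the root (rank 0) broadcasts to all processes */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Rank %d now has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}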

The second criterion, and perhaps the more complex one, divides communication into blocking and non-blocking types. A blocking routine returns only when the operation has completed. This means that if we send a message with a blocking call, we cannot proceed to the next steps until we know the message has been received.

Blocking communication. Image courtesy: Rolf Rabenseifner (HLRS)

Non-blocking communication is more complicated than its simpler blocking counterpart. In this case the routine returns immediately and allows the sub-program to perform other work. It differs from blocking communication in that, after sending something to the receiver, we can execute other tasks in between, and after some time we can check whether the receiver has actually confirmed that it received the message and everything is OK. Many real applications employ this type of communication.

Non-blocking communication. Image courtesy: Rolf Rabenseifner (HLRS)
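
To make this concrete, here is a minimal sketch of the non-blocking pattern using the standard routines MPI_Isend, MPI_Irecv and MPI_Wait, which we will cover in more detail later. It assumes the program is started with at least two processes; the value 123 and the "other work" placeholder are assumptions for illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, data = 0;
    MPI_Request request;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        data = 123;
        /* Initiate the send; the call returns immediately */
        MPI_Isend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
        /* ... other useful work can be done here while the message is in flight ... */
        MPI_Wait(&request, MPI_STATUS_IGNORE);   /* now the send has completed */
    } else if (rank == 1) {
        /* Initiate the receive; the call also returns immediately */
        MPI_Irecv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);
        /* ... other useful work ... */
        MPI_Wait(&request, MPI_STATUS_IGNORE);   /* now data is valid */
        printf("Rank 1 received %d\n", data);
    }

    MPI_Finalize();
    return 0;
}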

Point-to-Point Communication

As we already saw in the previous section, point-to-point communication is the simplest form of communication, as it involves only two processes. One process acts as the sender and the other as the receiver. Here, the source, i.e., the sender, sends a message via the communicator to the receiver. In order to do so, the environment has to know which process is the sender and which is the receiver, and this is taken care of by specifying the ranks of the processes.

Point-to-point communication. Image courtesy: Rolf Rabenseifner (HLRS)

Sending paradigms

In order to send a message with point-to-point communication, we use the function

MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm);

We will now see what arguments this routine actually needs.

  • buf is the pointer to the data, consisting of count elements, each of them described by datatype. So first of all, we have to specify the address of the data we would like to send; it is a pointer to the sender's data. The second argument is the number of elements we send. For example, if we send just one integer, the count will be 1; if we send an array of 100 integers, it will be 100, and so on. The third argument is the datatype: if we are sending an integer, we have to specify MPI_INT here, and so on.
  • dest is the rank of the destination process. In this argument we specify the rank of the receiver. So, for instance, in the previous example, this would be 5.
  • tag is an additional piece of integer information sent with the message. The tag is basically a number by which we identify the message. So, if we only send one message, we can simply use tag 0, or any other number we like.
  • The last argument is the communicator, and as we already discussed, we usually use MPI_COMM_WORLD.

These arguments are the most important pieces of information the MPI environment needs in order to know what data is sent and to whom.
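
Putting this together, a call sending a single integer to the process with rank 5 might look like the following sketch. It assumes MPI has been initialised and that at least six processes are running, so that rank 5 exists; the variable name and value are assumptions for illustration.

int value = 42;   /* assumed example data */

/* Send one integer to the process with rank 5, using tag 0 */
MPI_Send(&value, 1, MPI_INT, 5, 0, MPI_COMM_WORLD);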

Receiving paradigms

As we saw, the sender needs a function to send a message, and obviously the receiver has to call a receive function. This means that for the communication to work, we need two processes and two ranks: one will call the MPI_Send function and the other will call MPI_Recv to receive. To be able to receive, we use the function

MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status);

The arguments needed by this function are similar to those of the MPI_Send function.

  • buf, count and datatype describe the received data. In the first argument we specify the address of the buffer where we would like to receive the data, and similarly we specify the count and the datatype here.
  • source is the rank of the sending process. We have to give the rank of the sender; in the previous example we would specify 3, because the process with rank 3 is trying to send us a message.
  • Similar to MPI_Send, we also have a tag here. It is really important to make sure that this number matches the one given by the sender. So, if the sender specifies that the message has tag 0, the receiver also has to specify the same number here. Otherwise, the receive will never match the send and the call will block forever, so the program hangs in a deadlock.
  • The next argument is the communicator, and again we just use MPI_COMM_WORLD.
  • The last argument is the status, which is not so important for us at the moment, as it is something we will learn about in the next exercise. For now we will just use MPI_STATUS_IGNORE.

We will learn more about this status structure in the coming weeks.
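
For instance, a receive matching the earlier example, where the process with rank 3 sends us a single integer with tag 0, might look like this sketch (again assuming an initialised MPI program; the variable name is an assumption):

int value;

/* Receive one integer from the process with rank 3, with tag 0, ignoring the status */
MPI_Recv(&value, 1, MPI_INT, 3, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);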

So, let’s go through an example to understand the prerequisites for this communication to work and how it would actually look in code.

Example
Image courtesy: Rolf Rabenseifner (HLRS)

Here, the left side is the sender and the right side is the receiver. Suppose the sender would like to send a buffer array of n floats to some other process. For this, it calls the MPI_Send routine. As we already know, the first argument is the pointer to the data, i.e., the send buffer. Then it needs to specify the number of elements, in this case n, and the datatype, MPI_FLOAT. This is an MPI datatype defined by the environment, and we need to make sure it matches the actual type of the buffer, otherwise the communication will not work.

The receiver, in turn, has to call the receive function with the same datatype, after first defining an array where it would like to receive the data: the receive buffer. So we have to be careful, when we write these functions, that the datatypes on both sides match. The communicator, of course, has to be the same, because both processes are bound in the same program; we usually use the MPI_COMM_WORLD communicator. The next important part is that the tag has to match, and finally, the type of the message, i.e., the type of the data, has to match.
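
A complete, runnable version of this pattern might look like the sketch below. It assumes the program is started with at least two processes; the buffer size N, the filler values and the tag 0 are assumptions for illustration.

#include <mpi.h>
#include <stdio.h>

#define N 100   /* assumed buffer size for illustration */

int main(int argc, char *argv[])
{
    int rank;
    float buffer[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Sender: fill the buffer and send N floats to rank 1 with tag 0 */
        for (int i = 0; i < N; i++)
            buffer[i] = (float)i;
        MPI_Send(buffer, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Receiver: count, datatype, tag and communicator all match the send */
        MPI_Recv(buffer, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d floats, first element %.1f\n", N, buffer[0]);
    }

    MPI_Finalize();
    return 0;
}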
