MPI_Reduce

We will learn how to use the *reduce* function in MPI as a more efficient alternative to gathering the data and computing the result manually.

So far in basic collective communication we have encountered broadcast, scatter and gather. Now we can move on to more advanced collective communication, covering the routines MPI_Reduce and MPI_Allreduce.

Before we go into these routines, let’s revise the concept of reduction, i.e., data reduction, in practice. Simply put, data reduction means reducing a set of numbers to a smaller set of numbers via some function. For example, say we have the list of numbers (1, 2, 3, 4, 5). Reducing this list with the sum function would produce:

sum(1, 2, 3, 4, 5) = 15

Similarly, if we used another function, say multiply, the multiplication reduction would yield

multiply(1, 2, 3, 4, 5) = 120.

Quite simply, this is what an MPI reduction function does.
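To make this concrete before bringing MPI into it, here is a minimal serial sketch (plain C, our own illustration rather than course material) that reduces the list (1, 2, 3, 4, 5) with both functions:

```c
#include <stdio.h>

int main(void)
{
    int numbers[5] = {1, 2, 3, 4, 5};
    int sum = 0;     /* identity element for the sum reduction     */
    int product = 1; /* identity element for the product reduction */

    /* Fold the list element by element with each reduction function. */
    for (int i = 0; i < 5; i++) {
        sum += numbers[i];
        product *= numbers[i];
    }

    printf("sum = %d, product = %d\n", sum, product); /* sum = 15, product = 120 */
    return 0;
}
```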

MPI_Reduce

MPI_Reduce is essentially what we did in the last exercise, with one piece of additional functionality. What the reduce routine does is similar to scatter/gather, but we also specify an MPI operation such as sum, multiplication or maximum; we will see later which operations are available and how to use them. The MPI library applies the chosen operation directly to the data, giving us the reduced result immediately. We do not have to call the gather routine and then program the summation manually; the library does it for us. In short, MPI_Reduce takes an array of input elements on each process and returns an array of output elements, containing the reduced result, to the root process.

Perhaps it would be easier to understand it through an example.

[Figure: MPI_Reduce. Image courtesy: Rolf Rabenseifner (HLRS)]

Let’s assume we’re trying to compute a sum, but the numbers are scattered across different processes. If our numbers were (1, 2, 3, 4, 5), we would call MPI_Reduce on this data and also specify the function with which the data should be reduced, e.g., the sum. The root process would then receive the sum as a result. To do this, we need the prototype of MPI_Reduce:

MPI_Reduce(void* send_data, void* recv_data, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm communicator);

The arguments of this routine are fairly similar to those we have seen so far.
The send_data parameter is an array of elements that each process wants to reduce. Next comes recv_data, which is only relevant on the process with rank root, as it will contain the reduced result. Then we specify count, i.e., the number of elements in send_data, and their datatype. This, however, is where MPI_Reduce differs: in the op parameter we also specify the operation we wish to apply to our data. Some of the most commonly used reduction operations in the MPI library are:

| Function    | MPI_Op   |
| ----------- | -------- |
| Maximum     | MPI_MAX  |
| Minimum     | MPI_MIN  |
| Sum         | MPI_SUM  |
| Product     | MPI_PROD |
| Logical AND | MPI_LAND |
| Logical OR  | MPI_LOR  |
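Putting the prototype and one of these operations together, here is a minimal sketch of a sum reduction. The detail that each rank contributes the single value rank + 1 (so that five processes reproduce sum(1, 2, 3, 4, 5) = 15) is our own illustrative choice, not part of the course exercise:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each process contributes one element: rank 0 holds 1, rank 1 holds 2, ... */
    int send_data = rank + 1;
    int recv_data = 0; /* filled in only on the root process */

    /* Reduce a single MPI_INT per process with MPI_SUM onto root rank 0. */
    MPI_Reduce(&send_data, &recv_data, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum over all ranks: %d\n", recv_data);

    MPI_Finalize();
    return 0;
}
```

Launched with five processes (e.g., mpirun -np 5 ./reduce_sum, assuming that executable name), the root should print 15. Swapping MPI_SUM for MPI_MAX in the same call would instead leave the largest contributed value, 5, in recv_data on the root.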