
How do you allocate shared memory?

In this step, we explain how the allocation of shared memory technically works. You will first split the given communicator into shared memory islands.
The first thing you have to do is to call MPI_Comm_split_type. With this call, you split your communicator into sub-communicators, one per shared memory node. The first input argument is the old communicator in which all the processes live, i.e., the red communicator in the figure, typically MPI_COMM_WORLD. The next input is the constant MPI_COMM_TYPE_SHARED, which requests that the communicator be split into shared memory islands. Then comes the key argument, which is typically zero; zero means that the ranks in each new communicator keep the same order as in the old communicator. Of course, the ranks always run from zero to size-1. As the info argument, you simply pass MPI_INFO_NULL, and as output you obtain the new shared memory communicator.
The shared memory communicators are just new communicators, but we know that all the processes in each of them have shared memory underneath. The next call is MPI_Win_allocate_shared. Each process must decide the size of its portion of the shared memory window, specified in bytes. In the example, the green local_window_count in every process, for example ten elements, is multiplied by the displacement unit; if the window elements are of type double precision, this displacement unit is 8 bytes. The displacement unit and an info argument (here again MPI_INFO_NULL) follow as further input arguments, and the last input is the new shared memory communicator. As output, the routine returns the base pointer, i.e., the address where the calling process's window portion starts.
Each process thus has its own pointer to its window portion, and the window handle is returned in win_sm. With this, we have learned that these green portions, with, for example, 10 elements each, are glued together into one long array with 40 elements for 4 processes.

How does it technically work? In this step, you first split the given communicator into its shared memory islands. Then the processes of each island together allocate a shared memory window.

For further self-study, you can find the figure of the video together with the text in the pdf handout for weeks 3-4 on page 10.

This article is from the free online

One-Sided Communication and the MPI Shared Memory Interface

Created by
FutureLearn - Learning For Life
