
How to allocate shared memory?

In this step we explain how the allocation of shared memory technically works. You will first split the given communicator into shared memory islands.
The first thing you have to do is to call MPI_Comm_split_type. With this call, you split your communicator into sub-communicators, one per shared memory node. The first input argument is the old communicator, the red communicator in the figure in which all the processes are, typically MPI_COMM_WORLD. The next input is the constant MPI_COMM_TYPE_SHARED, which indicates that the communicator will be split into shared memory islands. Then comes the key argument, which is typically zero; zero means that the ranks in each new communicator keep the same order as in the old communicator. Of course, the ranks always run from zero to size-1. As the info argument, you simply pass MPI_INFO_NULL, and the new shared memory communicator is returned as the last argument.
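As an illustration, a minimal C sketch of this call might look as follows (the variable name comm_sm is our own choice, not prescribed here):

    MPI_Comm comm_sm;   /* new sub-communicator: one shared memory island per node */

    /* Split the old communicator into shared memory islands.
       key = 0 keeps the rank order of the old communicator. */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                        0, MPI_INFO_NULL, &comm_sm);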
The shared memory communicators are just new communicators, but we know that all the processes within one of them have shared memory underneath. The next call is MPI_Win_allocate_shared. Each process must decide the size of its portion of the shared memory window, given in bytes. In the example, the green local_window_count of every process, for example ten elements, is multiplied by the displacement unit, i.e., the size of one element in bytes; if the window elements are of type double precision, this displacement unit is 8. The following input arguments are the displacement unit itself and an info argument (here again MPI_INFO_NULL), and the last input is the new shared memory communicator. As output, the routine returns a base pointer holding the address where the process's window portion starts.
Each process has its base pointer pointing to its own window portion, and the window handle is returned in win_sm. With this, we have learned that these green portions, with for example 10 elements each, are glued together into one long array of 40 elements for 4 processes.
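Putting both steps together, a small C example could look like the following sketch (the variable names and the choice of 10 doubles per process are illustrative assumptions, not code from the course):

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm comm_sm;
        MPI_Win  win_sm;
        double  *base_ptr;                 /* start of this process's window portion */
        int      local_window_count = 10;  /* e.g. 10 doubles per process */
        int      rank_sm, i;

        MPI_Init(&argc, &argv);

        /* Step 1: split MPI_COMM_WORLD into shared memory islands */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                            0, MPI_INFO_NULL, &comm_sm);
        MPI_Comm_rank(comm_sm, &rank_sm);

        /* Step 2: each process allocates local_window_count doubles;
           the size is in bytes, the displacement unit is sizeof(double) = 8 */
        MPI_Win_allocate_shared((MPI_Aint)local_window_count * sizeof(double),
                                sizeof(double), MPI_INFO_NULL,
                                comm_sm, &base_ptr, &win_sm);

        /* Each process fills its own portion through its base pointer */
        for (i = 0; i < local_window_count; i++)
            base_ptr[i] = 100.0 * rank_sm + i;

        MPI_Win_free(&win_sm);
        MPI_Comm_free(&comm_sm);
        MPI_Finalize();
        return 0;
    }

On a node with 4 such processes, the four 10-element portions are laid out one after another, so together they form the 40-element array described above.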

How does it technically work? In this step, you first split the given communicator into its shared memory islands. Then the processes of each island together allocate a shared memory window.

For further self-study, you can find the figure from the video together with the text in the PDF handout for weeks 3-4 on page 10.
