# Essentials of an MPI shared memory window


The following list shows the essentials of an MPI shared memory window:

• The allocated shared memory is contiguous across process ranks,
i.e., the first byte of rank i starts right after the last byte of rank i-1.
• Processes can calculate the offsets of remote addresses with local information only.
• Remote accesses are possible through load/store operations,
i.e., without MPI RMA operations (MPI_Get/MPI_Put, …).

• Although each process in comm_sm accesses the same physical memory,
the virtual start address of the whole array may be different in each process!
–> Build linked lists only with offsets into a shared array, but not with binary pointer addresses!

Having created a contiguous array implies that the first byte of rank i comes directly after the last byte of rank i-1. This means that a given process can normally compute the addresses of its neighboring processes' elements directly by local address calculations, assuming that all processes have the same number of elements. Therefore, each process can access the data of all other processes. As a result, no RMA calls such as MPI_Get or MPI_Put are necessary: you simply use the normal load and store operations of your programming language.

But this MPI shared memory differs drastically in one point from OpenMP shared memory: both are physically shared memory, but in OpenMP the shared memory starts at the same place for everyone. If you look at the start address of the shared memory from the perspective of all OpenMP threads, it could be, e.g., the virtual address 70,000 in each of them. With MPI, however, we do not have operating-system threads but processes, and each process has its own translation from virtual addresses to physical addresses. This implies that for a given MPI process the start address of the shared memory might be at the virtual address 70,000, while for another MPI process it is at the virtual address 30,000.

Consider the case where you would like to create a linked list in this shared memory and write pointers into it, e.g., letting process 0 take an address with the ampersand operator in C. That address would be based on the start address 70,000; pointing to the second element would yield the address 70,008. If you now read this address in the other process, whose array starts at the virtual address 30,000, then the address 70,008 lies completely outside the memory range of the array, and dereferencing it may cause a segmentation fault.

So how do you fix this? In this case, the solution is relatively simple: you can do it as Fortran programmers did 40 years ago. If your list nodes are structures, you simply define the linked list as an array of these structures and refer to them by indices into this array. As a result, if you want to refer to the structure located at the address 70,008, you do not store the address value 70,008, but the index of that element in the array. Since array indices are completely independent of virtual addresses and therefore identical in every process, everything works fine.

• Window size == zero –> no base pointer is returned!

If a process does not define a positive window size, then this process does not get a base pointer back from the allocation call and therefore cannot use that pointer to access the window portions of the other processes.