MPI 3.0 Neighborhood Collectives

Several scientific computing applications, especially those based on explicit numerical schemes, require halo-exchange MPI communications. These exchanges of so-called "ghost cells" take place between neighboring MPI ranks within a given topology (typically Cartesian). MPI 3.0 introduces neighborhood collective operations for this pattern, most notably MPI_Neighbor_allgather and MPI_Neighbor_alltoall (along with their vector and generalized variants, MPI_Neighbor_allgatherv, MPI_Neighbor_alltoallv, and MPI_Neighbor_alltoallw).

Traditional implementations built on point-to-point communication routines are tricky and require special handling of non-contiguous data. Using these collective functions instead makes the code significantly clearer and simpler, and potentially faster as well.

Remember that most MPI collective operations also have a non-blocking variant, such as MPI_Ineighbor_alltoall, which may allow you to overlap MPI communications with other computations in your code.