MPI from the Ground Up:
From Operations to Implementations, Part I
Derek Schafer & Tony Skjellum,
University of Tennessee at Chattanooga
Abstract: To understand MPI, we start from fundamental, small programs that reflect idioms at work in large-scale, bulk-synchronous parallel programs. MPI offers the ability to move data between peer processes with collective, point-to-point, remote-memory, and I/O operations. In this seminar, we will start with small subsets of MPI, explain how they work, and then describe their implications for the design, implementation, and optimization of underlying MPI implementations. As such, we will touch on only small subsets of the standard.
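As a minimal sketch of the kind of small, bulk-synchronous idiom the seminar begins with, consider a program that exchanges a value with ring neighbors (point-to-point) and then agrees on a global sum (collective). The ring topology, file name, and integer payload here are our own illustrative assumptions, not taken from the seminar materials:

#include <mpi.h>
#include <stdio.h>

/* Illustrative sketch (not from the seminar materials): one
 * bulk-synchronous step = neighbor exchange, local compute, then a
 * collective reduction.
 * Build: mpicc bsp_sketch.c -o bsp_sketch
 * Run:   mpirun -np 4 ./bsp_sketch */
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank - 1 + size) % size;   /* periodic neighbors */
    int right = (rank + 1) % size;

    /* Point-to-point phase: send our rank to the right neighbor while
     * receiving the left neighbor's rank. MPI_Sendrecv avoids the
     * deadlock a naive send-then-receive ordering can cause. */
    int from_left = -1;
    MPI_Sendrecv(&rank, 1, MPI_INT, right, 0,
                 &from_left, 1, MPI_INT, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Collective phase: every rank combines its local value into a
     * global sum that all ranks receive. */
    int local = rank + from_left, global = 0;
    MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Global sum across %d ranks: %d\n", size, global);

    MPI_Finalize();
    return 0;
}

Even this small program touches the matching, ordering, and synchronization semantics whose implementation consequences the seminar examines.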
A complementary paper, "ExaMPI: A Modern Design and Implementation to Accelerate Message Passing Interface Innovation," is attached; we request that you read it in advance.
Future editions of this CUP-ECS seminar will build on this top-to-bottom view of operations, the syntax and semantics of the APIs, and the underlying rules and norms that pervade the MPI standard. Continued attention to the interactions of applications, runtimes, and networks will inform this and future lectures, including emergent heterogeneity (accelerators) and the complexity of managing networks, accelerators, multi-core CPUs, and application codes.