GPU-Integrated Communication
Jim Dinan, NVIDIA
Abstract: Communication can be a significant source of
overhead for HPC applications. One of the most common techniques for reducing
these overheads is to overlap communication with computation; however, this
technique has been challenging to apply in the context of GPU accelerated
workloads. In this presentation, I will cover several technologies and
techniques we have developed that enable users to reduce exposed communication
overheads and improve GPU utilization. I will discuss how these can be
applied in the context of the Message Passing Interface (MPI) and review active
work in progress to extend the MPI standard with support for these new
communication techniques.