The library provides an accelerated, universal, multifabric layer for fast interconnects via the OpenFabrics Interfaces (OFI), including these configurations:
- Transmission Control Protocol (TCP) sockets
- Shared memory
- Interconnects based on Remote Direct Memory Access (RDMA), including Ethernet and InfiniBand
The library accomplishes this by establishing connections dynamically, only when they are actually needed, which reduces the memory footprint. It also automatically selects the fastest transport available.
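The library's connection management is internal, but the on-demand idea can be illustrated with a small, purely hypothetical sketch in C (the `peer_conn` table and `lazy_send` function below are illustrative names, not part of any MPI or OFI API): a connection to a peer is opened only on the first message sent to it, so peers that never communicate never consume connection state.

```c
#include <stdio.h>

/* Hypothetical sketch: open a connection to a peer only on first use,
 * so unused peers never consume connection resources. */
#define MAX_PEERS 8

typedef struct {
    int connected;      /* 0 until the first message forces a connection */
    int messages_sent;
} peer_conn;

static peer_conn peers[MAX_PEERS];
static int open_connections = 0;

/* Establish the connection lazily, when a message is actually sent. */
int lazy_send(int peer, const char *msg) {
    if (peer < 0 || peer >= MAX_PEERS)
        return -1;
    if (!peers[peer].connected) {
        peers[peer].connected = 1;   /* connect on demand */
        open_connections++;
    }
    peers[peer].messages_sent++;
    (void)msg;                       /* payload handling elided */
    return 0;
}

/* Number of connections actually opened so far. */
int connection_count(void) {
    return open_connections;
}
```

Sending several messages to two peers opens exactly two connections, however many messages flow; the remaining peer slots cost nothing.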
With this support, you can:
- Develop MPI code that is independent of the fabric, knowing it will run efficiently on whatever network you choose at run time.
- Use a two-phase communication buffer-enlargement capability to allocate only the memory space actually required.
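The buffer-enlargement mechanism itself is internal to the library, but the general two-phase pattern it alludes to (first query the exact size required, then allocate only that much) is a standard C idiom. As a generic illustration, `snprintf` called with a `NULL` buffer reports the length a formatted message needs before any memory is committed:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Two-phase allocation: phase 1 queries the exact size needed,
 * phase 2 allocates only that much and fills it. */
char *format_exact(int rank, int size) {
    /* Phase 1: snprintf with a NULL buffer and size 0 returns the
     * number of characters the output would require. */
    int needed = snprintf(NULL, 0, "rank %d of %d", rank, size);
    if (needed < 0)
        return NULL;

    /* Phase 2: allocate exactly needed + 1 bytes (for the NUL)
     * and format into the right-sized buffer. */
    char *buf = malloc((size_t)needed + 1);
    if (buf == NULL)
        return NULL;
    snprintf(buf, (size_t)needed + 1, "rank %d of %d", rank, size);
    return buf;
}
```

The caller gets a buffer that is exactly as large as the message, with no over-allocation; the caller frees it when done.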
Application Binary Interface Compatibility
An application binary interface (ABI) is the low-level interface between two program modules. It determines how functions are called and specifies the size, layout, and alignment of data types. With ABI compatibility, applications conform to the same set of runtime naming conventions.
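To make "size, layout, and alignment" concrete, the sketch below (an illustrative struct, not one drawn from any MPI header) uses `sizeof` and `offsetof` to expose the in-memory layout that an ABI pins down. Two modules can exchange such a struct only if they agree byte for byte on where each field lives:

```c
#include <stddef.h>
#include <stdint.h>

/* The ABI fixes how this struct is laid out in memory: field order,
 * padding, and total size must match in every module that passes it
 * across a binary boundary. */
typedef struct {
    int32_t tag;      /* 4 bytes */
    int64_t payload;  /* 8-byte alignment typically forces padding after tag */
} message_t;

/* Total size of the struct, including any padding the ABI inserts. */
size_t message_size(void) {
    return sizeof(message_t);
}

/* Byte offset of the payload field within the struct. */
size_t payload_offset(void) {
    return offsetof(message_t, payload);
}
```

If a caller and a library were built against ABIs that disagree on this layout, the caller would read `payload` from the wrong offset; ABI compatibility guarantees that both sides compute the same answers.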
Intel MPI Library offers ABI compatibility with existing MPI-1.x and MPI-2.x applications. Even if you are not ready to move to the MPI-3.1 standard, you can take advantage of the library's performance improvements by using its runtimes, without recompiling your application.