Designed specifically for high performance computing (HPC), the Intel® Omni-Path Host Fabric Interface (Intel® OP HFI) uses an advanced connectionless design that delivers performance that scales with high node and core counts, making it the ideal choice for the most demanding application environments. Intel OP HFI supports 100 Gbps per port, which means each port can deliver up to 25 GB/s of bidirectional bandwidth. The same ASIC utilized in the Intel OP HFI will also be integrated into future Intel® Xeon® processors and used in third-party products.
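
For reference, the bandwidth figure follows directly from the link rate: 100 Gb/s equals 12.5 GB/s in each direction, so a port sending and receiving simultaneously moves up to 2 × 12.5 GB/s = 25 GB/s.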


Optimizations and Enhancements

Much of the improved HPC application performance and low end-to-end latency at scale comes from the following enhancements:

The application view of the fabric is derived heavily from, and is application-level software compatible with, the demonstrated scalability of Intel® Omni-Path Architecture (Intel® OPA), leveraging an enhanced next-generation version of the Performance Scaled Messaging (PSM) library. Major deployments by the US Department of Energy and others have proven this scalability advantage. PSM is designed specifically for the Message Passing Interface (MPI) and is very lightweight, using roughly one-tenth the user-space code of a verbs-based implementation. This leads to extremely high MPI and Partitioned Global Address Space (PGAS) message rates (short-message efficiency) compared to using InfiniBand* verbs.
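
As an illustration of what a short-message rate measurement looks like at the application level, the following minimal MPI microbenchmark streams small non-blocking messages between two ranks and reports a messages-per-second figure. This is a generic sketch, not an Intel sample, and the message size, window depth, and iteration count are arbitrary illustrative parameters; on an Intel OPA cluster the MPI library would typically select the PSM-based path automatically, but the program itself makes no PSM-specific calls.

/*
 * Illustrative short-message rate microbenchmark (generic MPI, not an
 * Intel sample). Rank 0 streams small non-blocking sends to rank 1 in
 * windows and reports messages per second.
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define MSG_SIZE   8        /* bytes per message (short-message regime) */
#define WINDOW     64       /* messages in flight per iteration         */
#define ITERATIONS 10000    /* number of windows                        */

int main(int argc, char **argv)
{
    int rank, size;
    char buf[MSG_SIZE];
    MPI_Request reqs[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    memset(buf, 0, sizeof(buf));

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            for (int w = 0; w < WINDOW; w++)
                MPI_Isend(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &reqs[w]);
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
        } else if (rank == 1) {
            for (int w = 0; w < WINDOW; w++)
                MPI_Irecv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &reqs[w]);
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
        }
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0) {
        double msgs = (double)ITERATIONS * WINDOW;
        printf("%.0f messages in %.3f s -> %.2f Mmsg/s\n",
               msgs, t1 - t0, msgs / (t1 - t0) / 1e6);
    }

    MPI_Finalize();
    return 0;
}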

Intel® Omni-Path Architecture (Intel® OPA) is based on a connectionless design: it does not establish and store connection address information between nodes, cores, or processes, whereas a traditional implementation maintains this state in the adapter's cache. As a result, the connectionless design delivers consistent latency regardless of cluster scale or the number of messaging partners. This approach offers greater potential to scale performance across a cluster with a large node or core count while maintaining low end-to-end latency as the application scales.
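
To make the scaling argument concrete, the short sketch below estimates how per-node connection state would grow with job size in a connection-oriented design, versus staying flat in a connectionless one. The 256-byte per-connection figure and 64 ranks per node are assumed placeholders for illustration, not Intel specifications.

/*
 * Back-of-the-envelope sketch (not Intel's implementation): compares
 * per-node connection-state memory for a connection-oriented transport
 * against a connectionless one as the job grows.
 */
#include <stdio.h>

#define STATE_PER_CONNECTION_BYTES 256   /* assumed per-peer state, hypothetical */

int main(void)
{
    int ranks_per_node = 64;             /* assumed ranks per node, hypothetical */
    int node_counts[] = { 16, 256, 4096 };

    printf("%8s %12s %28s %16s\n",
           "nodes", "peer ranks", "connection-oriented state", "connectionless");
    for (int i = 0; i < 3; i++) {
        long peers = (long)node_counts[i] * ranks_per_node - 1;
        long state = peers * STATE_PER_CONNECTION_BYTES;   /* grows with job size */
        printf("%8d %12ld %25.1f MB %16s\n",
               node_counts[i], peers, state / 1e6, "constant");
    }
    return 0;
}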

Benchmarks for Intel® Omni-Path Architecture


See complete speed, performance, and configuration specs.
