The flexibility of Mellanox Virtual Protocol Interconnect (VPI) adapters enables any standard networking, clustering, storage, or management protocol to operate seamlessly over a converged network using a consolidated software stack. Each port can operate as either InfiniBand or Ethernet. VPI simplifies I/O system design and helps meet the challenges of a dynamic data center.

InfiniBand -- ConnectX-3 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. Efficient computing is achieved by offloading protocol processing and data-movement overhead, such as RDMA and send/receive semantics, from the CPU, allowing more processor power for the application. CORE-Direct™ brings the next level of performance improvement by offloading application overhead such as data broadcasting and gathering, as well as global synchronization communication routines.
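
As a rough illustration of this offloaded data path, the sketch below uses the standard RDMA verbs interface (libibverbs) to open a device and register a buffer; once a buffer is registered, the adapter can serve RDMA and send/receive operations against it without per-packet CPU involvement. This is a minimal sketch that assumes an RDMA-capable device is present and omits queue-pair setup and peer address exchange.

    /* Minimal RDMA verbs sketch (libibverbs): open a device and register a
     * buffer so the adapter can move data into or out of it directly.
     * Assumes an RDMA-capable device; build with: cc reg_mr.c -libverbs */
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int n = 0;
        struct ibv_device **devs = ibv_get_device_list(&n);
        if (!devs || n == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx) {
            fprintf(stderr, "cannot open %s\n", ibv_get_device_name(devs[0]));
            return 1;
        }

        /* Protection domain: scopes the resources the adapter may access. */
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register (pin and map) a buffer; the returned keys are what local
         * and remote work requests use to address it. */
        size_t len = 4096;
        void *buf = calloc(1, len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            fprintf(stderr, "memory registration failed\n");
            return 1;
        }
        printf("%s: registered %zu bytes (lkey=0x%x rkey=0x%x)\n",
               ibv_get_device_name(devs[0]), len, mr->lkey, mr->rkey);

        /* A full application would next create completion queues and queue
         * pairs, exchange addressing information with a peer, and post
         * send/receive or RDMA work requests; the transfers themselves are
         * executed by the adapter. */
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }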

RDMA over Converged Ethernet -- Using IBTA RoCE technology, ConnectX-3 delivers similarly low latency and high performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient, low-latency RDMA services over Layer 2 Ethernet.
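
Because RoCE exposes the same verbs interface as InfiniBand, applications typically need no code changes when moving between the two fabrics. The short sketch below (again assuming libibverbs and at least one RDMA device) queries each port and reports whether it is currently running as InfiniBand or Ethernet, which is also how a VPI port's configured mode appears to software.

    /* Sketch: report the link layer of each port on every RDMA device.
     * Under RoCE a port reports Ethernet; in InfiniBand mode, InfiniBand.
     * Assumes libibverbs; build with: cc port_mode.c -libverbs */
    #include <infiniband/verbs.h>
    #include <stdio.h>

    int main(void)
    {
        int n = 0;
        struct ibv_device **devs = ibv_get_device_list(&n);
        if (!devs)
            return 1;

        for (int i = 0; i < n; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr attr;
            if (ibv_query_device(ctx, &attr) == 0) {
                for (int p = 1; p <= attr.phys_port_cnt; p++) {
                    struct ibv_port_attr port;
                    if (ibv_query_port(ctx, p, &port))
                        continue;
                    printf("%s port %d: %s\n",
                           ibv_get_device_name(devs[i]), p,
                           port.link_layer == IBV_LINK_LAYER_ETHERNET
                               ? "Ethernet (RoCE)" : "InfiniBand");
                }
            }
            ibv_close_device(ctx);
        }
        ibv_free_device_list(devs);
        return 0;
    }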

Storage Accelerated -- A consolidated compute and storage network achieves significant cost/performance advantages over multi-fabric networks. Standard block and file access protocols can leverage InfiniBand RDMA for high-performance storage access. T11-compliant encapsulation (FCoIB or FCoE) with full hardware offloads simplifies the storage network while keeping existing Fibre Channel targets.

Key Features:

  • Virtual Protocol Interconnect (VPI)
  • CPU offload of transport operations
  • Fibre Channel encapsulation (FCoIB or FCoE)

Business Benefits:

  • I/O consolidation for networking, storage, and management applications
  • Scalable to tens of thousands of nodes
  • Power-efficient

Contacts

Eyal Gutkind

Marketing Manager
Region: United States/Canada
(408) 970-3400