NVIDIA QUANTUM-2 INFINIBAND PLATFORM

Extreme performance for cloud-native supercomputing at any scale

InfiniBand Networking Solutions

Complex workloads demand ultra-fast processing of high-resolution simulations, extreme-size datasets, and highly parallelized algorithms. As these computing requirements continue to grow, NVIDIA InfiniBand, the world’s only fully offloadable In-Network Computing platform, provides the dramatic leap in performance needed for high-performance computing (HPC), AI, and hyperscale cloud infrastructures at lower cost and complexity.

InfiniBand Adapters

InfiniBand host channel adapters (HCAs) provide ultra-low latency, extreme throughput, and innovative NVIDIA In-Network Computing engines to deliver the acceleration, scalability, and feature-rich technology needed for today's modern workloads.

Data Processing Units (DPUs)

The NVIDIA® BlueField® DPU combines powerful computing, high-speed networking, and extensive programmability to deliver software-defined, hardware-accelerated solutions for the most demanding workloads. From accelerated AI computing to cloud-native supercomputing, BlueField redefines what’s possible.

InfiniBand Switches

InfiniBand switch systems deliver the highest performance and port density available. Innovative capabilities such as NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ and advanced management features such as self-healing network capabilities, quality of service, enhanced virtual lane mapping, and NVIDIA In-Network Computing acceleration engines provide a performance boost for industrial, AI, and scientific applications.

Routers and Gateway Systems

InfiniBand systems provide the highest scalability and subnet isolation using InfiniBand routers and InfiniBand-to-Ethernet gateway systems. The gateways provide a scalable and efficient way to connect InfiniBand data centers to Ethernet infrastructures.

LinkX InfiniBand Cables and Transceivers

LinkX® cables and transceivers are designed to maximize the performance of HPC networks, which require high-bandwidth, low-latency, and highly reliable connections between InfiniBand elements.

InfiniBand Enhanced Capabilities

In-Network Computing

NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) offloads collective communication operations from the compute nodes to the switch network. This innovative approach decreases the amount of data traversing the network, dramatically reduces the time of Message Passing Interface (MPI) operations, and increases data center efficiency.
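
Because SHARP runs inside the switch fabric, the offload is transparent to application code: a standard collective such as the all-reduce below is the kind of MPI operation SHARP can aggregate in the network when the cluster’s MPI stack (for example, HPC-X) has SHARP support enabled. The program is a minimal sketch for illustration; it contains no SHARP-specific calls, since enabling the offload is a deployment-level configuration.

    /* Minimal MPI all-reduce: the class of collective that In-Network
     * Computing engines such as SHARP can offload to the switches.
     * Build with an MPI compiler wrapper, e.g. mpicc allreduce.c -o allreduce */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank = 0, size = 0;
        double local = 0.0, global = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        local = (double)rank;  /* each rank contributes its own value */

        /* Sum the contributions of all ranks; with SHARP enabled, the
         * reduction can be performed by the switch network instead of
         * bouncing data between compute nodes. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %.0f\n", size, global);

        MPI_Finalize();
        return 0;
    }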

Self-Healing Network

In HPC and AI, clusters depend upon a high-speed and reliable interconnect. NVIDIA InfiniBand with self-healing network capabilities overcomes link failures, enabling network recovery 1,000X faster than any other software-based solution. The self-healing networking capabilities take advantage of the intelligence built into the latest generation of InfiniBand switches.

Quality of Service

InfiniBand is the only high-performance interconnect solution with proven quality-of-service capabilities, including advanced congestion control and adaptive routing, resulting in unmatched network efficiency.

Network Topologies

InfiniBand offers complete centralized management and can support any topology. The most popular topologies include Fat Tree, Hypercube, multi-dimensional Torus, and Dragonfly+. Optimized routing algorithms deliver the best performance for topologies designed around a particular application’s communication patterns.
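
As a rough sizing sketch (the switch radices below are illustrative assumptions, not product specifications), a non-blocking two-level Fat Tree built from radix-r switches dedicates r/2 leaf ports to hosts and r/2 to spine links, supporting up to r²/2 hosts:

    /* Two-level non-blocking Fat Tree capacity for a given switch radix r:
     * each leaf switch uses r/2 ports down to hosts and r/2 ports up to
     * spines, so r leaves and r/2 spines connect up to r*r/2 hosts. */
    #include <stdio.h>

    int main(void)
    {
        int radices[] = {40, 64};  /* example radices, chosen for illustration */
        int n = sizeof(radices) / sizeof(radices[0]);

        for (int i = 0; i < n; i++) {
            int r = radices[i];
            int leaves = r;
            int spines = r / 2;
            int hosts  = r * (r / 2);
            printf("radix %2d: %2d leaf + %2d spine switches, up to %4d hosts\n",
                   r, leaves, spines, hosts);
        }
        return 0;
    }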

Software for Optimal Performance

MLNX_OFED

OFED from OpenFabrics Alliance (www.openfabrics.org) has been hardened through collaborative development and testing by major high-performance input/output (IO) vendors. NVIDIA MLNX_OFED is an NVIDIA-tested and packaged version of OFED.
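
MLNX_OFED also ships the user-space verbs library (libibverbs) that applications use to talk to InfiniBand adapters. As a minimal sketch, assuming the libibverbs development headers are installed and the program is linked with -libverbs, the following lists the InfiniBand devices visible on a host:

    /* Enumerate InfiniBand devices through the verbs API provided by OFED. */
    #include <infiniband/verbs.h>
    #include <stdio.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);

        if (devices == NULL) {
            perror("ibv_get_device_list");
            return 1;
        }

        printf("found %d InfiniBand device(s)\n", num_devices);
        for (int i = 0; i < num_devices; i++)
            printf("  %s\n", ibv_get_device_name(devices[i]));

        ibv_free_device_list(devices);
        return 0;
    }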

HPC-X

NVIDIA HPC-X® is a comprehensive MPI and SHMEM/PGAS software suite. HPC-X leverages InfiniBand In-Network Computing and acceleration engines to optimize research and industry applications.

UFM

The NVIDIA UFM® platform empowers data center administrators to efficiently provision, monitor, manage, and proactively troubleshoot their InfiniBand network infrastructure.

Magnum IO

NVIDIA Magnum IO utilizes network IO, In-Network Computing, storage, and IO management to simplify and speed up data movement, access, and management for multi-GPU, multi-node systems. Magnum IO enables NVIDIA GPU and NVIDIA networking hardware topologies to achieve optimal throughput and low latency.
