
ConnectX® Ethernet Adapter Cards for OCP Spec 3.0
High Performance 10/25/40/50/100/200 GbE Ethernet Adapter Cards in the Open Compute Project Spec 3.0 Form Factor

† For illustration only. Actual products may vary.


Mellanox® Ethernet adapter cards in the OCP 3.0 form factor support speeds from 10 to 200 GbE. Combining leading features with best-in-class efficiency, Mellanox OCP cards enable the highest data center performance.

World-Class Performance and Scale
Mellanox 10, 25, 40, 50, 100 and 200 GbE adapter cards deliver industry-leading connectivity for performance-driven server and storage applications. Offering high bandwidth coupled with ultra-low latency, ConnectX adapter cards enable faster access and real-time responses.

Complementing its OCP 2.0 offering, Mellanox offers a variety of OCP 3.0-compliant adapter cards, providing best-in-class performance and efficient computing through advanced acceleration and offload capabilities. These capabilities free up valuable CPU cycles for other tasks while increasing data center performance, scalability and efficiency. They include:

• RDMA over Converged Ethernet (RoCE)
• NVMe-over-Fabrics (NVMe-oF)
• Virtual switch offloads (e.g., OVS offload) leveraging ASAP2 - Accelerated Switch and Packet Processing®
• GPUDirect® communication acceleration
• Mellanox Multi-Host® for connecting multiple compute or storage hosts to a single interconnect adapter
• Mellanox Socket Direct® technology for improving the performance of multi-socket servers
• Enhanced security solutions

Complete End-to-End Networking
ConnectX OCP 3.0 adapter cards are part of Mellanox's 10, 25, 40, 50, 100 and 200 GbE end-to-end portfolio for data centers, which also includes switches, application acceleration packages, and cabling, delivering a unique price-performance value proposition for network and storage solutions. With Mellanox, IT managers can be assured of the highest performance, reliability and the most efficient network fabric at the lowest cost for the best return on investment.

In addition, Mellanox NEO®-Host management software greatly simplifies host network provisioning, monitoring and diagnostics with ConnectX OCP 3.0 cards, providing the agility and efficiency for scalability and future growth. Featuring an intuitive graphical user interface (GUI), NEO-Host provides in-depth visibility and control of host networking. NEO-Host also integrates with Mellanox NEO®, Mellanox's end-to-end data center orchestration and management platform.

Open Compute Project Spec 3.0
The OCP NIC 3.0 specification extends the capabilities of the OCP NIC 2.0 design specification. OCP 3.0 defines a different form factor and connector style than OCP 2.0. The OCP 3.0 specification defines two basic card sizes: Small Form Factor (SFF) and Large Form Factor (LFF). Mellanox OCP NICs are currently supported in the SFF; future designs may utilize the LFF to allow for additional PCIe lanes and/or Ethernet ports.


ConnectX OCP 3.0 Ethernet Adapter Benefits
• Open Data Center Committee (ODCC) compatible
• Supports the latest OCP 3.0 NIC specifications
• All platforms: x86, Power, Arm, compute and storage
• Industry-leading performance
• TCP/IP and RDMA for I/O consolidation
• SR-IOV technology: VM protection and QoS
• Cutting-edge performance in virtualized overlay networks
• Increased Virtual Machine (VM) count per server ratio

TARGET APPLICATIONS
• Data center virtualization
• Compute and storage platforms for public & private clouds
• HPC, Machine Learning, AI, Big Data, and more
• Clustered databases and high-throughput data warehousing
• Latency-sensitive financial analysis and high-frequency trading
• Media & Entertainment
• Telco platforms

OCP 3.0 also provides additional board real estate, thermal capacity, electrical interfaces, network interfaces, and host configuration and management options. In addition, OCP 3.0 introduces a new mating technique that simplifies FRU installation and removal, and reduces overall downtime.

The table below shows key comparisons between OCP Spec 2.0 and OCP Spec 3.0.

|  | OCP Spec 2.0 | OCP Spec 3.0 |
|---|---|---|
| Card Dimensions | Non-rectangular (8,000 mm²) | SFF: 76 x 115 mm (8,740 mm²) |
| PCIe Lanes | Up to x16 | SFF: Up to x16 |
| Maximum Power Capability | Up to 67.2W for PCIe x8 card; up to 86.4W for PCIe x16 card | SFF: Up to 80W |
| Baseboard Connector Type | Mezzanine (B2B) | Edge (0.6 mm pitch) |
| Network Interfaces | Up to 2 SFP side-by-side or 2 QSFP belly-to-belly | Up to two QSFP in SFF, side-by-side |
| Expansion Direction | N/A | Side |
| Installation in Chassis | Parallel to front or rear panel | Perpendicular to front/rear panel |
| Hot Swap | No | Yes (pending server support) |
| Mellanox Multi-Host | Up to 4 hosts | Up to 4 hosts in SFF or 8 hosts in LFF |
| Host Management Interfaces | RBT, SMBus | RBT, SMBus, PCIe |
| Host Management Protocols | Not standard | DSP0267, DSP0248 |

For more details, please refer to the Open Compute Project (OCP) Specifications.


Ethernet OCP 3.0 Adapter Cards: Specs & Part Numbers

| Max Network Speed | Interface Type | PCIe | ConnectX® | Mellanox Multi-Host / Socket Direct (a) | Crypto Option (b) | Default OPN (c) |
|---|---|---|---|---|---|---|
| 2x 25GbE | SFP28 | Gen3.0 x8 | ConnectX®-4 Lx | | | MCX4621A-ACAB |
| 2x 25GbE | SFP28 | Gen3.0 x16 | ConnectX®-5 | | | MCX562A-ACAB |
| 2x 25GbE | SFP28 | Gen4.0 x16 | ConnectX®-6 Dx | | | MCX623432AC-ADAB |
| 2x 50GbE | QSFP28 | Gen3.0 x16 | ConnectX®-5 | | | Contact Mellanox |
| 2x 50GbE | SFP56 | Gen4.0 x16 | ConnectX®-6 Dx | | | MCX623432AC-GDAB |
| 2x 50GbE | QSFP28 | Gen4.0 x16 | ConnectX®-6 Dx | | | Contact Mellanox |
| 1x 100GbE | QSFP28 | Gen4.0 x16 | ConnectX®-5 Ex | | | MCX565M-CDAB |
| 1x 100GbE | QSFP56 | Gen4.0 x16 | ConnectX®-6 Dx | | | Contact Mellanox |
| 2x 100GbE | QSFP28 | Gen3.0 x16 | ConnectX®-5 | | | MCX566A-CCAI |
| 2x 100GbE | QSFP28 | Gen4.0 x16 | ConnectX®-5 Ex | | | MCX566A-CDAB |
| 2x 100GbE | QSFP56 | Gen4.0 x16 | ConnectX®-6 Dx | | | MCX623436AC-CDAB |
| 1x 200GbE | QSFP56 | Gen4.0 x16 | ConnectX®-6 Dx | | | MCX623435AC-VDAB |
| 2x 200GbE | QSFP56 | Gen4.0 x16 | ConnectX®-6 | | | MCX613436A-VDAI |

(a) When using Mellanox Multi-Host or Mellanox Socket Direct in virtualization or dual-port use cases, some restrictions may apply. For further details, contact Mellanox Customer Support.
(b) Crypto-enabled cards also support secure boot and secure firmware update.
(c) The last digit of each OPN suffix displays the OPN's default bracket option: B = Pull Tab Thumbscrew; I = Internal Lock; E = Ejector Latch. For other bracket types, contact Mellanox.
Note: ConnectX-5 Ex is an enhanced performance version that supports PCIe Gen4 and higher throughput.
Note: Additional flavors with Mellanox Multi-Host, Mellanox Socket Direct, or Crypto disabled/enabled are available; contact Mellanox for details.


I/O Virtualization and Virtual Switching
Mellanox ConnectX Ethernet adapters provide comprehensive support for virtualized data centers with Single Root I/O Virtualization (SR-IOV), allowing dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server. I/O virtualization gives data center managers better server utilization and LAN and SAN unification while reducing cost, power and cable complexity.
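As a minimal illustration of how SR-IOV is typically provisioned on Linux, the sketch below creates virtual functions (VFs) through the standard sysfs interface exposed by the NIC driver (mlx5_core for ConnectX). The interface name ens1f0np0 and the VF count of 4 are hypothetical placeholders; each resulting VF appears as an independent PCIe function that can be assigned directly to a virtual machine.

```c
/* Sketch: create SR-IOV virtual functions via the standard Linux sysfs knob.
 * The interface name "ens1f0np0" and the VF count of 4 are placeholders;
 * requires root and an SR-IOV-capable driver such as mlx5_core. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = "/sys/class/net/ens1f0np0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return EXIT_FAILURE;
    }

    /* Writing N asks the driver to instantiate N VFs; if VFs are already
     * configured, write 0 first before setting a new count. */
    if (fprintf(f, "4\n") < 0) {
        perror("write sriov_numvfs");
        fclose(f);
        return EXIT_FAILURE;
    }
    return fclose(f) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}
```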

Moreover, virtual machines running in a server traditionally use multilayer virtual switch capabilities, like Open vSwitch (OVS). Mellanox's ASAP2 - Accelerated Switch and Packet Processing® technology offloads any implementation of a virtual switch or virtual router by handling the data plane in the NIC hardware while leaving the control plane unmodified. The result is significantly higher vSwitch/vRouter performance without the associated CPU load.

RDMA over Converged Ethernet (RoCE)
Mellanox RoCE does not require any special network configuration, allowing for seamless deployment and efficient data transfers with very low latencies over Ethernet networks, a key factor in maximizing a cluster's ability to process data instantaneously. With the increasing use of fast and distributed storage, data centers have reached the point of yet another disruptive change, making RoCE a must in today's data centers.
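Because RoCE exposes the same RDMA verbs interface used on InfiniBand, applications can discover RoCE-capable ConnectX ports with the stock libibverbs library from rdma-core. The following sketch is illustrative only: it enumerates the local RDMA devices and reports which ports run over an Ethernet link layer, i.e. RoCE. Build it with gcc roce_check.c -libverbs; on ConnectX adapters the devices typically appear with mlx5_* names.

```c
/* Sketch: list local RDMA devices with libibverbs (rdma-core) and report
 * which ports use an Ethernet link layer, i.e. operate as RoCE ports. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        /* Port numbers are 1-based in the verbs API; query the first port. */
        if (ibv_query_port(ctx, 1, &port) == 0) {
            printf("%s: link layer = %s\n",
                   ibv_get_device_name(devs[i]),
                   port.link_layer == IBV_LINK_LAYER_ETHERNET
                       ? "Ethernet (RoCE)" : "InfiniBand");
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```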

Flexible Multi-Host® Technology
Innovative Mellanox Multi-Host technology provides high flexibility and major savings in building next-generation, scalable, high-performance data centers. Mellanox Multi-Host connects multiple compute or storage hosts to a single interconnect adapter, separating the adapter's PCIe interface into multiple independent PCIe interfaces, without any performance degradation.

Security: From Zero Trust to Hero Trust
In an era where privacy of information is key and zero trust is the rule, Mellanox ConnectX OCP 3.0 adapters offer a range of advanced built-in capabilities that bring security down to the end-points with unprecedented performance and scalability. Mellanox offers options for AES-XTS block-level data-at-rest encryption/decryption offload starting from ConnectX-6. Additionally, ConnectX-6 Dx includes IPsec and TLS data-in-motion inline encryption/decryption offload.

ConnectX-6 Dx also enables a hardware-based L4 firewall by offloading stateful connection tracking.

All Mellanox ConnectX OCP 3.0 adapters support secure firmware update, ensuring that only authentic images produced by Mellanox can be installed; this is regardless of whether the installation happens from the host, the network, or a BMC.

For an added level of security, ConnectX-6 Dx uses embedded Hardware Root-of-Trust (RoT) to implement secure boot.
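As a rough illustration of the TLS data-in-motion offload path mentioned above, the sketch below uses Linux kernel TLS (kTLS), the standard in-kernel record layer that ConnectX-6 Dx can accelerate in hardware: after completing the TLS handshake in user space, the application hands the negotiated record keys to the kernel, and subsequent send() calls produce TLS records that an offload-capable NIC encrypts inline. The socket is assumed to be an already-connected TCP socket, and the key material shown is a dummy placeholder that would normally come from a TLS library.

```c
/* Sketch: hand TLS 1.2 AES-GCM-128 record keys to Linux kernel TLS (kTLS).
 * The socket must already be a connected TCP socket; the key/IV bytes below
 * are dummy placeholders standing in for values from a real TLS handshake. */
#include <linux/tls.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <string.h>
#include <sys/socket.h>

#ifndef SOL_TLS
#define SOL_TLS 282   /* socket level for TLS options (include/linux/socket.h) */
#endif
#ifndef TCP_ULP
#define TCP_ULP 31    /* attach an upper-layer protocol to a TCP socket */
#endif

int enable_ktls_tx(int sock)
{
    /* Attach the "tls" upper-layer protocol to the TCP socket. */
    if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
        return -1;

    struct tls12_crypto_info_aes_gcm_128 ci;
    memset(&ci, 0, sizeof(ci));
    ci.info.version = TLS_1_2_VERSION;
    ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
    /* Dummy key/IV material for illustration only. */
    memset(ci.key, 0x0b, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
    memset(ci.iv, 0x0c, TLS_CIPHER_AES_GCM_128_IV_SIZE);
    memset(ci.salt, 0x0d, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
    memset(ci.rec_seq, 0x00, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

    /* From here on, send() emits TLS records; a capable NIC encrypts inline. */
    return setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci));
}
```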


Accelerated Storage
Mellanox adapters support a rich variety of storage protocols and enable partners to build hyperconverged platforms where the compute and storage nodes are co-located and share the same infrastructure. Leveraging RDMA, Mellanox adapters enhance numerous storage protocols, such as iSCSI over RDMA (iSER), NFS over RDMA, and SMB Direct, to name a few. Moreover, ConnectX adapters also offer NVMe-oF protocols and offloads, enhancing the utilization of NVMe-based storage appliances.
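For a sense of how an NVMe-oF initiator attaches over RoCE, the sketch below writes a connect string to the Linux /dev/nvme-fabrics control device, which is what the nvme-cli connect command does internally. The target address 192.168.1.10, port 4420, and subsystem NQN are hypothetical placeholders; root privileges and the nvme-rdma module are assumed.

```c
/* Sketch: connect to an NVMe-oF subsystem over RDMA by writing a connect
 * string to the Linux fabrics control device.  Address and NQN below are
 * placeholders; requires root and the nvme-rdma module loaded. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *opts =
        "transport=rdma,traddr=192.168.1.10,trsvcid=4420,"
        "nqn=nqn.2016-06.io.example:storage-subsystem";

    int fd = open("/dev/nvme-fabrics", O_RDWR);
    if (fd < 0) {
        perror("/dev/nvme-fabrics");
        return 1;
    }

    /* The kernel expects the whole option string in a single write(). */
    if (write(fd, opts, strlen(opts)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    /* On success the kernel reports the created controller instance. */
    char reply[256] = {0};
    if (read(fd, reply, sizeof(reply) - 1) > 0)
        printf("connected: %s\n", reply);

    close(fd);
    return 0;
}
```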

Another storage-related hardware offload is the Signature Handover mechanism, based on an advanced T10/DIF implementation.

Host Management
Mellanox host management sideband implementations enable remote monitoring and control through a Baseboard Management Controller (BMC) over RBT, MCTP over SMBus, and MCTP over PCIe, supporting both the NC-SI and PLDM management protocols over these interfaces. Mellanox OCP 3.0 adapters use these protocols to offer numerous host management features, such as PLDM for Firmware Update, network boot in the UEFI driver, UEFI secure boot, and more.

Enhancing Machine Learning Application Performance
Mellanox adapters with built-in advanced acceleration and RDMA capabilities deliver best-in-class latency, bandwidth and message rates, together with lower CPU utilization. Mellanox PeerDirect® technology with NVIDIA GPUDirect™ RDMA enables direct peer-to-peer communication between the adapter and GPU memory, without interrupting CPU operations. Mellanox adapters also deliver the highest scalability, efficiency, and performance for a wide variety of applications, including bioscience, media and entertainment, automotive design, computational fluid dynamics and manufacturing, weather research and forecasting, as well as oil and gas industry modeling. This makes Mellanox adapters ideal NICs for machine learning applications.

Mellanox Socket Direct®

Mellanox Socket Direct technology brings improved performance to multi-socket servers by enabling direct access from each CPU in a multi-socket server to the network through its dedicated PCIe interface. With this type of configuration, each CPU connects directly to the network; this enables the interconnect to bypass the inter-CPU bus (QPI/UPI) and the other CPU, optimizing performance and improving latency. CPU utilization improves as each CPU handles only its own traffic and not the traffic from the other CPU. Mellanox's OCP 3.0 cards include native support for Socket Direct technology in multi-socket servers and can support up to 8 CPUs.


Broad Software Support
All Mellanox adapter cards are supported by a full suite of drivers for major Linux distributions, Microsoft® Windows®, VMware vSphere® and FreeBSD®. Drivers are also available inbox in major Linux distributions, Windows and VMware.

Multiple Form Factors
In addition to the OCP Spec 3.0 cards, Mellanox adapter cards are available in other form factors to meet data centers' specific needs, including:

• OCP Specification 2.0 Type 1 & Type 2 mezzanine adapter form factors, designed to mate into OCP servers.

• Standard PCI Express (PCIe) Gen3 and Gen4 adapter cards.

Standard PCI Express Adapter Card
OCP 2.0 Adapter Card

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com

© Copyright 2020. Mellanox Technologies. All rights reserved.
Mellanox, Mellanox logo, ConnectX, GPUDirect, Mellanox PeerDirect, Mellanox Multi-Host, Mellanox Socket Direct and ASAP2 - Accelerated Switch and Packet Processing are registered trademarks of Mellanox Technologies, Ltd. Mellanox NEO-Host is a trademark of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

NOTES: (1) This brochure describes hardware features and capabilities. Please refer to the driver release notes on mellanox.com for feature availability.
(2) Product images may not include the heat sink assembly; the actual product may differ.

060275BR Rev 1.4

OCP 3.0 Adapter Card

† For illustration only. Actual products may vary.
