
REFERENCE GUIDE

InfiniBand and 10/40 Gigabit Ethernet Switches

Mellanox 40 to 56Gb/s InfiniBand switches deliver the highest performance and density with a complete fabric management solution, enabling compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. Scalable switch building blocks from 36 to 648 ports in a single enclosure give IT managers the flexibility to build networks of up to tens of thousands of nodes.

Mellanox’s scale-out 10 and 40GbE products enable users to benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management than traditional Ethernet fabrics.

Key Features - InfiniBand
– 51.8Tb/s switching capacity
– 100 to 300ns switching latency
– Hardware-based routing
– Congestion control

Key Features - 10/40GbE
– Linear scalability, non-blocking connectivity
– High-density, lower power data center switch
– Converged Enhanced Ethernet technology

Why Mellanox?
Mellanox delivers the industry’s most robust end-to-end InfiniBand and Ethernet portfolios. Our mature, field-proven product offerings include solutions for I/O, switching, and advanced management software, making us the only partner you’ll need for high-performance computing and data center connectivity. Mellanox’s scale-out 10GbE products enable users to benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management than traditional 10GbE fabrics.

Why up to 56Gb/s InfiniBand?
Enables the highest performance and lowest latency
– Proven scalability for tens of thousands of nodes
– Maximum return on investment
Highest efficiency / maintains a balanced system ensuring highest productivity
– No artificial bottlenecks, performance match for PCIe 3.0
– Proven to fulfill multi-process networking requirements
– Guarantees no performance degradation
Performance-driven architecture
– MPI latency of 1μs, 6.6Gb/s with 40Gb/s InfiniBand (bi-directional); see the ping-pong sketch below
– MPI message rate of >40 million messages/sec
Superior application performance
– From 30% to over 100% performance increase for HPC applications
– Doubles storage throughput, cutting backup time in half
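
MPI latency and message-rate figures like these are typically measured with a ping-pong microbenchmark such as osu_latency from the OSU Micro-Benchmarks. Purely as an illustration, and not an HP or Mellanox benchmark, a minimal two-rank MPI ping-pong that reports average one-way latency might look like this:

    /* Illustrative sketch only: two-rank MPI ping-pong latency test. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, iters = 10000;
        char msg = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                /* send a 1-byte message and wait for the echo */
                MPI_Send(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)  /* one-way latency is half the average round-trip time */
            printf("avg one-way latency: %.2f us\n", (t1 - t0) / iters / 2 * 1e6);

        MPI_Finalize();
        return 0;
    }

Run with two ranks placed on different nodes (e.g., mpirun -np 2 across the fabric) so the measured path actually crosses the InfiniBand link.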

Why Mellanox 10/40GbE?
Mellanox’s scale-out 10/40GbE products enable users to benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management than traditional Ethernet fabrics. With 10 and 40 Gigabit Ethernet Converged Network Adapters, core and top-of-rack switches, and fabric optimization software, a broader array of end users with less rigid performance requirements than those addressed by InfiniBand can benefit from a more scalable, higher-performance Ethernet fabric.

Why HP?
HP isn’t just a reseller of the Mellanox InfiniBand and Ethernet NICs, switches, and software products described in this reference guide. HP also tests these components in HP systems and integrates them with related HP products in end-to-end HP solutions built at our four regional integration centers. HP provides Level 1, 2, and 3 support for these products, and our worldwide HPC engineering team meets regularly with Mellanox counterparts to keep in sync with new product enhancements. This means that customers can procure and deploy HP/Mellanox systems globally and be confident of high quality, reliability, and support from a single solution provider.

ConnectX® Adapter Cards
Mellanox adapter cards are designed to drive the full performance of high-speed InfiniBand (up to 56Gb/s) and 10 and 40 Gigabit Ethernet fabrics. Mellanox ConnectX adapter cards deliver high bandwidth and industry-leading connectivity for performance-driven server and storage applications in Enterprise Data Center, Web 2.0, High-Performance Computing, and Embedded environments. Clustered databases, web infrastructure, and high-frequency trading are just a few example applications that achieve significant throughput and latency improvements, resulting in faster access, real-time response, and an increased number of users per server.

Benefits
– Industry-leading throughput and latency performance
– Enables I/O consolidation by supporting TCP/IP, FC over Ethernet, and RDMA over Ethernet transport protocols on a single adapter (see the verbs sketch after these lists)
– Improved productivity and efficiency
– Supports industry-standard SR-IO Virtualization technology, delivering VM protection and granular levels of I/O services to applications
– High availability and high performance for data center networking
– Software compatible with standard TCP/UDP/IP and iSCSI stacks
– High level of silicon integration and a no-external-memory design provide low power, low cost, and high reliability

Target Applications
– Web 2.0 data centers and cloud computing
– Low latency financial services
– Data center virtualization
– I/O consolidation (single unified wire for networking, storage, and clustering)
– Video streaming
– Enterprise data center applications
– Accelerating back-up and restore operations
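
Because ConnectX adapters expose RDMA through the standard verbs interface over both InfiniBand and RDMA over Ethernet, applications can discover and query them with the libibverbs API. The following is only an illustrative sketch (it assumes libibverbs is installed; link with -libverbs), not HP- or Mellanox-specific code:

    /* Illustrative sketch only: enumerate RDMA-capable devices with libibverbs. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list) {
            perror("ibv_get_device_list");
            return 1;
        }
        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(list[i]);
            if (!ctx)
                continue;
            struct ibv_device_attr attr;
            if (!ibv_query_device(ctx, &attr))
                printf("%s: %d port(s), max_qp=%d\n",
                       ibv_get_device_name(list[i]),
                       attr.phys_port_cnt, attr.max_qp);
            ibv_close_device(ctx);
        }
        ibv_free_device_list(list);
        return 0;
    }

The same enumeration code runs unchanged whether the underlying port is configured for InfiniBand or Ethernet, which is what makes single-adapter I/O consolidation practical.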



BladeServer Products | HP Part # | Mellanox Part #
HP BLc 4X FDR IB Switch | 648312-B21 | N/A - Custom
HP IB QDR/EN 10Gb 2P 544M Adapter | 644160-B21 | N/A - Custom
HP IB FDR/EN 10/40Gb 2P 544M Adapter | 644161-B21 | N/A - Custom
HP IB 4X DDR Switch Module for HP BladeSystem c-Class | 489183-B21 | N/A - Custom
HP IB 4X QDR Switch Module for HP BladeSystem c-Class | 489184-B21 | N/A - Custom
HP IB 4X QDR ConnectX-2 Dual Port Mezz HCA for HP BladeSystem c-Class | 592519-B21 | N/A - Custom
HP IB 4X DDR ConnectX Dual Port Mezz HCA for HP BladeSystem c-Class | 448262-B21 | N/A - Custom
HP NC542m 10GbE ConnectX Dual Port Flex-10 NIC for BladeSystem c-Class | 539857-B21 | N/A - Custom

NIC and FlexLOM Products | HP Part # | Mellanox Part #
HP IB FDR/EN 10/40Gb 2P 544QSFP Adapter | 649281-B21 | MCX354A-FCBT
HP IB 4X QDR ConnectX-2 PCIe 2.0 Dual Port HCA | 592520-B21 | MHQH29C-XTR
HP IB 4X DDR ConnectX-2 PCIe 2.0 Dual Port HCA | 592521-B21 | MHGH29B-XTR
HP 10GbE PCIe 2.0 Dual Port NIC | 516937-B21 | MNPH29D-XTR
HP IB FDR/EN 10/40Gb 2P 544FLR-QSFP Adapter | 649282-B21 | N/A - Custom
HP IB QDR/EN 10Gb 2P 544FLR-QSFP Adapter | 649283-B21 | N/A - Custom
MLNX 10GbE 1P SFP+ CX3 Adapter | 682150-001 |

Switch Systems | HP Part # | Mellanox Part #
FDR Switch Systems
Mellanox IB FDR 36P Switch | 670767-B21 | MSX6025F-2SFR
Mellanox IB FDR 36P RAF Switch | 670768-B21 | MSX6025F-2SRR
Mellanox IB FDR 36P Managed Switch | 670769-B21 | MSX6036F-2SFR
Mellanox IB FDR 36P RAF Managed Switch | 670770-B21 | MSX6036F-2SRR
Mellanox IB QDR/FDR 648P Switch Chassis N+N PS | 674277-B21 | MSX6536-NR
Mellanox IB QDR/FDR 324P Switch Chassis N+N PS | 674278-B21 | MSX6518-NR
Mellanox IB QDR/FDR 216P Switch Chassis N+N PS | 674279-B21 | MSX6512-NR
Mellanox IB QDR/FDR Modular Management Brd | 674280-B21 | MSX6000MAR
Mellanox IB QDR Modular Fabric Board | 674281-B21 | MSX6002TBR
Mellanox IB FDR Modular Fabric Board | 674282-B21 | MSX6002FLR
Mellanox IB QDR Modular Line Board | 674283-B21 | MSX6001TR
Mellanox IB FDR Modular Line Board | 674284-B21 | MSX6001FR
QDR Switch Systems
Grid Director 4700 18 Slots QDR Basic Config | 590200-B21 | VLT-30040-HP
SFB-4700 QDR Fabric Board | 590201-B21 | VLT-30041-HP
SLB-4018 18 Port QDR Line Board | 590203-B21 | VLT-30043-HP
Grid Director 4200 9 Slots QDR Basic Config | 615679-B21 | VLT-30060-HP
SFB-4200 QDR Fabric Board | 615680-B21 | VLT-30061-HP
SMB-CM Chassis Management Board | 590204-B21 | VLT-30044-HP
Grid Director 4036 2PS | 519571-B21 | VLT-30111-HP
Grid Director 4036 2PS RAF | 619672-B21 | VLT-30011-HP
Grid Director 4036E | 632220-B21 | VLT-30034-HP
Grid Director 4036E-LM | 632222-B21 | VLT-30035-HP
Grid Director 4036E RAF | 632221-B21 | VLT-30036-HP
Grid Director 4036E-LM RAF | 632223-B21 | VLT-30037-HP

Cables | HP Part # | Mellanox Part #
FDR Fibre
HP 3M IB FDR QSFP Optical Cable | 670760-B21 | N/A
HP 5M IB FDR QSFP Optical Cable | 670760-B22 | MC2207310-005
HP 7M IB FDR QSFP Optical Cable | 670760-B23 | N/A
HP 10M IB FDR QSFP Optical Cable | 670760-B24 | MC2207310-010
HP 12M IB FDR QSFP Optical Cable | 670760-B25 | N/A
HP 15M IB FDR QSFP Optical Cable | 670760-B26 | MC2207310-015
HP 20M IB FDR QSFP Optical Cable | 670760-B27 | MC2207310-020
HP 30M IB FDR QSFP Optical Cable | 670760-B28 | MC2207310-030
FDR Copper
HP 0.5M IB FDR QSFP Copper Cable | 670759-B21 | N/A
HP 1M IB FDR QSFP Copper Cable | 670759-B22 | MC2207130-001
HP 1.5M IB FDR QSFP Copper Cable | 670759-B23 | N/A
HP 2M IB FDR QSFP Copper Cable | 670759-B24 | MC2207130-002
HP 3M IB FDR QSFP Copper Cable | 670759-B25 | MC2207128-003
DDR/QDR
3M DDR/QDR QSFP Fiber Cable | 498386-B23 | MFS4R12CB-003
5M DDR/QDR QSFP Fiber Cable | 498386-B24 | MFS4R12CB-005
10M DDR/QDR QSFP Fiber Cable | 498386-B25 | MFS4R12CB-010
15M DDR/QDR QSFP Fiber Cable | 498386-B26 | MFS4R12CB-015
20M DDR/QDR QSFP Fiber Cable | 498386-B27 | MFS4R12CB-020
30M DDR/QDR QSFP Fiber Cable | 498386-B28 | MFS4R12CB-030
HP IB 0.5M 4X DDR/QDR QSFP Cu Cable | 498385-B26 |
HP IB 1M 4X DDR/QDR QSFP Cu Cable | 498385-B21 |
HP IB 1.5M 4X DDR/QDR QSFP Cu Cable | 498385-B27 |
HP IB 2M 4X DDR/QDR QSFP Cu Cable | 498385-B22 |
HP IB 3M 4X DDR/QDR QSFP Cu Cable | 498385-B23 |
HP IB 5M 4X DDR/QDR QSFP Cu Cable | 498385-B24 |
HP IB 7M 4X DDR/QDR QSFP Cu Cable | 498385-B25 |
QSFP/SFP+ Adaptor Kit | 655874-B21 | MAM1Q00A-QSA




Software License | HP Part # | Mellanox Part #
Mellanox UFM 1yr 24x7 Flex Lic | BD569A |
Mellanox UFM 3yr 24x7 Flex Lic | BD570A |
Mellanox UFM Advanced 1yr 24x7 Flex Lic | BD571A |
Mellanox UFM Advanced 3yr 24x7 Flex Lic | BD572A |
Mellanox Accel SW 1yr 24x7 Flex Lic | BD573A |
Mellanox Accel SW 3yr 24x7 Flex Lic | BD574A |

Pictured products:
– ConnectX-2 InfiniBand Adapter Card with QSFP
– 34-port FDR 56Gb/s InfiniBand Switch for HP BladeSystem c-Class
– ConnectX-3 Single-Port 10GbE ALOM Adapter Kit with SFP+
– ConnectX-3 VPI Dual-Port 56Gb/s InfiniBand and/or 40GbE Adapter with QSFP
– ConnectX-3 FDR 56Gb/s InfiniBand Mezzanine HCAs for HP BladeSystem c-Class
– Dual-Port Flex-10 10 Gigabit Ethernet Mezzanine NIC for HP BladeSystem c-Class
– 36-port Non-blocking Unmanaged 56Gb/s InfiniBand Switch System

HP Account Contacts:
Ken Bekampis, HP Alliance Manager
Office: 774-203-6959
E-mail: [email protected]

M. Garth Fruge, Director of Sales - HP
Office: 512 568 5232
E-mail: [email protected]

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com

© Copyright 2012. Mellanox Technologies. All rights reserved.
Mellanox, Mellanox logo, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. FabricIT, MLNX-OS, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

3595RG Rev 1.0