IBM Hortonworks Design Guide (Mellanox Technologies), 14-Sep-17 v1
TRANSCRIPT
-
Key Differentiators
Company Background
• Established 1999 • NASDAQ: MLNX
• End-to-end Ethernet connectivity solutions: adapters, switches, cables, software, support
• World-class, non-outsourced technical support
• Trusted as switch manufacturer for every major server OEM

MELLANOX: THE DIFFERENCE IS IN THE CHIP
• Founded as a state-of-the-art silicon chip (ASIC) manufacturer
• Intelligence built directly onto our own chip
• Other switch vendors are forced to source expensive ASICs from third parties such as Broadcom
• Mellanox uses its own chip and passes the savings to customers
Market Opportunities
Unique Capability
MAXIMIZE THE NETWORK
• Maximize datacenter connectivity ROI through density, scale, and performance
OPEN THE NETWORK
• Leverage new technologies that increase functionality, investment value, and ROI
• Freedom from vendor lock-in
6th Generation: 10/40/56 GbE & 40/56 Gb IB
FOR INTERNAL USE ONLY – cannot be posted online or reproduced without Mellanox consent www.mellanox.com [email protected] +1 (512) 897-8245
Main Benefits
Mellanox designs & builds intelligent ASICs that power switches, adapters, & cables.
SCALE-OUT STORAGE: Combines compute & storage; easier to manage and lower cost. A top-of-rack switch with density at a lower price point is most attractive.
MEDIA & ENTERTAINMENT: Video streaming & post-production on 4K/8K workflows need extremely high bandwidth to support real-time frame rates.
CLOUD: Create economies of scale through shared services. An open switch platform with fairness is best for the software-defined datacenter.
BIG DATA: Improved analytics for better business decisions needs a non-blocking architecture to speed data ingestion.
7th Generation: 40/56/100 Gb IB
GENOMICS: Extreme scalability using a building-block approach: capacity, bandwidth, and a single namespace expand as more building blocks are added, resulting in near-linear performance gains.
SCALE-OUT DATABASE: Scale-out of DB2 PureScale, Oracle RAC, SAP HANA.
The Enterprise Integrated model is familiar to those with traditional SAN deployments. Adding ESS/Spectrum Scale not only eliminates data silos but can also improve performance and reduce data bottlenecks.
The most common deployment uses Network Shared Disks (NSDs), whose modular design scales performance and capacity independently.
For those familiar with HDFS or other scale-out software-defined storage, we support shared-nothing clusters that provide the native locality APIs for HDFS but work like centralized parallel storage for other protocols. Using commodity storage-rich servers can be the most economical way to scale out your storage needs.
VALUE & PERFORMANCE
-
IBM Mellanox Infrastructure for Hortonworks: Choice of Cabling

40GbE / FDR Cabling
Length Description FC
0.5m 40GbE / FDR Copper Cable QSFP EB40
1m 40GbE / FDR Copper Cable QSFP EB41
2m 40GbE / FDR Copper Cable QSFP EB42
3m 40GbE / FDR Optical Cable QSFP EB4A
5m 40GbE / FDR Optical Cable QSFP EB4B
10m 40GbE / FDR Optical Cable QSFP EB4C
15m 40GbE / FDR Optical Cable QSFP EB4D
20m 40GbE / FDR Optical Cable QSFP EB4E
30m 40GbE / FDR Optical Cable QSFP EB4F
50m 40GbE / FDR Optical Cable QSFP EB4G
* Optics are IBM Parts only
EDR Cabling

Length Description FC
0.5m EDR Copper Cable QSFP28 EB50
1m EDR Copper Cable QSFP28 EB51
1.5m EDR Copper Cable QSFP28 EB54
2m EDR Copper Cable QSFP28 EB52
3m EDR Optical Cable QSFP28 EB5A
5m EDR Optical Cable QSFP28 EB5B
10m EDR Optical Cable QSFP28 EB5C
15m EDR Optical Cable QSFP28 EB5D
20m EDR Optical Cable QSFP28 EB5E
30m EDR Optical Cable QSFP28 EB5F
50m EDR Optical Cable QSFP28 EB5G
100m EDR Optical Cable QSFP28 EB5H
[Diagrams: four edge-network topologies, each showing the Internet, a firewall, and the cluster: Flat; Dual Home (public/private); Partial Home ("Thin DMZ", public/private); and DMZ (public/private, dual firewalls).]
Speed      Switch               Cabling             Adapter                             Optics*
EDR        SB7700 – 8828-E36    See lists on right  EKAL (FDR/EDR)
40 GbE     SX1710 – 8831-NF2    See lists on right  EKAL 2@40 / EC3L 2@40
10/40 GbE  SX1410 – 8831-S48    See lists on right  EKAU 10/25 / EKAL 2@40 / EC3L 2@40
1/10 GbE   4610-54T – 8831-S52                      LOM
-
IBM Mellanox Infrastructure for Hortonworks

As you increase the speed of the network, the topology of the PCI slot becomes important. IBM offers two card/slot topologies in its servers:
1. PCI Gen 3.0 x8
2. PCI Gen 3.0 x16
The important piece is the x8/x16. This is the width of the PCI bus: it determines how much bandwidth can be passed between the network and the CPU through the slot.
Speed    PCI Gen 3.0 x8 – # Ports  FC#               PCI Gen 3.0 x16 – # Ports  FC#
10 GbE   2                         EKAU              2                          -
25 GbE   2                                           2                          -
40 GbE   1                         EC3A              2                          EC3L / EKAL
50 GbE   1                         EKAM* (x16 card)  2                          EC3L / EKAL
56 GbE   1                         EC3A              2                          EC3L / EKAL
100 GbE  0                         -                 1                          EC3L / EKAM
FDR      1                         -                 2                          EL3D / EKAL
EDR      0                         -                 1                          EC3E / EKAL
NOTE: To provide an Active/Active redundant network, the PCI slot must have enough bandwidth to pass the data between the CPU and the network. IBM FC# EC3A is only a PCI Gen 3.0 x8 card, so it is limited to a maximum bandwidth of 56 Gb. To achieve a dual 40GbE Active/Active redundant network, use FC# EC3L or EKAL with both ports connected at 40GbE in a PCI Gen 3.0 x16 slot.
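As a sanity check on the note above, the slot arithmetic can be sketched as follows. This is a rough illustration: it assumes PCIe Gen3's 8 GT/s per lane with 128b/130b encoding and ignores further protocol overhead, so real slots deliver slightly less.

```python
# Rough PCIe Gen3 slot-bandwidth check (assumption: 8 GT/s per lane,
# 128b/130b line coding; protocol overhead ignored).

def pcie_gen3_gbytes_per_s(lanes: int) -> float:
    """Approximate usable PCIe Gen3 bandwidth in GB/s for a slot width."""
    gt_per_lane = 8.0          # giga-transfers per second per lane
    encoding = 128 / 130       # 128b/130b line coding efficiency
    return lanes * gt_per_lane * encoding / 8  # bits -> bytes

def network_gbytes_per_s(gbits: float, ports: int) -> float:
    """Aggregate network demand in GB/s for N ports at a given line rate."""
    return gbits * ports / 8

x8 = pcie_gen3_gbytes_per_s(8)
x16 = pcie_gen3_gbytes_per_s(16)
dual_40gbe = network_gbytes_per_s(40, 2)

print(f"x8 ~{x8:.1f} GB/s, x16 ~{x16:.1f} GB/s, 2x 40GbE needs {dual_40gbe} GB/s")
# An x8 slot (~7.9 GB/s) cannot carry dual 40GbE (10 GB/s); an x16 slot can.
```

This is why dual-port 40GbE Active/Active needs the x16 cards (EC3L / EKAL), as the note says.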
NOTE: Bonding. The most common mode is Mode 4 (LACP/802.3ad); it carries overhead and was originally designed to bond low-speed, unreliable links. With modern Ethernet networks and enhancements to Linux, Mode 5 (TLB) and Mode 6 (ALB) are good choices: they have less overhead than Mode 4 and require no configuration on the switches to provide Active/Active redundancy.
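To confirm which mode a bonded interface is actually running, the Linux kernel reports it in /proc/net/bonding/&lt;iface&gt;. A minimal sketch; the sample text below is illustrative, not captured from a real host:

```python
# Minimal parser for /proc/net/bonding/<iface> output, to confirm the
# bonding mode a host is running. SAMPLE is illustrative text in the
# format the kernel bonding driver emits.
SAMPLE = """\
Ethernet Channel Bonding Driver: v3.7.1
Bonding Mode: adaptive load balancing
MII Status: up
"""

def bonding_mode(proc_text: str) -> str:
    """Extract the 'Bonding Mode:' value from bonding status text."""
    for line in proc_text.splitlines():
        if line.startswith("Bonding Mode:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("not a bonding status file")

# "adaptive load balancing" is the kernel's name for Mode 6 (balance-alb).
print(bonding_mode(SAMPLE))
```

On a live host you would pass `open("/proc/net/bonding/bond0").read()` instead of SAMPLE.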
NOTE: When Mellanox is configured end to end (adapter, cable, and switch), there is a free upgrade to Mellanox-supported 56GbE, which provides 40% more bandwidth than 40GbE. Activation is a single command, "speed 56000", on the required switch interface.
NOTE: To achieve a redundant FDR IB network, use FC# EC3E / EKAL at 2x FDR. To achieve a redundant EDR IB network, use 2x FC# EC3E / EKAL at EDR. Redundancy is provided by Mode 1 (Active/Standby); the bond is created the same as a normal Linux bond.
-
IBM Mellanox Infrastructure for 10 GbE Cluster Hortonworks

Speed      Switch               Cabling            Adapter                        Optics
40 GbE     SX1710 – 8831-NF2    See list on right  EC3A                           EB27 + EB2J or EB2K
10/40 GbE  SX1410 – 8831-S48    See list on right  EC37 / EL3X, EC2M / EL40 (SR)  EB28 + ECBD or ECBE
1 GbE      4610-54T – 8831-S52
8831-NF2 36x 40GbE
6x 40 GbE Link per Spine
48x 10 GbE Endpoints per Leaf
Sample 192 Node L2 (VMS) Cluster
Spine
Leaf
IPL 4x 56GbE
T = 10 GbE Client
8831-S48: 48x 10GbE + 12x 40GbE
-
IBM Mellanox Infrastructure for 40GbE Cluster Hortonworks

Speed   Switch             Cabling            Adapter  Optics
40 GbE  SX1710 – 8831-NF2  See list on right  EKAL     EB27 + EB2J or EB2K
8831-NF2 36x 40GbE
7x 56 GbE Link per Spine
18x 40 GbE Endpoints per Leaf
Sample 72 Node L2 (VMS) Cluster
Spine
Leaf
IPL 4x 56GbE
[Diagram legend: 40 GbE Data Network; 40 GbE Client; 40 GbE Customer Network; 10 GbE Customer Network; QSFP to SFP+ Adapter (QSA); SFP+ DAC or Transceiver*]
* 10GbE & optics are IBM parts only
-
[Diagram: two-tier fabric of 36-port 40GbE switches, 18 ports per leaf to endpoints and 6 ports per TOR uplink, with 4x ESS on dedicated storage switches.]
2x 100Gb cards x 2 ports @ 40Gb = 160Gb per NSD
3x 40Gb cards x 1 port @ 40Gb = 120Gb per NSD
Compute Node
2x EC3L per NSD
4x ESS with 4x EC3L Cards @ 2x 40Gb
32x 40Gb Ports = ~ 112GB
1x EKAL @ 2x 40Gb per Node
Layer 3 OSPF/ECMP Network, Mellanox VMS
72x top-port dual-port card / 72x bottom-port dual-port card
Mode 6 - ALB
Mode 6 - ALB
IBM Mellanox Infrastructure for 40 GbE Cluster Hortonworks
Sample 72 (90) Node Redundant L3 (VMS) Cluster, Dedicated Storage Switches

Speed   Switch               Cabling            Adapter                                   Optics
40 GbE  SX1710 – 8831-NF2    See list on right  EC3A 1@40GbE, EC3L 2@40GbE, EKAL 2@40GbE  EB27 + EB2J or EB2K
1 GbE   4610-54T – 8831-S52
-
IBM Mellanox Infrastructure for IB Cluster Hortonworks: Choice of Cabling

Speed      Switch             Cabling            Adapter  Optics
FDR / EDR  SB7700 – 8828-E36  See list on right  EKAL     NA
E = QDR/FDR/EDR Client
8828-E36 36x QDR/FDR/EDR
9x EDR links per spine; 18x IB endpoints per leaf
Sample 72 Node Cluster
Spine
Leaf
Some rules:
• Links from leaf to spine must be a divisor of 18: 1, 2, 3, 6, 9
• Non-blocking requires as many links down to servers from each leaf as up to the spines
• The biggest two-tier network is 648 nodes: 18 spines & 36 leafs
• Think ahead: add spines at day 1 for expansion, so extra leafs can be added without re-cabling existing leafs
# Links to Spine # Spines #Leafs #Ports
1 18 36 648
2 9 18 324
3 6 12 216
6 3 6 108
9 2 4 72
18 1 1 36
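The sizing table above follows from the 36-port switch radix. A small sketch, assuming non-blocking leafs with 18 ports down to servers and 18 up to spines (the degenerate 18-links row collapses to a single switch and is omitted):

```python
# Two-tier fat-tree sizing for 36-port switches, non-blocking:
# each leaf uses 18 ports down (servers) and 18 up (spread over the spines).
RADIX = 36
UP = DOWN = RADIX // 2  # 18 up, 18 down per leaf

def size(links_per_spine: int):
    """Return (#spines, #leafs, #server ports) for a given uplink count per spine."""
    assert UP % links_per_spine == 0, "links per spine must divide 18"
    spines = UP // links_per_spine    # 18 leaf uplinks spread over the spines
    leafs = RADIX // links_per_spine  # each leaf takes this many ports on every spine
    return spines, leafs, leafs * DOWN

for links in (1, 2, 3, 6, 9):
    print(links, *size(links))
# 1 link  -> 18 spines, 36 leafs, 648 ports
# 9 links ->  2 spines,  4 leafs,  72 ports
```

This reproduces the 648/324/216/108/72-port rows of the table above.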
-
IBM Mellanox Infrastructure for ESS/Spectrum Scale
SINGLE NSD Port Bandwidth Options (GB/s)
Ports        10GbE  25GbE  40GbE  100GbE@2x40GbE  56GbE  100GbE@2x56GbE  FDR   EDR@2xFDR  100GbE  EDR
One Port     0.8    1.8    3.2    3.6             4.48   4.48            5.0   5.5        8.0     8.5
Two Ports    1.6    3.6    6.4    7.2             8.96   8.96            10.0  11.0       16.0    17.0
Three Ports  2.4    5.4    9.6    10.8            13.44  13.44           15.0  16.5       24.0    25.5
Four Ports   3.2    7.2    14.4                   17.92                  20.0  22.0
Five Ports   4.0    9.0    18.0                   22.4                   25.0  27.5
Six Ports    4.8    10.8   21.6                   26.88                  30.0  33.0
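Per the table, multi-port figures scale the one-port value linearly, and a dual-NSD pair doubles the result. A small sketch, with per-port GB/s values taken from the table's One Port row (note the table's four-to-six-port 40GbE cells deviate slightly from pure linear scaling):

```python
# Per-port usable bandwidth in GB/s for a single NSD/IO node
# (One Port row of the table above).
PER_PORT_GB = {
    "10GbE": 0.8, "25GbE": 1.8, "40GbE": 3.2, "56GbE": 4.48,
    "FDR": 5.0, "EDR@2xFDR": 5.5, "100GbE": 8.0, "EDR": 8.5,
}

def nsd_bandwidth(speed: str, ports: int, nsds: int = 1) -> float:
    """Aggregate GB/s: ports scale linearly; a dual-NSD pair doubles it."""
    return PER_PORT_GB[speed] * ports * nsds

print(nsd_bandwidth("40GbE", 2))        # 6.4  (Two Ports row)
print(nsd_bandwidth("FDR", 3, nsds=2))  # 30.0 (dual-NSD Three Ports row)
```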
[Chart: GB bandwidth per port per speed for a single NSD/IO node, series One Port through Six Ports; values as in the table above.]
-
IBM Mellanox Infrastructure for ESS/Spectrum Scale

[Chart: Sequential throughput vs. capacity for selected ESS models. Y-axis: max sequential throughput (GBytes/s), read, IOR, InfiniBand+RDMA network, 16MB filesystem blocksize (ESS). X-axis: TB usable capacity, approx. max capacity using 8+2P (ESS), combined MD+Data pool; note logarithmic scale. GL6S = 34 GB/s; GL6 = 25 GB/s; GL4S = 23 GB/s; GL4 = 17 GB/s; GL2S = 11 GB/s; GL2 = 8 GB/s.]
Dual NSD Port Bandwidth Options (GB/s)
Ports per NSD  10GbE  25GbE  40GbE  100GbE@2x40GbE  56GbE  100GbE@2x56GbE  FDR   EDR@2xFDR  100GbE  EDR
One Port       1.6    3.6    6.4    7.2             8.96   8.96            10.0  11.0       16.0    17.0
Two Ports      3.2    7.2    12.8   14.4            17.92  17.92           20.0  22.0       32.0    34.0
Three Ports    4.8    10.8   19.2   21.6            26.88  26.88           30.0  33.0       48.0    51.0
Four Ports     6.4    14.4   28.8                   35.84                  40.0  44.0
Five Ports     8.0    18.0   36.0                   44.8                   50.0  55.0
Six Ports      9.6    21.6   43.2                   53.76                  60.0  66.0
-
IBM Support Contacts – Thank You
Duane Dial – Director of Sales, IBM: [email protected]
Jim Lonergan – Business Development, IBM: [email protected] / Sametime: [email protected]
Lyn Stockwell-White – North America Channels: [email protected] / Sametime: [email protected]
Matthew Sheard – Solutions Architect, IBM: [email protected] / Sametime: [email protected]
John Biebelhausen – Sr. OEM Marketing: [email protected]
-
www.mellanox.com/oem/ibm
-
https://community.mellanox.com/community/solutions
-
http://academy.mellanox.com/en/