TRILL and Datacenter Alternatives
DESCRIPTION
Rajesh Kumar Sundararajan, Assistant VP of Product Management at Aricent, gave a talk about TRILL and Datacenter technologies at the Interop Show in Las Vegas, May 2012.

TRANSCRIPT

TRILL & Datacenter technologies – their importance, and alternatives to datacenter network convergence
Rajesh Kumar Sundararajan
Assistant VP Product Management, Aricent
May 10, 2012, Interop Las Vegas

2
ARICENT Group
• Global innovation, technology, and services company
• Focused exclusively on communications
• Co-creating most innovative communications products and applications with customers
• Complete lifecycle solutions for networks, management, applications
About Me
• Rajesh Kumar Sundararajan
• Assistant Vice President – Product Line Management
• Ethernet, IP and Datacenter offerings

3
Agenda
Datacenter imperatives
Solutions proposed to datacenter imperatives
Categorization of the solutions
Technological overview of each of the solutions
TRILL and alternatives
Summary comparison
Conclusion

4
3 Famous C's of Datacenter Operations
Operational trends in the Datacenter driven by Virtualization, Convergence, & Cloud Services

COMPLEXITY
• Physical separation of users from applications
• Latency-related concerns due to geographical distribution of applications
• High performance demands of Cloud Services applications

COST
• Increasing amount of physical space to accommodate new hardware
• Additional CAPEX for hardware and software
• Increased OPEX for staffing and power

CAPACITY
• Ever-increasing amounts of bandwidth demanded by consumers and enterprise applications
• Increasing proliferation of video to deliver both consumer and business-related content

5
Virtualization, Convergence, & Cloud Services

VIRTUALIZATION
Improved efficiencies in the Datacenter from:
• Increased utilization of individual servers
• Consolidation of servers and network ports
• Simplified management and operations
• Network virtualization (beyond storage and server virtualization)

CONVERGENCE
• Convergence of equipment and network architectures
• Simplified design and increased flexibility of network architecture
• LAN/SAN convergence enables the ubiquity and extends the reach of Ethernet in the Datacenter

CLOUD SERVICES
• Ability to push hardware (storage, server) and software (SaaS) to a 3rd-party provider
• Eliminates the need to procure, install, update and upgrade hardware & software; resources can be obtained on an as-needed basis
• Drives performance / load across datacenters
• Remote datacenter backup

6
Imperatives and Solutions

Increase Price/Performance Ratio
• 10Gb Ethernet support required for all new Datacenter Ethernet equipment
• Migration to 40GbE/100GbE needs to be planned
• Next-gen products need to balance high performance with low CAPEX & OPEX

Improve Energy Efficiency
• Networking equipment needs to lower energy consumption costs
• New server chipsets, architectures and software needed to improve overall energy efficiency

Evolve Standards
• DCBX, ETS, PFC, QCN
• FCoE, FIP, FIP Snooping
• OpenFlow, SDN, TRILL, SPB, MC-LAG
• VxLAN, NVGRE

Support Multiple Migration Options
• Multiple migration options are available for evolution to a converged network
• Companies may start by migrating to converged network adapters, then top-of-rack switches, then a converged core, or vice versa
• Equipment vendors need to support different migration options in products

7
Supporting Technologies (grouped by the problem they address)
• Lossless Ethernet – DCBX, PFC, ETS, QCN
• Convergence – FCoE, FIP, FIP Snooping, NPIV, NPV
• Fabric scalability & performance / Interconnecting datacenters – TRILL, SPB, MC-LAG
• Uncomplicate Switching – OpenFlow, SDN
• Network Virtualization – VxLAN, NVGRE
• Endpoint Virtualization – VEPA

8
QoS techniques are primarily
– Flow control – pause frames between switches to control sender’s rate
– 802.1p and DSCP based queuing – flat or hierarchical queuing
– Congestion avoidance methods – WRED, etc.
Issues
– Flow control – no distinction between different applications or frame priorities
– 802.1p and DSCP based QoS methods
• Need to differentiate classes of applications (LAN, SAN, IPC, Management)
• Need to allocate deterministic bandwidth to classes of applications
– Congestion avoidance methods
• Rely on dropping frames at the switches
• Source may continue to transmit at same rate
• Fine for IP based applications which assume channel loss
• Don’t work for storage applications which are loss intolerant
Lossless Ethernet: Why existing QoS and Flow Control are not enough

9
Priority Flow Control (PFC)
• Classic flow control – PAUSE frames triggered by internal back pressure affect all traffic on the port
• Priority Flow Control – PAUSE (e.g. CoS = 3) frames affect only traffic of that CoS on the port; the other queues (Q1 – CoS 1, Q2 – CoS 2, Q4 – CoS 4) keep flowing (frame format sketched below)

Quantized Congestion Notification (QCN)
• The congestion point (a switch facing congestion on an egress port) sends a congestion notification message back to the reaction point (the source / end-point)
• The reaction point throttles its transmit rate

DCBX
• Switches advertise their own capabilities in TLVs carried in LLDP messages: priority groups = x; PFC = yes, and for which priorities; congestion notification = yes
• The peer replies in LLDP TLVs, accepting or not and advertising its own capabilities
• Switches thus advertise and learn the capabilities to use on each link
Lossless Ethernet
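As a concrete illustration of the per-priority PAUSE described above, here is a minimal sketch of building an 802.1Qbb PFC frame with Python's struct module; the source MAC, the choice of CoS 3 and the helper name are illustrative, not taken from the deck.

```python
import struct

MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101                          # 802.1Qbb Priority Flow Control opcode
PAUSE_DEST = bytes.fromhex("0180c2000001")   # reserved MAC-control multicast address

def build_pfc_frame(src_mac, pause_quanta):
    """Build a PFC frame that pauses only the priorities listed in
    pause_quanta (priority -> pause time in 512-bit-time quanta).
    Other priorities on the port keep flowing, unlike plain 802.3x PAUSE,
    which stops everything."""
    enable_vector = 0
    times = [0] * 8
    for prio, quanta in pause_quanta.items():
        enable_vector |= 1 << prio
        times[prio] = quanta
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)
    frame = PAUSE_DEST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload
    return frame.ljust(60, b"\x00")           # pad to minimum Ethernet size (without FCS)

# Pause only CoS 3 (e.g. the storage priority) for 0xFFFF quanta:
frame = build_pfc_frame(bytes.fromhex("001122334455"), {3: 0xFFFF})
```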

10
NPIV & NPV
• NPIV = N_Port ID Virtualization – host-based technology
• NPV = N_Port Virtualization – switch-based technology
• Technology for the storage side
• Relevant to Datacenter Ethernet because of the virtualization capabilities and bearing on FCoE
• NPIV – virtualization of a storage device port to support VMs, multiple zones on the same link
• Requires support on the storage endpoint and the connected switch as well
• NPV – endpoint is unaware; switch proxies for multiple endpoints (N_Ports) using a single NP_Port
• Reduces the number of switches required
[Figure: baseline – a storage node's N_Port (one N_PortID) connects to an F_Port on a FibreChannel switch, which connects to another FibreChannel switch via E_Ports. NPIV – the storage node's physical port presents multiple logical N_Ports (N_PortID1, N_PortID2, N_PortID3) over the same F_Port link. NPV – the edge switch proxies its attached N_Ports toward the core FibreChannel switch through a single NP_Port.]

11
FCoE (FiberChannel Over Ethernet)
• Means to carry FiberChannel frames within Ethernet frames
• Interconnects FiberChannel endpoints or switches across an Ethernet (DataCenter Bridged Ethernet) network
• FC frames encapsulated in Ethernet header
• New EtherType to transport FC frames
• FCoE can be enabled on – FC endpoint devices / FC switches / Ethernet switches
[Figure: FC endpoints and an FC switch attach over FC links to FCoE switches at the edge of an Ethernet interconnect built from Ethernet switches. On the FC links the native FC frame is carried; across the Ethernet cloud the FC frame travels inside an FCoE frame (FC frame + Ethernet header); the FCoE switches add and remove the encapsulation.]
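To make the encapsulation on this slide concrete, here is a minimal packing sketch: an Ethernet header with the FCoE EtherType, a version/reserved header, a start-of-frame byte, the FC frame, and an end-of-frame trailer. The SOF/EOF code points and the helper name are illustrative defaults, not taken from the deck.

```python
import struct

FCOE_ETHERTYPE = 0x8906          # FCoE data frames (FIP uses a different EtherType, 0x8914)

def fcoe_encapsulate(dst_mac, src_mac, fc_frame, sof=0x2E, eof=0x42):
    """Wrap a FibreChannel frame in an FCoE Ethernet frame.
    Layout per the slide: Ethernet header | FCoE header | FC frame | trailer.
    The SOF/EOF defaults here are illustrative code points."""
    eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_hdr = struct.pack("!B12xB", 0x00, sof)   # version + reserved bytes, then SOF
    trailer = struct.pack("!B3x", eof)            # EOF + reserved bytes
    return eth_hdr + fcoe_hdr + fc_frame + trailer
```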

12
FIP (FCoE Initialization Protocol)
• FC protocols were built on the assumption of a direct connection between FC devices
• Traversing an Ethernet cloud requires additional procedures:
  – Device discovery
  – Initializing communication
  – Maintaining communication
• These are addressed by FIP
• FCoE traffic therefore has two frame types:
  – Control frames – FIP – uses a different EtherType than FCoE
  – Data frames – FC frames encapsulated with the FCoE EtherType

13
FIP Snooping
• FC protocols were built on the assumption of a direct connection between FC devices
• The FC switch enforces many configurations and performs validation and access control on attached endpoints
• This raises security concerns when exposed over a non-secure Ethernet network
• Addressed by FIP Snooping:
  – Done on transit switches carrying FCoE, on VLANs dedicated to FCoE
  – Switches install firewall filters to protect FCoE ports
  – Filters are based on inspection of the FLOGI procedure
• Examples (see the sketch below):
  (a) deny ENodes using the FC-switch MAC address as source MAC
  (b) ensure the address assigned to an ENode is used only for FCoE traffic
[Figure: FIP runs end to end between FCoE devices across the Ethernet cloud; the transit Ethernet switches perform FIP Snooping]
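The two example filters (a) and (b) above can be expressed as a simple permit/deny check. This is only an illustrative sketch, not a vendor ACL syntax; the function name and the set names are ours.

```python
FCOE_ETHERTYPE = 0x8906
FIP_ETHERTYPE = 0x8914

def fip_snooping_permit(src_mac, ethertype, fcf_macs, granted_fcoe_macs):
    """Transit-switch filter in the spirit of examples (a) and (b).
    fcf_macs: FCoE-forwarder (FC switch) MACs learnt by snooping FIP discovery.
    granted_fcoe_macs: ENode MACs assigned during the snooped FLOGI exchange."""
    # (a) an ENode-facing port must never source frames with an FCF MAC
    if src_mac in fcf_macs:
        return False
    # (b) a MAC granted for FCoE may only be used for FCoE/FIP traffic
    if src_mac in granted_fcoe_macs and ethertype not in (FCOE_ETHERTYPE, FIP_ETHERTYPE):
        return False
    return True

# A frame spoofing the FCF's MAC as source is dropped (MAC value is made up):
fcfs = {bytes.fromhex("0efc00000001")}
print(fip_snooping_permit(bytes.fromhex("0efc00000001"), 0x8906, fcfs, set()))  # -> False
```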

14
Necessary fundamentals for FCoE to work
Multipath through the network
Lossless fabric
Rapid convergence in fabric
Spanning tree (with variants like RSTP, MSTP, PVRST) is the universal way to provide redundancy and stability (loop avoidance) in Ethernet networks
Spanning tree is a distance vector based protocol
• Routing equivalent of spanning tree (distance vector based) = RIP
• Limits the size of the network that it can handle
• Much smaller network size than link state based protocols (OSPF / ISIS)
Datacenter networks have become much bigger (and are getting bigger still!)
Spanning tree blocks links / paths to create redundancy, resulting in inefficient capacity utilization
It does not support multipath, which is important for SAN/LAN convergence
The TRILL solution
• Apply link state routing to bridging / Layer 2 Ethernet
• Use techniques like ECMP for alternate paths, without blocking any links or paths (see the ECMP sketch below)
Fabric Scalability and Performance: Why Spanning Tree (RSTP/MSTP/PVRST) is not enough
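To illustrate the ECMP idea mentioned in the TRILL solution above, here is a toy hash-based next-hop selection. Real switches hash selected header fields in hardware; the flow-key fields and names below are only an example.

```python
import hashlib

def ecmp_next_hop(flow_key, next_hops):
    """Hash a flow key onto one of several equal-cost paths so that frames of
    one flow stay in order while different flows spread across the links."""
    digest = hashlib.sha1(repr(flow_key).encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

# Pick a path for a flow identified by (src MAC, dst MAC, VLAN), values made up:
path = ecmp_next_hop(("mac-11", "mac-21", 100), ["rbridge-2", "rbridge-3"])
print(path)
```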

15
Focus on the problem of a dense collection of interconnected clients and switches
Attempt to:
• Eliminate limitations of spanning tree centric solutions
• Bring the benefits of routing technologies to the L2 network (without the need for IP/subnets, etc.)
Objectives:
• Zero configuration and zero assumptions
• Forwarding loop mitigation
• No changes to spanning tree protocols
Key components:
• R-bridges (Routing bridges)
• Extensions to IS-IS
• Apply link state routing to VLAN aware bridging problem
[Figure: RBridges run the TRILL control protocol (an IS-IS extension) among themselves and advertise learnt MACs in TRILL control frames. An ingress RBridge learns the source MAC and encapsulates the MAC frame in a TRILL header; the frame is forwarded RBridge to RBridge, and the egress RBridge removes the TRILL header before delivering the original MAC frame to a normal bridge or destination.]
Fabric Scalability and Performance: TRILL – Transparent Interconnection of Lots of Links
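For reference, the 6-byte TRILL header that the ingress RBridge prepends (per RFC 6325: version, multi-destination flag, hop count, then egress and ingress RBridge nicknames) can be packed as follows; the nickname and hop-count values in the example are arbitrary.

```python
import struct

TRILL_ETHERTYPE = 0x22F3

def trill_header(egress_nickname, ingress_nickname, hop_count, multi_dest=False):
    """Pack the 6-byte TRILL header (options omitted): a 16-bit word holding
    version, reserved bits, the multi-destination flag, option length and hop
    count, followed by the egress and ingress RBridge nicknames."""
    version, op_length = 0, 0
    first16 = (version << 14) | (int(multi_dest) << 11) | (op_length << 6) | (hop_count & 0x3F)
    return struct.pack("!HHH", first16, egress_nickname, ingress_nickname)

# Ingress RBridge 0x0001 forwards a unicast frame toward egress RBridge 0x0002:
hdr = trill_header(egress_nickname=0x0002, ingress_nickname=0x0001, hop_count=10)
```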

16
Fabric Scalability and Performance: TRILL – Handling Multicast and Broadcast
[Figure: two multi-destination distribution trees, each rooted at a different selected RBridge]
• Create distribution tree with selected root
• Distribute from root to rest of tree
• Multiple distribution trees for
– Multipath and load distribution
– Alternate paths and resilience
• All Rbridges pre-calculate and maintain the distribution trees
• Algorithm specified for ensuring identical calculations at all RBridges
• By default - distribution tree is shared across all VLANs and multicast groups
• How an ingress node selects a specific tree (from multiple existing trees) is not specified
• Ingress Rbridge receiving multicast encapsulates in TRILL header and sends to root of tree and to downstream branches
• Frame with TRILL header is distributed down branches of tree
• Rbridges at edges remove TRILL header and send to receivers
• Rbridges listen to IGMP messages
• Rbridges prune trees based on the presence of multicast receivers
• Information from IGMP messages propagated through the TRILL core to prune the distribution trees
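A toy version of the distribution-tree computation described above: a plain breadth-first search over unit-cost links with deterministic tie-breaking, so every RBridge arrives at the same tree. The real algorithm is the IS-IS shortest-path computation with the tie-breaking rules RFC 6325 specifies; the topology and pruning are simplified here.

```python
from collections import deque

def distribution_tree(links, root):
    """Compute a tree rooted at the elected root over unit-cost links.
    Returns child -> parent; per-VLAN/IGMP pruning is not shown."""
    parent, queue = {root: None}, deque([root])
    while queue:
        node = queue.popleft()
        for nbr in sorted(links[node]):      # deterministic tie-breaking
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return parent

# Made-up four-RBridge topology:
links = {"RB1": {"RB2", "RB3"}, "RB2": {"RB1", "RB4"},
         "RB3": {"RB1", "RB4"}, "RB4": {"RB2", "RB3"}}
print(distribution_tree(links, root="RB1"))
```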

17
• Does not (yet) address different types of virtual networks (VxLAN, NVGRE…)
• Provides for L2 multipathing (traffic within a VLAN) but L3 (routed traffic across VLANs) is unipath only
• Initial scope of TRILL defined to address spanning tree limitations
• IP may be an afterthought; only 1 default router with VRRP = unipath for L3
• Result of above – forces datacenter operators to provision larger VLANs (more members per VLAN) – so, restricts segmentation using VLANs
• Requires hardware replacement in switching infrastructure
• Existing security processes have to be enhanced
– Existing security processes rely on packet scanning and analysis
– Encapsulation changes packet headers, existing tools must be modified / enhanced
• Does not inherently address fault isolation
• Does not inherently address QoS mapping between edge & core (example – congestion management requires congestion in the network to be signaled to the source)
• Does not clearly address source specific multicast; so multicast based on groups only
Fabric Scalability and Performance: TRILL – Issues and Problems

18
Alternatives?
• Network-centric approaches
  – SPB
  – EVB – VEPA / VN-Tag
  – MC-LAG
  – OpenFlow / SDN
• Endpoint / server-centric approaches
  – VxLAN
  – NVGRE

19
• Key Components
– Extend IS-IS to compute paths between shortest path bridges
– Encapsulate MAC frames in an additional header for transport between shortest path bridges
• Variations in Encapsulation
– SPB – VID
– SPB – MAC (reuse 802.1ah encapsulation)
• Allows reuse of reliable Ethernet OAM technology (802.1ag, Y.1731)
• Source MAC learning from SPB encapsulated frames at the edge SP-bridges
[Figure: SP-Bridges run the control protocol (an IS-IS extension) among themselves. MAC frames are encapsulated in an SPB header at the edge SP-Bridge, carried between SP-Bridges, and the header is removed before delivery to a normal bridge or destination; source MACs are learnt at the edge SP-Bridges.]
Fabric Scalability and Performance: SPB – Shortest Path Bridging
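A simplified sketch of the SPB-MAC encapsulation mentioned above, which reuses the 802.1ah (Mac-in-Mac) format: backbone DA/SA, a B-TAG carrying the B-VID, an I-TAG carrying the 24-bit I-SID, then the untouched customer frame. Flag bits are left at zero and the helper name is ours.

```python
import struct

BTAG_TPID = 0x88A8    # backbone VLAN tag (B-TAG)
ITAG_TPID = 0x88E7    # backbone service instance tag (I-TAG)

def spbm_encapsulate(b_da, b_sa, b_vid, i_sid, customer_frame):
    """Simplified Mac-in-Mac encapsulation as reused by SPB-MAC."""
    b_tag = struct.pack("!HH", BTAG_TPID, b_vid & 0x0FFF)
    i_tag = struct.pack("!HI", ITAG_TPID, i_sid & 0x00FFFFFF)
    return b_da + b_sa + b_tag + i_tag + customer_frame
```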

20
EVB (Edge Virtual Bridging)
Addresses interaction between the virtual switching environments in a hypervisor and the first layer of physical switching infrastructure
2 different methods – VEPA (Virtual Ethernet Port Aggregator) & VN-Tag
Without VEPA – in virtualized environment, traffic between VMs is switched within the virtualizer
Key issue – monitoring of traffic, security policies, etc., between VMs is broken
With VEPA – all traffic is pushed out to the switch and then to the appropriate VM
Key issue – additional external link bandwidth requirement, additional latency
Switch must be prepared to do a "hairpin turn" (illustrated in the sketch below)
Accomplished by software negotiation between the switch and the virtualizer
[Figure: without VEPA, traffic between VM-11 … VM-1n is switched by the VEB / vSwitch inside the virtualizer; with VEPA, it is pushed out to the external Ethernet switch and hairpinned back, following a VEPA negotiation between the switch and the virtualizer.]
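The "hairpin turn" above boils down to relaxing one classic bridging rule. An illustrative forwarding decision, assuming a simple MAC table keyed by destination MAC (names and port numbers are made up):

```python
def egress_port(fdb, dst_mac, ingress_port, reflective_relay):
    """A classic bridge never forwards a frame back out the port it arrived on;
    with VEPA the hypervisor pushes VM-to-VM traffic to this switch, so the
    switch must be willing to hairpin (reflective relay) when the destination
    VM sits behind the same ingress port."""
    out = fdb.get(dst_mac, "flood")          # unknown destination: flood (not shown)
    if out == ingress_port and not reflective_relay:
        return "filter"                      # classic bridge behaviour: drop the frame
    return out                               # hairpin allowed: send it back toward the server

# VM-11 -> VM-12, both reachable via port 5 of the adjacent switch:
fdb = {b"mac-11": 5, b"mac-12": 5}
print(egress_port(fdb, b"mac-12", ingress_port=5, reflective_relay=True))   # -> 5
print(egress_port(fdb, b"mac-12", ingress_port=5, reflective_relay=False))  # -> "filter"
```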

21
MC-LAG (Multi-Chassis LAG)
Relies on the fact that the datacenter network is large but has a predictable topology
Downstream node has multiple links to different upstream nodes
Links are link-aggregated (trunked) into single logical interface
Can be used in redundant or load-shared mode
Load-shared mode offers multipath
Resilience and multipath inherent
No hardware changes required
Accomplished with software upgrade
Switches must have protocol extension to coordinate LAG termination across switches
Does nothing about address reuse problem from endpoint virtualization
[Figure: typical datacenter network with a core layer and an aggregation & access layer; each downstream node's LAG has member links terminating on different upstream switches, which run a coordination protocol between themselves.]

22
OpenFlow and Software Defined Networking (SDN)
A paradigm shift in networking:
• Simplify the network (make it dumb?)
• Move the intelligence outside

[Figure (source: ONF): an SDN controller – focused on service (service creation, flow management, first-packet processing, route creation) – sits above a FlowVisor, responsible for network partitioning based on rules (e.g. bridge IDs, flow IDs, user credentials), which connects over secure connections to the OpenFlow-enabled switches below.]

23
Fabric Scalability and Performance: OpenFlow and Software Defined Networking (SDN) paradigm
SDN Controller – Focus on Service
• Open platform for managing the traffic on OpenFlow-compliant switches
• Functions – network discovery, network service creation, provisioning, QoS, “flow” management, first packet handling
• Interoperability with existing networking infrastructure – hybrid networks
• Overlay networks, application aware routing, performance routing, extensions to existing network behavior
OpenFlow – Enabling Network Virtualization
• Light weight software (strong resemblance to client software)
• Standard interface for access and provisioning
• Secure access to controller
• Push/pull support for statistics
• Unknown flow packet trap and disposition through controller
• Complies with the OpenFlow specification (current version 1.2)
• Can be accessed and managed by multiple controllers

24
An OpenFlow flow table entry consists of:
• Rule – match fields (+ mask): switch port, MAC src, MAC dst, Eth type, VLAN ID, IP src, IP dst, IP protocol, TCP sport, TCP dport
• Action – 1. forward packet to port(s); 2. encapsulate and forward to controller; 3. drop packet; 4. send to normal processing pipeline
• Stats – packet + byte counters
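A toy model of one flow-table entry and its lookup, mirroring the rule / action / stats structure above; this is not the OpenFlow wire protocol, and the field names are shorthand of our own.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    """One rule of the flow table: a match over selected header fields,
    an action, and per-entry counters."""
    match: dict        # e.g. {"in_port": 1, "eth_type": 0x0800, "ip_dst": "10.0.0.2"}
    action: str        # "output:2" | "controller" | "drop" | "normal"
    packet_count: int = 0
    byte_count: int = 0

def lookup(table, pkt):
    """First matching rule wins; unknown packets are trapped to the controller."""
    for entry in table:
        if all(pkt.get(k) == v for k, v in entry.match.items()):
            entry.packet_count += 1
            entry.byte_count += pkt.get("len", 0)
            return entry.action
    return "controller"        # first packet of an unknown flow goes to the controller

table = [FlowEntry({"vlan_id": 100, "ip_dst": "10.0.0.2"}, "output:2")]
print(lookup(table, {"vlan_id": 100, "ip_dst": "10.0.0.2", "len": 64}))   # -> output:2
```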
[Figure: block diagram of an OpenFlow-capable switch software stack (Aricent ISS): management (CLI, SNMP, WEB); Layer-2 block (VLAN, STP, LACP, IGMP); routing block (RIP, OSPF, ISIS, BGP, RTM, multicast); IP forwarding; QoS (hierarchical, multiple scheduling schemes) & ACL; DCBX for datacenters with PFC, ETS, LLDP; congestion notification; master policy engine; chassis management and system monitoring; infrastructure software; plus the OpenFlow additions – HAL layer for OpenFlow, secure connection (SSL), encap/decap, and event handler.]
• OpenFlow-enabled switches – 2 types
– Hybrid (OpenFlow HAL and current network control plane)
– Pure OpenFlow Switches
• Pure OpenFlow
– Simpler, low in software content, lower cost
• Primarily contains
– SSL (for secure management)
– Encap/decap
– Hardware programming layer/driver
– Event handler
• OpenFlow switches receive instructions from service controllers
• Architectural aspects
– Resource partitioning
– Packet flow aspects
Fabric Scalability and Performance: OpenFlow/SDN based switches

25
OpenFlow – key issues
• Enormous amount of provisioning for rules in each switch
• In today’s switches – must rely on setting up ACLs
•Switches typically have low limits on number of ACLs that can be set up
•Will need hardware upgrade to use switches with large amounts of ACLs
• Ensuring consistency of ACLs across all switches in the network is itself difficult – troubleshooting challenges

26
NVGRE and VXLAN – Background
Challenges with Virtualization:
• More VMs = more MAC addresses and more IP addresses
• Multi-user datacenter + VMs = need to reuse MAC and IP addresses across users
• Moving applications to cloud
= avoid having to renumber all client applications
= need to reuse MAC addresses, IP addresses and VLAN-IDs across users
• More MAC addresses and IP addresses
= larger table sizes in switches
= larger network, more links and paths
Necessity = Create a virtual network for each user
Possible Solutions:
• VLANs per user – limitations of VLAN-Id range
• Provider bridging (Q-in-Q)
  – Limitations in number of users (limited by VLAN-ID range)
  – Proliferation of VM MAC addresses in switches in the network (requiring larger table sizes in switches)
  – Switches must support use of the same MAC address in multiple VLANs (independent VLAN learning)
• VXLAN, NVGRE – new methods
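The arithmetic behind the "limitations of VLAN-ID range" point above: both new methods move from a 12-bit to a 24-bit segment identifier.

```python
# VLANs top out at ~4k segments, while VXLAN's VNI / NVGRE's TNI give ~16M.
vlan_ids = 2**12 - 2      # 4094 usable VLAN IDs (0 and 4095 are reserved)
vni_ids = 2**24           # 16,777,216 VXLAN/NVGRE segment IDs
print(vlan_ids, vni_ids)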

27
VxLAN (Virtual eXtensible LAN) – How it Works

Without VxLAN:
• VM-11 (ip-11, mac-11), running on the hypervisor of Server A (ip-A, mac-A), sends an Ethernet frame to VM-21 (ip-21, mac-21) on Server B (ip-B, mac-B)
• Frame on the wire: Ethernet header (S-MAC = mac-11, D-MAC = mac-21) | IP header (S-IP = ip-11, D-IP = ip-21) | payload

Using VxLAN (tunneling in UDP/IP):
• The same frame from VM-11 to VM-21 is encapsulated by the hypervisor on Server A and tunneled to the hypervisor on Server B
• Frame on the wire: outer Ethernet header (S-MAC = mac-A, D-MAC = mac-B) | outer IP header (S-IP = ip-A, D-IP = ip-B) | VNI | inner Ethernet header (S-MAC = mac-11, D-MAC = mac-21) | inner IP header (S-IP = ip-11, D-IP = ip-21) | payload
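A minimal sketch of the VxLAN header the hypervisor adds, assuming the 8-byte header with a VNI-valid flag and a 24-bit VNI, and the IANA-assigned UDP destination port 4789 (early implementations used other ports). The outer Ethernet/IP/UDP headers are left to the host's normal IP stack and are omitted here.

```python
import struct

VXLAN_PORT = 4789          # IANA-assigned UDP destination port

def vxlan_encapsulate(inner_frame, vni):
    """Prepend the 8-byte VXLAN header (flags with the VNI-valid bit, 24-bit
    VNI, reserved bytes) to the original Ethernet frame from the VM."""
    flags = 0x08 << 24                              # 'I' bit: VNI field is valid
    vxlan_hdr = struct.pack("!II", flags, (vni & 0x00FFFFFF) << 8)
    return vxlan_hdr + inner_frame

# Encapsulate a (dummy) 60-byte inner frame into segment VNI 5000:
pkt = vxlan_encapsulate(inner_frame=b"\x00" * 60, vni=5000)
```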

28
VXLAN – Internals
Each server's hypervisor (the VXLAN tunnel endpoint) keeps two tables:
• Self table – local VMs and their virtual network: VM-11 -> VNI-X, VM-12 -> VNI-Y, VM-13 -> VNI-Z
• Remote table – remote VM, its VNI, and the server hosting it: ip-21 -> VNI-X -> ip-B, ip-22 -> VNI-X -> ip-B, ip-2n -> VNI-Y -> ip-B; learnt and aged continuously based on actual traffic
VMs across Server A (ip-A, mac-A) and Server B (ip-B, mac-B) are grouped into per-user virtual networks, e.g. VN-X (User X) and VN-Y (User Y).

29
• NVGRE = Network Virtualization using Generic Routing Encapsulation
• Transport Ethernet frames from VMs by tunneling in GRE (Generic Routing Encapsulation)
• Tunneling involves GRE header + outer IP header + outer Ethernet header
• Relies on existing standardized GRE protocol – avoids new protocol, new Assigned Number, etc
• Use of GRE (as opposed to UDP/IP) leads to loss of multipath capability (see the sketch below)
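A matching sketch of the NVGRE side: a GRE header with the key-present flag, the Transparent Ethernet Bridging protocol type, and a 32-bit key carrying the 24-bit Tenant Network Identifier plus an 8-bit flow ID. Outer IP/Ethernet headers are omitted and the helper name is ours.

```python
import struct

GRE_PROTO_TEB = 0x6558     # Transparent Ethernet Bridging: payload is an Ethernet frame
GRE_FLAG_KEY = 0x2000      # key-present bit

def nvgre_encapsulate(inner_frame, tni, flow_id=0):
    """Prepend the GRE header used by NVGRE. Because the varying UDP source
    port of VXLAN is unavailable, transit ECMP hashing sees the same GRE
    header for every flow between a given pair of endpoints."""
    key = ((tni & 0x00FFFFFF) << 8) | (flow_id & 0xFF)
    gre_hdr = struct.pack("!HHI", GRE_FLAG_KEY, GRE_PROTO_TEB, key)
    return gre_hdr + inner_frame
```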
VXLAN vs. NVGRE
• Segment identifier: VXLAN – VNI (VXLAN Network Identifier, or VXLAN Segment ID); NVGRE – TNI (Tenant Network Identifier)
• Overhead: VXLAN – VxLAN header + UDP header + IP header + Ethernet header = 8+8+40+16 = 72 bytes added per Ethernet frame; NVGRE – GRE header + IP header + Ethernet header = 8+40+16 = 64 bytes added per Ethernet frame
• Tunnel endpoint: VXLAN – VTEP (VXLAN Tunnel End Point), originates or terminates VXLAN tunnels; NVGRE – NVGRE endpoint
• Gateway: VXLAN Gateway forwards traffic between VXLAN and non-VXLAN environments; NVGRE gateway plays the same role
• Protocol: VXLAN is a new protocol; NVGRE extends an existing protocol for new usage
• Multipath: VXLAN – multipath using different UDP ports; NVGRE – no multipath since the GRE header is the same

30
Problems with NVGRE and VXLAN

New Ecosystem:
• Existing network analysis tools won't work – a partner ecosystem for the technology has to be developed
• Existing ACLs installed in the network infrastructure are broken
• Needs an additional gateway to communicate outside the virtual network

Partial Support:
• Does not address QoS
  – Encapsulation / tunneling techniques like Provider Bridging or PBB clearly addressed QoS by mapping the internal "marking" to the external "marking"
• What is tunneled may already be tunneled – questions of backward compatibility with existing apps

Configuration and Management:
• Controlled multicast (with the use of, say, IGMPv3) within a tenant network now gets broadcast to all endpoints in the tenant's virtual network, since broadcast and multicast get mapped to one multicast address for the entire VXLAN/VNI
• Requires consistent configuration of the virtual network mapping on all the virtual machines – a management nightmare without tools to debug and isolate misconfiguration
• These may be just the tip of the iceberg – will we need virtualized DHCP, virtualized DNS, etc.?

Security:
• Existing security processes are broken
  – Existing security processes rely on packet scanning and analysis
  – Encapsulating MAC in IP changes packet headers; existing tools must be modified / enhanced
• Starts to put more emphasis on firewalls and IPS in virtualizers – redoing the Linux network stack!

31
TRILL vs. VXLAN (or) TRILL & VXLAN?
Need for both – depending on what requirement a datacenter addresses

TRILL:
• Addresses the network; tries to optimize the network
• Technology to be implemented in the network infrastructure (switches/routers)
• Needs hardware replacement of the switch infrastructure
• Restricted to handling VLANs; no optimizations for VXLAN/NVGRE
• No changes required in end-points
• Need is for large datacenters (lots of links and switches)

VxLAN:
• Addresses end points (like servers)
• Technology to be implemented in virtualizers
• Needs a software upgrade in virtualizers (assuming the virtualizer supplier supports this)
• Agnostic about switches/routers between end-points (leaves them as good/bad as they are)
• More computing power required from virtualizers (additional packet header handling, additional table maintenance, associated timers)
• Need is primarily for multi-tenant datacenters

32
TRILL & the other networking alternatives

• Multipath: TRILL – yes; SPB – yes; VEPA – no / NA; VN-Tag – no / NA; MC-LAG – yes; SDN/OpenFlow – yes
• Hardware change: TRILL – required; SPB – required; VEPA – none; VN-Tag – h/w change at the analyzer point; MC-LAG – none; SDN/OpenFlow – none for a small number of flows, required for large numbers
• Complexity: TRILL – more complex switch; SPB – more complex switch; VEPA – negligible additional complexity; VN-Tag – more complex endpoint; MC-LAG – negligible additional complexity; SDN/OpenFlow – simpler switch but complex controller
• Troubleshooting: TRILL – does not address network troubleshooting; SPB – tools available from Mac-in-Mac technology; VEPA, MC-LAG, SDN/OpenFlow – existing tools continue to work; VN-Tag – requires enhancement to existing tools at the analyzer point
• Provisioning: TRILL – lesser; SPB – least; VEPA / VN-Tag / MC-LAG – lesser to least; SDN/OpenFlow – high amount of provisioning

33
Aricent's Datacenter Ethernet Software Frameworks
• Aricent IPRs
• Intelligent Switching Solution (ISS) with Datacenter Switching Extensions
• Services

Feel free to meet me at booth #2626 or get in touch with me for any questions at:
Rajesh Kumar Sundararajan: [email protected]

Thank you.