TRANSCRIPT
Data Centre Cabling
20 August 2015
Stefan Naude RCDD
The SIEMON Company
Agenda
1. Data Centre Standards
2. The Need for Speed
3. Copper in the Data Centre
4. Fibre in the Data Centre
5. Top of Rack vs EOR topology
6. Other Design Considerations
7. Future Technology
Data Centre Standards
Data Centre Standards
ISO/IEC 24764
Information technology – Generic cabling systems
for data centres
ISO/IEC 11801
Information technology – Generic cabling for customer
premises
ISO/IEC 14763-1
Information technology – Implementation and
operation of customer premises cabling – Part 1:
Administration
ISO/IEC 14763-2
Information technology – Implementation and
operation of customer cabling – Part 2: Planning and
Installation
ISO/IEC 14763-3
Information technology – Implementation and
operation of customer cabling – Part 3: Testing of
Optical Fibre
ISO/IEC 14763-4
Information technology – Implementation and
operation of customer cabling – Part 4: Testing of
balanced copper cabling in the horizontal subsystem
Data Centre Standards
ANSI/TIA 568-C.0
Generic Telecommunications Cabling
for Customer Premises
ANSI/TIA 569-C
Telecommunications Pathways and
Spaces
ANSI/TIA 606-B
Administration Standard
Telecommunications Infrastructure
ANSI/TIA 607-B
Generic Telecommunications Bonding
& Grounding for Customer Premises
ANSI/TIA 568-C.1
Commercial Building
Telecommunications Cabling
ANSI/TIA 568-C.2
Balanced Twisted-Pair
Telecommunications Cabling &
Components
ANSI/TIA 568-C.4
Broadband Coaxial
Cabling & Components
ANSI/TIA 862-A
Building Automation
Systems Cabling
ANSI/TIA 570-C
Residential Telecommunications
Infrastructure Standard
ANSI/TIA 758-A
Customer Owned Outside Plant
Telecommunications Infrastructure
ANSI/TIA 942-A
Telecommunications Infrastructure
Standard for Data Centres
ANSI/TIA 1005-A
Telecommunications Infrastructure
Standard for Industrial Premises
ANSI/TIA 568-C.3
Optical Fibre
Cabling Components
Data Centre Standards
BICSI 002-2011 – Data Centre Design and Best Practices
o Addresses telecommunications, information technology, electrical, mechanical and architectural issues for the design and installation of Data Centres
o Guidance on coordination between design and construction disciplines
o Guidance on managing Data Centre projects
The Need for Speed
Data Centres have different functions
The Need for Speed
o Server Virtualization
o New Applications
o IoT (Estimated 50 billion objects by 2020)
o TV/Video on demand
o Wireless Devices
o “Big Brother” society
o Cloud Computing
The Need for Speed
255 Tb/s speeds achieved via spatial multiplexing:
5.1 Tb/s per carrier and 50 carriers down the 7 cores
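As a quick sanity check on the figures above (the per-carrier rate and carrier count come from the slide; the script simply multiplies them):

```python
# Sanity-check the quoted spatial-multiplexing arithmetic:
# 50 carriers at 5.1 Tb/s each over the 7-core fibre.
per_carrier_tbps = 5.1
carriers = 50

total_tbps = per_carrier_tbps * carriers
print(f"Aggregate: {total_tbps:.0f} Tb/s")  # -> Aggregate: 255 Tb/s
```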
Copper in the Data Centre
Why a minimum of Category 6A in the Data Centre?
o 1G vs 10G: around 15 to 20% cost difference
o Standards advise Category 6A
o Biggest concern at high frequency is alien crosstalk (ANEXT)
o ANEXT is difficult and time-consuming to test
Copper in the Data Centre
• Alien Near-End Crosstalk (ANEXT) happens at high-frequency transmission when you can "hear" the signal transmitted on one cable on another cable in the bundle
• It is not NEXT, which happens within the same cable
Copper in the Data Centre
ANEXT: problem and solution (diagram)
Copper in the Data Centre
Why 10G on copper?
o Energy Efficient Ethernet
o Wake on LAN
o Short Reach Mode
o Auto-negotiation between speeds
o 10G more efficient than 1G (lower power per gigabit)
o 100m link limitation = flexibility
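The efficiency point can be made concrete with a rough watts-per-gigabit comparison; the wattage figures below are illustrative round numbers, not vendor data:

```python
# Rough watts-per-gigabit comparison for 1G vs 10G server ports.
# Power figures are hypothetical examples for illustration only; real
# values depend on PHY generation, cable length and short-reach mode.
ports = {
    "1GBASE-T":  {"gbps": 1,  "watts": 1.0},
    "10GBASE-T": {"gbps": 10, "watts": 3.5},
}

for name, p in ports.items():
    w_per_gbps = p["watts"] / p["gbps"]
    print(f"{name}: {w_per_gbps:.2f} W per Gb/s")
```

With these assumed figures the 10G port draws more power in absolute terms but far less per gigabit delivered, which is the efficiency argument the slide is making.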
Fibre in the Data Centre
o With 10G to the server (access layer)
o 40G and 100G are already used in Data Centres
o Higher data rates required on the uplinks
o Mostly parallel optics and multimode fibre
Fibre in the Data Centre
o OM3 to 100m
o OM4 to 150m
o Spare 4 fibres can be used to create another port
[Diagram: 40G MTP – parallel optics with four 10G TX lanes and four 10G RX lanes]
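A minimal sketch of how the parallel lanes map onto the 12-fibre MTP; the position assignment (1–4 transmit, 9–12 receive, 5–8 spare) follows the common 40GBASE-SR4 convention and should be checked against the cassette's polarity documentation:

```python
# Sketch: 40GBASE-SR4 lane map on a 12-position MTP/MPO connector.
# Positions 1-4 carry TX lanes, 9-12 carry RX lanes, 5-8 are spares
# (common SR4 convention -- verify against your hardware's docs).
LANE_GBPS = 10  # each lane runs at 10 Gb/s

lane_map = {}
for pos in range(1, 13):
    if pos <= 4:
        lane_map[pos] = f"TX lane {pos}"
    elif pos >= 9:
        lane_map[pos] = f"RX lane {pos - 8}"
    else:
        lane_map[pos] = "spare"

tx_lanes = sum(v.startswith("TX") for v in lane_map.values())
spares = sum(v == "spare" for v in lane_map.values())
print(f"{tx_lanes * LANE_GBPS}G port using {12 - spares} fibres, {spares} spare")
# -> 40G port using 8 fibres, 4 spare
```

The four spare positions are what the slide refers to: with suitable cassettes they can be broken out to form another port.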
Fibre in the Data Centre
o OM3 to 70m
o OM4 to 100m
o Approved by IEEE March 2015
[Diagram: 100G MTP – parallel optics with four 25G TX lanes and four 25G RX lanes]
Fibre in the Data Centre
Fibre in the Data Centre
Benefits of MTP fibre
o MTP fibre supports 10G from day 1 (MTP-to-LC cassettes)
o Once equipment is ready – upgrade to 40G/100G
o Plug and Play
o Pre-tested
Fibre in the Data Centre
Design Considerations for fibre in the DC
o Link budgets
o Upgrade path
o Fibre Polarity
o High Density
Fibre in the Data Centre
Link Budgets

Application                Distance (m)   Max Channel Loss / Connector Loss   Fibre Attenuation (3.0 dB/km)
10 GbE OM3 @ 850 nm        300            2.6 dB / NA                         0.9 dB
40/100 GbE OM3 @ 850 nm    100            1.9 dB / 1.5 dB                     0.3 dB
10 GbE OM4 @ 850 nm        400            2.9 dB / NA                         1.2 dB
40/100 GbE OM4 @ 850 nm    150            1.5 dB / 1.0 dB                     0.4 dB
Fibre in the Data Centre
Ethernet Supported distances on Fibre
Fibre in the Data Centre
Fibre in the Data Centre
Direct Attach Cabling distance support
Fibre in the Data Centre
Link Budgets
Fibre loss comes from:
1. Length attenuation – the physical fibre
2. LC-to-LC interface (front of cassette)
3. MTP-to-MTP interface (back of cassette)
**Consider Low Loss products**
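These loss contributions can be totalled and compared against the budget table above; the 0.5 dB per mated cassette connection used below is an assumed low-loss figure for illustration, not a vendor specification:

```python
# Minimal link-budget check for a 40/100 GbE channel over OM4.
# Budget and attenuation values come from the link-budget table above;
# the per-connection losses passed in are assumed low-loss figures.
FIBRE_ATTEN_DB_PER_KM = 3.0   # multimode fibre at 850 nm
MAX_CHANNEL_LOSS_DB = 1.5     # 40/100 GbE over OM4 (150 m max)

def channel_loss_db(length_m, connection_losses_db):
    """Length attenuation plus every mated-connector loss in the channel."""
    return FIBRE_ATTEN_DB_PER_KM * length_m / 1000 + sum(connection_losses_db)

# 100 m of OM4 through two MTP-to-LC cassettes at 0.5 dB each
loss = channel_loss_db(100, [0.5, 0.5])
print(f"Channel loss: {loss:.2f} dB "
      f"({'within' if loss <= MAX_CHANNEL_LOSS_DB else 'over'} budget)")
# -> Channel loss: 1.30 dB (within budget)
```

Adding a third cassette at the same assumed loss would push the channel to 1.8 dB, over budget, which is why low-loss components matter in multi-connector designs.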
Fibre in the Data Centre
Upgrade Path 10G to 40G/100G
Fibre in the Data Centre
Polarity – Method A
Fibre in the Data Centre
Polarity – Method B
Fibre in the Data Centre
Polarity – Method C
Fibre in the Data Centre
High Density
o Space in a DC comes at a premium
o Environments with a high volume of fibre
o SAN switches
o LAN switches
o Cross connects in a centralised patching zone
o 144 cores in 1U is high density
o Consider the panels and management
Top of Rack vs End of Row
Top of Rack vs End of Row
Top of Rack vs End of Row
Top of Rack vs End of Row
Other Design Considerations
A Data Centre is a complex collection of many aspects including:
o Facility and facility layout
Cooling, power (UPS, ATS, generator, mains etc.)
Flooring, lights
o Network – switching, redundancy, maintenance, SLAs
o Servers – physical, applications
o Security – physical and network
o Monitoring – humidity, temperature, power use
Other Design Considerations
We are only looking at physical infrastructure:
o Routing – above or below or both? (fill ratios, bending)
o Coordination with other services (lights, power)
o Type of network topology (EOR, MOR, CPZ?)
o Cable management (in-rack and other)
o Level of redundancy
o Physical layout of racks in the room (cold aisle/hot aisle?)
o Switch harnessing
o Consider limitations of design (distance/application)
Other Design Considerations
Network Topology 3 Tier
Other Design Considerations
Network Topology Leaf and Spine
Other Design Considerations
Cable Management
Other Design Considerations
Correct Cable Management
Other Design Considerations
Correct Cable Management
Future Technology
Category 8 copper for 25GBASE-T/40GBASE-T
o 30m link
o 1250 MHz for 25G and 2000 MHz for 40G
o 2-connector link
o Shielded – RJ45 (Class I)
o Fully shielded – TERA (Class II)
o ~2 W per port
o Not yet finalised – expected mid 2016
Future Technology
o Demand only goes one way – up
o DCIM – more complex monitoring for efficiency
o IoT – Cisco research shows that today's global data traffic per month is 24 times the 2013 level, and it will be 95 times that by 2018, reaching 15.9 exabytes per month
o Storage – flash/solid state
o Software-defined "everything" (SDx)