
Page 1

Progress and Challenges toward 100Gbps Ethernet

Joel Goergen, VP of Technology / Chief Scientist

Abstract: This technical presentation will focus on the progress and challenges for development of technology and standards for 100 GbE.  Joel is an active contributor to IEEE802.3 and the Optical Internetworking Forum (OIF) standards process.  Joel will discuss design methodology, enabling technologies, emerging specifications, and crucial considerations for performance and reliability for this next iteration of LAN/WAN technology.

Page 2

Overview

Network Standards Today

Available Technology Today

Feasible Technology for 2009

The Push for Standards within IEEE and OIF

Anatomy of a 100Gbps or 160Gbps Solution

Summary

Backup Slides

Page 3

Network Standards Today: The Basic Evolution

1983: 10 Mb

1994: 100 Mb

1996: 1 GbE

2002: 10 GbE

2010???: 100 GbE ???

Page 4

Network Standards Today: The Basic Structure

[Diagram: Servers / PCs connect through Aggregation / Access Switches to Core Switches and out to the Internet.]

Page 5

Network Standards Today: The Desktop

1Gbps Ethernet
– 10/100/1000 copper ports have been shipping with most desktop and laptop machines for a few years.
– Fiber SMF/MMF

IEEE 802.11a/b/g Wireless
– Average usable bandwidth reaching 50Mbps

Page 6

Network Standards Today: Clusters and Servers

1Gbps Ethernet
– Copper

10Gbps Ethernet
– Fiber
– CX-4

Page 7

Network Standards Today: Coming Soon

10Gbps LRM
– Multi-mode fiber to 220 meters.

10GBASE-T
– 100 meters at more than 10 Watts per port ???
– 30 meters short reach at 3 Watts per port ???

10Gbps Back Plane
– 1Gbps, 4x3.125Gbps, 1x10Gbps over 1 meter of improved FR-4 material.

Page 8

Available Technology Today: System Implementation A+B

[Diagram: Two line cards, each with 1G and 10G front-end ports and SPIx interfaces, connect over a passive copper backplane to redundant A and B switch fabrics.]

Page 9

Available Technology Today: System Implementation N+1

[Diagram: Two line cards, each with 1G and 10G front-end ports and SPIx interfaces, connect over a passive copper backplane (links L1 … Ln+1) to N+1 redundant switch fabrics (1st through N+1).]

Page 10

Available Technology Today: Zoom to Front-end

[Diagram: The N+1 system view again, highlighting the line-card front ends with their 1G and 10G ports.]

Page 11

Available Technology Today: Front-end

Copper
– RJ45
– RJ21 (mini … carries 6 ports)

Fiber
– XFP and variants (10Gbps)
– SFP and variants (1Gbps)
– XENPAK
– LC/SC bulkhead for WAN modules

Page 12

Available Technology Today: Front-end System Interfaces

TBI
– 10bit interface. Max speed 3.125Gbps.

SPI-4 / SXI
– System Packet Interface. 16bit interface. Max speed 11Gbps.

SPI-5
– System Packet Interface. 16bit interface. Max speed 50Gbps.

XFI
– 10Gbps serial interface.

Page 13

Available Technology Today: Front-end Pipe Diameter

1Gbps
– 1Gbps doesn't handle a lot of data anymore.
– Non-standard parallel also available, based on OIF VSR.

10Gbps LAN/WAN or OC-192
– As port density increases, using 10Gbps as an upstream pipe will no longer be effective.

40Gbps OC-768
– Not effective port density in an asynchronous system.
– Optics cost close to 30 times 10Gbps Ethernet.

Page 14

Available Technology Today:Front-end Distance Requirements

x00 m (MMF)– SONET/SDH (Parallel): OIF VSR-4, VSR-5– Ethernet:: 10GBASE-SR, 10GBASE-LX4, 10GBASE-LRM

2-10 km– SONET/SDH: OC-192/STM-64 SR-1/I-64.1, OC-768/STM-256 VSR2000-3R2/etc.– Ethernet: 10GBASE-LR

~40 km– SONET/SDH: OC-192/STM-64 IR-2/S-64.2, OC-768/STM-256– Ethernet: 10GBASE-ER

~100 km– SONET/SDH: OC-192/STM-64 LR-2/L-64.2, OC-768/STM-256– Ethernet: 10GBASE-ZR

DWDM– OTN: ITU G.709 OTU-2, OTU-3

Assertion– Each of these applications must be solved for ultra high data rate interfaces.

Page 15

Available Technology Today: Increasing Pipe Diameter

1Gbps LAN by 10 links parallel

10Gbps LAN by x links WDM

10Gbps LAN by x physical links

Multiple OC-192 or OC-768 channels

Page 16

Available Technology Today: Zoom to Back Plane

[Diagram: The N+1 system view again, highlighting the passive copper backplane between the line cards and the switch fabrics.]

Page 17

Available Technology Today: Back Plane

[Diagram: A data packet crosses backplane traces between SERDES devices, connecting line cards (GbE / 10 GbE), RPMs, SFMs, and power supplies.]

Page 18

Available Technology Today: Making a Back Plane

Simple! It’s just multiple sheets of glass with copper traces and copper planes added for electrical connections.

Reference: Isola

Page 19

Available Technology Today: Back Plane Pipe Diameter

1.25Gbps
– Used in systems with five- to ten-year-old technology.

2.5Gbps/3.125Gbps
– Used in systems with technology five years old or less.

5Gbps/6.25Gbps
– Used within the last 12 months.

Page 20

Available Technology Today: Increasing Pipe Diameter

Can't WDM copper.

10.3Gbps/12.5Gbps
– Not largely deployed at this time.

Increasing the pipe diameter on a back plane with assigned slot pins can only be done by changing the glass construction.

Page 21

Available Technology Today: Pipe Diameter is NOT Flexible

Once the pipe is designed and built to a certain pipe speed, making the pipe faster is extremely difficult, if not impossible.

[Plot: SDD21 for 3 connectors in N4000-13 material, 0 to 15 GHz, 0 to -75 dB; traces labeled 5_5_20_10, 3_3_20_7, 3_3_15_10, 3_3_15_7, 5_5_10_10, 3_3_10_7, 5_5_3_10, and 3_3_3_7 SDD21, plus old and new ad hoc SDD21 curves.]

Page 22

Available Technology Today: Gbits Density per Slot with Front End and Back Plane Interfaces Combined

Year system introduced – slot density:
2000 – 40Gbps
2004 – 60Gbps
2006/7 (in design now) – 120Gbps

Based on a maximum back plane thickness of 300 mils, with 20 TX and 20 RX differential pipes.

Page 23

Feasible Technology for 2009: Defining the Next Generation

The overall network architecture for next generation ultra high (100, 120 and 160Gbps) data rate interfaces should be similar in concept to the successful network architecture deployed today using 10Gbps and 40Gbps interfaces.

The internal node architectures for ultra high (100, 120 and 160Gbps) data rate interfaces should follow similar concepts in use for 10Gbps and 40Gbps interfaces.

All new concepts need to be examined, but there are major advantages to scaling current methods with new technology.

Page 24

Feasible Technology for 2009: Front-end Pipe Diameter

80Gbps … not enough return on investment
100Gbps
120Gbps
160Gbps

Reasonable Channel Widths
– 10λ by 10-16 Gbps
– 8λ by 12.5-20 Gbps
– 4λ by 25-40 Gbps
– 1λ by 100-160 Gbps

Suggest starting at an achievable channel width while pursuing a timeline to optimize the width in terms of density, power, feasibility, and cost, depending on optical interface application/reach.
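The widths above are simply factorizations of the aggregate rate. A quick sketch, using the lane counts from the list and the low end of each rate range, confirms each option reaches the 100Gbps target:

```python
# Check that each candidate channel width reaches a 100 Gbps aggregate.
# Lane counts come from the slide; per-lane rates are the low end of each range.
options = {
    "10 lanes x 10 Gbps": (10, 10.0),
    "8 lanes x 12.5 Gbps": (8, 12.5),
    "4 lanes x 25 Gbps": (4, 25.0),
    "1 lane x 100 Gbps": (1, 100.0),
}

for name, (lanes, rate_gbps) in options.items():
    aggregate = lanes * rate_gbps
    print(f"{name}: {aggregate:.0f} Gbps aggregate")  # each prints 100 Gbps
```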

Page 25

Feasible Technology for 2009: Front-end Distance Requirements

x00 m (MMF)
– SONET/SDH: OC-3072/STM-1024 VSR
– Ethernet: 100GBASE-S

2-10 km
– SONET/SDH: OC-3072/STM-1024 SR
– Ethernet: 100GBASE-L

~40 km
– SONET/SDH: OC-3072/STM-1024 IR-2
– Ethernet: 100GBASE-E

~100 km
– SONET/SDH: OC-3072/STM-1024 LR-2
– Ethernet: 100GBASE-Z

DWDM (OTN)
– SONET/SDH: mapping of OC-3072/STM-1024
– Ethernet: mapping of 100GBASE

Assertion
– These optical interfaces are defined today at the lower speeds. It is highly likely that industry will want these same interface specifications for the ultra high speeds.
– Optical interfaces, with the exception of VSR, are not typically defined in OIF. In order to specify the system level electrical interfaces, some idea of what industry will do with the optical interface has to be discussed. It is not the intent of this presentation to launch these optical interface efforts within OIF.

Page 26

Feasible Technology for 2009: Front-end System Interfaces

Reasonable Channel Widths (SPI-?)
– 16 lanes by 6.25-10Gbps
– 10 lanes by 10-16Gbps
– 8 lanes by 12.5-20Gbps
– 5 lanes by 20-32Gbps
– 4 lanes by 25-40Gbps

Port density is impacted by channel width: fewer lanes translate to higher port density and less power.

Page 27

Feasible Technology for 2009: Back Plane Pipe Diameter

Reasonable Channel Widths
– 16 lanes by 6.25-10Gbps
– 10 lanes by 10-16Gbps
– 8 lanes by 12.5-20Gbps
– 5 lanes by 20-32Gbps
– 4 lanes by 25-40Gbps

Port density is impacted by channel width: fewer lanes translate to higher port density and less power.

Page 28

Feasible Technology for 2009: Pipe Diameter is NOT Flexible

New Back Plane designs will have to have pipes that can handle 20Gbps to 25Gbps.

[Plot: SDD21 for 3 connectors in N4000-13 material, 0 to 15 GHz, 0 to -75 dB; traces labeled 5_5_20_10, 3_3_20_7, 3_3_15_10, 3_3_15_7, 5_5_10_10, 3_3_10_7, 5_5_3_10, and 3_3_3_7 SDD21, plus old and new ad hoc SDD21 curves.]

Page 29

Feasible Technology for 2009: Gbits Density per Slot with Front End and Back Plane Interfaces Combined

Year system introduced – slot density:
2000 – 40Gbps
2004 – 60Gbps
2006/7 (in design now) – 120Gbps
2009 – 500Gbps

Based on a maximum back plane thickness of 300 mils, with 20 TX and 20 RX differential pipes.
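The 2009 target follows from the pipe counts and rates quoted elsewhere in the deck. A minimal sketch, assuming each of the 20 TX differential pipes runs at the 25Gbps upper end of the stated back plane range:

```python
# Slot density = differential pipes per direction x per-pipe rate.
tx_pipes = 20               # 20 TX differential pipes per slot (per the note above)
rate_per_pipe_gbps = 25     # upper end of the 20-25 Gbps back plane target
slot_density_gbps = tx_pipes * rate_per_pipe_gbps
print(f"2009 slot density: {slot_density_gbps} Gbps")  # 500 Gbps
```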

Page 30

Feasible Technology for 2009: 100Gbps Options

Each option lists the lane width carried across the Chip2Chip interconnects (optics → framer or MAC → NPU → mux) and the back plane. "X" marks a possible conversion or mux point to a more efficient lane width as required.

10λ by 10 optics:
– 16 lanes by 6.25 throughout
– 10 lanes by 10 throughout – not an efficient design; mux to 5 lanes for back plane efficiency
– 8 lanes by 12.5 (X) or 5 lanes by 20-25 – more efficient, but not usual multiples; power ???
– 4 lanes by 25 throughout

8λ by 12.5 optics:
– 16 lanes by 6.25 or 10 lanes by 10 (X) – not an efficient design; mux to 8 lanes for better back plane efficiency
– 8 lanes by 12.5 or 5 lanes by 20-25 (X) – scalable; power ???, back plane efficiency ???
– 4 lanes by 25 – efficient and scalable; needs power feasibility study

4λ by 25 optics:
– 16 lanes by 6.25 or 10 lanes by 10 (X)
– 8 lanes by 12.5 or 5 lanes by 20-25 (X) – scalable; power ???, back plane efficiency ???
– 4 lanes by 25 – efficient and scalable

1λ by 100 optics:
– 16 lanes by 6.25
– 10 lanes by 10 (X) – not an efficient design; mux to 5 lanes for back plane efficiency
– 8 lanes by 12.5 – scalable; power ???, back plane efficiency ???
– 5 lanes by 20-25 (X) – more efficient, but not usual multiples; power ???
– 4 lanes by 25 – efficient and scalable

Key: Possible Path; Best Path / Scalable Path based on ASIC technology; Possible Path but not efficient.

Bit rates shown are based on 100Gbps; scale the bit rate accordingly to achieve 160Gbps.

Page 31

The Push for Standards: Interplay Between the OIF & IEEE

OIF defines multi-source agreements within the telecom industry.
– Optics and EDC for LAN/WAN
– SERDES definition
– Channel models and simulation tools

IEEE 802 covers LAN/MAN Ethernet.
– 802.1 and 802.3 define Ethernet over copper cables, fiber cables, and back planes.
– 802.3 leverages efforts from OIF.

Membership in both bodies is important for developing next generation standards.

Page 32

The Push for Standards: OIF

Force10 Labs introduced three efforts within OIF to drive 100Gbps to 160Gbps connectivity.
– Two interfaces for interconnecting optics, ASICs, and backplanes
– A 25Gbps SERDES
– Updates of design criteria to the Systems User Group

Page 33

Case Study: Standards Process
P802.3ah – Nov 2000 / Sept 2004

Call for Interest – by a member of 802.3 – 50% WG vote
Study Group – open participation – 75% WG PAR vote, 50% EC & Stds Bd
Task Force – open participation – 75% WG vote
Working Group Ballot – members of 802.3 – 75% WG ballot, EC approval
Sponsor Ballot – public ballot group – 75% of ballot group
Standards Board Approval – RevCom & Stds Board – 50% vote
Publication – IEEE Staff, project leaders

Page 34

Case Study: Standards Process
10GBASE-LRM: 2003 / 2006

10GBASE-LRM Innovations:
• TWDP – software reference equalizer; determines the EDC penalty of the transmitter
• Dual launch – centre and MCP; maximum coverage for minimum EDC penalty
• Stress channels – precursor, split, and post-cursor; canonical tests for EDC

Timeline: Nov03 CFI – Jan04 Study Group – May04 Task Force – Nov04 TF Ballot – Mar05 WG Ballot – Dec05 Sponsor Ballot – Mid-06 Standard

Optical Power Budget (OMA):
Launch power (min): -4.5 dBm
– 0.4 dB: fiber attenuation
– 0.3 dB: RIN
– 4.4 dB: TP3 TWDP and connector loss @ 99% confidence level
– 0.2 dB: modal noise
– 0.5 dB: transmitter implementation
– 0.9 dB: unallocated power
Required effective receiver sensitivity: -11.2 dBm

Reference: David Cunningham – Avago Technologies
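The budget closes exactly: subtracting every allocation from the minimum launch power reproduces the -11.2 dBm sensitivity figure. A small arithmetic check using only the numbers from this slide:

```python
# 10GBASE-LRM optical power budget (OMA): launch power minus all allocations
# equals the required effective receiver sensitivity.
launch_power_dbm = -4.5
allocations_db = {
    "fiber attenuation": 0.4,
    "RIN": 0.3,
    "TP3 TWDP and connector loss (99% confidence)": 4.4,
    "modal noise": 0.2,
    "transmitter implementation": 0.5,
    "unallocated power": 0.9,
}
sensitivity_dbm = launch_power_dbm - sum(allocations_db.values())
print(f"required effective receiver sensitivity: {sensitivity_dbm:.1f} dBm")  # -11.2 dBm
```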

Page 35

Case Study: Standards Process
10GBASE-LRM

Power budget starting at TP2 (specified optical power levels, OMA):

Launch power minimum: -4.5 dBm

Attenuation (2 dB):
– Connector losses = 1.5 dB
– Fiber attenuation = 0.4 dB
– Interaction penalty = 0.1 dB

Stressed receiver sensitivity: -6.5 dBm

Noise (0.5 dB):
– RIN = 0.3 dB
– Modal noise = 0.2 dB

Dispersion (4.2 dB): ideal EDC power penalty, PIE_D

Effective maximum unstressed 10GBASE-LRM receiver sensitivity: -11.2 dBm

Optical input to receiver (TP3) compliance test allocation:
– TWDP and connector loss at 99th percentile (4.4 dB)
– Fiber attenuation = 0.4 dB
– RIN = 0.3 dB
– Modal noise = 0.2 dB
– Transmit implementation allowance = 0.5 dB
– Unallocated margin = 0.9 dB

Reference: David Cunningham – Avago Technologies

Page 36

Case Study: Standards Process
10GBASE-T: 2002 / 2006

Techno-babble
– 64B/65B encoding (similar to 10GBASE-R)
– LDPC(2048,1723) framing
– DSQ128 constellation mapping (PAM16 with ½ the code points removed)
– Tomlinson-Harashima precoder

Reach
– Cat 6 up to 55 m, with the caveat of meeting TIA TSB-155
– Cat 6A up to 100 m
– Cat 7 up to 100 m
– Cat 5 and 5e are not specified

Power
– Estimates for worst case range from 10 to 15 W
– Short reach mode (30 m) has a target of sub 4 W
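The first two coding layers imply a fixed line-rate expansion. A rough sketch of the combined overhead; note this ignores auxiliary framing bits and the fact that some bits travel uncoded in the real 10GBASE-T frame, so it is an approximation only:

```python
# Combined expansion from 64B/65B encoding and the LDPC(2048,1723) code rate.
encode_64b65b = 65 / 64        # one framing bit per 64 data bits
ldpc_code_rate = 1723 / 2048   # information bits per codeword bit
expansion = encode_64b65b / ldpc_code_rate
print(f"combined line-coding expansion: {expansion:.3f}x")  # ~1.207x
```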

Page 37

Case Study: Standards Process
10GBASE-T

Noise and EMI
– Alien crosstalk has the biggest impact on UTP cabling
– Screened and/or shielded cabling has better performance

Power
– Strong preference for copper technologies, even though higher power
– Short reach mode and better performing cable reduce the power requirement

Timeline
– The standard is coming… products in the market end of '06, early '07
– Nov 2002: Tutorial & CFI; Mar 2003: PAR; then Task Force review, 1st technical presentation, D1.0, 802.3 ballot, D2.0, D3.0, sponsor ballot, and the standard (STD) by mid 2006

Page 38

Birth of a Standard: It Takes About 5 Years

Ideas from industry

Feasibility and research

Call for Interest (CFI) – 100 GbE EFFORT IS HERE

Marketing / Sales potential, technical feasibility

Study Group

Work Group

Drafts

Final member vote

Page 39

The Push for Standards: IEEE

Force10 introduces a Call for Interest (CFI) to IEEE 802 in July 2006, with Tyco Electronics.
– Meetings will be held in the coming months to determine the CFI and the efforts required.
– We target July 2006 because of resources within IEEE.
– Joel Goergen and John D'Ambrosia will chair the CFI effort.

The anchor team is composed of key contributors from Force10, Tyco, Intel, Quake, and Cisco. It has since broadened to include over 30 companies.

Page 40

The Ethernet Alliance: Promoting All Ethernet IEEE Work

Key IEEE 802 Ethernet projects include
– 100 GbE
– Backplane
– 10 GbE LRM / MMF
– 10GBASE-T

Force10 is on the BoD as a principal member

20 companies at launch
– Sun, Intel, Foundry, Broadcom…
– Now approaching 40 companies

Launched January 10, 2006

Opportunity for customers to speak on behalf of 100 GbE

Page 41

Anatomy of a 100Gbps Solution: Architectural Disclaimers

There are many ways to implement a system
– This section covers two basic types.
– Issues facing 100Gbps ports are addressed in basic form.

Channel performance, or 'pipe capacity', is difficult to measure

Two popular chassis heights
– 24in to 34in height (2 or 3 per rack)
– 10in to 14in height (5 to 8 per rack)

Page 42

Anatomy of a 100Gbps Solution: What is a SERDES?

A device that attaches to the 'channel' or 'pipe'.

Transmitter:
– Parallel to serial
– Tap values
– Pre-emphasis

Receiver:
– Serial to parallel
– Clock and data recovery
– DFE
– Circuits are very sensitive to power noise and low Signal to Noise Ratio (SNR)

Reference: Altera

Page 43

Anatomy of a 100Gbps Solution: Interfaces that use SERDES

TBI
– 10bit interface. Max speed 3.125Gbps across all 10 lanes. This is a parallel interface that does not use SERDES technology.

SPI-4 / SXI
– System Packet Interface. 16bit interface. Max speed 11Gbps. This is a parallel interface that does not use SERDES technology.

SPI-5
– System Packet Interface. 16bit interface. Max speed 50Gbps. This uses 16 SERDES interfaces at speeds up to 3.125Gbps.

XFI
– 10Gbps serial interface. This uses 1 SERDES at 10.3125Gbps.

XAUI
– 10Gbps 4 lane interface. This uses 4 SERDES devices at 3.125Gbps each.
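The lane counts above multiply out to each interface's raw capacity, and XAUI illustrates why a raw rate exceeds its data rate: 8B/10B coding carries 8 data bits in every 10 line bits. A small sketch using the figures on this slide:

```python
# Raw aggregate = lanes x per-lane rate for the SERDES-based interfaces above.
interfaces = {
    "SPI-5": (16, 3.125),   # 16 SERDES lanes at up to 3.125 Gbps
    "XFI":   (1, 10.3125),  # one serial SERDES lane
    "XAUI":  (4, 3.125),    # four SERDES lanes at 3.125 Gbps each
}
for name, (lanes, rate_gbps) in interfaces.items():
    print(f"{name}: {lanes} x {rate_gbps} = {lanes * rate_gbps} Gbps raw")

# XAUI: 12.5 Gbps on the wire carries 10 Gbps of data after 8B/10B coding.
xaui_payload_gbps = 4 * 3.125 * 8 / 10
print(f"XAUI payload: {xaui_payload_gbps} Gbps")  # 10.0 Gbps
```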

Page 44

Anatomy of a 100Gbps Solution: Power Noise thought …

Line Card SERDES noise limits
– Analog target: 60mVpp ripple
– Digital target: 150mVpp ripple

Fabric SERDES noise limits
– Analog target: 30mVpp ripple
– Digital target: 100mVpp ripple

100Gbps interfaces won't operate well if these limits cannot be met.

Page 45

Anatomy of a 100Gbps Solution: Memory Selection

Advanced Content-Addressable Memory (CAM)
– Goal: less power per search
– Goal: 4 times more performance
– Goal: enhanced flexible table management schemes

Memories
– Replacing SRAMs with DRAMs, when performance allows, to conserve cost
– Quad Data Rate III SRAMs for speed
– SERDES based DRAMs for buffer memory

Need to drive JEDEC for serial memories that can be easily implemented in a communication system.
– The industry is going to have to work harder to get high speed memories for network processing in order to reduce latency.

Memory chips are usually the last thought! This will need to change for 100Gbps sustained performance.

Page 46

Anatomy of a 100Gbps Solution: ASIC Selection

High speed interfaces
– Interfaces to MACs, backplane, and buffer memory are all SERDES based. SERDES all the way. Higher gate counts with internal memories target 3.125 to 6.25 SERDES; higher speeds are difficult to design in this environment.
– SERDES used to replace parallel busing for reduced pin and gate count

Smaller process geometry
– Definitely 0.09 micron or lower
– More gates (100% more gates over the 0.13 micron process)
– Better performance (25% better performance)
– Lower power (1/2 the 0.13 micron process power)
– Use power optimized libraries

Hierarchical placement and layout of the chips
– Flat placement is no longer a viable option

To achieve cost control, ASIC SERDES speed is limited to 6.25Gbps in high density applications.

Page 47

Anatomy of a 100Gbps Solution: N+1 Redundant Fabric – BP

[Diagram: Line cards with front ends and SPIx interfaces connect over a passive copper backplane (links L1 … Ln+1) to N+1 redundant switch fabrics.]

Page 48

Anatomy of a 100Gbps Solution: N+1 Redundant Fabric – MP

[Diagram: The same N+1 arrangement on a passive copper midplane, with line-card front ends and SPIx interfaces on both sides.]

Page 49

Anatomy of a 100Gbps Solution: N+1 High Speed Channel Routing

[Diagram: Four line cards, each routed to every switch fabric (1st, 2nd, … Nth, N+1).]

Page 50

Anatomy of a 100Gbps Solution: A/B Redundant Fabric – BP

[Diagram: Line cards with front ends and SPIx interfaces connect over a passive copper backplane to redundant A and B switch fabrics.]

Page 51

Anatomy of a 100Gbps Solution: A/B Redundant Fabric – MP

[Diagram: The same A/B arrangement on a passive copper midplane, with line-card front ends and SPIx interfaces on both sides.]

Page 52

Anatomy of a 100Gbps Solution: A/B High Speed Channel Routing

[Diagram: Four line cards, each routed to both the A and B switch fabrics.]

Page 53

Anatomy of a 100Gbps Solution: A Quick Thought …

Looking at both the routing and connector complexity designed into the differential signaling:
– Best case: N+1 fabric in a back plane.
– Worst case: A/B fabric in a mid plane.

All implementations need to be examined for the best possible performance over all deployed network interfaces. Manufacturability and channel (pipe) noise are two of the bigger factors.

Page 54

Anatomy of a 100Gbps Solution: Determine Trace Lengths

After careful review of possible line card, switch fabric, and back plane architectural blocks, determine the range of trace lengths that exist between a SERDES transmitter and a SERDES receiver. 30 inches (0.75 meters) total should do it.

Several factors stem from trace length:
– Bandwidth
– Reflections from vias and/or thru-holes
– Circuit board material
– BER
– Coding

Keep in mind that the goal is to target one or both basic chassis dimensions.
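Bandwidth heads that list because channel loss grows with both length and frequency. A crude first-pass sketch, assuming a simple linear loss model; the 0.15 dB per inch per GHz coefficient is a placeholder assumption, not a measured FR-4 value:

```python
# Rough channel-loss estimate at the NRZ Nyquist frequency for a 30 in trace.
trace_length_in = 30                 # 0.75 m end-to-end, per the slide
loss_db_per_in_per_ghz = 0.15        # assumed placeholder loss coefficient

for lane_rate_gbps in (6.25, 12.5, 25.0):
    nyquist_ghz = lane_rate_gbps / 2          # NRZ fundamental frequency
    loss_db = loss_db_per_in_per_ghz * nyquist_ghz * trace_length_in
    print(f"{lane_rate_gbps} Gbps NRZ: Nyquist {nyquist_ghz} GHz, ~{loss_db:.1f} dB loss")
```

The point of the sketch is the scaling, not the absolute numbers: doubling the lane rate doubles the estimated loss over the same trace, which is why faster pipes force better glass.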

Page 55

Anatomy of a 100Gbps Solution: Channel Model Description

A “Channel” or “Pipe” is a high speed single-ended or differential signal connecting the SERDES transmitter to the SERDES receiver. The context of “Channel” or “Pipe” from this point is considered differential.

Develop a channel model based on the implications of the architectural choices and trace lengths:
– Identifies a clean launch route to a BGA device.
– Identifies design constraints and concerns.
– Includes practical recommendations.
– Identifies channel bandwidth.

Page 56

Anatomy of a 100Gbps Solution: Channel Simulation Model

[Diagram: channel simulation model. Signal path: Transmitter (XMTR) → FR4+ → connector plug → connector jack → FR4+ (back plane) → connector jack → connector plug → FR4+ (line card) → DC block → receiver filter → receiver slicer. TP1 is at the transmitter and TP4 at the receiver; TP5 (informative, equivalent capacitor circuit) is on the line card receiver side; TP2 and TP3 are not used.]

Page 57

Anatomy of a 100Gbps Solution: Channel: Back Plane

[Diagram: back plane cross-section. 13 mil drilled via holes with 24 mil pads connect press-fit connector pins through digital ground and power planes (with clearance pads) to a 24 mil trace; back-drilling of the via stubs is shown as optional (“Back Drill ??”).]

Shows a signal trace connecting pins on separate connectors across a back plane.

Page 58

Anatomy of a 100Gbps Solution: Channel: Line Card Receiver

Shows a signal trace connecting the back plane to a SERDES in a Ball Grid Array (BGA) package.

[Diagram: line card receiver routing. The back plane connector via (13 mil drill, 24 mil pads) connects through a trace and a DC blocking capacitor (24 mil x 32 mil pads) to a 21 mil BGA pad on the BGA or like device; power and digital ground planes are shown, and back-drilling is indicated as optional (“Back Drill ??”).]

Page 59

Channel Model Definition: Back Plane Bandwidth

– 2GHz to 3GHz bandwidth: supports 2.5Gbps NRZ, 8B10B
– 2GHz to 4GHz bandwidth: supports 3.125Gbps NRZ, 8B10B
– 2GHz to 5GHz bandwidth (4GHz low FEXT): supports 6.25Gbps PAM4; supports 3.125Gbps NRZ, 8B10B or scrambling
– 2GHz to 6.5GHz: supports 6.25Gbps NRZ, 8B10B or limited scrambling algorithms
– 2GHz to 7.5GHz: supports 12Gbps, limited scrambling algorithms
– 2GHz to 9GHz: supports 25Gbps multi-level

How do we evaluate the signal speed that can be placed on a channel?

Page 60

Channel Model Definition – IEEE 802.3ae XAUI Limit

b1 = 6.5e-6
b2 = 2.0e-10
b3 = 3.30e-20

SDD21 = -20*log10(e)*(b1*sqrt(f) + b2*f + b3*f^2)

f = 50MHz to 15000MHz

Page 61

Channel Model Definition – IEEE 802.3ap A(min)

b1 = 2.25e-5
b2 = 1.20e-10
b3 = 3.50e-20
b4 = 1.25e-30

SDD21 = -20*log10(e)*(b1*sqrt(f) + b2*f + b3*f^2 - b4*f^3)

f = 50MHz to 15000MHz
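Both limit lines can be evaluated numerically from the coefficients above. A minimal sketch (assuming f is in Hz, which reproduces the tens-of-dB losses seen on the published limit-line plots; `sdd21_limit` is a hypothetical helper, not part of either standard):

```python
import math

def sdd21_limit(f_hz, b1, b2, b3, b4=0.0):
    # SDD21 = -20*log10(e)*(b1*sqrt(f) + b2*f + b3*f^2 - b4*f^3), in dB
    return -20 * math.log10(math.e) * (
        b1 * math.sqrt(f_hz) + b2 * f_hz + b3 * f_hz**2 - b4 * f_hz**3)

XAUI = dict(b1=6.5e-6, b2=2.0e-10, b3=3.30e-20)                    # IEEE 802.3ae XAUI
AP_AMIN = dict(b1=2.25e-5, b2=1.20e-10, b3=3.50e-20, b4=1.25e-30)  # IEEE P802.3ap A(min)

for f in (1e9, 5e9, 15e9):
    print(f"{f/1e9:4.0f} GHz: XAUI {sdd21_limit(f, **XAUI):8.2f} dB, "
          f"802.3ap {sdd21_limit(f, **AP_AMIN):8.2f} dB")
```

At 1 GHz this gives roughly -3.8 dB for the XAUI limit and -7.5 dB for the 802.3ap A(min); toward the top of the band the 802.3ap curve stays above the XAUI curve because of its -b4*f^3 term.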

Page 62

Anatomy of a 100Gbps Solution: Channel Model Limit Lines

[Graph: SDD21/SDD11/SDD22 in dB (0 to -75) versus frequency, 0 to 15000 MHz. Shows measured SDD21 for channel CH12 AGGR2 in N4000-13 against the XAUI limit and the IEEE P802.3ap limit lines.]

Page 63

Anatomy of a 100Gbps Solution: Comments on Limit Lines

The IEEE 802.3ae XAUI channel model limit line is five years old.

The IEEE P802.3ap channel model limit is based on a mathematical representation of improved FR-4 material properties and closely matches “real life” channels. This type of modeling will be essential for 100Gbps interfaces.

A real channel is shown with typical design violations common in the days of XAUI. Attention to specific design techniques in the channel launch conditions can eliminate violations of the defined channel limits.

Page 64

Anatomy of a 100Gbps Solution: Receiver Conditions – Case 1

[Diagram: Case 1 receiver routing from TP4 to TP5 (informative). Three 13 mil drilled vias; 24 mil traces and pads; 24 mil x 32 mil capacitor pads; 21 mil BGA pad with dogbone breakout and 34 mil anti-pad; trace back to the back plane.]

Page 65

Anatomy of a 100Gbps Solution: Constraints & Concerns – Case 1

Poor Signal Integrity – SDD11/22/21

Standard CAD Approach

Easiest / Lowest Cost to Implement

Approach will not have the required performance for SERDES implementations used in 100Gbps interfaces.

Page 66

Anatomy of a 100Gbps Solution: Receiver Conditions– Case 4

[Diagram: Case 4 receiver routing from TP4 to TP5 (informative). A single 13 mil drilled via; 24 mil traces; 24 mil x 32 mil capacitor pads; 21 mil BGA pad with dogbone breakout and 34 mil anti-pad; trace back to the back plane.]

Page 67

Anatomy of a 100Gbps Solution: Constraints & Concerns – Case 4

Ideal Signal Integrity
– Eliminates two vias
– Increases pad impedance to reduce SDD11/22

High speed BGA pins must reside on the outer pin rows

Crosstalk to traces routed under the open ground pad is an issue for both the BGA and the Capacitor footprint

Requires 50mil pitch BGA packaging to avoid ground plane isolation on the ground layer under the BGA pads

Potential to require additional routing layer

Page 68

Available Technology Today: Remember this Slide?

Circuit board material is just multiple sheets of glass with copper traces and copper planes added for electrical connections.

Reference: Isola

Page 69

Anatomy of a 100Gbps Solution: Channel Design Considerations

Circuit board material selection is based on the following:
– Temperature and humidity effects on Df (Dissipation Factor) and Dk (Dielectric Constant)
– Required mounting holes for mother-card mounting, shock, and vibration
– Required number of times a chip or connector can be replaced
– Required number of times a pin can be replaced on a back plane
– Aspect ratio (drilled hole size to board thickness)
– Power plane copper weight
– Coding / signaling scheme

Page 70

Anatomy of a 100Gbps Solution: Materials in Perspective

[Chart: dissipation factor (loss tangent) versus dielectric constant for circuit board materials; most materials are flat from 100 Hz to 2 GHz. Materials plotted include FR-4/E, IS402/N4000-2, Modified FR-4 (IS620), Mod Epoxy/E (N4000-13/FR408), PPO-Epoxy/E (GETEK), CE-Epoxy/E, Epoxy/NE (N4000-13SI), PPO-Epoxy/NE (GETEK-IIX, Megtron-5), CE-Epoxy/NE, CE/E, PTFE/E, and CE/PTFE. Graph provided by Zhi Wong.]

Page 71

“Improved FR-4” in Reference to IEEE P802.3ap

Improved FR-4 (mid-resolution signal integrity):
– 100MHz: Dk ≤ 3.60; Df ≤ .0092
– 1GHz: Dk ≤ 3.60; Df ≤ .0092
– 2GHz: Dk ≤ 3.50; Df ≤ .0115
– 5GHz: Dk ≤ 3.50; Df ≤ .0115
– 10GHz: Dk ≤ 3.40; Df ≤ .0125
– 20GHz: Dk ≤ 3.20; Df ≤ .0140

Temperature and humidity tolerance (0-55DegC, 10-90% non-condensing):
– Dk: +/- .04
– Df: +/- .001

Resin tolerance (standard +/- 2%):
– Dk: +/- .02
– Df: +/- .0005
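The tolerance bands stack onto the nominal limits. A small sketch (the `worst_case` helper is hypothetical) adding the temperature/humidity and resin tolerances to the nominal values above:

```python
# Nominal "improved FR-4" limits, keyed by frequency in GHz: (Dk, Df)
NOMINAL = {0.1: (3.60, 0.0092), 1: (3.60, 0.0092), 2: (3.50, 0.0115),
           5: (3.50, 0.0115), 10: (3.40, 0.0125), 20: (3.20, 0.0140)}
DK_TOL = 0.04 + 0.02       # temperature/humidity + resin tolerance
DF_TOL = 0.001 + 0.0005

def worst_case(freq_ghz):
    dk, df = NOMINAL[freq_ghz]
    return dk + DK_TOL, df + DF_TOL

dk5, df5 = worst_case(5)
print(round(dk5, 4), round(df5, 4))   # worst case at 5 GHz: 3.56 and 0.013
```

At 5 GHz the worst-case Df is .013 rather than the nominal .0115, which is what channel simulations should assume.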

Page 72

Anatomy of a 100Gbps Solution: Channel or Pipe Considerations

Channel constraints include the following:
– Return loss
  – Thru-hole reflections
  – Routing reflections
– Insertion loss based on material category
– Insertion loss based on length to first reflection point
– Define coding and baud rate based on material category
– Connector hole crosstalk
– Trace-to-trace crosstalk
– DC blocking capacitor at the SERDES to avoid shorting DC between cards
– Temperature and humidity losses/expectations based on material category

Page 73

Channel Model Starting Point: Materials in Perspective

[Chart: the same dissipation factor versus dielectric constant plot as on page 70, with the target material area for the channel model starting point highlighted.]

Page 74

Channel Model Starting Point: Real Channels

[Graph: SDD21 in dB (0 to -75) versus frequency, 0 to 15000 MHz, for real channels with 2 connectors in N4000-13 at 10_10_10, 10_15_10, and 10_20_10 trace lengths, plotted against the proposed P802.3ap SDD21 limit and the Jan06 starting-point SDD21 limit.]

Page 75

Channel Model Starting Point: Equation

b1 = 1.25e-5
b2 = 1.20e-10
b3 = 2.50e-20
b4 = 0.95e-30

SDD21 = -20*log10(e)*(b1*sqrt(f) + b2*f + b3*f^2 - b4*f^3)

f = 50MHz to 15000MHz
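Plugging these starting-point coefficients and the 802.3ap A(min) coefficients into the same SDD21 form (again assuming f in Hz; `limit_db` is a hypothetical helper) shows that the proposed starting point calls for a lower-loss channel than the 802.3ap line across the band:

```python
import math

def limit_db(f_hz, b1, b2, b3, b4):
    return -20 * math.log10(math.e) * (
        b1 * math.sqrt(f_hz) + b2 * f_hz + b3 * f_hz**2 - b4 * f_hz**3)

START_JAN06 = (1.25e-5, 1.20e-10, 2.50e-20, 0.95e-30)   # starting point, Jan06
AP_AMIN     = (2.25e-5, 1.20e-10, 3.50e-20, 1.25e-30)   # IEEE P802.3ap A(min)

for f in (1e9, 5e9, 10e9):
    print(f"{f/1e9:3.0f} GHz: starting point {limit_db(f, *START_JAN06):7.2f} dB, "
          f"802.3ap {limit_db(f, *AP_AMIN):7.2f} dB")
```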

Page 76

Anatomy of a 100Gbps Solution: Channel Design Considerations

Channel BER:
– Data transmitted across the back plane channel is usually done in a frame with header and payload.
– The frame size can be anywhere from a few hundred bytes to 16Kbytes, typically.
– A typical frame contains many PHY-layer packets.
– A BER of 10E-12 will result in a frame error rate of 10E-7 or less, depending on distribution.
– That is a lot of frame loss.
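The frame-error arithmetic can be sketched directly, assuming independent random bit errors (`frame_error_rate` is a hypothetical helper):

```python
import math

def frame_error_rate(ber, frame_bytes):
    # P(at least one bit error in a frame) = 1 - (1 - BER)^bits
    bits = frame_bytes * 8
    return -math.expm1(bits * math.log1p(-ber))

# A 16 Kbyte frame at BER 1e-12 gives a frame error rate near 1.3e-7.
print(frame_error_rate(1e-12, 16 * 1024))

# At 100 Gbps that BER still means thousands of errored bits per day.
bits_per_day = 100e9 * 86400
print(bits_per_day * 1e-12)   # ~8640 errored bits/day at BER 1e-12
print(bits_per_day * 1e-15)   # ~8.6 errored bits/day at BER 1e-15
```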

Page 77

Anatomy of a 100Gbps Solution: Channel Design Considerations

Channel BER:
– Customers want to see a frame loss of zero.
– Systems architects want to see a frame loss of zero.
– Zero error is difficult to test and verify … none of us will live that long.
– The BER goal should be 10E-15.
  – It can be tested and verified at the system design level.
  – Simulate to 10E-17.
  – Any frame loss beyond that will have minimal effect on current packet handling/processing algorithms.
  – Current SERDES do not support this. An effective 10E-15 is obtained by both power noise control and channel model loss.

This will be tough to get through, but without this tight requirement, 100Gbps interfaces will need to run 3% to 7% faster. Or worse, pay a latency penalty for using FEC or DFE.

Page 78

Anatomy of a 100Gbps Solution: Remember Interface Speeds …

Reasonable channel widths for 100Gbps:
– 16 lanes by 6.25Gbps *BEST
– 10 lanes by 10Gbps
– 8 lanes by 12.5Gbps
– 5 lanes by 20Gbps
– 4 lanes by 25Gbps *BEST
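Each width is just a factorization of the 100Gbps aggregate, ignoring coding overhead; a quick sanity check:

```python
lane_options = [(16, 6.25), (10, 10.0), (8, 12.5), (5, 20.0), (4, 25.0)]

for lanes, gbps_per_lane in lane_options:
    total = lanes * gbps_per_lane
    assert total == 100.0          # every option must sum to the aggregate
    print(f"{lanes:2d} lanes x {gbps_per_lane:5.2f} Gbps = {total:.0f} Gbps")
```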

Page 79

Anatomy of a 100Gbps Solution: Channel Signaling Thoughts

Channel signaling:
– NRZ
  – In general, breaks down after 12.5Gbps
  – 8B10B is not going to work at 25Gbps
  – 64B66B is not going to work at 25Gbps
  – Scrambling is not going to work at 25Gbps
– Duo-binary
  – Demonstrated to 33Gbps
– PAM4 or PAMx
  – Demonstrated to 33Gbps
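The reason multi-level formats survive where NRZ breaks down is symbol rate: PAM4 carries two bits per symbol, halving the baud rate and therefore the first-order Nyquist frequency. A simple sketch (idealized, ignoring coding overhead; `nyquist_ghz` is a hypothetical helper):

```python
def nyquist_ghz(bitrate_gbps, bits_per_symbol):
    # First-order Nyquist frequency = half the symbol (baud) rate.
    baud_gbd = bitrate_gbps / bits_per_symbol
    return baud_gbd / 2.0

for name, bits in (("NRZ", 1), ("PAM4", 2)):
    print(f"25 Gbps {name}: Nyquist at {nyquist_ghz(25, bits):.2f} GHz")
```

25Gbps NRZ needs 12.5 GHz, while PAM4 needs 6.25 GHz, consistent with the 2GHz-to-9GHz bandwidth listed earlier for 25Gbps multi-level signaling.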

Page 80

Anatomy of a 100Gbps Solution: Designing for EMI Compatibility

Treat each slot as a unique chamber:
– Shielding effectiveness determines the maximum number of 1GigE, 10GigE, or 100GigE ports before saturating emissions requirements.
– Requires top and bottom seals using honeycomb.

Seal the back plane / mid plane:
– Cross-hatch chassis ground
– Chassis ground edge guard, not edge plate
– Digital ground sandwich for all signal layers

Provide a carrier mating surface.

EMI follows wave equations. The signaling spectrum must be considered.

Page 81

Anatomy of a 100Gbps Solution: Power Design

Power routing architecture from inputs to all cards:
– Bus bar
– Power board
– Cabling harness
– Distribution through the back plane / mid plane using copper foil

Design the input filter for maximum insertion loss and return loss:
– Protects your own equipment
– Protects all equipment on the power circuit

Design current flow paths for 15DegC max rise, 5DegC typical.

Design all distribution thru-holes to support 200% loading at 60DegC:
– Provides for the case when the incorrect drill size is selected in the drilling machine and escapes computer comparison. An unlikely case, but required in carrier applications.

Power follows Ohm’s Law. It cannot be increased without major changes or serious thermal concerns.

Page 82

Summary

The industry has successfully scaled speed since 10Mbps in 1983.

The efforts in 1GigE and 10GigE have taught us many aspects of interfaces and interface technology.

100Gbps and 160Gbps success will depend on useable chip and optics interfaces.

Significant effort is underway in both the IEEE and the OIF to define and invent interfaces to support next-generation speeds.

Systems designers will need to address many new issues to support 100Gbps port densities of 56 or more per box.

Page 83

Thank You

Page 84

Backup Slides

The following slides provide additional detail to support information provided within the base presentation.

Page 85

Acronym Cheat Sheet

CDR – Clock and Data Recovery
CEI – Common Electrical Interface
CGND / DGND – Chassis Ground / Digital Ground
EDC – Electronic Dispersion Compensation
MAC – Media Access Control
MDNEXT / MDFEXT – Multi Disturber Near / Far End Cross Talk
MSA – Multi Source Agreement
NEXT / FEXT – Near / Far End Cross Talk
OIF – Optical Internetworking Forum
PLL – Phase Locked Loop
SERDES – Serialize / De-serialize
SFI – System Framer Interface
SMF / MMF – Single Mode Fiber / Multi Mode Fiber
XAUI – 10Gig Attachment Unit Interface

Page 86

Anatomy of a 100Gbps Solution: Basic Line Card Architecture

[Block diagram: PMD → PHY/Framer → Network Processor → Fabric Interface, with a non-“wire speed” µP running protocol stacks and applications.]

Page 87

Anatomy of a 100Gbps Solution: Basic Line Card Architecture 1

Architecture:
– Long trace lengths.
– Poor power noise control means worse than...
  – Analog target: 60mVpp ripple
  – Digital target: 150mVpp ripple
– Poor SERDES-to-connector signal flow will maximize ground noise.
– This layout is not a good choice for 100Gbps.

[Diagram: line card with optical or copper media, forwarding engine, network processor, four SERDES, back plane connector, and an area reserved for power.]

Page 88

Anatomy of a 100Gbps Solution: Basic Line Card Architecture 2

Architecture:
– Clean trace routing.
– Good power noise control means better than...
  – Analog target: 60mVpp ripple
  – Digital target: 150mVpp ripple
– Excellent SERDES-to-connector signal flow to minimize ground noise.
– Best choice for 100Gbps systems.

[Diagram: line card with optical or copper media, forwarding engine, network processor, two SERDES, back plane connector, and an area reserved for power.]

Page 89

Anatomy of a 100Gbps Solution: Basic Line Card Architecture 3

Architecture:
– Clean trace routing.
– Good power noise control means better than...
  – Analog target: 60mVpp ripple
  – Digital target: 150mVpp ripple
– Difficult SERDES-to-connector signal flow because of the mid plane.
– This layout is not a good choice for 100Gbps.

[Diagram: mid-plane line card with optical or copper media, forwarding engine, network processor, SERDES, and an area reserved for power.]

Page 90

Anatomy of a 100Gbps Solution: Basic Switch Fabric Architecture

[Block diagram: line card interfaces on either side of a digital or analog crossbar, managed by a non-“wire speed” µP.]

Page 91

Anatomy of a 100Gbps Solution: Basic Switch Fabric Architecture 1

Architecture:
– Long trace lengths.
– Poor power noise control means worse than...
  – Analog target: 30mVpp ripple
  – Digital target: 100mVpp ripple
– Poor SERDES-to-connector signal flow will maximize ground noise.
– This layout is not a good choice for 100Gbps.

[Diagram: digital cross bar with SERDES and an area reserved for power.]

Page 92

Anatomy of a 100Gbps Solution: Basic Switch Fabric Architecture 2

Architecture:
– Clean trace routing.
– Good power noise control means better than...
  – Analog target: 30mVpp ripple
  – Digital target: 100mVpp ripple
– Excellent SERDES-to-connector signal flow to minimize ground noise.

[Diagram: digital cross bar with SERDES and an area reserved for power.]

Page 93

Anatomy of a 100Gbps Solution: Basic Switch Fabric Architecture 3

Architecture:
– Clean trace routing.
– Good power noise control means better than...
  – Analog target: 30mVpp ripple
  – Digital target: 100mVpp ripple
– Excellent SERDES-to-connector signal flow to minimize ground noise.
– True analog fabric is not used anymore.

[Diagram: analog cross bar with an area reserved for power.]

Page 94

Anatomy of a 100Gbps Solution: Back Plane or Mid Plane

Redundancy
– N+1 Fabric
– A / B Fabric

Connections
– Back Plane
– Mid Plane

Page 95

Anatomy of a 100Gbps Solution: Trace Length Combinations - Max

24in to 34in height (2 or 3 per rack)

N+1 Fabric – Position: Top or Bottom of Line Cards (back plane trace: 22)
  Case          LC-1  LC-2  LC-3
  Trace Length    16     6     4
  SF-1 (18)       56    46    44
  SF-2 (12)       50    40    38

A/B Fabric – Position: Top or Bottom of Line Cards (back plane trace: 22)
  Case          LC-1  LC-2  LC-3
  Trace Length    16     6     4
  SF-3 (14)       52    42    40

N+1 Fabric – Position: Middle of Line Cards (back plane trace: 18)
  Case          LC-1  LC-2  LC-3
  Trace Length    16     6     4
  SF-1 (18)       52    42    40
  SF-2 (12)       46    36    34

A/B Fabric – Position: Middle of Line Cards (back plane trace: 18)
  Case          LC-1  LC-2  LC-3
  Trace Length    16     6     4
  SF-3 (14)       48    38    36

Note: All dimensions in inches. Switch fabric trace length is shown in parentheses; body entries are total trace lengths.
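Each table entry is simply line card trace + switch fabric trace + back plane trace. A sketch reproducing the N+1, top-or-bottom rows of the maximum-length table (helper names are hypothetical):

```python
LINE_CARDS = {"LC-1": 16, "LC-2": 6, "LC-3": 4}   # line card trace lengths, inches
BACK_PLANE = 22                                    # inches, top/bottom position

def total_trace(lc_len, sf_len):
    # Total SERDES-to-SERDES trace length across the back plane.
    return lc_len + sf_len + BACK_PLANE

for sf_name, sf_len in (("SF-1", 18), ("SF-2", 12)):
    totals = {lc: total_trace(lc_len, sf_len) for lc, lc_len in LINE_CARDS.items()}
    print(sf_name, totals)
```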

Page 96

Anatomy of a 100Gbps Solution: Trace Length Combinations - Min

24in to 34in height (2 or 3 per rack)

N+1 Fabric – Position: Top or Bottom of Line Cards (back plane trace: 5)
  Case          LC-1  LC-2  LC-3
  Trace Length    16     6     4
  SF-1 (18)       39    29    27
  SF-2 (12)       33    23    21

A/B Fabric – Position: Top or Bottom of Line Cards (back plane trace: 5)
  Case          LC-1  LC-2  LC-3
  Trace Length    16     6     4
  SF-3 (14)       35    25    23

N+1 Fabric – Position: Middle of Line Cards (back plane trace: 2)
  Case          LC-1  LC-2  LC-3
  Trace Length    16     6     4
  SF-1 (18)       36    26    24
  SF-2 (12)       30    20    18

A/B Fabric – Position: Middle of Line Cards (back plane trace: 2)
  Case          LC-1  LC-2  LC-3
  Trace Length    16     6     4
  SF-3 (14)       32    22    20

Note: All dimensions in inches. Switch fabric trace length is shown in parentheses; body entries are total trace lengths.

Page 97

Anatomy of a 100Gbps Solution: Trace Length Combinations - Max

10in to 14in height (5 to 8 per rack)

N+1 Fabric – Position: Top or Bottom of Line Cards (back plane trace: 17)
  Case          LC-1  LC-2  LC-3
  Trace Length     9     5   n/a
  SF-1 (15)       41    37   n/a
  SF-2 (9)        35    31   n/a

A/B Fabric – Position: Top or Bottom of Line Cards: n/a

N+1 Fabric – Position: Middle of Line Cards (back plane trace: 14)
  Case          LC-1  LC-2  LC-3
  Trace Length     9     5   n/a
  SF-1 (15)       38    34   n/a
  SF-2 (9)        32    28   n/a

A/B Fabric – Position: Middle of Line Cards: n/a

Note: All dimensions in inches. Switch fabric trace length is shown in parentheses; body entries are total trace lengths.

Page 98

Anatomy of a 100Gbps Solution: Trace Length Combinations - Min

10in to 14in height (5 to 8 per rack)

N+1 Fabric – Position: Top or Bottom of Line Cards (back plane trace: 2)
  Case          LC-1  LC-2  LC-3
  Trace Length     9     5   n/a
  SF-1 (15)       26    22   n/a
  SF-2 (9)        20    16   n/a

A/B Fabric – Position: Top or Bottom of Line Cards: n/a

N+1 Fabric – Position: Middle of Line Cards (back plane trace: 2)
  Case          LC-1  LC-2  LC-3
  Trace Length     9     5   n/a
  SF-1 (15)       26    22   n/a
  SF-2 (9)        20    16   n/a

A/B Fabric – Position: Middle of Line Cards: n/a

Note: All dimensions in inches. Switch fabric trace length is shown in parentheses; body entries are total trace lengths.

Page 99

Anatomy of a 100Gbps Solution: Receiver Conditions– Case 2

[Diagram: Case 2 receiver routing from TP4 to TP5 (informative). Three 13 mil drilled vias; 24 mil traces and pads; 24 mil x 32 mil capacitor pads; 21 mil BGA pad with dogbone breakout and 34 mil anti-pad; trace back to the back plane.]

Page 100

Anatomy of a 100Gbps Solution: Constraints & Concerns – Case 2

Crosstalk to Traces Routed Under the Open Ground Pad is an Issue

Allows Good Pin Escape from the BGA

Poor Signal Integrity - Has High SDD11/22/21 at the BGA

Potential to Require Additional Routing Layer

Approach will not have the required performance for SERDES implementations used in 100Gbps interfaces.

Page 101

Anatomy of a 100Gbps Solution: Receiver Conditions– Case 3

[Diagram: Case 3 receiver routing from TP4 to TP5 (informative). Three 13 mil drilled vias; 24 mil traces and pads; 24 mil x 32 mil capacitor pads; 21 mil BGA pad with dogbone breakout and 34 mil anti-pad; trace back to the back plane.]

Page 102

Anatomy of a 100Gbps Solution: Constraints & Concerns – Case 3

Allows for Inner High Speed Pad Usage within the BGA

Medium-Poor Signal Integrity – SDD11/22/21. Has Extra Vias to Contend with in the Break-Out
– Increases pad impedance to reduce SDD11/22

Crosstalk to Traces Routed Under the Open Ground Pad is an Issue for Both the BGA and the Capacitor Footprint

Allows Good Pin Escape from the BGA

Potential to Require Additional Routing Layer

Requires 50mil Pitch BGA Packaging to Avoid Ground Plane Isolation on the Ground Layer Under the BGA Pads

Page 103

Anatomy of a 100Gbps Solution: Stack-up Detail

Stack-up: 24 layers; all dielectrics N4000-13; all copper 1 oz. unless noted; thicknesses in mils.

Mask: 0.7
L01: Foil & Plating, 2.0 plating + 1.3 pads-only copper; 98ohm +/-4 differential (10on10), 56ohm +/-5 single-ended (10on10)
  prepreg 6.0 (2x1080, rc 65%)
L02: GND, 1.3
  core 5.5 (2x1080, rc 58.4%)
L03: HS1, 1.3; 100ohm +/-3 differential (6on14), 51ohm +/-3 single-ended (6on14)
  prepreg 9.1 (1x1080/2x106/1x1080, rc 65%/75%/65%)
L04: GND, 1.3
  core 5.5 (2x1080, rc 58.4%)
L05: HS2, 1.3; 100ohm +/-3 (6on14), 51ohm +/-3 (6on14)
  prepreg 9.1 (1x1080/2x106/1x1080, rc 65%/75%/65%)
L06: GND, 1.3
  core 5.5
L07: HS3, 1.3; 100ohm +/-3, 51ohm +/-3
  prepreg 9.1
L08: GND, 1.3
  core 5.5
L09: HS4, 1.3; 100ohm +/-3, 51ohm +/-3
  prepreg 9.1
L10: GND, 1.3
  core 5.5
L11: Plane, 1.3
  prepreg 10.1 (1x1080/2x106/1x1080, rc 65%/75%/65%)
L12: Plane, 1.3
  core 5.5
L13: Plane, 1.3
  prepreg 10.1
L14: Plane, 1.3
  core 5.5
L15: GND, 1.3
  prepreg 9.1
L16: HS5, 1.3; 100ohm +/-3, 51ohm +/-3
  core 5.5
L17: GND, 1.3
  prepreg 9.1
L18: HS6, 1.3; 100ohm +/-3, 51ohm +/-3
  core 5.5
L19: GND, 1.3
  prepreg 9.1
L20: HS7, 1.3; 100ohm +/-3, 51ohm +/-3
  core 5.5
L21: GND, 1.3
  prepreg 9.1
L22: HS8, 1.3; 100ohm +/-3, 51ohm +/-3
  core 5.5
L23: GND, 1.3
  prepreg 6.0 (2x1080, rc 65%)
  1.3 pads-only copper; 98ohm +/-4 (10on10), 56ohm +/-5 (10on10)
L24: Foil & Plating, 2.0
Mask: 0.7

Totals: 202.1 est. finish thickness over plating & mask; 200.7 est. finish after copper plating; 194.1 est. finish thickness, dielectric.

Page 104

Requirements to Consider when Increasing Channel Speed

Signaling Scheme vs Available Bandwidth

NEXT/FEXT Margins

Average Power Noise as Seen by the Receive Slicing Circuit and the PLL

Insertion Loss (SDD21) Limits