TRANSCRIPT
Hamburg, 10.12.2012 | DESY Computing Seminar | Kars Ohrenberg
The LHC Open Network Environment
Monday, 10 December 2012
Kars Ohrenberg | DESY Computing Seminar | 10.12.2012 | Page
LHC Computing Infrastructure
> WLCG in brief:
■ 1 Tier-0, 11 Tier-1s, ~140 Tier-2s, O(300) Tier-3s worldwide
The LHC Optical Private Network
> The LHCOPN (from http://lhcopn.web.cern.ch):
■ The LHCOPN is the private IP network that connects the Tier0 and the Tier1 sites of the LCG.
■ The LHCOPN consists of any T0-T1 or T1-T1 link which is dedicated to the transport of WLCG traffic and whose utilization is restricted to the Tier0 and the Tier1s.
■ Any other T0-T1 or T1-T1 link not dedicated to WLCG traffic may be part of the LHCOPN, provided the exception is communicated to and agreed upon by the LHCOPN community
> Very closed and restricted access policy
> No gateways
LHCOPN Network Map
Computing Models Evolution
> The original MONARC model was strictly hierarchical
> Changes have been introduced gradually since 2010
> Main evolutions:
■ Meshed data flows: any site can use any other site as a source of data
■ Dynamic data caching: analysis sites pull datasets from other sites "on demand", including from Tier-2s in other regions
■ Remote data access
> Variations by experiment
> The LHCOPN only connects T0 and T1
LHC Open Network Environment
> With the successful operation of the LHC accelerator and the start of data analysis has come a re-evaluation of the experiments' computing and data models
> The goal of LHCONE (LHC Open Network Environment) is to ensure better access to the most important datasets by the worldwide HEP community
> Traffic patterns have altered to the extent that substantial data transfers between major sites are regularly observed on the General Purpose Networks (GPN)
> The main principle is to separate LHC traffic from GPN traffic, thus avoiding degraded performance
> The objective of LHCONE is to provide entry points into a network that is private to the LHC T1/T2/T3 sites
> LHCONE is not intended to replace the LHCOPN but rather to complement it
LHCONE Architecture
LHCONE VRF Map (from Bill Johnston, ESnet)
LHCONE VRF Map (from Bill Johnston, ESnet)
LHCONE: A Global Infrastructure
LHCONE Activities
> With the above in mind, LHCONE has defined the following activities:
> VRF-based multipoint service: a "quick fix" to provide multipoint LHCONE connectivity, with logical separation from the R&E GPN
> Layer 2 multipath: evaluate the use of emerging standards such as TRILL (IETF) or Shortest Path Bridging (SPB, IEEE 802.1aq) in WAN environments
> OpenFlow: wide agreement that SDN is the most probable candidate technology for LHCONE in the long term (but needs more investigation)
> Point-to-point dynamic circuit pilots
> Diagnostic infrastructure: each site should be able to perform end-to-end performance tests with all other LHCONE sites
LHCONE VRF
> Implementation of multiple logical router instances inside a physical device (virtualized Layer 3)
> Logical control plane separation between multiple clients
> VRF in LHCONE: regional networks implement VRF domains to logically separate LHCONE from other flows
> BGP peerings are used between VRF domains and to end-sites
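The logical separation a VRF provides can be illustrated with a toy model in plain Python: each VRF instance keeps its own routing table, so a lookup only ever sees the routes of its own instance. All VRF names, prefixes, and next hops below are illustrative, not the actual LHCONE configuration.

```python
import ipaddress

class Router:
    """Toy model of a router with multiple VRF instances."""

    def __init__(self):
        self.vrfs = {}  # VRF name -> list of (prefix, next_hop)

    def add_route(self, vrf, prefix, next_hop):
        self.vrfs.setdefault(vrf, []).append(
            (ipaddress.ip_network(prefix), next_hop))

    def lookup(self, vrf, address):
        # Longest-prefix match within a single VRF's table only;
        # routes installed in other VRFs are invisible to this lookup.
        addr = ipaddress.ip_address(address)
        matches = [(net, nh) for net, nh in self.vrfs.get(vrf, [])
                   if addr in net]
        if not matches:
            return None
        return max(matches, key=lambda m: m[0].prefixlen)[1]

r = Router()
r.add_route("LHCONE", "131.169.160.0/21", "lhcone-peer")  # illustrative
r.add_route("GPN", "0.0.0.0/0", "gpn-upstream")           # illustrative

print(r.lookup("LHCONE", "131.169.161.1"))  # lhcone-peer
print(r.lookup("GPN", "131.169.161.1"))     # gpn-upstream
```

The same destination address resolves differently in the two instances, which is exactly the property LHCONE exploits to keep LHC flows apart from general-purpose traffic on shared hardware.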
Multipath in LHCONE
> Multipath problem:
■ How to use the many (transatlantic) Layer 2 paths among the many partners, e.g. USLHCNet, GEANT, SURFnet, NORDUnet, ...
> Layer 3 (VRF) can use some BGP techniques
■ MED, AS-path prepending, local preference, restricted announcements
■ Works in a reasonably small configuration; not clear it will scale up to O(100) end-sites
> Some approaches to Layer 2 multipath:
■ IETF: TRILL (TRansparent Interconnection of Lots of Links)
■ IEEE: 802.1aq (Shortest Path Bridging)
> None of these L2 protocols is designed for the WAN!
■ R&D needed
LHCONE Routing Policies
> Only the networks which are announced to LHCONE are allowed to reach LHCONE
■ Networks announced by DESY:
■ 131.169.98.0/24, 131.169.160.0/21, 131.169.191.0/24 (Tier-2 Hamburg)
■ 141.34.192.0/21, 141.34.200.0/24 (Tier-2 Zeuthen)
■ 141.34.224.0/22, 141.34.228.0/24, 141.34.229.0/24, 141.34.230.0/24 (NAF Hamburg)
■ 141.34.216.0/23, 141.34.218.0/24, 141.34.219.0/24, 141.34.220.0/24 (NAF Zeuthen)
■ e.g. networks announced by CERN:
■ 128.142.0.0/16 but not 137.138.0.0
> Only these networks will be reachable via LHCONE
> Other traffic uses the public, general-purpose networks
> Asymmetric routing should be avoided, as it will cause problems for traffic passing (public) firewalls
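Whether a given address is covered by the LHCONE announcements can be checked mechanically. A small sketch using Python's standard `ipaddress` module and the DESY prefixes listed above (the function name is ours, not an existing tool):

```python
import ipaddress

# DESY prefixes announced to LHCONE, as listed in the slide above.
LHCONE_PREFIXES = [ipaddress.ip_network(p) for p in (
    "131.169.98.0/24", "131.169.160.0/21", "131.169.191.0/24",  # Tier-2 Hamburg
    "141.34.192.0/21", "141.34.200.0/24",                       # Tier-2 Zeuthen
    "141.34.224.0/22", "141.34.228.0/24",
    "141.34.229.0/24", "141.34.230.0/24",                       # NAF Hamburg
    "141.34.216.0/23", "141.34.218.0/24",
    "141.34.219.0/24", "141.34.220.0/24",                       # NAF Zeuthen
)]

def reachable_via_lhcone(address):
    """True if the address falls inside a prefix announced to LHCONE."""
    addr = ipaddress.ip_address(address)
    return any(addr in net for net in LHCONE_PREFIXES)

print(reachable_via_lhcone("131.169.98.10"))  # True: announced, uses LHCONE
print(reachable_via_lhcone("131.169.10.1"))   # False: routed via the GPN
```

Hosts outside the announced ranges fall back to the general-purpose network, which is the asymmetric-routing trap the last bullet warns about: if only one direction of a flow uses LHCONE, stateful firewalls on the GPN side see half a connection.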
LHCONE - the current status
> Currently ~100 network prefixes
■ German sites currently participating in LHCONE:
■ DESY, KIT, GSI, RWTH Aachen, Uni Wuppertal
■ Europe:
■ CERN, SARA, GRIF (LAL + LPNHE), INFN, FZU, PIC, ...
■ US:
■ AGLT2 (MSU + UM), MWT2 (UC), BNL, ...
■ Canada:
■ TRIUMF, Toronto, ...
■ Asia:
■ ASGC, ICEPP, ...
> Detailed monitoring via perfSONAR
LHCONE Monitoring
R&D Network Trends
> Increased multiplicity of 10 Gbps links in the major R&E networks: GEANT, Internet2, ESnet, various NRENs, ...
> 100 Gbps backbones in place, and the transition is now underway
■ GEANT, DFN, ...
■ CERN - Budapest: 2 x 100G for the LHC remote Tier-0 center
> OpenFlow (software-defined switching and routing) has been taken up by much of the network industry and the R&E networks
Software-Defined Networking (SDN)
> A form of network virtualization in which the control plane is separated from the data plane and implemented in a software application
> This architecture gives network administrators programmable, central control of network traffic without requiring physical access to the network's hardware devices
> SDN requires some method for the control plane to communicate with the data plane. One such mechanism is OpenFlow, a standard interface for controlling network switches
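The control/data-plane split can be sketched conceptually: the data plane is a dumb match-action table, and the controller (software) installs the rules. This is a plain-Python illustration of the idea, not the OpenFlow wire protocol; all field names and rules are made up for the example.

```python
# The switch's flow table: ordered list of (match, action) rules.
flow_table = []

def controller_install(match, action):
    """Control plane: push a flow rule into the switch's table."""
    flow_table.append((match, action))

def switch_forward(packet):
    """Data plane: match the packet against installed rules, in order."""
    for match, action in flow_table:
        # A rule matches if every field it specifies equals the packet's.
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "send-to-controller"  # table miss: ask the control plane

# The controller steers LHC traffic onto a dedicated port and
# everything else onto the general-purpose uplink (illustrative names).
controller_install({"dst_net": "lhcone"}, "output:port-lhcone")
controller_install({}, "output:port-gpn")  # empty match = wildcard default

print(switch_forward({"dst_net": "lhcone"}))  # output:port-lhcone
print(switch_forward({"dst_net": "other"}))   # output:port-gpn
```

The key point for LHCONE is that the traffic-separation policy lives in one place (the controller) rather than in per-device configuration.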
Bandwidth Evolution @ DFN
> DFN is upgrading the optical platform of the X-WiN
■ Contract awarded to ECI Telecom (http://www.ecitele.com)
■ Migration work is currently underway
> High-bandwidth capabilities
■ 88 wavelengths per fiber
■ Up to 100 Gbps per wavelength
■ thus 8.8 Tbps per fiber!
■ 1 Tbps switching fabric (aggregation of 10 Gbps lines onto a single 100 Gbps line)
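The per-fiber figure follows directly from the two numbers above; a one-line check:

```python
# Back-of-the-envelope check of the X-WiN per-fiber capacity quoted above.
wavelengths_per_fiber = 88
gbps_per_wavelength = 100
total_gbps = wavelengths_per_fiber * gbps_per_wavelength
print(total_gbps / 1000, "Tbps per fiber")  # 8.8 Tbps per fiber
```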
Bandwidth Evolution @ DFN
> Significantly cheaper 1 Gbps and 10 Gbps components -> reduced cost for VPN connections; new DFN conditions announced last week
> New DFN conditions starting 1.7.2013
■ DESY's contract of 2 x 2 Gbps will go to 2 x 5 Gbps at no additional cost
■ New cost model for point-to-point VPNs:
■ 1) Initial installation payment
■ 10 Gbps ~ €11,400, 40 Gbps ~ €38,000, 100 Gbps ~ €94,000
■ 2) Annual fee now depends on the distance
■ Hamburg <> Berlin at ~20% of the current costs (for 10 Gbps)
■ Hamburg <> Berlin at ~80% of the current costs (for 40 Gbps)
■ Hamburg <> Karlsruhe at ~45% of the current costs (for 10 Gbps)
■ Hamburg <> Karlsruhe at ~150% of the current costs (for 40 Gbps)
Growing WiN Capacities
Capacity evolution of the WiN - transmission capacity of the science network's core links:
■ B-WiN: 622 Mbit/s
■ G-WiN: 10 Gbit/s
■ X-WiN 2006: 400 Gbit/s
■ X-WiN 2012: 8,800 Gbit/s
WAN + LHCONE Infrastructure at DESY
Summary
> The LHC computing and data models continue to evolve towards more dynamic, less structured, on-demand data movement, thus requiring different network structures
> LHCOPN and LHCONE may merge in the future
> With the evolution of the new optical platforms bandwidth will get more affordable