Les Robertson (LCG Project Leader), WLCG – Management Overview, LHCC Comprehensive Review, 25-26 September 2006


TRANSCRIPT

Page 1

Les Robertson, LCG Project Leader

WLCG – Management Overview

LHCC Comprehensive Review, 25-26 September 2006

Page 2

LCG Organisation – Phase 2 (organisation chart):

- LHC Committee (LHCC) – scientific review
- Computing Resources Review Board (C-RRB) – funding agencies
- Collaboration Board (CB) – experiments and regional centres
- Overview Board (OB)
- Management Board (MB) – management of the project
- Grid Deployment Board – coordination of grid operation
- Architects Forum – coordination of common applications
- Activity areas: Physics Applications Software; Distributed Analysis & Grid Support; Grid Deployment; Computing Fabric

Page 3

LCG Planning and Resource Data

Planning page

Computing RRB – formal list of regional centres; latest version of the MoU; resource planning and commitments

Management Board – meeting agendas; Wiki

Page 4

Progress since last Review

Page 5

LCG Progress since last review

Applications Area:
- Merge of the SEAL and ROOT activities into a single project
- FLUKA released with a new licence schema
- Migration to the new Reflex library by POOL and Python scripting completed, and the experiments have started to migrate to this new version (ROOT 5)
- The first public release of the new re-engineered version of the relational database API package (CORAL) made available
- The HepMC package installed in the LCG external area and maintained by FNAL effort
- New procedures for testing and building the software put in place, to reduce the time it takes the experiments to integrate the changes and bug fixes in libraries provided by the Applications Area

Page 6

LCG Progress since last review

- Baseline services from the TDR are in operation; agreement (after much discussion) on VO Boxes
- gLite 3: basis for startup on the EGEE grid; introduced (just) on time for SC4; new Workload Management System now entering production
- Metrics: accounting introduced for Tier-1s and CERN (CPU and storage); site availability measurement system introduced, reporting for Tier-1s & CERN from May; job failure analysis
- Grid operations: all major LCG sites active; daily monitoring and operations now mature – in the EGEE grid, coordination is taken in turn by 5 sites; evolution of the EGEE regional operations support structure

Page 7

Progress since last review – CERN Fabric

- Tier-0 testing has progressed well: artificial system tests, and ATLAS Tier-0 testing at full throughput; comfortable that target data rates and throughput can be met, including CASTOR 2; but DAQ systems are not yet integrated in these tests
- CERN Analysis Facility (CAF): testing of experiment approaches to this has started only in the past few months; includes PROOF evaluation by ALICE; much has still to be understood; essential to maintain Tier-0/CAF hardware flexibility during the early years
- CASTOR 2: performance is largely understood; stability and the ability to maintain a 24 x 365 service is now the main issue

Page 8

WLCG depends on two major science grid infrastructures …

- EGEE – Enabling Grids for E-Science
- OSG – US Open Science Grid

Page 9

LCG EGEE-II

- Phase 2 approved just after last year's comprehensive review
- Significant operations resources at CERN and Regional Centres
- HEP applications support (NA4) extended to cover the full range of activities that interface to the grid – production as well as analysis
- The experiments represented directly in the Technical Coordination Group, where decisions on the evolution of middleware developments are prepared
- Former "LCG" middleware package merged into "gLite 3"
- Funding for Phase 2 – April 2006 to March 2008
- Discussion started on a successor project – or a third phase of EGEE; prospects will be clearer in 6 months

Page 10

Open Science Grid

- Operates an expanding Production Distributed Facility which provides core capabilities in operations, security, software releases, the Virtual Data Toolkit, trouble-shooting, …, and supports the engagement of new communities to use the OSG
- Education, Training and Outreach, with a core focus on expanding the number, location and scope of the Grid Schools and web-based training
- User Support, Service and System Extensions for increased scale, performance and capability of the common services for the stakeholder VOs
- OSG proposal submitted to DOE and NSF in February/March; revised proposal in June
- Planning going ahead in expectation of a positive decision in the next few months – five-year funding cycle

Page 11

LCG Production Grids for LHC

EGEE Grid – ~50K jobs/day, ~14K simultaneous jobs during prolonged periods

Chart: Jobs/day on the EGEE Grid (K jobs/day), June 2005 – August 2006, broken down by VO (alice, atlas, cms, lhcb, geant4, dteam, non-LHC), rising to ~50K jobs/day.

Chart: running jobs for the whole EGEE Grid over the last month, by experiment (lhcb, cms, atlas, alice), reaching ~14K simultaneous jobs.
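A rough cross-check of the two figures above (my own arithmetic, not from the slides): treating ~50K jobs/day and ~14K simultaneous jobs as sustained averages, Little's law gives the implied mean job length.

```python
# Rough cross-check (illustrative, not from the slides): with ~50,000 jobs
# completed per day and ~14,000 jobs running at any one time, Little's law
# gives the implied average job duration on the EGEE grid.
jobs_per_day = 50_000       # reported throughput
concurrent_jobs = 14_000    # reported concurrency during prolonged periods

avg_job_hours = concurrent_jobs * 24 / jobs_per_day
print(f"Implied average job length: {avg_job_hours:.1f} hours")  # ~6.7 hours
```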

Page 12

LCG OSG Production for LHC

OSG: ~15K jobs/day; the 3 big users are ATLAS, CDF and CMS. ~3K simultaneous jobs – at the moment use is quite spiky.

Charts: OSG-ATLAS running jobs over the past 3 months and OSG-CMS data distribution over the past 3 months; jobs/day on the OSG Grid.

Page 13

Data Distribution – pre-SC4 April tests, CERN to Tier-1s:
- SC4 target of 1.6 GB/s reached – but only for one day
- Sustained data rate 80% of the target
- But experiment-driven transfers (ATLAS and CMS) sustained 50% of the SC4 target under much more realistic conditions
- CMS transferred a steady 1 PByte/month between Tier-1s & Tier-2s during a 90-day period
- ATLAS distributed 1.25 PBytes from CERN during a 6-week period

Chart annotations: 1.6 GBytes/sec, 0.8 GBytes/sec, 1.3 GBytes/sec.
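A back-of-the-envelope conversion of the volumes quoted above into average rates (my own arithmetic, assuming 1 PB = 10^15 bytes, a 30-day month and a 42-day 6-week period), just to show the scale relative to the 1.6 GB/s SC4 target.

```python
# Convert the quoted transfer volumes into rough average rates (illustrative).
PB = 1e15    # assume 1 PByte = 10**15 bytes
GB = 1e9
DAY = 86_400  # seconds

cms_gb_s = (1.0 * PB) / (30 * DAY) / GB      # 1 PB/month, Tier-1s <-> Tier-2s
atlas_gb_s = (1.25 * PB) / (42 * DAY) / GB   # 1.25 PB from CERN in 6 weeks
sc4_target_gb_s = 1.6                        # SC4 CERN -> Tier-1s target

print(f"CMS   average: ~{cms_gb_s:.2f} GB/s")    # ~0.39 GB/s
print(f"ATLAS average: ~{atlas_gb_s:.2f} GB/s")  # ~0.34 GB/s
print(f"SC4 target:     {sc4_target_gb_s} GB/s")
```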

Page 14

Storage Resource Management

- September 2005 – agreement on requirements for SRM v2.1
- End of year – first implementations appear; first signs that the requirements were not so clear; issues emerge on interoperability and interpretation of the standard
- February – Mumbai workshop: agreement on storage classes and a way of expressing them in SRM v2 (see the sketch below); group of experts set up to agree on the details, testing, deployment
- May – workshop at FNAL (experiments, developers, sites, SRM community) – strong guideline to converge; agreement on SRM v2.2 – functionality and storage classes geared to WLCG usage, but still compatible with other implementations
- Long process, but a major advance in understanding the real storage management needs of the experiments
- Scheduled to be in production in 1Q07
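As an illustration of what "storage classes expressed in SRM v2" means, the sketch below encodes the mapping that WLCG converged on: a storage class is a combination of SRM v2.2 retention policy and access latency. The class names and the pairing follow the commonly used TxDy convention and are given here as an assumption for orientation, not a quote from the slide.

```python
# Sketch (assumption, for illustration): WLCG storage classes expressed as
# SRM v2.2 (retention policy, access latency) pairs.
from enum import Enum

class RetentionPolicy(Enum):
    REPLICA = "REPLICA"        # no long-term custodial guarantee (disk copy)
    CUSTODIAL = "CUSTODIAL"    # site takes custodial responsibility (tape)

class AccessLatency(Enum):
    ONLINE = "ONLINE"          # accessible without staging (disk)
    NEARLINE = "NEARLINE"      # may need staging from tape

# WLCG storage classes in the TxDy convention (Tape x copies, Disk y copies).
STORAGE_CLASSES = {
    "Tape1Disk0": (RetentionPolicy.CUSTODIAL, AccessLatency.NEARLINE),
    "Tape1Disk1": (RetentionPolicy.CUSTODIAL, AccessLatency.ONLINE),
    "Tape0Disk1": (RetentionPolicy.REPLICA, AccessLatency.ONLINE),
}

for name, (retention, latency) in STORAGE_CLASSES.items():
    print(f"{name}: retention={retention.value}, latency={latency.value}")
```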

Page 15

SC4 Formal Reliability Targets (Level 1 Milestone)

1. 8 Tier-1s and 20 Tier-2s must have demonstrated availability better than 90% of the levels specified in the MoU (i.e. 88%)

2. Success rate of standard application test jobs greater than 90% (excluding failures due to the applications environment and non-availability of sites)

Page 16

Chart: Availability of WLCG Tier-1 Sites + CERN, August 2006. Daily availability (0-100%) is plotted for each site – TRIUMF-LCG2, Taiwan-LCG2, CERN-PROD, FZK-LCG2, IN2P3-CC, INFN-T1, RAL-LCG2, SARA-MATRIX, NDGF, PIC, BNL, USCMS-FNAL-WC1 – with monthly availability and reliability figures per site (legend: tests passed / scheduled down; site-average colour coding: < 90% of target, ≥ 90% of target, ≥ target). Target: 88%. Average over all sites: 74% availability, 75% reliability. Average over the 8 best sites: 85% availability, 86% reliability.

Notes:
- Data from SAM monitoring. Site availability and reliability as agreed in the WLCG MB on 11 July 2006 (scheduled interruptions are excluded when calculating reliability).
- SAM tests fail due to a dCache function failure that does not affect CMS jobs; the problem is understood and is being worked on.
- All sites were assumed up while SAM had problems on 1, 3 and 4 August.
- Two sites are not yet integrated into the Site Availability Monitoring (SAM) system and are not included in the overall average.

Page 17


August 2006 summary (same availability chart as Page 16): two sites not yet integrated in the measurement framework; target 88% availability; 10-site average – 74%; best-8-sites average – 85%; reliability (which excludes scheduled down time) ~1% higher.
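A minimal sketch of how the two quantities on these pages relate (illustrative only, not the actual SAM algorithm): availability counts all time in the denominator, while reliability removes scheduled downtime, which is why reliability comes out slightly higher.

```python
# Illustrative sketch of the availability vs. reliability accounting used on
# these pages: reliability excludes scheduled interruptions from the denominator.
def availability(hours_up, hours_total):
    """Fraction of all time for which the site passed the tests."""
    return hours_up / hours_total

def reliability(hours_up, hours_total, hours_scheduled_down):
    """Like availability, but scheduled downtime is removed from the denominator."""
    return hours_up / (hours_total - hours_scheduled_down)

# Hypothetical example: a 31-day month with 8 hours of scheduled downtime.
total = 31 * 24
up = 550
scheduled_down = 8
print(f"availability: {availability(up, total):.0%}")                   # ~74%
print(f"reliability:  {reliability(up, total, scheduled_down):.0%}")    # ~75%, slightly higher
```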

Page 18

LCG May-August Availability

Chart: daily availability of the 10 sites, May–August 2006, against the SC4 target of 88%, with monthly averages of availability and reliability shown for all 10 sites and for the 8 best sites.

Page 19

Job Reliability Monitoring – ongoing work

- System to process and analyse job logs implemented for some of the major activities in ATLAS and CMS; errors identified, and their frequency reported to developers and the TCG (an illustrative sketch of this kind of processing is given below)
- Expect to see results feeding through from development to products in a fairly short time
- More impact expected when the new RB enters full production (the old RB is frozen)
- Daily report on the most important site problems allows the operations team to drill down from site, to computing elements, to worker nodes; in use by the end of August
- Intention is to report long-term trends by site and VO

Chart: FNAL trend, 23 May – 20 September.
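The sort of log processing described on this slide could look roughly like the sketch below. It is purely illustrative: the record fields and error strings are invented, not taken from the real ATLAS/CMS job logs or the LCG monitoring system.

```python
# Illustrative sketch of aggregating job logs into a daily site-problem report.
# Record fields and error categories are hypothetical, not the real LCG schema.
from collections import Counter, defaultdict

job_logs = [
    {"site": "SITE-A", "ce": "ce01.site-a", "status": "failed", "error": "stage-in timeout"},
    {"site": "SITE-A", "ce": "ce01.site-a", "status": "done",   "error": None},
    {"site": "SITE-B", "ce": "ce02.site-b", "status": "failed", "error": "proxy expired"},
]

errors_by_site = defaultdict(Counter)
for job in job_logs:
    if job["status"] == "failed":
        errors_by_site[job["site"]][job["error"]] += 1

# Daily report: most frequent error per site, so operations can drill down
# from site, to computing element, to worker node.
for site, errors in errors_by_site.items():
    error, count = errors.most_common(1)[0]
    print(f"{site}: {count} x '{error}'")
```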

Page 20

LHCC Comments from 2005 Review

Page 21

Comments and recommendations from last year

The connection between the middleware and the experiments is considered to be too weak, with the risk that the former will not satisfy the requirements of the experiments.

We now have the baseline services in production – including the middleware (e.g. gLite 3 on the EGEE grid)

The experiments are directly represented in the EGEE Technical Coordination Group

The experiment-specific “Task Forces” focus on specific middleware and operational issues

Page 22

Comments and recommendations from last year

Significant delays were reported in the deployment of the Enabling Grids for e-Science (EGEE) gLite services, in the CASTOR2 disk pool management system and in the distributed database services.

- gLite 3.0 in production June 06 – met the milestone, but skipped the beta test period: upgraded components had few problems; the major new component (Workload Management System) is now entering production
- CASTOR 2 – good performance numbers at CERN in Tier-0 exercises, but this is still not mature; an internal review concluded that this is the path that we should stick to
- 3D Distributed Database Deployment – definition of services and deployment plan agreed (October 2005): SQUID and Oracle Streams; Phase 1 sites ready by the target date of end September; difficulties with Streams at Phase 2 sites

Page 23

Comments and recommendations from last year

The interoperability and the provision of a common interface of the various types of middleware being produced should continue to be pursued at a larger scale.

Main OSG/EGEE interoperability requirements now in production – authentication, VOMS, information publishing (common information schema), job submission, SRM, FTS; the catalogue is selected by the experiment, not by the grid; resource brokers are grid-independent

Operational level - still work to be done - monitoring; error and log formats; ... 

Work also ongoing with NDGF - funding to both sides from EU

A "common application interface" is not seen at present as a requirement

Standards are still some way off – the Enterprise Grid Alliance (EGA) and the Global Grid Forum (GGF) have merged to form the Open Grid Forum (OGF)

Page 24

Comments and recommendations from last year

The analysis models are to a large extent untested

Continued ARDA activity. The situation is improving:

CMS in July was running at over 13k jobs/day submitted by ~100 users using ~75 sites

Improved understanding of simultaneous production and analysis - setting up analysis queues (ATLAS and CMS)

We still have to see the bulk of the users using the grid for dependable, time-critical activities (including calibrations).

Page 25

Schedule and Challenges for 2007

Page 26

Commissioning Schedule, 2006 – 2007 – 2008 (timeline):
- SC4 becomes the initial service when reliability and performance goals are met
- 01 Jul 07 – service commissioned, with full 2007 capacity and performance
- first physics

Experiments:
- Continued testing of computing models, basic services
- Testing DAQ→Tier-0 (??) and integrating into the DAQ→Tier-0→Tier-1 data flow
- Building up end-user analysis support
- Exercising the computing systems, ramping up job rates, data management performance, …

Sites & Services:
- Initial service commissioning – increase reliability, performance, capacity to target levels; gain experience in monitoring, 24 x 7 operation, …
- Introduce residual services: full FTS services; 3D; SRM v2.2; VOMS roles

Page 27

LCG Challenges and Concerns

- Site reliability: achieve MoU targets – with a more comprehensive set of tests – at Tier-0, Tier-1 and (major) Tier-2 sites; concerns about staffing levels at some sites; 24 x 7 operation needs to be planned and tested – it will be problematic at some sites, including CERN, during the first year, when unexpected problems have to be resolved
- Tier-1s and Tier-2s learning exactly how they will be used: Mumbai workshop, several Tier-2 workshops; experiment computing model tests; storage, data distribution
- Tier-1/Tier-2 interaction: test out data transfer services and network capability; build operational relationships
- Mass storage: complex systems, difficult to configure; CASTOR 2 not yet fully mature; SRM v2.2 to be deployed – with storage classes and policies implemented by sites; 3D Oracle Phase 2 – sites not yet active/staffed

Page 28

LCG Challenges and Concerns

- Experiment service operation: manpower intensive; interaction between experiments & Tier-1s and large Tier-2s; need a sustained test load to verify site and experiment readiness
- Analysis on the Grid: very challenging, but overall growth in usage is very promising – CMS running over 13k analysis jobs/day submitted by ~100 users using ~75 sites (July 06)
- Understanding how the CERN Analysis Facility will be used
- DAQ testing looks late – the Tier-0 needs time to react to any unexpected requirements and problems

Page 29

Revised Startup Scenario

Page 30

Revised Estimates of Physics Beam Time

TDR requirements were based on the assumption that there would be 50 days of physics data taking in 2007, with 10^7 seconds of proton beam time and 10^6 seconds of heavy ions in subsequent years

As the planning for the start-up is refined, it has become clear that this is unrealistic, and a better estimate of running time in the first years was needed in order to allow funding agencies and regional centres to improve their medium-term planning

New assumptions:
- 2007 – 0.7 × 10^6 seconds protons
- 2008 – 4 × 10^6 seconds protons, 0.2 × 10^6 seconds ions
- 2009 – 6 × 10^6 seconds protons, 10^6 seconds ions
- 2010 – 10^7 seconds protons, 2 × 10^6 seconds ions
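To put the proton numbers above in perspective (simple arithmetic on the quoted figures, nothing added beyond them), each year can be expressed as a fraction of the nominal 10^7-second proton year used for the TDR requirements.

```python
# Proton beam time in the revised scenario as a fraction of the nominal
# 10**7-second year used for the TDR requirements (simple arithmetic).
NOMINAL_PROTON_SECONDS = 1e7

proton_seconds = {2007: 0.7e6, 2008: 4e6, 2009: 6e6, 2010: 1e7}
for year, seconds in proton_seconds.items():
    print(f"{year}: {seconds / NOMINAL_PROTON_SECONDS:.0%} of a nominal year")
# 2007: 7%, 2008: 40%, 2009: 60%, 2010: 100%
```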

Page 31

LCG Revised resource estimates

CERN and Tier-1 sites must have 2007 capacity in production by 1 July 2007 – to ensure that everything is ready for the first beams

In many cases this means that the acquisition process must start in October/November 2006

ALICE, ATLAS and LHCb have made preliminary estimates of the resources needed for the new scenario – these estimates have NOT yet been ENDORSED by the collaborations

Page 32

Agreement reached at the LCG Overview Board on 11 September to enable timely planning for 2007:

The four experiments will complete the process of revising their estimated requirements for 2007-2010 by early October

The revised estimates will be made available to the WLCG collaboration and to the funding agencies, which are expected to use them to revise their funding and capacity planning

To ensure that the necessary equipment for 2007 is installed, tested and in production by 1 July 2007

To complete the longer term re-planning of resources across the experiments in time for the April 2007 C-RRB

Risk that funding agencies may over-react to the revised startup scenario by cutting funding:
- we have to provide initial services in 2007, and a full-scale service in 2008
- the ramp-up gradient must not be too steep – there are likely to be hidden performance and operational problems, and sites need time to build up the scale of their operations

Page 33

LCG Effect on costs at CERN

Discussion of resources at CERN with the four experiments, using preliminary estimates

Taking account of the new estimates, the costs in Phase 1 (2006-2008) now look to be about 10% over the available funding

Page 34