
Page 1

The ATLAS Collaboration

A.J. Lankford, ATLAS Deputy Spokesperson

University of California, Irvine

Many slides are drawn from recent ATLAS Plenary presentations, particularly by F. Gianotti & S. Myers.

July 16, 2009, Lankford - SLUO LHC Workshop

Outline:
• Collaboration, organization, membership
• LHC and ATLAS schedule
• Focus of current activities

Page 2

Fabiola Gianotti, ATLAS RRB, 28-04-2009

ATLAS Collaboration

(Status April 2009)

37 countries, 169 institutions

2815 scientific participants in total (1873 with a PhD, for M&O share)

Albany, Alberta, NIKHEF Amsterdam, Ankara, LAPP Annecy, Argonne NL, Arizona, UT Arlington, Athens, NTU Athens, Baku, IFAE Barcelona, Belgrade, Bergen, Berkeley LBL and UC, HU Berlin, Bern, Birmingham, UAN Bogota, Bologna, Bonn, Boston, Brandeis, Brasil Cluster, Bratislava/SAS Kosice, Brookhaven NL, Buenos Aires, Bucharest, Cambridge, Carleton, CERN, Chinese Cluster, Chicago, Chile, Clermont-Ferrand, Columbia, NBI Copenhagen, Cosenza, AGH UST Cracow, IFJ PAN Cracow, UT Dallas, DESY, Dortmund, TU Dresden, JINR Dubna, Duke, Frascati, Freiburg, Geneva, Genoa, Giessen, Glasgow, Göttingen, LPSC Grenoble, Technion Haifa, Hampton, Harvard, Heidelberg, Hiroshima, Hiroshima IT, Indiana, Innsbruck, Iowa SU, Irvine UC, Istanbul Bogazici, KEK, Kobe, Kyoto, Kyoto UE, Lancaster, UN La Plata, Lecce, Lisbon LIP, Liverpool, Ljubljana, QMW London, RHBNC London, UC London, Lund, UA Madrid, Mainz, Manchester, CPPM Marseille, Massachusetts, MIT, Melbourne, Michigan, Michigan SU, Milano, Minsk NAS, Minsk NCPHEP, Montreal, McGill Montreal, RUPHE Morocco, FIAN Moscow, ITEP Moscow, MEPhI Moscow, MSU Moscow, Munich LMU, MPI Munich, Nagasaki IAS, Nagoya, Naples, New Mexico, New York, Nijmegen, BINP Novosibirsk, Ohio SU, Okayama, Oklahoma, Oklahoma SU, Olomouc, Oregon, LAL Orsay, Osaka, Oslo, Oxford, Paris VI and VII, Pavia, Pennsylvania, Pisa, Pittsburgh, CAS Prague, CU Prague, TU Prague, IHEP Protvino, Regina, Ritsumeikan, Rome I, Rome II, Rome III, Rutherford Appleton Laboratory, DAPNIA Saclay, Santa Cruz UC, Sheffield, Shinshu, Siegen, Simon Fraser Burnaby, SLAC, Southern Methodist Dallas, NPI Petersburg, Stockholm, KTH Stockholm, Stony Brook, Sydney, AS Taipei, Tbilisi, Tel Aviv, Thessaloniki, Tokyo ICEPP, Tokyo MU, Toronto, TRIUMF, Tsukuba, Tufts, Udine/ICTP, Uppsala, Urbana UI, Valencia, UBC Vancouver, Victoria, Washington, Weizmann Rehovot, FH Wiener Neustadt, Wisconsin, Wuppertal, Würzburg, Yale, Yerevan

Page 3

ATLAS Projects & Activities

• Activities of ATLAS members from the U.S. are embedded in the projects and activities of ATLAS.
• 5 ATLAS Detector Projects:
  – Inner Detector (Pixels, SCT, TRT)
  – Liquid Argon Calorimeter
  – Tile Calorimeter
  – Muon Instrumentation (RPC, TGC, MDT, CSC)
  – Trigger & Data Acquisition
• 5 ATLAS "horizontal" Activities:
  – Detector Operation
  – Trigger
  – Software & Computing
  – Data Preparation
  – Physics
• Upgrade
• U.S. contributions are well integrated into ATLAS.
  – US ATLAS Management works closely with ATLAS Management to set priorities and to make U.S. contributions maximally effective in the areas of detector M&O, software and computing, and now upgrades. This cooperation is greatly appreciated by ATLAS.


Page 4

Fabiola Gianotti, ATLAS RRB, 28-04-2009

5 detector systems, 5 "horizontal" experiment-wide activities, plus upgrade (not shown here). Detector Project Leaders and Activity Coordinators serve 2-year terms. Each year a Deputy Activity Coordinator is appointed, who becomes Coordinator one year later for one year (this staggering mechanism ensures continuity). The experiment's execution is reviewed monthly in the Executive Board: a 1.5-day meeting (one day open to the full Collaboration, followed by half a day closed).

• ATLAS management: collaboration management, experiment execution, strategy, publications, resources, upgrades, etc.
• Executive Board; Collaboration Board (CB); Publications Committee, Speakers Committee; Technical Coordination
• Detector Operation (Run Coordinator): detector operation during data taking, online data quality, ...
• Trigger (Trigger Coordinator): trigger performance, menu tables, new triggers, ...
• Data Preparation (Data Preparation Coordinator): offline data quality, calibration, alignment, ...
• Computing (Computing Coordinator): software infrastructure, world-wide computing operation
• Physics (Physics Coordinator): optimization of algorithms for physics objects, physics channels
• Detector systems: Inner Detector, Liquid-Argon, Tiles, Muons, TDAQ

Note: upgrade activities and organization are not shown here.

Page 5

ATLAS Organization (July 2009)

• ATLAS Plenary Meeting
• Collaboration Board (Chair: K. Jon-And; Deputy: G. Herten), with a CB Chair Advisory Group
• Resources Review Board
• Spokesperson: F. Gianotti (Deputies: A.J. Lankford and D. Charlton)
• Technical Coordinator: M. Nessi
• Resources Coordinator: M. Nordberg
• Executive Board
  – P. Jenni ex officio for 6 months as former Spokesperson
  – Additional members: T. Kobayashi, M. Tuts, A. Zaitsev
• Detector projects:
  – Inner Detector (P. Wells)
  – LAr Calorimeter (I. Wingerter-Seez)
  – Tile Calorimeter (A. Henriques)
  – Muon Instrumentation (L. Pontecorvo)
  – Trigger/DAQ (C. Bee)
• Activity coordination:
  – Commissioning/Run Coordinator: C. Clément (dep. B. Gorini)
  – Trigger Coordination: N. Ellis (dep. X. Wu); next T. Wengler (dep. S. Rajagopalan)
  – Computing Coordination: D. Barberis (dep. K. Bos)
  – Data Preparation Coordination: C. Guyot (dep. A. Hoecker); next B. Heinemann
  – Physics Coordination: T. LeCompte (dep. A. Nisati)
• PubComm Coordinator: J. Pilcher
• Upgrade SG Coordinator: N. Hessey


Page 6

ATLAS Individual Membership 1/2

For an individual at an existing ATLAS institution:
• ATLAS membership is open to members of ATLAS institutions.
  – Requires approval by your institution's ATLAS "team leader".
  – See the ATLAS home page for CERN / ATLAS registration information.
• ATLAS authorship requires qualification (recently revised).
  – Authorship privileges require an institutional commitment to:
    • sharing in shifts and other operations tasks,
    • sharing of maintenance & operations expenses.
  – Authorship privileges require individual qualification:
    • Obtaining qualification:
      – ATLAS membership for at least one year,
      – not to be an author of another major LHC collaboration,
      – at least 80 days and at least 50% research time during the qualification year on ATLAS technical activities.
    • Continuing qualification:
      – continued ATLAS membership,
      – at least 50% research time on ATLAS,
      – not to be an author of another major LHC collaboration.


Page 7

ATLAS Individual Membership 2/2

For an individual at an existing ATLAS institution (cont'd):
• Policies exist concerning authorship of former ATLAS members and other exceptional circumstances.
  – See the ATLAS Authorship Policy: http://atlas.web.cern.ch/Atlas/private/ATLAS_CB/CB_Approved_Documents/A60_AUTHOR_policy_7%201.pdf
  – ATLAS technical work is described in the appendix of the above document.
  – Lists of high-priority qualification tasks are maintained on the Twiki: https://twiki.cern.ch/twiki/bin/view/AtlasProtected/AuthorShipCommittee
    • Currently contains priority tasks in Activities; Projects to be added soon.
    • One may need help from an ATLAS member due to web protection.
• In summary: contact your ATLAS team leader.
• Individuals not at existing ATLAS institutions become ATLAS members by affiliating through a member institution.
  – See subsequent slides regarding ATLAS institutional membership.


Page 8

ATLAS Institutional Membership 1/2

• ATLAS welcomes new institutions that are interested in and capable of substantive contributions to the ATLAS research program.
• Procedural overview:
  – An expression of interest is prepared in consultation with the Spokesperson.
  – The expression of interest is presented by the Spokesperson to the Collaboration Board at a CB meeting.
  – Membership is decided by Collaboration Board vote at a subsequent CB meeting.
  – This process typically takes 0.5-1 year, preceded by a period of contacts and initial informal involvement.
• See the following slide for alternative procedures.


Page 9

ATLAS Institutional Membership 2/2

• Two alternatives to the typical procedure:
  1. Association with an existing ATLAS institution.
     • A new or small institute may join ATLAS in association with an existing ATLAS institution.
       – The procedure is rather informal, and fully under the responsibility of the hosting ATLAS institution.
     • Such an association may be permanent or temporary (e.g. while ramping up and preparing an EoI for full membership).
     • Recent examples:
       – UT Dallas: was associated with BNL; now UTD is an institution.
       – U of Iowa: now associated with SLAC; EoI presented to the CB at the most recent meeting; decision in October.
  2. Clusters of institutes: small institutes may join ATLAS as a cluster.
     – Together they form an ATLAS "institution".
• In summary, contact both:
  – the ATLAS Spokesperson, Fabiola Gianotti, and
  – the US ATLAS Program Manager, M. Tuts, & Institute Board Chair, A. Goshaw.


Page 10

ATLAS Operation Task Sharing

• ATLAS operation, from the detector through data preparation to world-wide computing, requires 600-750 FTE (out of ~2800 scientists).
  – Fairly shared across the Collaboration:
    • proportional to the number of authors,
    • students are weighted 0.75,
    • new institutions contribute more in their first two years (x 1.50, x 1.25).
• ~60% of these FTE are needed at CERN.
  – Effort to reduce this fraction with time.
• In 2009, ~12% (21,000) are shifts in the Control Room or on-call expert shifts.
  – Effort to increase remote monitoring with time.
• Allocations are made separately for shifts and other expert tasks; institutions are to cover both categories of activity. (An illustrative calculation of an institution's share follows below.)
• Required FTE and contributions are updated & reviewed yearly.
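As a rough illustration of the sharing rule just described (a sketch under stated assumptions, not the official ATLAS accounting; the institution names, author counts, and total FTE figure are invented for the example):

```python
# Hedged sketch of the operations-task sharing rule described above.
# Weights: PhD authors count 1.0, students 0.75; institutions in their
# first or second year of membership are scaled by 1.50 or 1.25.
# All numbers below are illustrative, not real ATLAS figures.

TOTAL_FTE_REQUIRED = 700  # somewhere in the quoted 600-750 FTE range

institutions = [
    # (name, n_phd_authors, n_students, years_in_atlas)
    ("Institute A", 20, 10, 10),
    ("Institute B", 5, 4, 1),   # new institution -> x 1.50
    ("Institute C", 8, 6, 2),   # second year     -> x 1.25
]

def weighted_authors(n_phd, n_students, years_in_atlas):
    """Author count with student weighting and new-institution factor."""
    base = n_phd + 0.75 * n_students
    if years_in_atlas <= 1:
        return 1.50 * base
    if years_in_atlas == 2:
        return 1.25 * base
    return base

weights = {name: weighted_authors(p, s, y) for name, p, s, y in institutions}
total_weight = sum(weights.values())

for name, w in weights.items():
    share_fte = TOTAL_FTE_REQUIRED * w / total_weight
    print(f"{name}: weight {w:.2f} -> {share_fte:.1f} FTE of operations tasks")
```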


Page 11

LHC and ATLAS Schedule

• Repair of the damaged portions of the LHC has gone well.
• Steps have been taken to avoid such catastrophic events in the future,
  – including a new quench protection system (QPS), completed and recently tested.
• Work now focuses on problems discovered in splices in the copper bus bar in which the superconductor is embedded.


Page 12

Summary (from S. Myers at ATLAS Plenary, 6 Jul 09)

• The enhanced quality assurance introduced during the sector 3-4 repair has revealed new facts concerning the copper bus bar in which the superconductor is embedded.
• Tests have demonstrated that the process of soldering the superconductor in the interconnecting high-current splices can cause discontinuity of the copper part of the busbars, and voids which prevent contact between the superconducting cable and the copper.
  – Danger in case of a quench.
• Studies are now going on:
  – to find a safe limit for the measured joint resistance as a function of the current in the magnet circuits (maximum energy in the machine);
  – to allow faster discharge of the energy from the circuits.

Page 13

Strategy for Start-Up

• ~3 weeks delay with respect to baseline due to:
  – R-long and R-16 measurements
  – splice repairs
  – delay in the cool-down of S12 and repairs of splices
  – (re-warming of S45)
• BUT the story of the copper stabilizers goes on:
  – need to measure the remaining sectors (S23, S78, and S81) at 80 K
  – need to understand the extrapolation of measurements at 80 K to 300 K
    • measurement of the variation of RRR with temperature
  – need to gain confidence in the simulations for safe current
    • compare different simulation models/codes

from S. Myers at ATLAS Plenary, 6 Jul 09

Page 14

Strategy

• Measure S45 at 300 K (DONE)
  – will be redone in week 28 (better temperature stability)
• Measure the remaining 3 sectors (at 80 K); the last one (S81) is presently foreseen for the beginning of August
• Measure the variation of RRR with temperature during cool-down
• Update simulations (3 simulation models) of safe current vs splice resistance
  – decay times of RB/RQ circuits following a quench (quench all RQs?)
• Determine which splices would need to be repaired as a function of safe current (beam energy)
• Evaluate the time needed to heat up to 300 K and repair these splices
• Prepare scenarios of safe operating energy vs date of first beams
• Discuss with the Directorate and experiments and decide on the best scenario.
  – Preferred scenario: highest possible energy associated with the earliest date
    • (what is the maximum energy with no repairs needed?)
• At start-up, confirm all splice resistance measurements at cold using the new QPS

from S. Myers at ATLAS Plenary, 6 Jul 09

Page 15

Simulations: maximum safe current vs copper joint resistance (warm, 300 K)

[Figure: maximum safe current (A), roughly 4000 to 12000 A, plotted against R_additional (microOhm) from 0 to 140, for four cases: RB circuit with tau = 100 s (normal) and tau = 68 s (fast), RQ circuit with tau = 30 s (normal) and tau = 15 s (fast). Assumes adiabatic conditions, no QPS delay, RRR = 240, cable without bonding at one bus extremity, no contact between bus stabiliser and joint stabiliser. Markers indicate R_additional values of 46 and 54 microOhm and the currents corresponding to 4 TeV and 5 TeV beam energy. A rough back-of-the-envelope estimate of the underlying heat load follows below. Arjan Verweij, TE-MPE, 9 June 2009.]
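To see qualitatively why the safe current drops as the joint resistance grows and as the discharge slows, here is a rough back-of-the-envelope sketch (illustrative only; the actual TE-MPE simulations model the bus geometry, RRR, and cooling in detail, and the nominal-current scaling used below is an assumption of the example):

```python
# Rough illustration only: adiabatic energy deposited in an additional
# joint resistance R_add during an exponential current discharge
# I(t) = I0 * exp(-t/tau), i.e. E = integral of R_add * I(t)^2 dt
#                                 = R_add * I0^2 * tau / 2.
# The dipole current scales roughly linearly with beam energy
# (about 11850 A at 7 TeV), so lower beam energy or a faster discharge
# (smaller tau) strongly reduces the heat load on a resistive splice.

I_NOMINAL_7TEV = 11850.0  # A, approximate nominal dipole current at 7 TeV

def splice_energy(beam_energy_tev, r_add_microohm, tau_s):
    """Energy (J) dumped in the extra resistance during the discharge."""
    i0 = I_NOMINAL_7TEV * beam_energy_tev / 7.0
    r_add = r_add_microohm * 1e-6
    return 0.5 * r_add * i0 ** 2 * tau_s

# Example: a 50 microOhm splice at different beam energies / decay times
for e_tev, tau in [(5.0, 100.0), (5.0, 68.0), (4.0, 100.0)]:
    e_kj = splice_energy(e_tev, 50.0, tau) / 1e3
    print(f"{e_tev} TeV, tau = {tau:.0f} s: ~{e_kj:.0f} kJ dissipated")
```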

Page 16

Latest News – LHC Vacuum Leaks

• Vacuum leaks found in two "cold" sectors (~80 K)
  – during preparation for electrical tests of Cu bus bar splices;
  – both at one end of the sector, where the electrical feedbox joins the final magnet.
• Leak from the helium circuit to the insulating vacuum
  – no impact on the beam pipe vacuum.
• Repair requires partial warm-up
  – warm-up of the end sub-sector to room temperature;
  – the adjacent sub-sector "floats" in temperature;
  – the remainder of the sector is kept at 80 K.
• "It is now foreseen that the LHC will be closed up and ready for beam injection by mid-November."
  – This delay does not affect the overall schedule strategy.


Page 17

F.Gianotti, ATLAS week, 6/7/2009

ATLAS position concerning running scenario (1)

As discussed at the June Open and Closed EB. Since the LHC schedule is still uncertain, this is our present position.

The following 3 scenarios were considered to be possible options (see S. Myers's talk for updates) and were discussed in the EB:

❶ The machine can run safely at 2 x 4-5 TeV: start data taking in 2009, aiming at a long (~11 months) run ("Chamonix scenario").
❷ The machine can run safely at 2 x ? TeV (where ? < 4 TeV): start a "short" (few months?) run in 2009, then shut down sometime in 2010 to prepare all sectors for 5 TeV beam operation. Data taking at 2 x 5 TeV could start in the second half of 2010.
❸ Fix all bad splices to achieve 2 x 5 TeV operation, i.e. delay the LHC start-up to Feb/March 2010.

Scenario 1 is still the current plan and everybody hopes that this will become reality.

Scenarios 2 and 3 are alternatives in case bad splices are found in the remaining three (cold) sectors. The exact energy in Scenario 2 will depend on the resistivity of the worst splices. Scenario 3 would likely give collisions at 2 x 5 TeV earlier than Scenario 2 but would imply no collisions in 2009.

Page 18

F.Gianotti, ATLAS week, 6/7/2009

ATLAS position concerning running scenario (2)

Main motivation: we need collision data as soon as possible to commission the experiment (detector, trigger, software, analysis procedures, ...) and to perform the first long-awaited physics measurements. We also have a lot of students (~800!) who need data to complete their theses.

The EB reached (unanimously) the following conclusion:

Page 19

Preparation for physics

December 2008: "CSC book" (CSC = Computing and Software Commissioning) released, arXiv:0901.0512.

• The most recent evaluation of expected detector performance and physics potential based on present software (the Physics TDR used old Fortran software).
• A huge effort of the community: ~2000 pages, a collection of ~80 notes.
• A very useful reference for studies with LHC data, and a useful resource for newcomers.
• Also exercised the internal review procedure in preparation for future physics papers.

Page 20

Focus on Readiness

As ATLAS awaits first LHC colliding beams, its activity is focused on readiness.

– Detector readiness, including trigger, data quality, etc.
  • Shutdown activities, combined running with cosmics, etc.
– Data processing readiness, including computing infrastructure, software, calibration & alignment, etc.
– Physics readiness, including object definition

Some current, key activities concerning readiness:
– Cosmics analysis
– Analysis Model for the First Year Task Force
– Distributed analysis tests
– End-to-end planning & walk-throughs for first physics analyses


Page 21

F.Gianotti, ATLAS week, 6/7/2009

Detector status and shut-down activities

End of October 2008: ATLAS opened for shut-down activities.

Examples of repairs, consolidation work, and achievements:
• Yearly maintenance of infrastructure (cryogenics, gas, ventilation, electrical power, etc.)
• Inner Detector: refurbishment of cooling system (compressors, distribution racks); now running with 203 cooling loops out of 204
• LAr: 58 LVPS refurbished; 1 dead HEC LVPS repaired; dead FEB opto-transmitters (OTX) replaced; 6 died since (128 x 6 channels)
• Tilecal: 30 LVPS replaced (1 died since); 81 Super Drawers opened and refurbished
• Muon system: new rad-hard fibers in MDT wheels; gas leaks fixed; new CSC ROD firmware being debugged; RPC gas, HV, LVL1 firmware, timing; some EE chambers installed
• Magnets: consolidation (control, vacuum, ...); all operated at full current 25/6-30/6

Schedule:
• Cosmics slice weeks started mid-April
• Two weeks of global cosmics running completed this morning: ~100M events recorded
• July-August: continue shut-down/debugging activities (EE installation, shielding, complete ID cooling racks and install opto-transmitters, TDAQ tests, etc.)
• Start of global cosmics data taking delayed by 3 weeks to ~20 September (after discussion with CERN/LHC Management)


Page 22

F.Gianotti, ATLAS week, 6/7/2009

Cosmics (and single-beam) analysis:

• Effort ramping up
• Almost 300M events from the 2008 run reprocessed twice and analyzed
• O(200) plots approved for talks at conferences
• Many notes in the pipeline, ~10 could become publications (but more people needed so we can complete studies before first beams)
• Cosmics Analysis Coordinator (Christian Schmitt) appointed to pull together efforts from various groups in a coherent way (e.g. simulation strategy)
• Achieved level of detector understanding is far better than expectations in many cases

Cosmics analysis is proving to be an effective step in commissioning ATLAS.

Page 23

F.Gianotti, ATLAS week, 6/7/2009

Computing

[Figure: ATLAS Computing Model data flow, from raw data through first-pass reconstruction at the CERN Tier-0 (with export), re-processing at Tier-1s, simulation at Tier-1s/Tier-2s, and Event Summary Data / analysis objects (extracted by physics topic) feeding physics analysis at Tier-2s and interactive physics analysis.]

4 main operations according to the Computing Model (a toy sketch of this flow follows below):
• first-pass processing of detector data at the CERN Tier-0 and data export to Tier-1s/Tier-2s
• data re-processing at Tier-1s using updated calibration constants
• production of Monte Carlo samples at Tier-1s and Tier-2s
• (distributed) physics analysis at Tier-2s and at more local facilities (Tier-3s)

The actual Computing Model (CM) is much more complex: it includes data organization, placement and deletion strategy, disk space organization, database replication, bookkeeping, etc.

The CM and the above operations have been exercised and refined over the last years through functional tests and data challenges of increasing functionality, realism and size, including the recent STEP09 challenge (involving grid operations of ATLAS and the other LHC experiments).

Concern: distributed analysis has not been thoroughly tested yet (tests started a few months ago with robotic Hammer Cloud tests); we now need "chaotic" massive access by real users.

The U.S. has been playing a lead role in tests of data access for analysis.
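As a purely illustrative sketch of the tiered flow described above (the site names, dataset labels, and helper functions are invented for the example; this is not the real ATLAS distributed data management software):

```python
# Minimal toy sketch of the tiered ATLAS Computing Model flow described above.
# Everything here (site names, dataset labels, routing rules) is illustrative.

from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    kind: str        # "RAW", "ESD", "AOD", "D3PD", ...
    produced_at: str

def tier0_first_pass(raw: Dataset) -> list:
    """First-pass reconstruction at the CERN Tier-0; outputs are exported."""
    return [Dataset(raw.name + ".ESD", "ESD", "CERN-T0"),
            Dataset(raw.name + ".AOD", "AOD", "CERN-T0")]

def tier1_reprocess(esd: Dataset, calib_tag: str) -> Dataset:
    """Re-processing at a Tier-1 with updated calibration constants."""
    return Dataset(f"{esd.name}.repro_{calib_tag}", "AOD", "BNL-T1")

def tier2_analysis(aod: Dataset) -> Dataset:
    """(Distributed) physics analysis at a Tier-2 producing analysis objects."""
    return Dataset(aod.name + ".D3PD", "D3PD", "MWT2-T2")

raw = Dataset("data09_cos.run123456", "RAW", "CERN-T0")
esd, aod = tier0_first_pass(raw)
aod_repro = tier1_reprocess(esd, calib_tag="c42")
ntuple = tier2_analysis(aod_repro)
print(ntuple)
```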

Page 24

Status & Plans for Readiness Testing

• Robotic test jobs have been running on some analysis queues since Fall 08 (on all analysis queues since April 09).
  – These tests incorporate a handful of analysis job types from actual users.
• Such robotic tests were a major component of the STEP09 tests in early June.
  – STEP09 was a combined computing challenge for the LHC experiments; for ATLAS it covered reprocessing, data distribution, simulation production, and robotic user analysis.
  – These tests provide important benchmarks (and define our limitations); for the most part the US cloud did well (~94%, highest in ATLAS), thanks to extensive pretesting by Nurcan.
• Also need to test the system with actual users under battlefield conditions (which will be much more chaotic than the robot tests).
  – In addition to the analysis queue infrastructure, we need to test users' ability to configure jobs, run jobs, and transfer job output to a local work area.

from J. Cochran

Page 25

Evolving Plan

Assume the Early Analysis Model: physics (sub)groups produce D3PDs on T2s; users pull D3PDs to their preferred interactive location (T3).

What we need to test:
• D3PD production on T2s [primarily (sub)group designates, also some users]
• transfer of D3PDs from T2 to local work space [eventually for all users!!!]

In an ideal world we would have properly mixed samples corresponding to the expected streams in both size and composition. Such samples do not exist, so as an alternative we generate a large sample and make multiple copies:
• Generated a (fast sim) 100M event multijet filtered sample which contains appropriate amounts of tt, W/Z, & prompt (500M such events ~ 48 pb-1; see the consistency check below) – a very challenging generation.
• Have made 5 copies (containers) of this sample; 2 copies sent to each US Tier2.
• Users will be pointed to sets of 3 containers (roughly approximating a stream).
• Expect 1B events total for run 1; aiming for a 500M event test (may need to adjust when the AMFY plan is released).
• Many problems have been overcome.

from J. Cochran
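For orientation, a back-of-the-envelope consistency check on the numbers quoted above (an illustration only, not official values): 500M events for 48 pb-1 implies an effective filtered cross-section of roughly sigma_eff = N / L ~ 5 x 10^8 / 48 pb-1 ~ 1 x 10^7 pb ~ 10 microbarn, so the 100M event container stands in for roughly 10 pb-1 of such filtered data.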

Page 26

ATLAS has proposed a phased approach to such tests:
• study submission patterns of power users (learning where their jobs won't run & why);
• then ask them to target specific T2s on specific days (sending multiple copies of their jobs);
• gradually open up to more and more users (exercising more queues & more user disk space);
• (there is great sensitivity to not adding significantly to user workload).

The start of such tests has not been uniform over the clouds (efforts primarily in the US, UK, & Germany).

In the US, as part of STEP09, 30 experienced users were asked to run over the top-mixing sample (~1M evts), which had been replicated to all US T2s, running their own jobs and transferring their output to their local work area.
• pandamonitor was used to determine the most active (experienced) users;
• the 100M evt container & pre-test container were not yet available;
• only 14 users were able to participate.

Job success efficiency (%):
  AGLT2  MWT2  NET2  SLAC  SWT2
  59*    80    74    84    75
  * one user made many attempts here before overcoming a user config issue

Info obtained by hand from the pandamonitor db. Much more metric info is available; scripts are being developed to extract it (almost ready; a toy sketch of such a tabulation follows below). Work is needed to more easily extract T2 -> T3 transfer rates.

from J. Cochran
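A minimal sketch of what such a metrics script might do, assuming job records have already been pulled from pandamonitor (the record layout and field names here are invented for illustration):

```python
# Hedged sketch: tabulate per-site job success efficiency from a list of
# job records. The record layout is invented for illustration; a real
# script would query the pandamonitor database instead.
from collections import defaultdict

jobs = [
    # (site, status) -- toy data standing in for pandamonitor job records
    ("AGLT2", "finished"), ("AGLT2", "failed"),
    ("MWT2", "finished"), ("MWT2", "finished"),
    ("SLAC", "finished"), ("SLAC", "failed"), ("SLAC", "finished"),
]

counts = defaultdict(lambda: {"finished": 0, "failed": 0})
for site, status in jobs:
    counts[site][status] += 1

for site, c in sorted(counts.items()):
    total = c["finished"] + c["failed"]
    eff = 100.0 * c["finished"] / total if total else 0.0
    print(f"{site}: {eff:.0f}% success ({c['finished']}/{total} jobs)")
```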

Page 27

Schedule

• 1st expert user is testing the 100M event container now.
• Once OK, we will pass it on to the US expert team (later this week):
  – they will run their own jobs on a 300M event sample and transfer output to their local area;
  – we will use new scripts to tabulate metrics.
• Expand to a much larger US group (late July or early August):
  – more users running T2 jobs, and all users doing T2 -> local area transfers;
  – in particular, need to include ESD (pDPD), cosmic, and db-intensive jobs;
  – possibly focus effort into a 1-day blitz (late August).
• With the guidance/blessing of Physics Coordination, expand to all ATLAS clouds:
  – will need to transfer 2 copies of the 100M event container to each participating T2;
  – will allow studies of cross-cloud analysis jobs (what we expect with real data);
  – panda monitoring will provide metrics for ganga users? (saved by panda-ganga integration?);
  – transfer tests will likely need to be tailored to each cloud (the local area may be a T2 or T1).
• Likely need/want to repeat the full ATLAS exercise a few times.

from J. Cochran

Page 28

Analysis Model for the First Year – Task Force Mandate

• The Analysis Model for the First Year (AMFY) task is to condense the current analysis model, building on existing work and including any necessary updates identified through the work of the TF, into concise recipes on how to do commissioning / performance / physics analysis in the first year. In particular it will:
• Identify the data samples needed for the first year, and how they derive from each other (a toy sketch of one such derivation chain follows after this slide):
  – How much raw data access is needed (centrally provided / sub-system solutions)?
  – How many different outputs, and of what type, will the Tier-0 produce?
  – Expected re-generation cycles for ESD/AOD/DPDs.
  – Types of processing to take place (ESD->PerfDPD, ESD->AOD, AOD->AOD, AOD->DPD, etc.).
• Related to the items above, address the following points:
  – Are the Performance DPDs sufficient for all detector and trigger commissioning tasks (are changes to ESD needed)?
  – What is the procedure for physics analysis in the first year in terms of data samples and common tools used (down to and including PerfDnPDs and common D3PD generation tools), including both required and recommended items?
  – How much physics will be done based on Performance DPDs?
  – How will tag information be used?
  – Every part of our processing chain needs validation; how will it be done?
• Scrutinise our current ability to do distributed analysis as defined in the computing model.
• Match the items above to available resources (CPU / disk space / Tier-0/1/2 capabilities, etc.).


As listed on: https://twiki.cern.ch/twiki/bin/view/AtlasProtected/AnalysisModelFirstYear

From T. Wengler, ATLAS Open EB, June 2009
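To make the format chain named in the mandate above concrete, here is a toy representation of the derivation relationships (the dictionary and helper below are illustrative only, not ATLAS software):

```python
# Toy sketch of the data-format derivation chain referred to above
# (RAW -> ESD -> AOD -> DPD/D3PD, plus performance DPDs from ESD).
# The structure and helper are illustrative only.

DERIVES_FROM = {
    "ESD":     ["RAW"],        # first-pass / re-processed reconstruction output
    "AOD":     ["ESD"],        # analysis object data
    "PerfDPD": ["ESD"],        # performance DPDs for detector/trigger commissioning
    "DPD":     ["AOD"],        # physics-group derived data
    "D3PD":    ["DPD", "AOD"], # flat ntuples produced with common tools
}

def chain_to_raw(fmt):
    """Return one possible derivation path from a format back to RAW."""
    path = [fmt]
    while fmt in DERIVES_FROM:
        fmt = DERIVES_FROM[fmt][0]   # follow the first listed parent
        path.append(fmt)
    return path

print(" <- ".join(chain_to_raw("D3PD")))   # D3PD <- DPD <- AOD <- ESD <- RAW
```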

Page 29

F.Gianotti, ATLAS week, 6/7/2009

Physics

Goal: be ready to analyse first collision data fast and efficiently (a lot of pressure from the scientific community, "competition" with CMS, ...). The process will be driven by Physics Coordination, with support and help from the EB.

Consider the basic list of analyses to be performed with first data:
• minimum-bias
• jets
• inclusive leptons
• di-leptons
• etc.

For each analysis, building on the huge amount of existing work:
• prioritize goals for the Winter 2010 conferences: define the results we want to produce minimally, while leaving the door open to more ambitious goals/ideas if time and people allow;
• review the sequence of steps from detector, to calibration, trigger, data quality, reconstruction, MC simulation, ... needed to achieve the planned physics results;
• make sure all steps are addressed in the right order, are covered by enough people, and that links and interfaces between steps are in place ("vertical integration").

Recent proposal: "Analysis readiness walk-throughs"

The above information should be prepared, for each analysis, by a team of ~5 people (from systems, trigger, Data Preparation, Combined Performance and Physics WGs), with input from the whole community, and presented at dedicated open meetings (0.5-1 day per analysis). A "review" group (including representatives from the EB, Physics Coordination and the community at large) will make recommendations and suggestions for improvements. Time scale: start at the end of August with 1-2 "guinea pig" analyses.

Page 30

F.Gianotti, ATLAS week, 10/7/2009

Some aspects requiring special efforts and attention in the coming months (a non-exhaustive list … )

LHC schedule and our position for running scenarios (discussion with CERN and machine Management mid August)

Global cosmics runs: aim at smooth and routine data-taking of the full detector with the highest possible efficiency

Detector evolution with time: plan for and develop back-up solutions for delicate components (LAr OTX, Tile LVPS, … etc.) for replacement during next/future shut-down

Cosmics analysis: learn as much as possible, finalize notes (and possibly publications) before beams

Software: releases consolidation, validation, robustness; technical performance (memory !)

Finalize the Analysis Model from detector commissioning to physics plots

Computing: stress-test distributed analysis at Tier-2s with massive participation of users

Complete simulation strategy: e.g. trigger; how to include corrections from first data (at all levels)

Analysis readiness walk-throughs

Upgrade (IBL, Phase 1 and 2): develop and consolidate strategies, plan for Interim MoU

Page 31

Summary

• ATLAS is well prepared for first LHC colliding beams.

– We can efficiently record and process data now.

• Much can be done to prepare further for fast, efficient physics analysis & results.

• Many areas exist for individual and institutional contributions.

• Best wishes to all the participants for a successful workshop.
