BeBe data production status
BeBe 160/80/40: done
Software 12J, global key 11_012
Set-up/log: /afs/cern.ch/na61/Calibration/12J012/run*_m/
DSPACK: /castor/cern.ch/na61/11/prod/12J012/DSPACK/run*_m/
ROOT: /castor/cern.ch/na61/11/prod/12J012/ROOT/run*_m/
SHOE: /castor/cern.ch/na61/11/prod/12J012/SHOE/run*_m/
BeBe 13 (2012): done
Software 12J, global key 12_005
Log/set-up: /afs/cern.ch/na61/Calibration/12J005/run*_m
DSPACK: /castor/cern.ch/na61/12/prod/12J005/DSPACK/run*_m
ROOT: /castor/cern.ch/na61/12/prod/12J005/ROOT/run*_m
SHOE: /castor/cern.ch/na61/12/prod/12J005/SHOE/run*_m
BeBe 20: done
Software 12J, global key 13_001
Log/set-up: /afs/cern.ch/na61/Calibration/12J001/run*_m
DSPACK: /castor/cern.ch/na61/12/prod/12J001/DSPACK/run*_m
ROOT: /castor/cern.ch/na61/12/prod/12J001/ROOT/run*_m
SHOE: /castor/cern.ch/na61/12/prod/12J001/SHOE/run*_m
BeBe 30: done
Software 12J, global key 13_001
Log/set-up: /afs/cern.ch/na61/Calibration/12J001/run*_m
DSPACK: /castor/cern.ch/na61/13/prod/12J001/DSPACK/run*_m
ROOT: /castor/cern.ch/na61/13/prod/12J001/ROOT/run*_m
SHOE: /castor/cern.ch/na61/13/prod/12J001/SHOE/run*_m
BeBe 13 (2013): done
Software 12J, global key 13_001
Log/set-up: /afs/cern.ch/na61/Calibration/12J001/run*_m
DSPACK: /castor/cern.ch/na61/13/prod/12J001/DSPACK/run*_m
ROOT: /castor/cern.ch/na61/13/prod/12J001/ROOT/run*_m
SHOE: /castor/cern.ch/na61/13/prod/12J001/SHOE/run*_m (from daily SHINE trunk)
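All the Castor locations listed above follow one naming pattern: /castor/cern.ch/na61/<year>/prod/<key>/<stream>/run*_m. A minimal sketch of assembling such a path is below; the variable names are illustrative only and not part of any NA61/SHINE tool.

```shell
#!/bin/sh
# Sketch: build a Castor production path from its components.
# Pattern taken from the listings above; variable names are illustrative.
year=12          # two-digit data-taking year
key=12J005       # software version + global key
stream=SHOE      # one of DSPACK, ROOT, SHOE
path="/castor/cern.ch/na61/${year}/prod/${key}/${stream}/run*_m"
echo "${path}"   # prints: /castor/cern.ch/na61/12/prod/12J005/SHOE/run*_m
```

The run*_m glob is expanded only once the path is handed to a tool that lists the Castor directory, so it is kept quoted here.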
General
BeBe20 data was taken in 2013, but is stored together with the 2012 raw data on Castor.
Easy to move on Castor, but the eLog/bookkeeping would also have to be updated.
Any immediate plans for further processing?
Running job variations
Large variations in the number of parallel running jobs were observed during Jan. – Feb.
Variations: 0 – 1500
Often only ~10
Sometimes 0 for several hours
Sometimes short spikes with 100s of running jobs
Usually slowest in the morning (when QA processing takes place)
Mainly a problem for urgent jobs like QA: ~100 jobs are needed for QA.
100 jobs / 10 jobs/hour = 10 hours -> jobs would need to be submitted at 3 am for the run meeting...
Generally OK for less urgent tasks.
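The back-of-the-envelope estimate above can be sketched as follows (numbers taken from the slide; integer division is enough here):

```shell
#!/bin/sh
# Sketch of the turnaround estimate above: at the observed throughput of
# ~10 jobs/hour, a 100-job QA batch needs ~10 hours of wall time.
jobs_needed=100    # jobs required for one QA pass
jobs_per_hour=10   # typical observed dispatch rate on LXBATCH
hours=$((jobs_needed / jobs_per_hour))
echo "${hours} hours"   # prints: 10 hours
```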
Help desk: priority depends on past resource consumption.
Possible solution: put a limit on the number of parallel running jobs.
But then we would not be able to take advantage of idle LXBATCH capacity...
Short-term solution: the prodna61 quota was increased for the last days of the run.
Should we request a limit on the number of running jobs?
Too late for the data taking now, but there might be other urgent tasks.
Running job variations – plots