11 November 2010
Natascha Hörmann
Computing at HEPHY: Evaluation 2010
Overview
Computing group at HEPHY
The Vienna Grid computing center
Future plans
Mission
Provide infrastructure and support for general computing
Operate and maintain a highly available and powerful Grid computing center
Computing Group
Leader: G. Walzel
Group: S. Fichtinger, U. Kwapil, D. Liko, N. Hörmann, B. Wimmer
General computing: 3.0 FTE
Grid effort: 2.5 FTE
Tasks of the Computing Group
Run the HEPHY computer infrastructure:
general tasks (network, mail, printers), Windows/Mac/Linux desktops, internet services, application servers, file system services (AFS, NFS), …
Interactive computing facilities for Belle, Theory, and CMS
Tasks of the Computing group
Provide the infrastructure for the worldwide distributed Grid computing as a Tier-2 center
computing power and storage for the CMS experiment
essential computing resource for our physics analysis groups at HEPHY
resources also for other applications (e.g. non-HEP)
Resources at the HEPHY Grid computing center
Available: 1000 CPUs, 500 TB disks
Presently operated: 730 CPUs, 320 TB disks
In 2010 the Academy decided to charge the electricity costs to our institute (15 % of the institute's material budget); for budget reasons we operate only ⅔ of the equipment.
The current hardware is four years old and has to be replaced starting in 2012; funding is required but not yet secured.
Computer: Sun, 2×4-core Intel Xeon 2.5 GHz
Storage: Supermicro RAID, Disk Pool Manager (DPM)
Operating System: SLC5
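The "⅔ of the equipment" figure can be checked against the numbers on this slide; a minimal sketch (variable names are mine, not from the slides):

```python
# Capacity figures quoted on the resources slide.
available_cpus, available_disk_tb = 1000, 500   # installed
operated_cpus, operated_disk_tb = 730, 320      # operated for budget reasons

cpu_fraction = operated_cpus / available_cpus          # 0.73
disk_fraction = operated_disk_tb / available_disk_tb   # 0.64

# Both fractions are close to the quoted two thirds.
print(f"CPUs: {cpu_fraction:.0%}, disk: {disk_fraction:.0%}")
```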
The Grid Tier-2 @ HEPHY
[Plot: jobs per day at the Tier-2 over time (x-axis: days, y-axis up to 6,000 jobs), split into Analysis, MC simulation, and Job-Robot]
Tier-2 @ HEPHY provides 2 % of the overall CMS Tier-2 capacity
Average availability of 98 %, among the best sites
About 2,400 jobs/day
Data transfer at the Tier-2 @ HEPHY
[Plots: transfer rates in MByte/sec over days. Outgoing transfer, Vienna -> Tier-1/2s: about 10.5 MB/sec; incoming transfer, Tier-1/2s -> Vienna: about 22.9 MB/sec]
Average transfer of about 2 TByte/day incoming and 1 TByte/day outgoing data
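The daily volumes follow from the average rates; a quick conversion, assuming decimal units (1 MB = 10^6 bytes, 1 TByte = 10^12 bytes) and pairing the 22.9 MB/sec rate with the incoming direction, consistent with the ~2 TByte/day figure:

```python
SECONDS_PER_DAY = 86_400

def mb_per_sec_to_tbyte_per_day(rate_mb_s: float) -> float:
    """Convert an average rate in MB/sec to a daily volume in TByte."""
    return rate_mb_s * 1e6 * SECONDS_PER_DAY / 1e12

incoming = mb_per_sec_to_tbyte_per_day(22.9)  # ~1.98 TByte/day
outgoing = mb_per_sec_to_tbyte_per_day(10.5)  # ~0.91 TByte/day
print(f"incoming ~ {incoming:.2f} TByte/day, outgoing ~ {outgoing:.2f} TByte/day")
```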
Grid infrastructure for scientific groups
Tier-2 @ HEPHY*
Current projects: High Energy Physics - CMS (SUSY, Quarkonia, b-Tagging); Theory (CP violation); Medical Application (hadron tumor therapy studies)
Future projects: High Energy Physics (Belle II); Stefan-Meyer Institute (Panda at FAIR)
* Federated Tier-2 together with the University of Innsbruck
Hadron tumor therapy studies using Grid@HEPHY
Results submitted to Z. Med. Phys.
Cooperation with Medical University of Vienna in connection with the radiation therapy center MedAustron in Wiener Neustadt
Simulation studies of the energy deposition of heavy charged particles, such as protons or carbon/helium ions, in matter, looking especially at the fractional tail
Advantages of running a Tier-2 @ HEPHY
Grid computing site for CMS physics groups
selected as one of the computing clusters for SUSY (1 of 5) and b-tagging (1 of 4)
the relevant datasets of these groups are stored and the analysis jobs are executed at our site
Infrastructure for our local physics analysis group
direct access to resources enhances successful contributions from the HEPHY physics groups to the SUSY, b-tagging, and Quarkonia analyses
enough storage is available to store our own analysis results
control over the usage of computing power and storage in case of important findings or resources needed for conferences
HEPHY CMS Center
Commissioned to run general shifts for the CMS experiment in Vienna (saves travel costs)
Future requirements
LHC computing requirements for the coming years:
until 2012: an increase of 60 % in CPUs and 100 % in disk space is needed
we assume replacement of equipment at constant budget (with the typical performance increase every year)
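Applied to the presently operated capacity, the stated growth factors give rough 2012 targets; a sketch, assuming the operated numbers (730 CPUs, 320 TB) are the baseline — the slides do not say whether operated or installed capacity is meant:

```python
operated_cpus, operated_disk_tb = 730, 320   # presently operated (2010)
cpu_growth, disk_growth = 1.60, 2.00         # +60 % CPUs, +100 % disk until 2012

target_cpus = operated_cpus * cpu_growth         # ~1168 CPUs
target_disk_tb = operated_disk_tb * disk_growth  # 640 TB

print(f"2012 targets: ~{target_cpus:.0f} CPUs, ~{target_disk_tb:.0f} TB disk")
```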
[Chart: expected CPU and disk needs in 2011 & 2012. Source: Ian Bird, "Progress in Computing", ICHEP 2010, Paris, 28 Jul. 2010]
Future Austrian Computing Landscape
Vienna Scientific Cluster (VSC)
will host our Tier-2 computing center through our connections to the Vienna UT
the installation of new equipment is planned to start in 2012 at the VSC location (during the LHC technical stop), provided funding is secured
Austrian Center for Scientific Computing (ACSC)
initiative from several universities
is a common framework for computing cooperation between institutes in Austria and abroad
HEPHY intends to be part of ACSC via VSC
Summary & Outlook
The HEPHY Grid computing center is running smoothly and allows us to participate in the analysis of data at the forefront
important for our physics analysis
important for our position in the CMS collaboration
Funding for the Grid Tier-2 upgrade, i.e. the replacement of the hardware, needs to be secured
HEPHY plans to install the new equipment of the Grid computing center at the Vienna Scientific Cluster (VSC) in 2012
HEPHY intends to participate in the Austrian Center for Scientific Computing (ACSC) which is important for our future computing interests