
Groups and Activities

Report 2019


ISBN 978-92-9083-551-6

This report is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


CONTENTS

GROUPS REPORTS 2019

Collaborations, Devices & Applications (CDA) Group

Communication Systems (CS) Group

Compute & Monitoring (CM) Group

Computing Facilities (CF) Group

Databases (DB) Group

Departmental Infrastructure (DI) Group

Storage (ST) Group

ACTIVITIES AND PROJECTS REPORTS 2019

CERN openlab

CERN School of Computing (CSC)

Computer Security

Data Preservation

Externally Funded Projects

Worldwide LHC Computing Grid (WLCG)


CERN IT Department Groups and Activities Report 2019

CERN IT department’s mission:

The IT Department provides the information technology required for the fulfilment of CERN's mission in an efficient and effective manner.

This includes data processing and storage, networks and support for the LHC and non-LHC experimental programme, as well as services for the accelerator complex and for the whole laboratory and its users.

We also provide a ground for advanced research and development of new IT technologies, with partners from other research institutions and industry.

This report summarises the key accomplishments of the seven CERN IT department groups in 2019, highlighting their contribution to the department's mission. The report also highlights the contributions from the following projects and activities: CERN openlab, CERN School of Computing, Computer Security, Data Preservation, Externally Funded Projects, and Worldwide LHC Computing Grid (WLCG).


Groups Reports 2019


COLLABORATIONS, DEVICES & APPLICATIONS (CDA) GROUP

The Collaborations, Devices and Applications (CDA) group provides information services such as video conferencing, webcast and recording, Indico, the CERN Document Server (CDS), document conversion, printing, e-mail, IP telephony, the Invenio Digital Library Framework, Windows environment tools, web services, authentication and authorisation services, e-publishing and software development tools on a range of devices, such as desktops, laptops, tablets and smartphones.

APPLICATIONS AND DEVICES

With 2019 being the final year of Windows 7 support, the big theme for us this year was the CERN-wide migration to Windows 10. For the first time in the history of Windows at CERN, existing installations were automatically migrated to the newer OS version. The migration took place department by department, preceded by dedicated information sessions.

In parallel, the migration of user home folders from DFS to CERNBox advanced, in coordination with the user community. Overall, 1,500 user home folders were migrated.

Starting in March 2019, we transitioned to the new Microsoft licence model, with most applications licensed per named user rather than for the entire campus. The transition was completed for Microsoft Project and Visio, with alternatives well adopted by the user community and old installations cleaned up. The transition for the Microsoft Office suite is in progress: the installation base was reduced and work on alternative productivity suites is advancing.

The development of the multi-platform CERN Appstore advanced as well: for Windows we focused on integration with the third-party application repository Chocolatey and on metrics gathering. For Android, the CERN Appstore was deployed in production, offering a curated selection of apps.
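For illustration, package availability in a Chocolatey repository can be checked from a script via the choco command-line client; a minimal sketch, assuming the Chocolatey CLI is installed (the package name is just an example):

```python
import subprocess

def list_available(package_query: str):
    """Query a Chocolatey repository for matching packages.

    '--limit-output' makes choco print machine-readable 'name|version' lines.
    """
    result = subprocess.run(
        ["choco", "search", package_query, "--limit-output"],
        capture_output=True, text=True, check=True,
    )
    return [tuple(line.split("|", 1))
            for line in result.stdout.splitlines() if "|" in line]

if __name__ == "__main__":
    for name, version in list_available("git"):
        print(name, version)
```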

On the infrastructure side, it was the final year of Windows Server 2008, with an intensive migration effort to newer Windows Server versions – or Linux. A pilot Linux Terminal Server cluster was handed over to the BE Department. For infrastructure management, two new systems were developed in house and deployed in production:

· Winventory: a microservice-based system for gathering information about Windows configurations across CERN and for gaining insight into usage patterns of selected applications (a minimal sketch of this pattern follows the list).

· Chopin: an infrastructure management system created to decrease dependence on expensive licences while improving monitoring and hardware incident follow-up on Windows servers.
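As a rough illustration of the collection-microservice pattern mentioned for Winventory (the actual Winventory interfaces are not described here, so all routes and field names below are hypothetical), a minimal Flask-based service might look like this:

```python
# Hypothetical sketch of an inventory-collection microservice; not the real
# Winventory API.
from flask import Flask, jsonify, request

app = Flask(__name__)
inventory = {}  # in-memory store; a real service would use a database

@app.route("/report", methods=["POST"])
def report():
    """Receive a configuration report pushed by an agent on a Windows host."""
    payload = request.get_json(force=True)
    inventory[payload["hostname"]] = payload  # e.g. OS build, installed apps
    return jsonify(status="ok")

@app.route("/usage/<application>")
def usage(application):
    """Aggregate how many reported hosts have a given application installed."""
    count = sum(application in h.get("apps", []) for h in inventory.values())
    return jsonify(application=application, hosts=count)

if __name__ == "__main__":
    app.run(port=8080)
```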

On the engineering side, upon agreement with the IT-CM group, responsibility for all engineering licence servers was placed in IT-CDA-AD; the licence server infrastructure was renewed and installations were consolidated. We actively assisted the EN department in the PLM tendering process by installing proof-of-concept configurations for the PLM systems considered in the tender. Finally, a new ANSYS contract was prepared and agreed by the Finance Committee.


DIGITAL REPOSITORIES

In 2019, Invenio embarked on a new and exciting challenge. Thanks to the success of Zenodo (based on Invenio v3) and the pressure from funding agencies on institutions to open and publish their data, many organisations started installing unsustainable versions of Zenodo. We contacted some of those institutions and proposed to create InvenioRDM, an easy-to-install, out-of-the-box Zenodo-like repository that would be extensible and sustainable. CERN KT also supported this project, which was kicked off in June 2019 together with 10 partners from around the globe.

Figure 1. Rebranding of inveniosoftware.org - InvenioRDM product page

Zenodo keeps growing year after year. In 2019, Zenodo doubled its number of visitors, reaching a total of 2.3 million visits. New features have been released, such as the new Software Citations Broker, which shows a roll-up of citations per publication. We have also established a Collaboration Agreement with DRYAD, and together we have been awarded a grant by the Alfred P. Sloan Foundation to implement a strategic integration of our services. This collaboration will bring Zenodo closer to journals' workflows.

Figure 2. Zenodo citation box
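Zenodo's records are also accessible programmatically through its public REST API; a minimal sketch of a records query (the search term is just an example):

```python
import requests

# Query Zenodo's public REST API for records matching a search term.
resp = requests.get(
    "https://zenodo.org/api/records",
    params={"q": "title:(higgs)", "size": 5, "sort": "mostrecent"},
    timeout=30,
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["doi"], "-", hit["metadata"]["title"])
```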

In CDS, a lot of development effort has been invested in creating a new ILS (Integrated Library System) that will handle and manage the CERN library loan service. This implementation has been done in collaboration with RERO and released as Free Open Source Software, under the InvenioILS brand. In addition, improvements have been made to our CDS Videos infrastructure and all our CDS services have been adapted to become OC11-compliant.

During this year, the CERN Open Data Portal grew to host more than two petabytes of LHC data. It expanded its frontiers to host datasets for machine learning in order to address the needs of the wider data science community. In addition, several new theoretical research papers using CMS open data were published, validating the importance of open data and open science. In parallel, the REANA Reproducible Analysis platform became more flexible by supporting new compute backends (HTCondor and Slurm). The preservation of computational workflows for future analysis reproducibility has been mandated by the ATLAS SUSY and Exotics groups, and interest in REANA from other communities has been increasing (e.g. New York University, Notre Dame University and the National Centre for Supercomputing Applications, Illinois, for running machine-learning pipelines).
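For context, a REANA analysis is described by a reana.yaml specification; the sketch below builds an equivalent structure as a Python dictionary, with one step dispatched to HTCondor. The exact keys, notably compute_backend and its htcondorcern value, follow our reading of the REANA documentation of the time and should be treated as assumptions to check against your REANA version:

```python
import yaml  # pip install pyyaml

# Sketch of a reana.yaml serial workflow that sends one step to HTCondor.
spec = {
    "version": "0.6.0",
    "inputs": {"files": ["code/fit.py"], "parameters": {"events": 1000}},
    "workflow": {
        "type": "serial",
        "specification": {
            "steps": [
                {
                    "environment": "python:3.8",
                    "compute_backend": "htcondorcern",  # assumption: backend key
                    "commands": ["python code/fit.py --events ${events}"],
                }
            ]
        },
    },
}

with open("reana.yaml", "w") as f:
    yaml.safe_dump(spec, f, sort_keys=False)
```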

CERN has never been this close to having all of its historical multimedia analogue content (videos, slides, negatives, audio) fully digitised and safely preserved. The digitisation project is near completion for all known analogue content and several gems have already been published online (e.g. videos from the first WWW conference, WWW94). For the small percentage of content that cannot be digitised, art continues to enlighten a bigger audience: VolMeur has been exhibited in New York at the MFTA gallery and also, very successfully, during the CERN Open Days.

INTEGRATED COLLABORATION

A new email service based on the Kopano open-source software has been designed and implemented as a successful inter-group collaboration effort, and the migration of users has started.

A new single sign-on service based on the Keycloak open-source framework has been set up and tested, and the first services have migrated to it. A complete authorisation service has been developed, with dedicated users, groups and applications portals. Tests have been carried out for a new identity management system and resource directory service based on the FreeIPA open-source framework.
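For illustration, services behind such a single sign-on typically obtain tokens through standard OpenID Connect flows; a minimal client-credentials sketch against a Keycloak token endpoint (the host, realm and client names are placeholders, not CERN's actual configuration):

```python
import requests

# Standard OpenID Connect client-credentials flow against a Keycloak realm's
# token endpoint. All names below are placeholders.
KEYCLOAK = "https://auth.example.cern.ch/auth"
REALM = "cern"

resp = requests.post(
    f"{KEYCLOAK}/realms/{REALM}/protocol/openid-connect/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "my-service",
        "client_secret": "...",
    },
    timeout=30,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
# The bearer token is then sent on API calls:
# requests.get(url, headers={"Authorization": f"Bearer {access_token}"})
```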

Site-access features have been improved in Indico, allowing the registration of visitors' license plates. New developments have started for the improvement of abstract and paper review workflows in partnership with the JACoW1 collaboration, as well as for a new event scheduling application (Newdle). The Burotel interface has been completely revamped, allowing the LHC experiments and the BE and EP departments to streamline the booking of desks for users on short-term stays.

New open-source telephony clients have been developed on top of the open-source telephony infrastructure put in place by the CS group, and migration has started for pilot users. Some 250 displays were rented and installed for the CERN Open Days visit points. A new monitoring framework based on open-source components has been designed and partially deployed to improve the preventive maintenance of the CERN audio-visual infrastructure. Fourteen meeting rooms have been refurbished. New markdown-based documentation web sites have been conceived and promoted, and a study has started to improve the accessibility of multimedia documentation through automated transcription. Finally, the e-fax service was closed on 1 November 2019.

1 http://www.jacow.org/

WEB FRAMEWORKS

The Web Frameworks section continued the work of rebasing existing and new services on the common Platform-as-a-Service solution based on OpenShift, which already hosts more than 1,000 applications. In order to provide better performance and avoid the single points of failure of the previously used NFS filers, CephFS support was added for persistent storage. In addition, support for alternate domain names such as *.docs.cern.ch and custom domains was added. In October, a two-day workshop, "Kubernetes as a Container Application Platform", was organised and attracted 150 attendees.
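To illustrate how an application hosted on the platform might request CephFS-backed persistent storage, here is a sketch using the standard Kubernetes PersistentVolumeClaim API via the official Python client; the namespace and the 'cephfs' storage class name are assumptions, as the actual class name depends on the cluster configuration:

```python
from kubernetes import client, config

# Request CephFS-backed shared storage through a standard Kubernetes PVC.
config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],   # CephFS allows shared read-write mounts
        storage_class_name="cephfs",      # assumption: cluster-specific name
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="my-project", body=pvc
)
```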

The JIRA issue-tracking system was moved from VMs to a container-based solution, and a forum solution based on Discourse was deployed, now operating 31 instances and serving 24,000 users. A new user and service documentation system based on MkDocs was developed, which facilitates the set-up of markdown-based static documentation sites.

The version control system GitLab was promoted further beyond code hosting and collaboration features, and GitLab's Continuous Integration/Continuous Deployment functionality attracted more and more developer and DevOps teams across CERN departments and experiments, who successfully put it at the heart of their workflows.

The Authoring service based on Overleaf saw steady usage growth throughout the year and reached more than 4,300 total users, with 700 daily active users working collaboratively on more than 1,000 daily active projects. An Overleaf-CERN workshop, "Overleaf at CERN: Supporting thousands of research collaborations", was organised in July and an Overleaf-CERN case study was published.

Posters:

· S. Bukowiec, P. Gomulak, Winventory: microservices architecture case study, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP) in Adelaide, Australia, 4-8 November 2019, https://indico.cern.ch/event/773049/contributions/3473308/

· M. Fiascaris, M. Kwiatek, E. Eastwood-Barzdo, A culture shift: transforming learning at CERN, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP) in Adelaide, Australia, 4-8 November 2019, https://indico.cern.ch/event/773049/contributions/3483372/

· C. Bigarella/CERN, A. Ioannidis/CERN, L.H. Nielsen/CERN, J.B. Gonzalez Lopez/CERN, Usage Statistics DO Count, Open Repositories 2019 (OR2019), Hamburg, Germany, 10/06/2019, https://doi.org/10.5281/zenodo.3553943

· R. Maciulaitis/CERN, P. Bremer/ND, S. Hampton/ND, M. Hildreth/ND, K.P. Hurtado Anampa/ND, I. Johnson/ND, C. Kankel/ND, J. Okraska/CERN, D. Rodriguez/CERN, T. Šimko/CERN, Support for HTCondor High-Throughput Computing Workflows in the REANA Reusable Analysis Platform, IEEE eScience 2019, San Diego, USA, 24-27/9/2019, http://cds.cern.ch/record/2695179


· S. Wunsch/CERN, T. Šimko/CERN, L. Serkin/CERN, Overview of the CERN Open Data portal, Universal Science, Adelaide, Australia, 3 November 2019, http://cds.cern.ch/record/2698277

· D. Rodriguez/CERN, J. Okraska/CERN, R. Maciulaitis/CERN, T. Šimko/CERN, Hybrid analysis pipelines in the REANA reproducible analysis platform, 24th International Conference on Computing in High-Energy and Nuclear Physics (CHEP), Adelaide, Australia, 4-8/11/2019, https://indico.cern.ch/event/773049/contributions/3476160/

Presentations:

· G. Lo Presti, M. Kwiatek, CERNBox as the CERN Apps Hub, CS3 Workshop 2019 in Rome, 28-30 January 2019, https://indico.cern.ch/event/726040/contributions/3260357/

· M. Kwiatek, S. Firoozbakht, Endpoint user device and app provisioning models, HEPiX Spring 2019 at San Diego Supercomputing Center (SDSC) / University of California in San Diego, 25-29 March 2019, https://indico.cern.ch/event/765497/contributions/3348858/

· V. Bippus, A. Smyrnakis, G. Lo Presti, L. Mascetti, H. Gonzalez Labrador, M. Kwiatek, J. Moscicki, S. Bukowiec, Unified Home folders migration or chronicles of a complex data migration, HEPiX Autumn 2019 at Nikhef, Amsterdam, The Netherlands, 14-18 October 2019, https://indico.cern.ch/event/810635/contributions/3592918/

· V. Bippus, S. Dellabella, Zero-touch Windows 10 migration: dream or reality?, HEPiX Autumn 2019 at Nikhef, Amsterdam, The Netherlands, 14-18 October 2019, https://indico.cern.ch/event/810635/contributions/3592919/

· H. Gonzalez Labrador, G. Lo Presti, E. Puentes, P. Seweryn, S. Dellabella, J. Mościcki, M. Kwiatek, Evolution of the CERNBox platform to support collaborative applications and MALT, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP) in Adelaide, Australia, 4-8 November 2019, https://indico.cern.ch/event/773049/contributions/3474842/

· M. Alandes Pradillo, Migrating Engineering Windows HPC applications to Linux HTCondor and SLURM Clusters, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP) in Adelaide, Australia, 4-8 November 2019, https://indico.cern.ch/event/773049/contributions/3474842/

· M. Alandes Pradillo, Experience finding MS Project Alternatives at CERN, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP) in Adelaide, Australia, 4-8 November 2019, https://indico.cern.ch/event/773049/contributions/3474842/

· T. Bato, S. Bukowiec, M. Kwiatek, CERN AppStore: Development of a multi-platform application management system for BYOD devices at CERN, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP) in Adelaide, Australia, 4-8 November 2019, https://indico.cern.ch/event/773049/contributions/3473314/

· M. Pacuszka, S. Bukowiec, G. Metral, Chopin Management System: improving Windows infrastructure monitoring and management, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP) in Adelaide, Australia, 4-8 November 2019, https://indico.cern.ch/event/773049/contributions/3473318/

· P. Grzywaczewski, Migration of CERN e-mail system to open source. Challenges and opportunities, CHEP, Adelaide, Australia, 04/11/2019, https://indico.cern.ch/event/773049/contributions/3474835/attachments/1937470/3213837/03.11_-_Migration_of_CERN_e-mail_system_to_open_source.pdf

· H. Short, Federated Identity Management Overview, ARCHIVER Open Market Consultation, Stanstead, 23/05/2019, https://indico.cern.ch/event/821976/


· H. Short, Federated Identity Management, DEI Conference, Zagreb, 02/04/2019, https://dei.srce.hr/sites/default/files/2019-04/Short-Srce-DEI-2019.pdf

· H. Short, FIM4R WLCG and CERN Community Update, FIM4R, Fermilab, 12/09/2019, https://indico.cern.ch/event/834658/

· A. Ceccanti, H. Short et al., WLCG Authorization; from X.509 to Tokens, CHEP, Adelaide, 05/11/2019, https://indico.cern.ch/event/773049/contributions/3473383/

· P. Tedesco, A. Aguado Corman, D. Fernandez Rodriguez, M. Georgiou, J. Rische, C. Schuszter, H. Short, CERN's Identity and Access Management, a journey to Open Source, CHEP, Adelaide, 07/11/2019, https://indico.cern.ch/event/773049/contributions/3473385/

· A. Moennich, useFlask() - or how to use a React frontend for your Flask app, EuroPython 2019, Basel, Switzerland, 12/07/2019, https://ep2019.europython.eu/talks/45EWZZz-useflask-or-how-to-use-a-react-frontend-for-your-flask-app/; slides: https://ep2019.europython.eu/media/conference/slides/45EWZZz-useflask-or-how-to-use-a-react-frontend-for-your-flask-app.pdf

· R. Gaspar, T. Soulie, eXtreme monitoring: CERN video conference system and audio-visual IoT device infrastructure, CHEP 2019, Adelaide, Australia, 04/11/2019, https://indico.cern.ch/event/773049/contributions/3474843/

· G. Tenaglia, FOSS @ CERN, CH-Open Networking Dinner, Zurich, Switzerland, 29/11/2019, https://codimd.web.cern.ch/p/SkvCjD5nH#/

· T. Baron, R. Fernandez Sanchez, R. Sierra, J. Florencio, F. Valentin Vinagrero, CERN Fixed Telephony Service Development, HEPiX Autumn 2019, Amsterdam, Netherlands, 14/10/2019, https://indico.cern.ch/event/810635/contributions/3592960/

· T. Baron, R. Candido, Challenges and opportunities when migrating CERN e-mail system to open source, HEPiX Autumn 2019, Amsterdam, Netherlands, 14/10/2019, https://indico.cern.ch/event/810635/contributions/3592963/

· JY. Le Meur/CERN, Associer Invenio avec Archivematica pour une archive numérique au CERN, Memoriav Workshop "Comment sécuriser durablement les données numériques de l'audiovisuel?", Nouveau Musée, Bienne, Switzerland, 19/3/2019, https://indico.cern.ch/event/806295/contributions/3355821/attachments/1815429/2966857/2019-03-Memoriav-JYLM.pdf

· JY. Le Meur/CERN, M. Volpi/CERN, The VolMeur Collection, presented at Contemporary Reuse yearly exhibition, MFTA, New York, US, 18/4/2019

· JY. Le Meur/CERN, CERN's Digital Memory: When Patrimony Data meets Scientific Data, Archiving 2019: Digitization Preservation and Access, keynote speech, Lisbon, Portugal, 17/5/2019, https://cds.cern.ch/record/2675051

· JY. Le Meur/CERN, J. van Kemenade/CERN, Benchmarking Archivematica for CERN scale, Archivematica Camp, Geneva, Switzerland, 23/10/2019, https://doi.org/10.5281/zenodo.3522398

· JY. Le Meur/CERN, The VolMeur Collection, Geneva Photographic Society, Geneva, Switzerland, 2/9/2019, https://volmeur.web.cern.ch/SGP2019

· K. Przerwa/CERN, CDS Videos - The new platform for CERN videos, Open Repositories 2019, University of Hamburg, Hamburg, Germany, 10-13/6/2019, https://cds.cern.ch/record/2635453?ln=en

· L.H. Nielsen/CERN, InvenioRDM, Northwestern University, Feinberg School of Medicine, Chicago, United States, 13/8/2019, https://doi.org/10.5281/zenodo.3585559


· L.H. Nielsen/CERN, Tracking citations to research software via PIDs, Persistent Identifiers in Research – Celebrating 10 Years of DOI Desk, ETH Zurich, Zurich, Switzerland, 13/9/2019, https://doi.org/10.5281/zenodo.3433149

· L.H. Nielsen/CERN, Zenodo: FAIR data in a generic data repository, Linking Open Science in Austria, Vienna, Austria, 24/4/2019, https://doi.org/10.5281/zenodo.2650088

· D. Wilcox/DuraSpace, M. Anez/Islandora Foundation, P. Becker/The Library Code, M. Bussey/Data Curation Experts, W. Fyson/University of Southampton, L.H. Nielsen/CERN, R. Ruggaber/University of Virginia, Revenge of the Repository Rodeo, Open Repositories 2019 (OR2019), Hamburg, Germany, 10/6/2019, https://doi.org/10.5281/zenodo.3554147

· A. Bollini/4Science, L.H. Nielsen/CERN, E. Tripp/DuraSpace, P. Walk/Antleaf, K. Shearer/COAR & E. Rodrigues/University of Minho, Adoption of NGR technologies - status update, Open Repositories 2019 (OR2019), Hamburg, Germany, 10/6/2019, https://doi.org/10.5281/zenodo.3554330

· H. Cousijn/DataCite, M. Fenner/DataCite, L.H. Nielsen/CERN, K. Garza/DataCite, D. Lowenberg/California Digital Library, Measuring data reuse with the COUNTER Code of Practice for Research Data, Open Repositories 2019 (OR2019), Hamburg, Germany, 10/6/2019, https://doi.org/10.5281/zenodo.355400

· J.B. Gonzalez Lopez/CERN, L.H. Nielsen/CERN, Invenio User Group Workshop 2019, Open Repositories 2019 (OR2019), Hamburg, Germany, 10/06/2019, https://doi.org/10.5281/zenodo.3554221

· D. Rodriguez/CERN, S. Dallmeier-Tiessen/CERN, S. Feger/CERN, P. Fokianos/CERN, D. Kousidis/CERN, A. Lavasa/CERN, R. Maciulaitis/CERN, J. Okraska/CERN, T. Šimko/CERN, A. Trzcinska/CERN, I. Tsanaktsidis/CERN, S. van de Sandt/CERN, Beyond repositories: enabling actionable FAIR open data reuse services in particle physics, Open Repositories 2019, Hamburg, Germany, 10-14/6/2019, http://cds.cern.ch/record/2679155

· S. Feger/CERN, S. Dallmeier-Tiessen/CERN, P. Fokianos/CERN, D. Kousidis/CERN, A. Lavasa/CERN, R. Maciulaitis/CERN, J. Okraska/CERN, D. Rodriguez/CERN, T. Šimko/CERN, A. Trzcinska/CERN, I. Tsanaktsidis/CERN, S. van de Sandt/CERN, More than preservation: Creating motivational designs and tailored incentives in research data repositories, Open Repositories 2019, Hamburg, Germany, 10-14/6/2019, http://cds.cern.ch/record/2691945

· M. Hildreth/ND, K.P. Hurtado Anampa/ND, T. Šimko/CERN, P. Brenner/ND, S. Hampton/ND, C. Kankel/ND, I. Johnson/ND, Abstracting container technologies and transfer mechanisms in the Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN) project, 24th International Conference on Computing in High-Energy and Nuclear Physics (CHEP), Adelaide, Australia, 4-8/11/2019, https://indico.cern.ch/event/773049/contributions/3473827

· P. Fokianos/CERN, A. Trzcinska/CERN, A. Lavasa/CERN, D. Rodriguez/CERN, J. Okraska/CERN, R. Maciulaitis/CERN, S. Feger/CERN, S. van de Sandt/CERN, T. Šimko/CERN, CERN analysis preservation and reuse framework: FAIR research data services for LHC experiments, 24th International Conference on Computing in High-Energy and Nuclear Physics (CHEP), Adelaide, Australia, 4-8/11/2019, https://indico.cern.ch/event/773049/contributions/3476165/

· G. De Lellis/INFN, S. Dmitrievsky/JINR, G. Galati/INFN, A. Lavasa/CERN, T. Šimko/CERN, I. Tsanaktsidis/CERN, A. Ustyuzhanin/Yandex, Dataset of tau neutrino interactions recorded by OPERA experiment, 24th International Conference on Computing in High-Energy and Nuclear Physics (CHEP), Adelaide, Australia, 4-8/11/2019, https://indico.cern.ch/event/773049/contributions/3474858/


· M. Hildreth/ND, K.P. Hurtado Anampa/ND, C. Kankel/ND, S. Hampton/ND, P. Brenner/ND, I. Johnson/ND, T. Šimko/CERN, Large-scale HPC deployment of Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN), 24th International Conference on Computing in High-Energy and Nuclear Physics (CHEP), Adelaide, Australia, 4-8/11/2019, https://indico.cern.ch/event/773049/contributions/3474811/

· T. Šimko/CERN, H. Pascoal de Bittencourt/USP, E. Carrera/USFQ, C. Lange/CERN, K. Lassila-Perini/HIP, L. Lloret/UIMP-UC, T. McCauley/ND, J. Okraska/CERN, D. Prelipcean/CERN, M. Savaniakas/CERN, Open data provenance and reproducibility: a case study from publishing CMS open data, 24th International Conference on Computing in High-Energy and Nuclear Physics (CHEP), Adelaide, Australia, 4-8/11/2019, https://indico.cern.ch/event/773049/contributions/3474839/

· T. Šimko/CERN, Open is not enough: fostering reproducible and reusable particle physics research, e-Infrastructure Reflection Group Workshop, Geneva, Switzerland, 20-21/5/2019, http://e-irg.eu/documents/10920/470912/presentation+Tibor+Šimko.pdf

· D. Rodriguez/CERN, CERN Open Data portal: Invenio meets big data, Invenio User Group Workshop 2019, Open Repositories 2019, Hamburg, Germany, 10/6/2019, https://indico.cern.ch/event/818650/contributions/3447485/

· T. Šimko/CERN, CERN Open Data, REANA Reusable Analyses, IRIS-HEP Analysis Systems Workshop, New York, NY, USA, 19-20/6/2019, https://indico.cern.ch/event/822074/contributions/3471466/

· T. Šimko/CERN, Open Data at CERN, BiblioSuisse at CERN, Geneva, Switzerland, 23/9/2019, https://indico.cern.ch/event/848425/

· A. Lossent, CERN IT-CDA-WF, Self-service web hosting made easy with containers and Kubernetes Operators, HEPiX Autumn 2019 at Nikhef, Amsterdam, The Netherlands, 14/10/2019, https://indico.cern.ch/event/810635/contributions/3593265/

· P. Panero, I. Posada Trobo, C. de Oliveira Antunes, A. Wagner, CERN IT-CDA, Citadel Search: Open Source Enterprise Search, 23/10/2019, https://doi.org/10.5281/zenodo.3581157

· A. Wagner, CERN Data Centre Inside Out – Computing Infrastructure at CERN, Lecture at the CERN Spring Campus, Hamburg University of Technology (TUHH), Germany, 17/09/2019

· A. Wagner, 30 Years World Wide Web at CERN, Lecture at the CERN Spring Campus 2019, Hamburg University of Technology (TUHH), Germany, 19/09/2019

Publications:

· M. Altunay, B. Bockelman, A. Ceccanti, L. Cornwall, M. Crawford, D. Crooks, …, H. Short, …, R. Wartel, WLCG Common JWT Profiles (Version 1.0), 25/09/2019, Zenodo, http://doi.org/10.5281/zenodo.3460258

· S. van de Sandt/CERN, L.H. Nielsen/CERN, A. Ioannidis/CERN, A. Muench/AAS, E. Henneken/NASA ADS, A. Accomazzi/NASA ADS, C. Bigarella/CERN, J.B. Gonzalez Lopez/CERN, S. Dallmeier-Tiessen/CERN, Practice meets Principle: Tracking Software and Data Citations to Zenodo DOIs, arXiv preprint, https://arxiv.org/abs/1911.00295

· S. Dallmeier-Tiessen/CERN, T. Šimko/CERN, Open science: A vision for collaborative, reproducible and reusable research, CERN Courier, 11/3/2019, https://cerncourier.com/a/open-science-a-vision-for-collaborative-reproducible-and-reusable-research/


· R. Maciulaitis/CERN, P. Bremer/ND, S. Hampton/ND, M. Hildreth/ND, K.P. Hurtado Anampa/ND, I. Johnson/ND, C. Kankel/ND, J. Okraska/CERN, D. Rodriguez/CERN, T. Šimko/CERN, Support for HTCondor High-Throughput Computing Workflows in the REANA Reusable Analysis Platform, IEEE eScience 2019, San Diego, USA, 24-27/9/2019, http://cds.cern.ch/record/2696223

· P. Panero, I. Posada Trobo, C. de Oliveira Antunes, A. Wagner, CERN IT-CDA, Citadel Search: Open Source Enterprise Search, 23/10/2019, https://doi.org/10.5281/zenodo.3581157

· Overleaf case study: Overleaf at CERN: Supporting Thousands of Research Collaborations, https://www.overleaf.com/blog/overleaf-at-cern-supporting-thousands-of-research-collaborations

· Nature Blog article: Three ways to collaborate on writing, https://www.natureindex.com/news-blog/three-ways-to-collaborate-on-writing

Book Chapters:

· A. Rao/CERN, S. Dallmeier-Tiessen/CERN, K. Lassila-Perini/HIP, T. McCauley/ND, T. Šimko/CERN: "Chapter 8: Early Experience with Open Data from CERN's Large Hadron Collider". In: "Digital Innovation: Harnessing the Value of Open Data (Open Innovation: Bridging Theory and Practice Book 4)", edited by Anne-Laure Mention, World Scientific Publishing Company, July 2019, pp. 227-245. ISBN: 978-981-3271-63-0 (hardcover), ISBN: 978-981-3271-65-4 (ebook). DOI: https://doi.org/10.1142/9789813271647_0008

Reports:

· D. Prelipcean/CERN, Physics Examples for Reproducible Analysis, 20/9/2019, CERN-STUDENTS-Note-2019-217, https://cds.cern.ch/record/2690231

· L. Farias Wanderley/CERN, Continuous integration for containerized scientific workflows, 6/12/2019, https://doi.org/10.5281/zenodo.3565666


COMMUNICATION SYSTEMS (CS) GROUP

The Communication Systems (CS) Group is responsible for networking and telecommunications services for the Laboratory. For networking, we provide a campus network, including full Wi-Fi coverage, for general-purpose connectivity; a technical network to support accelerator operations and critical laboratory infrastructure; and, not least, a high-performance data centre network—including high-bandwidth connections to computing facilities around the world—to support physics computing. On the telephony side, in addition to fixed and mobile telephony services we support a TETRA digital radio service for the Fire and Rescue Service and a LoRaWAN low-power wide-area network to support a small—but growing—number of "connected devices".

As for many groups at CERN, a major focus of our activities during 2019 was to ensure timely progress of Long Shutdown 2 (LS2) related work. Here, in addition to providing the network infrastructure to support equipment being installed by other teams, the priorities for the Group were the upgrade of the technical network infrastructure and the replacement of the radiating cable that enables transmission of mobile telephony and digital radio services in CERN's extensive tunnel network.

For the technical network, the primary goal for 2019 was the replacement of the ageing routers with the newer models that, as reported last year, were identified as being able to meet the need for finer-grained control over access to the critical equipment served by this network. A secondary goal was the installation of the electrical and fibre infrastructure to support an additional router in each starpoint. These additional routers will improve overall network reliability as well as enable us to perform firmware upgrades during LHC operation, thus ensuring that the technical network is as up-to-date and secure as possible. Progress on both activities was smooth: all the replacement routers were deployed by mid-October and we hope to complete installation of the additional routers for the start-up of the injector chain during 2020.

In addition to the planned LS2-related activities, many technical teams asked for the recently deployed Wi-Fi infrastructure to be extended to technical areas. Whilst these requests were gratifying, since they show how much the new service is appreciated and relied upon, incorporating the necessary work into our planning was a challenge. Fortunately, the network deployment team is well used to meeting such challenges and nearly 300 additional access points were installed during the year.

Moving back to planned work, another goal for the group was replacing the radiating cable in the SPS and associated transfer tunnels. This cable transmits mobile telephony and TETRA signals in the underground area, but radiation exposure leads to reduced performance as well as increased fragility and hence risk of catastrophic failure. Replacement is no easy matter: the 3 cm diameter cable is stiff and, as telephony and TETRA communications are unavailable during the work, other activities must be suspended. Fortunately, the replacement work has, with much help from our EN/EL colleagues, advanced well: preparatory works have essentially been completed and cable deployment has started and is on track to be completed by the end of LS2.

In a related development, we have installed infrastructure in one of the LHC arcs to test the feasibility of using the radiating cable there to extend coverage of the LoRaWAN to the underground areas. Introduced just last year, LoRaWAN supports data transfer from low-power sensors that can operate for up to 5 years on battery power. Use of the technology above ground is growing for uses ranging from the critical—immediate detection should radioactive material be placed in ordinary waste bins—to the simple—buttons to automate requests for replacement of printer toner cartridges. The advantages for environmental monitoring in the underground areas are evident and colleagues from EN have developed radiation-hard sensors that will be tested in this LHC arc during 2020.

In terms of telephony, the Group's priorities for 2019 were to migrate to Swisscom's new 5G-capable mobile telephony service, to identify options to provide a seamless cross-border mobile telephony service and to start the migration of users from fixed phones to a softphone alternative. Here, unfortunately, progress was less satisfactory. On the positive side, CERN switched to the new "NatelGo" offering from Swisscom at the end of November. Although there are some indications that users now have a better service in the Pays-de-Gex area, it is too early to assess the impact of this change. Looking at mobile telephone services more generally, a series of meetings with host state representatives and mobile telephony service providers identified two possible technical options for delivering a seamless cross-border service, but the commercial viability of these options is unclear. Establishing a proof-of-concept service is a key goal for 2020 to ensure we have a viable model in mind as we prepare to retender the mobile telephony contract during 2021.

Now that we have switched to the more data-oriented NatelGo service, use of a softphone client rather than native mobile telephony voice services could also be an option to improve cross-border communications. Unfortunately, after licensing questions precluded adoption of Microsoft's Skype4Business solution during 2018, the effort required to identify non-proprietary email and authentication service options delayed development of softphone clients during 2019. To remedy this situation, all telephony effort has been gathered within the Communication Systems Group for 2020 and delivery of softphone clients, particularly for mobile phones, will be the key priority for the group in the coming year.

Turning to R&D efforts, there were two main areas during 2019: investigations of ways to optimise the wide-area transport of experimental data and studies of options to use Fibre-to-the-Office (FTTO) technology to improve campus network services.

For the first, following the successful proof-of-concept reported last year, Project NOTED, for Network Optimised Transfer of Experimental Data, made excellent progress. This project aims to enable dynamic reconfiguration of network infrastructure to allow the transfer of data across multiple paths in parallel, exploiting backup routes to deliver additional capacity. During the year a "transfer broker" was developed to interface to both Rucio, the LHC experiments' Scientific Data Management System, and the FTS file transfer service in order to identify ongoing large-scale data transfers. This led to semi-automated reconfiguration of CERN's external networking infrastructure during tests in October between CERN and Amsterdam and then in December between CERN, Amsterdam and Karlsruhe. The figure below shows how additional capacity was delivered, and exploited, during the December tests.


Demonstration of the impact of network load balancing during data transfer to Nikhef and KIT. The impact is most clearly seen at the end of the "on" period—data transfer is still underway but the LHCONE route can no longer be used, so traffic reverts to the primary LHCOPN path. Network capacity is not fully utilised due to a bottleneck at the storage server layer.
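To convey the broker idea only (this is not the NOTED code; every endpoint, field name and the reconfiguration hook below are hypothetical), a transfer broker can be pictured as a loop that watches an FTS-like service for sustained large flows and toggles an alternate path with some hysteresis:

```python
import time
import requests

# Hypothetical illustration of a "transfer broker" loop; not the NOTED
# implementation. URL, fields and the reconfiguration hook are placeholders.
FTS_MONITORING = "https://fts.example.cern.ch/api/transfers?state=ACTIVE"
THRESHOLD_GBPS = 20

def active_rate_gbps(dest_site: str) -> float:
    """Sum the throughput of ongoing transfers towards one destination."""
    transfers = requests.get(FTS_MONITORING, timeout=30).json()
    return sum(t["throughput_gbps"] for t in transfers if t["dest"] == dest_site)

def set_alternate_path(dest_site: str, enabled: bool) -> None:
    """Placeholder for the network reconfiguration, e.g. announcing routes
    so that traffic can also use the backup LHCONE path."""
    print(f"alternate path to {dest_site}: {'on' if enabled else 'off'}")

alternate_on = False
while True:
    rate = active_rate_gbps("NL-T1")
    if rate > THRESHOLD_GBPS and not alternate_on:
        set_alternate_path("NL-T1", True)
        alternate_on = True
    elif rate < THRESHOLD_GBPS / 2 and alternate_on:  # hysteresis band
        set_alternate_path("NL-T1", False)
        alternate_on = False
    time.sleep(60)
```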

On the surface, the FTTO technology evaluation was less successful. Our hope had been that we could deploy FTTO equipment as part of the East Area renovation project in order to move network equipment closer to the end users, so reducing their reliance on our services for any local reconfigurations. Unfortunately, our tests brought to light technical limitations with the technology and showed that extensive effort would be required to integrate FTTO devices into our management infrastructure. Nevertheless, the evaluation effort led to a reflection on how our campus network design could be re-architected to meet the same goals using traditional network switches and routers. An initial proof-of-concept will be deployed in the renovated East Area early in 2020. This will then lead to a plan for the complete refurbishment of the general campus network infrastructure, some of which dates back more than 10 years, once the modernisation of the technical network has been completed during LS2.

Presentations:

· Q. Barrand/CERN, CERN DNS and DHCP service improvement plans, HEPiX Spring 2019, SDSC, San Diego, USA, 26/03/2019, https://indico.cern.ch/event/765497/contributions/3348833/


· C. Busse-Grawitz/CERN & ETH Zurich, Notes on NOTED, LHCOPN-LHCONE meeting #42, Umeå University, Umeå, Sweden, 05/06/2019, https://indico.cern.ch/event/772031/contributions/3428931/

· C. Busse-Grawitz/CERN & ETH Zurich, Notes on NOTED, HEPiX, Nikhef, Amsterdam, Netherlands, 16/10/2019, presentation: https://indico.cern.ch/event/810635/contributions/3592922/; recording: https://www.youtube.com/watch?v=MXWhfr65pV4&list=PL2tOT1Z8xPRyO6uLfeQGg3miazbRmOwp9&index=5&t=20027s

· C. Busse-Grawitz/CERN & ETH Zurich, T. Cass/CERN, E. Martelli/CERN et al., The NOTED software tool-set enables improved exploitation of WAN bandwidth for Rucio data transfers via FTS, CHEP 2019, Adelaide, Australia, 05/11/2019, https://indico.cern.ch/event/773049/contributions/3473789/

· T. Cass/CERN, LHC Computing, ATCF5, TIFR, Mumbai, India, 24/10/2019, https://indico.cern.ch/event/817781

· V. Ducret/CERN, CERN Computer Center Network evolution, HEPiX Autumn 2019, Nikhef, Amsterdam, Netherlands, 16/10/19, presentation: https://indico.cern.ch/event/810635/contributions/3592876; recording: https://www.youtube.com/watch?v=MXWhfr65pV4&list=PL2tOT1Z8xPRyO6uLfeQGg3miazbRmOwp9&index=6&t=6h20m34s

· E. Martelli/CERN, Technology Watch on networking, HSF-OSG-WLCG workshop, Newport News, US, 19/03/2019, https://indico.cern.ch/event/759388/contributions/3326344

· E. Martelli/CERN, LHCOPN update, LHCOPN/ONE meeting, Umeå, Sweden, 04/06/2019, https://indico.cern.ch/event/772031/contributions/3357233

· E. Martelli/CERN, CERN Tier-0 update, LHCOPN/ONE meeting, Umeå, Sweden, 04/06/2019, https://indico.cern.ch/event/772031/contributions/3357239

· E. Martelli/CERN, The LHCONE looking glass, LHCOPN/ONE meeting, Umeå, Sweden, 04/06/2019, https://indico.cern.ch/event/772031/contributions/3357234

· E. Martelli/CERN, LHCONE-LHCOPN Workshop report, GDB, CERN, Geneva, Switzerland, 12/06/2019, https://indico.cern.ch/event/739879/contributions/3448310

· E. Martelli/CERN, WLCG network activities, ESCAPE workshop, Amsterdam (remote presentation from CERN), Netherlands, 01/07/2019, https://indico.in2p3.fr/event/19214/contributions/73468

· E. Martelli/CERN, Introduction of multiONE, GDB, FNAL, Batavia, US, 11/09/2019, https://indico.cern.ch/event/739882/contributions/3520004

· E. Martelli/CERN, LHCOPN and LHCONE update, ATCF5, TIFR, Mumbai, India, 25/10/2019, https://indico.cern.ch/event/817781/contributions/3606409

· E. Martelli/CERN, WLCG DOMA overview, ATCF5, TIFR, Mumbai, India, 25/10/2019, https://indico.cern.ch/event/817781/contributions/3606677

· E. Martelli/CERN, WLCG network activities update, DOMA meeting, CERN, Geneva, Switzerland, 27/11/2019, https://indico.cern.ch/event/866244/contributions/3650023/attachments/1952343/3241530/20191127-network-update-for-DOMA.pdf

· D. Pomponi/CERN, LHCb containers and DWDM—Network overview, HEPiX Autumn 2019, Nikhef, Amsterdam, Netherlands, 16/10/19, presentation: https://indico.cern.ch/event/810635/contributions/3593358/; recording: https://youtu.be/MXWhfr65pV4?t=25022


· R. Sierra/CERN, Readying CERN for the Connected Device Era, CHEP 2019, Adelaide, Australia, 07/11/2019, https://indico.cern.ch/event/773049/contributions/3473793/

· R. Sierra/CERN, F. Valentin Vinagrero/CERN et al., CERN Fixed Telephony Service Development, HEPiX Autumn 2019, Nikhef, Amsterdam, Netherlands, 14/10/19, https://indico.cern.ch/event/810635/contributions/3592960/

Publications:

· E. Martelli/CERN, M. Collignon/CERN, T. Cass/CERN et al., LHCb High Level Trigger in a remote IT datacentre, CHEP 2018, Sofia, Bulgaria, 17/09/2019, https://www.epj-conferences.org/articles/epjconf/abs/2019/19/epjconf_chep2018_08007/epjconf_chep2018_08007.html

Reports:

· C. Kishimoto Bisbe/CERN & T. Cass/CERN, CERN Campus Network upgrade, CERN IT Note, CERN, 15/02/2019, http://cds.cern.ch/record/2665902

· C. Kishimoto Bisbe/CERN & T. Cass/CERN, CERN Technical Network upgrade, CERN IT Note, CERN, 19/02/2019, http://cds.cern.ch/record/2665715

· E. Martelli/CERN & T. Cass/CERN, External Network evolution for LHC Run 3 and Run 4, CERN IT Note, CERN, 21/02/2019, https://cds.cern.ch/record/2661746


COMPUTE & MONITORING (CM) GROUP

The Compute and Monitoring group is responsible for the service delivery and evolution of compute, monitoring and infrastructure services for the CERN Tier-0 Data Centre and for the Worldwide LHC Computing Grid (WLCG). In order to deliver these services, we work closely with other grid sites and open source communities to jointly develop and enhance the tools for the end users and service managers.

LINUX

With over 50,000 servers running one of CERN's Linux distributions, careful planning of new releases and retirement of older ones is needed. CERN CentOS 7 became the default distribution for physics during 2019, with some remaining applications running Scientific Linux 6 being migrated before its end of life in December 2020.

A further security vulnerability was discovered during 2019 which required prompt patching and rebooting to address.

A CentOS 8 test environment was made available with the aim of a production-quality release in the first half of 2020. This provides a significant upgrade of the Linux kernel and many improvements to the tools, such as the general availability of Python 3.

PUPPET CONFIGURATION MANAGEMENT

The Puppet configuration management system allows for consistent management of servers, backed by quality assurance of changes through automatic testing. With over 34,000 servers now managed by Puppet, handling 35 million configuration values, the move to the latest version (Puppet 5) was achieved smoothly. A user survey on how to manage secrets, such as passwords, in the compute centre will lead to further work in this area in 2020. The system that monitors the data centre power and notifies services to prepare to shut down had an unexpected test during a site-wide power cut in October. Fortunately, the power was restored before the shutdown procedures needed to be executed.
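As a small illustration of working with such a Puppet-managed fleet, configuration state can be read back from PuppetDB's query API; a sketch (the host is a placeholder; the endpoint and AST-style query syntax follow the public PuppetDB v4 documentation):

```python
import requests

# Query PuppetDB (v4 API) for nodes whose last Puppet run applied cleanly.
PUPPETDB = "http://puppetdb.example.cern.ch:8080"

resp = requests.get(
    f"{PUPPETDB}/pdb/query/v4/nodes",
    params={"query": '["=", "latest_report_status", "unchanged"]'},
    timeout=30,
)
resp.raise_for_status()
nodes = resp.json()
print(f"{len(nodes)} nodes unchanged on their last Puppet run")
```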


PUBLIC LOGIN SERVICES

The lxplus service provides a large-scale, efficient, general-purpose Linux service for CERN users. In line with the experiments' migration to CentOS 7, the lxplus service was migrated from Scientific Linux 6. Automatic notifications were enabled for programs aborting due to code problems or due to excessive usage of the interactive compute facilities where batch would be more appropriate.

CLOUD SERVICES

Over 90% of the compute resources in the data centre are provided through a private cloud based on OpenStack. With the growth of the computing needs of the CERN experiments and services, this now provides over 250,000 cores. Capabilities such as bare-metal management and the integration of Ceph file shares and S3 object stores from the storage group were important new functionalities added. The cloud's containers-as-a-service usage showed a clear move to Kubernetes, with over 500 clusters now in use by the user communities, and significant enhancements in service functionality, with auto-healing and auto-scaling being contributed to the upstream open source project. Kubernetes workload portability was demonstrated with the Higgs boson analysis being tested on the CERN cloud and then reproduced in a few minutes using the CERN open data portal and cloud resources from Google.
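For illustration, resources in such an OpenStack cloud are typically driven through its APIs; a minimal sketch using the official openstacksdk (the cloud, image, flavor and network names are placeholders):

```python
import openstack

# Create a VM through the OpenStack API with the official openstacksdk.
# 'openstack.connect' reads credentials from clouds.yaml or OS_* variables.
conn = openstack.connect(cloud="cern")

server = conn.create_server(
    name="demo-vm",
    image="CC7 - x86_64",   # placeholder image name
    flavor="m2.medium",     # placeholder flavor
    network="internal",     # placeholder network
    wait=True,
)
print(server.name, server.status)
```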

The pre-emptible virtual machine solution, which was developed through a collaboration with Huawei in CERN openlab, has been deployed in pilot mode during 2019.


BATCH

After 20 years of service, the Platform LSF batch system was decommissioned and replaced by HTCondor, an open source batch system from the University of Wisconsin which has become widely adopted across high energy physics and has shown significant scalability improvements compared to LSF.

With the retirement of the Wigner data centre, new resources were made available in the containers at LHC Point 8 to continue to deliver the pledge to the experiments.

Graphics Processing Unit (GPU) resources have now been made available through the batch system for applications which can benefit. These specialised accelerators can provide significant speed-up for some applications and are one of the focus areas to address the resource needs for LHC Run 4.
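For illustration, a GPU job can be submitted through the HTCondor Python bindings, with request_gpus expressing the GPU requirement; the executable and resource values below are placeholders:

```python
import htcondor

# Describe a job requesting one GPU alongside CPU and memory.
sub = htcondor.Submit({
    "executable": "train.sh",     # placeholder job script
    "arguments": "--epochs 10",
    "request_gpus": "1",          # standard HTCondor knob for GPU slots
    "request_cpus": "4",
    "request_memory": "8GB",
    "output": "job.out",
    "error": "job.err",
    "log": "job.log",
})

schedd = htcondor.Schedd()
with schedd.transaction() as txn:  # 2019-era submission API
    cluster_id = sub.queue(txn)
print("submitted cluster", cluster_id)
```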

HIGH PERFORMANCE COMPUTING

While most of the computing done at CERN can be separated into individual batch jobs and then farmed across a large number of servers, some applications require multiple machines to be combined to deliver a single high-performance application. The Health and Safety, Theory and Beams departments are now using this facility. With large memory configurations of up to 1 TB now available, the Engineering community has migrated completely from the previous Windows-based service to Linux, leading to improved compute utilisation.


VOLUNTEER COMPUTING

Volunteer computing has continued to provide significant capacity during 2019 for the experiments and CERN departments, reaching over 600,000 jobs, a new record for the LHC@Home project.

ELASTICSEARCH

Elasticsearch is a search and analytics store used widely to understand complex data stores. There are now over 30 clusters for use cases such as monitoring, security, repository search, INSPIRE HEP document search and log analysis, with billions of documents being stored and indexed per day.
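As a small example of the log-analysis use case, a client can count recent error lines in an index via the standard Elasticsearch Python client (the host and index names, and the field layout, are placeholders):

```python
from elasticsearch import Elasticsearch

# Count log lines with severity 'error' in the last hour.
es = Elasticsearch(["https://es-logs.example.cern.ch:9200"])

resp = es.count(
    index="syslog-*",  # placeholder index pattern
    body={
        "query": {
            "bool": {
                "must": [{"match": {"severity": "error"}}],
                "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
            }
        }
    },
)
print(resp["count"], "matching log lines in the last hour")
```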

MONITORING

The unified monitoring infrastructure (MONIT) collects monitoring metrics and logs from the CERN Data Centres, IT services and the Worldwide LHC Computing Grid (WLCG). From the initial 200 GB/day of monitoring data, the volume has now grown to 3 TB/day, with over 1,500 users. The new infrastructure monitoring is in place using the open source collectd solution, replacing the CERN Lemon tool used since 2003. Alarms can now be raised so that ServiceNow tickets are automatically created when problems occur. Several WLCG critical services dashboards have also been migrated to the new infrastructure.



To help understand service availability and growth, a new set of dashboards showing Key Performance and Service Level Indicators (KPIs/SLIs2) were made available for IT services.

WLCG network and site monitoring continue to evolve, with the latest version of perfSONAR now monitoring over 200 production sites.

2 https://monit-grafana.cern.ch/d/LEtWjfaik/overview-slis?orgId=1&refresh=1m


Presentations:

· R. Rocha,"Kubernetes at CERN: Use Cases, Integration and Challenges", Container StackZurich, 30/01/2019 Slides

· T. Oulevey,"Building RPMs with Gitlab and Koji", CentOS Dojo Brussels 2019, 01/02/2019· Carles Garcia,"Secrets management with Puppet at CERN", Config Management Camp Ghent

2019, 04/02/2019 Slides· T. Bell,"Big data keeping up with big science", The Register evening event, 14/03/2019 Slides

Video· G. McCance, "Building access to cloud and HPC resources", WLCG OSF Workshop,

18/03/2019-22/03/2019 Slides· B. Moreira, "6 years of CERN Cloud - From 0 to 300k cores", HEPiX Spring 2019 Workshop,

25/03/2019-29/03/2019 Slides PPTX· S. Trigazis, "Container use cases and developments at CERN", Open Infra Days UK,

01/04/2019 Slides· S. Trigazis, "Magnum PTG Interview", OpenStack Summit Denver, 29/04/2019-

01/05/2019, Video· S. Trigazis, "Magnum - Project Onboarding", OpenStack Summit Denver, 29/04/2019-

01/05/2019, Slides· S. Trigazis,"Magnum - Project Update", OpenStack Summit Denver, 29/04/2019-

01/05/2019 Slides Video· S. Seetharaman,"What’s new in Nova Cellsv2? ", OpenStack Summit Denver, 29/04/2019-

01/05/2019 Slides· T. Bell,"Container use-cases and developments at the CERN cloud", OpenStack Summit

Denver,29/04/2019-01/05/2019 Slides Video· J. Castro Leon, "Improving resource availability in CERN private Cloud", OpenStack Summit

Denver,29/04/2019-01/05/2019 Slides Video· D. Abad, "The CERN Cloud Infrastructure: 5 years in 15 minutes",Voxxed Days

CERN,01/05/2019 Slides· D. Moreno, "Puppet + Containers = Multi-Tenancy", Voxxed Days CERN,01/05/2019 Slides· G. McCance,"Demonstrators for HEP", BDEC Poznan, 15/05/2019· R. Rocha,"Reperforming the Nobel Prize", Kubecon Barcelona 2019,21/05/2019 Video· T. Bell, "Challenges of the LHC",OpenStack Day CERN, 27/05/2019 Slides Video· L. Magnoni,"A dashboard is worth a thousand words", SRECon Asia, 12/06/2019-

14/06/2019 Slides· T. Bell,"Accelerating Containers at CERN", VHPC '19, 20/06/2019 Slides· G. Massullo, "Status of LHC@Home", BOINC Workshop 2019, 09/07/2019-10/07/2019 Slides· G. Massullo, "BOINC server update", BOINC Workshop 2019, 09/07/2019-10/07/2019 Slides· S. Trigazis, "Managing secrets and charts at CERN", Helm Summit, 12/092019 Video· R. Rocha, "Scientific Workloads on Kubernetes", Google Cloud Summit, 19/09/2019 Slides· B. Jones, "Edge cases at CERN", HTCondor European Workshop, 24/09/2019-

27/09/2019 Slides· L. Alvarez, "Automation & Monitoring in the CERN Condor pool",HTCondor European

Workshop,24/09/2019-27/09/2019 Slides· J. Van Eldik,"Running OpenStack in production at CERN",OpenInfra Day

Milan,02/10/2019 Slides· N. Tsvetkov,"Monitoring with no limits",Sphere.it,07/10/2019-09/10/2019 Slides· D. Moreno,"Puppet Server and Containers: A Multi-tenancy Deployment",Puppetize

PDX,10/10/2019 Slides

26 | P a g eCERN IT Department Groups and Activities Report 2019

· C. Lukken, "Effects of Openstack Watcher Deployment in a Large Scale Cloud", Faculty DigitalMedia & Creative Industry, Amsterdam, 11/10/2019 Thesis

· D. Abad, "CERN Cloud Infrastructure update", HEPiX Autumn 2019 Workshop,14/10/2019-18/10/2019 Slides

· D. Abad, "CERN Linux services status update", HEPiX Autumn 2019 Workshop,14/10/2019-18/10/2019 Slides

· A. Wiebalck, "From Hire to Retire - OpenStack Ironic",OpenStack Summit Shanghai,04/11/2019-06/11/2019 Slides

· S. Seetharaman,"Roller coaster ride in OpenStack",OpenStack Summit Shanghai,04/11/2019-06/11/2019 Slides Video

· B. Moreira, "VMs to Kubernetes", OpenStack Summit Shanghai, 04/11/2019-06/11/2019 Slides Video

· T. Bell, "Open Science and Open Infrastructure", OpenStack Summit Shanghai, 04/11/2019-06/11/2019 Slides Video

· R. Rocha, "Reperforming a Nobel Prize discovery", CHEP 2019 Adelaide, 04/11/2019-08/11/2019 Slides

· M. Babik,"Network Capabilities for the HL-LHC Era", CHEP 2019 Adelaide, 04/11/2019-08/11/2019 Slides

· D. Giordano, "Using HEP experiment workflows for the benchmarking and accounting ofcomputing resources", CHEP 2019 Adelaide, 04/11/2019-08/11/2019 Slides

· P. Llopis, "Large Elasticsearch cluster management, CHEP 2019 Adelaide, 04/11/2019-08/11/2019 Slides

· L. Alvarez, "Managing the CERN Batch System with Kubernetes", CHEP 2019 Adelaide,04/11/2019-08/11/2019 Slides

· P. Andrade, "WLCG Dashboards with unified monitoring", CHEP 2019 Adelaide, 04/11/2019-08/11/2019 Slides

· R. Rocha, "Managing Helm deployments with GitOps at CERN", Kubecon San Diego2019,19/11/2019 Slides Video

· D. Giordano, "Benchmarking", Journées LCG-France, CC-IN2P3, 12/12/2019 Slides


COMPUTING FACILITIES (CF) GROUP

The IT Computing Facilities (CF) group contributes in the following areas: Data Centre (DC) Operations and System Administration, Service Management Support, and Hardware Evaluation and Procurement.

At the end of 2019, CERN finished its excellent seven-year collaboration with the Wigner Data Centre, at which it had been hosting a significant amount of IT equipment providing capacity to the CERN Tier-0 Data Centre for the Worldwide LHC Computing Grid (WLCG). Due to the need to meet the Tier-0 pledges for Run 3, it was necessary to repatriate the vast majority of this equipment. This constituted a major activity and challenge for the CF group, amongst others in the department, during 2019. In total, 2,628 servers and 440 JBODs were repatriated in five shipments, starting in February and continuing to October. An additional shipment was made in December to repatriate equipment to be donated to the LHCb experiment. The equipment remaining at the Wigner Data Centre after this was donated to the ALICE experiment. Another major activity for the CF group was the installation, recommissioning and allocation of the repatriated CPU servers into the two containers at Point 8 that the LHCb experiment had kindly offered to IT to use until the end of Run 3. By the end of the year, one of the two containers was completed and filled, and the second was partially filled with a combination of the repatriated servers as well as some new purchases. The disk storage (JBODs) was largely installed and recommissioned in the Meyrin Data Centre, with about 10 racks of equipment installed in the 2nd Network Hub for disaster recovery reasons.

The figure above shows the variation in power usage of one of the LHCb containers during the various stages of the installation period: short burn-in of 600 servers; 600 servers in OpenStack; 600 nodes in batch; burn-in of 140 servers; 740 nodes in batch; firmware upgrade; burn-in of 444 servers; 1,184 nodes in batch; burn-in of 548 servers; 1,732 nodes in batch.

To further improve the reliability of the Data Centre in Meyrin, as well as to increase its capacity, a campaign was started to replace the active water-cooled doors of racks in the Vault with more modern passive water-cooled doors, as well as to increase the number of racks by 8. By the end of 2019, 44 out of 88 racks had been modified. Additional tasks for the DC Operations Team were to prepare an additional room to support the CIXP clients, to assist in the replacement of ageing access control card readers by a new system, to participate in the upgrading of the 48 V UPS system for the Telecoms rooms, and to investigate a number of commercial and open source Data Centre Infrastructure Management (DCIM) products.



Finally, in addition to the donations of equipment to the LHCb and ALICE experiments, a further donation was made to the An-Najah National University in Palestine.

Donation to the An-Najah National University in Palestine, made by the Director for Computing and Research, Eckhard Elsen.

In terms of procurement, the team conducted two tenders, which were adjudicated at the June Finance Committee. One was for storage (192 JBODs, 55 PB raw for IT) and the other for compute (about 150 quads, 450 kHS06 for IT). In addition, a tender for storage drives only was conducted to upgrade some of the disk arrays that were repatriated from Wigner. This allowed the capacity to be tripled at a much reduced cost compared with purchasing complete new JBODs. A further tender was conducted, based on a double adjudication at the September Finance Committee. Subsequent to this adjudication, six blanket contracts were set up with five different companies for a total of about 15.5 MCHF. As in previous years, the team also assisted BE-CO, ATLAS, ALICE, CMS and LHCb with their procurements, and this activity will continue into the future.

Another activity performed by the CF group was the management of the challenging transition from the previous Operations contract to the new one. This was challenging for two reasons: firstly, the contract working model was modified with respect to the previous contract; secondly, there had been a large change in the contract personnel between the two contracts.

On the development side, in line with the department strategy, the team successfully migrated the complete hardware monitoring from Lemon to collectd. In addition, the centralised IPMI monitoring was significantly enhanced and now processes the data from all of our servers in less than 4 minutes. Finally, a new tool has been developed that allows the usage of all (Linux) hardware resources to be monitored closely. This data will be very valuable for the elaboration of the technical specifications for the next tenders.
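
As an illustration of the kind of plumbing involved, the sketch below shows a minimal collectd read plugin in Python that polls IPMI sensor data via ipmitool and dispatches it as collectd values. The parsing helper and plugin naming are illustrative assumptions, not the production code:

    # Minimal sketch of a collectd read plugin polling IPMI sensors.
    # Runs inside collectd's "python" plugin; ipmitool must be installed.
    import subprocess
    import collectd  # provided by collectd at runtime

    def read_ipmi_temperatures():
        # Parse `ipmitool sensor` output into {sensor_name: degrees_celsius}.
        out = subprocess.check_output(["ipmitool", "sensor"], text=True)
        temps = {}
        for line in out.splitlines():
            fields = [f.strip() for f in line.split("|")]
            if len(fields) >= 3 and fields[2] == "degrees C":
                try:
                    temps[fields[0]] = float(fields[1])
                except ValueError:
                    pass  # sensor present but no numeric reading
        return temps

    def read_callback():
        # Dispatch one gauge value per temperature sensor.
        for sensor, celsius in read_ipmi_temperatures().items():
            val = collectd.Values(type="temperature", plugin="ipmi",
                                  type_instance=sensor)
            val.dispatch(values=[celsius])

    collectd.register_read(read_callback)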

In addition to providing day-to-day Service Management support to the departments and maintaining the ServiceNow platform, the Service Management team performed a number of other activities this year, some of which are highlighted below.

• Continual consolidation of IT 2nd Line Computing Support, including monthly reports on user feedback and new activities.
• Sustained collaboration with SMB to improve the Service Desk delivery for IT tickets.
• Follow-up and reporting on negative feedback on all IT tickets.
• Regular newsletter to all ServiceNow users.
• Continual training of IT 2nd Line.
• Automatic detection and follow-up of Priority 1 IT incidents.
• Analysis and implementation of Record Producers for the new Service Portal.
• Management of the CERN Service Catalogue with regards to the Data Privacy implementation.
• OLAs for various IT contracts and groups.
• Central IT CMDB project.
• Moderation of IT SSB entries.
• Management of the S182 Computing Support contract.
• Management of the S220 Operations contract.
• Participation in the IN2P3 workshop.
• Attendance and contribution at the ServiceNow User Group Suisse Romande.
• Contribution to the CERN Open Days.

Finally, the group is working on a tender for a turnkey data centre to be built on the Prévessin site close to the CERN Control Centre. This tender is expected to be completed with adjudication at the December Finance Committee, and construction to start in 2021.

Presentations:

· F. Trevisani/CERN, "Gestion du changement", IN2P3 Collaboration Workshop, 26 June, CERN, Switzerland, https://indico.in2p3.fr/event/19054/timetable/#20190626.detailed

· B. Clement, "Control room de l'IT", IN2P3 Collaboration Workshop, 26 June, CERN, Switzerland, https://indico.in2p3.fr/event/19054/timetable/#20190626.detailed

· W. Salter/CERN, "How to provide the resources for RUN3 and RUN4", HEPiX Autumn, Amsterdam, Holland, 14-18 October, https://indico.cern.ch/event/810635/timetable/#20191016.detailed

· L. Atzori/CERN, "Open Compute Project 2019 Global Summit Report", HEPiX Autumn, Amsterdam, Holland, 14-18 October, https://indico.cern.ch/event/810635/timetable/#20191016.detailed

· E. Bonfillou/CERN, "Collaboration with industry in the future upgrades of computing infrastructure and services at CERN", Industrial Opportunities Days, Naples, Italy, 7 June 2019.


DATABASES (DB) GROUP

The Database Services Group is responsible for current and future databases and their platform for accelerators, experiments and administrative services. It is also in charge of hosting Java web applications as well as critical engineering and administration applications at CERN scale. Moreover, the group provides scale-out analytics services including Hadoop, Spark and Kafka for CERN and the LHC experiments.

LONG SHUTDOWN 2 AND GENERAL DEVELOPMENTS

Working with many teams all around CERN and the experiments to prepare for Run 3 was the common theme of the IT database services in 2019. For this, the services had to purchase, or have purchased through IT procurement, the necessary hardware, and then deploy it or prepare its deployment together with the corresponding software. Additional work was dedicated to the renewal of the database storage procurement contract framework, enabling storage hardware renewals for the duration of Run 3. As for all services at CERN, 2019 also saw a significant amount of work related to data privacy in the IT database services group, with all Records of Processing Operations documents ready at the end of 2019.

DATABASE ON DEMAND (DBOD)

The DBoD service enjoys remarkable acceptance by the CERN community, supporting many applications and critical services. During this year, we reached more than 800 instances deployed across all departments, and added features and enhancements based on continuous feedback and needs from the CERN community. Key elements of this year include the introduction, validation and integration of new versions as requested by the user community: InfluxDB 1.7.7, PostgreSQL 9.6.14 and 11.4, MySQL 5.7.26 and 8.0.16. Another highlight has been the development of a high-availability solution for the pilot deployment of the new mail project, Kopano. A lot of automation work has been added to the project, such as the introduction of a revised resource portal. Additionally, a lot of work has been done to migrate to new hardware.

ORACLE DATABASES

The focus during 2019 has been on migrations to Oracle database versions 18c and 19c (depending on the testing and certification for commercial applications). We have introduced two new deployment environments, and very intensive migration and consolidation work has been done for the hardware refresh. In order to provide a suitable disaster-recovery framework, the standby databases have been migrated to the second Network Hub; the standby databases are now managed using the Data Guard broker, which enables further automation and simplification. The Oracle Forms solution has been upgraded to the latest release (version 12).

REPORTING (PENTAHO) SERVICE AND APPLICATIONS

In 2018, following a review of the situation and the CERN-wide needs for a central service, the IT department had launched the creation of a central reporting service based on Pentaho. During 2019 the service was officially launched: with the input of the users, the deployment possibilities were studied, and a strategy and a production model were defined. The first production deployments were made and the contract (licence) was renewed.


The year 2019 was one of major change for the Java application deployments, with the launch of Apache Tomcat and Oracle WebLogic in Kubernetes. The deployment started with the migration of two of the most challenging applications: EDH and EDMS. The collaborative work with the application developers has progressed well, all technical issues have been resolved and the migration is planned for the first half of 2020. The Oracle WebLogic deployment in Kubernetes has been validated and the migration of development and test systems is ongoing (25 applications already migrated).
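
As a hedged illustration of this deployment model, and not of the production configuration, the sketch below uses the official Kubernetes Python client to declare a small Tomcat deployment; the image, namespace and labels are assumptions:

    # Minimal sketch: declare a two-replica Tomcat Deployment via the
    # Kubernetes Python client. Names and the image tag are illustrative.
    from kubernetes import client, config

    def make_tomcat_deployment(name="demo-app", image="tomcat:9-jdk11"):
        container = client.V1Container(
            name=name,
            image=image,
            ports=[client.V1ContainerPort(container_port=8080)],
        )
        template = client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": name}),
            spec=client.V1PodSpec(containers=[container]),
        )
        spec = client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=template,
        )
        return client.V1Deployment(metadata=client.V1ObjectMeta(name=name),
                                   spec=spec)

    if __name__ == "__main__":
        config.load_kube_config()  # credentials from the local kubeconfig
        apps = client.AppsV1Api()
        apps.create_namespaced_deployment(namespace="default",
                                          body=make_tomcat_deployment())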

HADOOP, SPARK AND SWAN

During 2019, the number of users doubled, and the data ingestion rate also doubled and is now reaching 2 TB per day. The hardware for the Analytix and NXCALS clusters has been renewed to be able to cope with the increasing needs. A new monitoring system has been developed: it mimics the typical actions of the users and helps to identify potential problems which might otherwise not appear in the "system" type of monitoring. This new monitoring system has been integrated with the new IT monitoring based on collectd. A significant effort has been made to consolidate around the HBase technology as a database for Hadoop; it is now used by NXCALS (the new accelerator logging system) and is being evaluated by ATLAS for the EventIndex. A training course, "Introduction to Big Data technologies", has been developed and given with Technical Training.
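
The following is a minimal sketch, under stated assumptions, of such a synthetic probe: it runs a small but end-to-end Spark job the way a user would and reports the elapsed time to a pluggable reporter (in production this would be dispatched to collectd; here it just prints):

    # Sketch of a synthetic "user-action" probe for a Spark cluster; the
    # metric name and reporting hook are illustrative, not the production code.
    import time
    from pyspark.sql import SparkSession

    def probe_spark(report):
        # Time a trivial but representative job: schedule, shuffle, collect.
        start = time.time()
        spark = SparkSession.builder.appName("synthetic-probe").getOrCreate()
        n_groups = (spark.range(0, 10_000_000)
                         .selectExpr("id % 97 AS k")
                         .groupBy("k").count()
                         .count())
        spark.stop()
        elapsed = time.time() - start
        report("spark.probe.seconds", elapsed)
        return n_groups, elapsed

    if __name__ == "__main__":
        # Stand-in reporter; production would dispatch to collectd.
        probe_spark(lambda metric, value: print(metric, value))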

The notebook prototype service SWAN, developed and maintained with EP-SFT and IT-ST, received the support of a Google Summer of Code intern, who developed the possibility of using a user-managed Kubernetes cluster in order to launch large computations. Another significant development has been the use of the IT container service. Trainings have been organised, and the first SWAN user forum was held to gather feedback and input; it was very widely attended, with users from many departments and experiments.

STREAMING

This new service based on Apache Kafka is now in production. 2019 was a consolidation year, with the introduction of complete documentation and best-effort support coverage for the service; the monitoring uses the central IT monitoring based on collectd. A major change was the migration to Kafka version 2.0 and the move to the IT storage service Ceph. New users have joined the service, such as the radio-protection REMUS project. The service has introduced, in pilot phase, new connectors to ease the collection of data: HDFS, MQTT and InfluxDB.
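
A minimal sketch of how a client might publish events to the service with the kafka-python library is shown below; the broker address and topic name are placeholders:

    # Sketch of publishing a JSON event to the Kafka-based streaming service.
    # Broker address and topic are placeholders, not the production endpoints.
    import json
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers=["kafka.example.cern.ch:9092"],  # placeholder
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        acks="all",  # wait for full replication before acknowledging
    )

    producer.send("sensor-readings", {"sensor": "rp-monitor-1", "value": 0.042})
    producer.flush()  # block until the message is actually delivered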

CERN OPENLAB

Several activities have been carried out in CERN openlab, spanning different projects.

As a first example, during 2019 the team worked with Oracle Autonomous Database technologies, focusing on three main aspects: scaling data volumes, improving the efficiency of the potential solutions in terms of automation and operational costs, and finally increasing data retrieval/analytics complexity using real-life scenarios. Regarding data volumes, we started to migrate one of the most complex and largest control datasets at CERN, about 1 PB, to the Oracle object storage cloud. Due to the large data volume involved, different solutions were tested, based on the standard network, GEANT, and an Oracle appliance-based data transfer solution. At the same time, the team worked together with Oracle development and management teams to define the best data model strategy to reduce associated costs and improve efficiency. Following this, a hybrid model was put in place. This model emphasises the benefits of object storage by transparently using external tables based on Apache Parquet files for less frequently accessed data, and regular database tables for data that require near-real-time response. To assess this, once a representative amount of data was available, real-life data loads were captured and simulated on the Oracle Autonomous solution.
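
The sketch below illustrates the idea behind this hybrid layout, assuming a toy dataset and an arbitrary hot/cold cutoff: older rows are written out as Apache Parquet files (the format readable by object-storage external tables), while recent rows would stay in regular database tables:

    # Illustrative sketch of the hybrid layout: cold data goes to Parquet for
    # object storage, hot data stays in regular tables. Data and cutoff are toy.
    import pandas as pd
    import pyarrow as pa
    import pyarrow.parquet as pq

    readings = pd.DataFrame({
        "device": ["magnet-17", "magnet-17", "magnet-42"],
        "ts": pd.to_datetime(["2018-05-01", "2019-11-02", "2019-12-24"]),
        "value": [1.21, 1.19, 0.98],
    })

    cutoff = pd.Timestamp("2019-01-01")
    cold = readings[readings.ts < cutoff]   # rarely queried: offload
    hot = readings[readings.ts >= cutoff]   # keep in regular tables

    # The cold partition becomes a Parquet file an external table can read.
    pq.write_table(pa.Table.from_pandas(cold), "readings_2018.parquet")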

As a second example, we explored deep-learning training with various tools and environments: we worked on a physics use case at scale (CMS) using industry-standard tools (https://github.com/cerndb/SparkDLTrigger), developed tooling for performance analysis (sparkMeasure), contributed upstream (e.g. SPARK-28091), and built tooling for TensorFlow bootstrap (e.g. https://github.com/cerndb/tf-spawner).
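
As an example of the performance-analysis tooling, the sketch below instruments a toy Spark job with sparkMeasure's StageMetrics; the workload and the package version shown are illustrative, not the CMS pipeline:

    # Sketch: measure stage-level metrics of a toy Spark job with sparkMeasure.
    from pyspark.sql import SparkSession
    from sparkmeasure import StageMetrics

    spark = (SparkSession.builder
             .appName("sparkmeasure-demo")
             .config("spark.jars.packages",
                     "ch.cern.sparkmeasure:spark-measure_2.12:0.17")  # version illustrative
             .getOrCreate())

    stage_metrics = StageMetrics(spark)
    stage_metrics.begin()
    spark.range(0, 5_000_000).selectExpr("id % 10 AS k") \
         .groupBy("k").count().show()
    stage_metrics.end()
    stage_metrics.print_report()  # executor time, shuffle and I/O per stage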

Deep learning pipeline, Luca Canali, CERN-EP/IT Data Science Seminar, "Big Data Tools and Pipelines for Machine Learning in HEP"

CERN OPEN DAYS REGISTRATION APPLICATION

Following the request from the CERN Open Days core team, the IT-DB group developed a scalable, multi-language application for the registration for the Open Days. It was prepared and validated to be able to sustain a large simultaneous load, based on the experience of the previous Open Days.

Statistics on the registration for the CERN Open Days 2019

[Figure: CERN Open Days 2019 registration, number of persons registered aggregated per day of their registration, July to September.]

[Figure: cumulative number of registrants versus the day of their registration; 89,496 persons registered in total.]


Posters:

· M. Zanetti/Universita e INFN Padova, M. Migliorini/Universita e INFN Padova, L. Canali/CERN, Machine Learning Pipelines for HEP Using Big Data Tools Applied to Improving Event Filtering, International Conference on Computing in High Energy and Nuclear Physics, 4-8 November 2019, Adelaide, https://indico.cern.ch/event/773049/contributions/3476052/

· L. Caldara/CERN, CERNDB1 graph of schema dependencies, CERN

· E. Motesnitsalis/CERN, V. Khristenko/CERN, M. Migliorini/CERN, R. Castellotti/CERN, L. Canali/CERN, M. Girone/CERN, D. Olivito/CERN, M. Cremonesi/Fermilab, J. Pivarski/Princeton University, Physics data analysis and data reduction at scale with Apache Spark, CERN openlab technical workshop 2019, 23-24 January 2019

· M. Migliorini/CERN-UNIPD, V. Khristenko/CERN, Machine learning pipelines with Apache Spark and Intel BigDL, CERN openlab technical workshop 2019, 23-24 January 2019

· R. Castellotti/CERN, E. Motesnitsalis/CERN, M. Migliorini/CERN, L. Canali/CERN, Physics data processing and machine learning in the Cloud, CERN openlab technical workshop 2019, 23-24 January 2019

· V. Kozlovszky/CERN, Monitoring Java application servers, CERN openlab technical workshop 2019, 23-24 January 2019

Presentations:

· L. Canali, Big Data Tools and Pipelines for Machine Learning in HEP, EP-IT Data Science Seminars, CERN, 4 December 2019, https://indico.cern.ch/event/859119/

· L. Caldara/CERN, Let Your DBAs get Some REST(api), UKOUG Techfest 2019, 4 December 2019

· L. Caldara/CERN, Long Live to CMAN! Or Oracle Still Cares About CMAN: You Should Do It Too!, UKOUG Techfest 2019, 3 December 2019

· F. Pachot/CERN, 19c Automatic Indexing Demystified, UKOUG 2019, 2 December 2019

· M. Martin/CERN, M. Connaughton/Oracle, Big Data Analytics and the Large Hadron Collider, National Analytics Summit 2019, Dublin, 27 November 2019

· M. Martin/CERN, Big Data, AI and Machine Learning at CERN, Trinity College Dublin and ADAPT Center, Dublin, 27 November 2019

· M. Martin/CERN, M. Connaughton/Oracle, Big Data Analytics and the Large Hadron Collider, Oracle Digital Days 2019, Dublin, 26 November 2019

· F. Pachot/CERN, 19 Features You Will Miss If You Leave Oracle Database, DOAG 2019, 19 November 2019

· F. Pachot/CERN, A l'heure du « serverless », le futur va-t-il aller aux bases de données distribuées?, SOUG Day, Swiss Oracle User Group (Lausanne), 14 November 2019

· S. Masson/CERN, M. Martin/CERN, Oracle Autonomous Data Warehouse and CERN Accelerator Control Systems, Modern Cloud Day, Paris, 25 November 2019

· P. Kothuri/CERN, Open Source Big Data Tools accelerating physics research at CERN, ApacheCon Europe, Berlin, 24 October 2019, https://indico.cern.ch/event/882109/

· L. Canali/CERN, Performance Troubleshooting Using Apache Spark Metrics, Spark Summit Europe 2019, Amsterdam, 17 October 2019, https://databricks.com/session_eu19/performance-troubleshooting-using-apache-spark-metrics

· L. Canali/CERN, Deep Learning Pipelines for High Energy Physics using Apache Spark with Distributed Keras on Analytics Zoo, Spark Summit Europe 2019, Amsterdam, 16 October 2019, https://databricks.com/session_eu19/deep-learning-pipelines-for-high-energy-physics-using-apache-spark-with-distributed-keras-on-analytics-zoo

· L. Caldara/CERN, Let your DBAs get some REST(api), Groundbreakers EMEA 2019 Tour, Rovinj, Croatia, 16 October 2019

· F. Pachot/CERN, An Oracle DBA approach to troubleshoot PostgreSQL application performance, PgConf.eu, 16 October 2019

· L. Caldara/CERN, Long live to CMAN! Or Oracle still cares about CMAN: you should do it too!, Groundbreakers EMEA 2019 Tour, Portorož, Slovenia, 15 October 2019

· I. Coterillo/CERN, CERN Database on Demand Update, HEPiX Autumn 2019 Workshop, 14-18 October 2019, https://indico.cern.ch/event/810635/contributions/3592953/

· L. Caldara/CERN, Oracle Drivers configuration for High Availability: is it a developer's job?, Groundbreakers EMEA 2019 Tour, Baku, Azerbaijan, 13 October 2019

· F. Pachot/CERN, SQL Join: methods and optimization, Trivadis Performance Days 2019, 26 September 2019

· F. Pachot/CERN, Do You Gather the Statistics in the Way the Optimizer Expects Them?, Trivadis Performance Days 2019, 26 September 2019

· L. Canali/CERN, V. Motesnitsalis/CERN, O. Gutsche/Fermilab, "Big Data In HEP" - Physics Data Analysis, Machine Learning and Data Reduction at Scale with Apache Spark, IXPUG Annual Conference 2019, 24 September 2019, https://indico.cern.ch/event/851006/

· S. Masson/CERN, Modern Cloud Day 2019, Paris, 19 September 2019, https://indico.cern.ch/event/866690/

· Luis Rodriguez Fernandez, Building Secure REST Architectures With ORDS, CERN Spring Campus, Hamburg University of Technology (TUHH), Hamburg, Germany, 19 September 2019, https://indico.cern.ch/event/803156/contributions/3386541/

· F. Pachot/CERN, Oracle Database 19c Automatic Indexing Demystified, Oracle Open World 2019, 19 September 2019

· E. Grancher/CERN, 11 Months with Oracle Autonomous Transaction Processing, Oracle OpenWorld 2019, San Francisco, 18 September 2019, https://indico.cern.ch/event/839681/

· F. Pachot/CERN, Twenty Features You Will Miss If You Leave Oracle Database, Oracle Code One 2019, 18 September 2019

· M. Martin/CERN, J. Abel/Oracle, Enterprise Challenges and Outcomes, Oracle OpenWorld 2019, London, 17 September 2019

· R. Swonger/Oracle, M. Dietrich/Oracle, L. Caldara/CERN, AutoUpgrade hundreds of Oracle Databases with a single command, Oracle OpenWorld 2019, 17 September 2019

· A. Nappi/CERN, Kubernetes: The Glue Between Oracle Cloud and CERN Private Cloud, Oracle OpenWorld 2019, San Francisco, 17 September 2019, https://indico.cern.ch/event/851883/

· F. Pachot/CERN, Oracle Active Data Guard: Best Practices and New Features Deep Dive, Oracle Open World 2019, 17 September 2019

· S. Masson/CERN, M. Martin/CERN, Managing one of the largest IoT systems in the world with Oracle Autonomous Technologies, Oracle OpenWorld 2019, San Francisco, 17 September 2019

· M. Martin/CERN, R. Zimmermann/Oracle, J. Otto/IDS GmbH, Oracle Autonomous Data Warehouse: Customer Panel, presented at Oracle OpenWorld 2019, San Francisco, 17 September 2019

· D. Ebert/Oracle, M. Martin/CERN, A. Nappi/CERN, Advancing research with Oracle Cloud, presented at Oracle OpenWorld 2019, San Francisco, 17 September 2019


· A. Tsouvelekakis/CERN, CERN: Monitoring Infrastructure with Oracle Management Cloud, Oracle OpenWorld 2019, San Francisco, 16 September 2019, https://indico.cern.ch/event/851627/

· E. Grancher/CERN, V. Kozlovszky/CERN, M. Martín Márquez/CERN, A. Nappi/CERN, F. Pachot/CERN, L. Rodríguez Fernández/CERN, A. Tsouvelekakis/CERN, A. Wiecek/CERN, T. Holene Loekkeborg/CERN, N. Matos De Barros, C. Pedregal/Oracle, 100% Oracle Cloud: Registering 90,000 People for CERN Open Days, Oracle OpenWorld 2019, San Francisco, 16 September 2019, https://indico.cern.ch/event/836830/

· A. Nappi/CERN, One Tool to Rule Them All: How CERN Runs Application Servers on Kubernetes, Cloud One 2019, 16 September 2019, https://indico.cern.ch/event/851885/

· L. Caldara/CERN, Long live to CMAN! Or Oracle still cares about CMAN: you should do it too!, POUG 2019, Wrocław, Poland, 7 September 2019

· F. Pachot/CERN, From Transportable Tablespaces to Pluggable Databases, POUG 2019, Wrocław, Poland, 6 September 2019

· M. Bień, Machine Learning on Big Datasets, CMS ML Forum Meeting, 28 August 2019, https://indico.cern.ch/event/844285/

· F. Pachot/CERN, An Oracle DBA approach to troubleshoot PostgreSQL application performance, PostgresLondon 2019, 3 July 2019, https://indico.cern.ch/event/832846/

· F. Pachot/CERN, Microservices: Get Rid of Your DBA and Send the DB into Burnout, Riga Dev Days, 31 May 2019

· F. Pachot/CERN, Microservices et database: où sont les données et où exécuter le code?, SOUG Day Romandie 2019, 21 May 2019

· F. Pachot/CERN, Live-Demo! From Transportable Tablespace to Pluggable Database, AOUG 2019 Anwenderkonferenz, 15 May 2019

· T. Holene Loekkeborg/CERN, Docker image testing in GitLab CI, CERN Voxxed Days, Geneva, CERN, Switzerland, 1 May 2019, https://www.youtube.com/watch?v=I7uHAuU-p8M&t=132s

· F. Pachot/CERN, Polyglot programming in the database with GraalVM, Voxxed Days CERN, 1 May 2019

· M. Migliorini/CERN-UNIPD, R. Castellotti/CERN, V. Khristenko/CERN, M. Girone/CERN, L. Canali/CERN, Deep Learning on Apache Spark at CERN Large Hadron Collider with Intel Technologies, Spark+AI Summit 2019, San Francisco, 24 April 2019, https://indico.cern.ch/event/815022/

· A. Tsouvelekakis/CERN, Oracle Enterprise Manager and Management Cloud Customer Advisory Board, Oracle Headquarters, Redwood Shores, USA, 16 April 2019

· F. Pachot/CERN, Microservices: Get Rid of Your DBA and Send the DB into Burnout, Belgian Tech Days, 8 February 2019

· F. Pachot/CERN, Join Methods: nested loop, hash sort, merge, adaptive, Belgian Tech Days, 7 February 2019

· A. Tsouvelekakis/CERN, Oracle Management Cloud: A unified monitoring platform, CERN openlab Technical Workshop, CERN, 23 January 2019, https://indico.cern.ch/event/755842/contributions/3242729/

· A. Nappi/CERN, Running JAVA application servers on Kubernetes, CERN openlab Technical Workshop, CERN, 23 January 2019, https://indico.cern.ch/event/755842/contributions/3242741/


· M. Martin Marquez/CERN, Oracle Data Analytics and Autonomous Data Warehouse service on the Cloud, CERN openlab Technical Workshop, CERN, 23 January 2019, https://indico.cern.ch/event/755842/contributions/3242697/

· E. Grancher/CERN, M. Martin Marquez/CERN, Research Analytics at Scale: CERN's Experience with Oracle's Cloud Solutions, OpenWorld London, 17 January 2019, https://indico.cern.ch/event/789142/

· M. Martin/CERN, J. Abel/Oracle, Enterprise Challenges and Outcomes, Oracle OpenWorld 2019, London, 17 January 2019

· A. Mendelsohn/Oracle, E. Grancher/CERN, M. Martin/CERN, Oracle Autonomous Database Keynote, Oracle OpenWorld 2019, London, 16 January 2019

Publications:

· M. Migliorini/CERN-UNIPD, R. Castellotti/CERN, L. Canali/CERN, M. Zanetti/CERN-UNIPD, Machine Learning Pipelines with Modern Big Data Tools for High Energy Physics, 24 September 2019, https://zenodo.org/record/3560829

· M. Cremonesi et al., Using Big Data Technologies for HEP Analysis, EPJ Web of Conferences 214, 06030 (2019), https://doi.org/10.1051/epjconf/201921406030

· Z. Baranowski et al., Evolution of the Hadoop Platform and Ecosystem for High Energy Physics, EPJ Web of Conferences 214, 04058 (2019), https://doi.org/10.1051/epjconf/201921404058

· Z. Baranowski et al., A prototype for the evolution of ATLAS EventIndex based on Apache Kudu storage, EPJ Web of Conferences 214, 04057 (2019), https://doi.org/10.1051/epjconf/201921404057

Reports:

· Iheb Eddine IMAD, Building effective Restful APIs with Oracle Rest Data Services 19, https://zenodo.org/record/3560829

· A. Tsouvelekakis/CERN, Oracle Management Cloud Second Evaluation Phase, CERN, 18 December 2019


DEPARTMENTAL INFRASTRUCTURE (DI) GROUP

The Departmental Infrastructure (DI) group provides and maintains the administrative and infrastructure support for the IT Department, i.e. the planning of departmental resources (budget and personnel) and general services such as financial administration (follow-up of invoices, contracts, requests for funds, inventory, etc.) and the IT Car Pool.

As well as these administrative activities, the group hosts the following activities:

- CERN openlab
- Computer security office
- Externally funded projects
- IT secretariat
- UNOSAT
- Worldwide LHC Computing Grid (WLCG) project

Most of these activities are detailed in the 'Activities and Projects Reports 2019' section of this report.


STORAGE (ST) GROUP

The ST group is responsible for storage services for the CERN physics programme (notably EOS, CASTOR, CTA, FTS, CVMFS, AFS, DPM) and infrastructure services (S3, CERNBox, Ceph, Backup and FILER).

During 2019, the tape infrastructure was significantly upgraded. Legacy Oracle enterprise tape libraries were drained and decommissioned, which involved migrating over 80 PB of data. In parallel, the capacity of our LTO (Linear Tape-Open) library, a technology validated in 2018 for physics storage, was significantly augmented to over 20,000 tape slots, and additional media were purchased. Following a competitive tender, an additional LTO library, extending capacity by another 15,000 slots, has been selected and will be commissioned in Q1 2020.

The new CTA (CERN Tape Archive) software entered production status and will replace CASTOR for the LHC experiments during 2020. Dedicated experiment instances were deployed and were subject to extensive testing. In particular, a CTA instance providing access to all ATLAS CASTOR data was successfully used for an extensive ATLAS reprocessing campaign. Further tests are planned in Q1 2020, including a comprehensive ATLAS Run-2 reprocessing in January.

Significant improvements have been implemented in the CVMFS service in collaboration with EP-SFT. After the upgrade of the internal S3 bucket indices to a new all-flash RocksDB solution, a 10x performance increase was seen in the CVMFS benchmark. This drastically improved the publication time, with prompt distribution of Worldwide LHC Computing Grid (WLCG) and experiment software distributions, and has allowed the software development teams in the LHC experiments to distribute release updates every day (or even more frequently if required).

The File Transfer Service (FTS) distributes the majority of the LHC data across the WLCG infrastructure; in 2019, it transferred more than 800 million files and a total of 0.95 exabytes of data. It is used by more than 25 experiments, at CERN and in other data-intensive sciences outside of the LHC and even outside the High Energy Physics domain. In 2019, the FTS team made several significant performance improvements to the FTS core to prepare for the LHC Run 3 data challenges. The FTS service now also supports the new CERN Tape Archive (CTA) system, which has been stress-tested by the ATLAS Data Carousel activity. Supported by the XDC project, the FTS team added more user-friendly authentication and delegation methods based on OAuth2 access tokens, also supporting the WLCG DOMA Third Party Copy activity.
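
For illustration, a transfer could be submitted through the FTS REST interface with the fts3 Python bindings along the lines sketched below; the endpoint and file URLs are placeholders, and credentials (an X.509 proxy or, with the newer methods, an OAuth2 token) are assumed to be available in the environment:

    # Sketch of submitting one file transfer to FTS via its Python bindings.
    # Endpoint and URLs are placeholders, not production values.
    import fts3.rest.client.easy as fts3

    endpoint = "https://fts3-devel.cern.ch:8446"  # placeholder FTS endpoint
    context = fts3.Context(endpoint)  # credentials taken from the environment

    transfer = fts3.new_transfer(
        "gsiftp://source.example.org/path/file.root",
        "gsiftp://dest.example.org/path/file.root",
        checksum="ADLER32",
    )
    job = fts3.new_job([transfer], retry=3)
    job_id = fts3.submit(context, job)
    print("Submitted job", job_id,
          fts3.get_job_status(context, job_id)["job_state"])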


During 2019, all the instances of the EOS infrastructure were updated with the latest generation of namespace technology, moving from an in-memory file catalogue to a disk-resident key-value store (QuarkDB). This reduced the booting time of the EOS namespace from several minutes to a few seconds and significantly increased the availability of the service. Configurable in-memory caching in EOS reduced the overall RAM usage. These developments allowed the EOS namespace to move from a scale-up architecture to a scale-out one.

From mid-2018 the EOS team started the production roll-out of the latest generation of FUSE mount (FUSEX), beginning with the EOSHOME instances; this continued in 2019, when it was extended to the EOSPROJECT instances, EOSPUBLIC and EOSLHCb. This new technology is in extensive use on general storage instances (cf. figure [EOSHOME]), with daily traffic approaching 500 TB. Similarly, on physics storage instances (LHC and non-LHC experiments) the daily traffic via FUSEX reaches peaks of more than 900 TB read per day.

Although 2019 was part of the Long Shutdown 2 period, the EOS infrastructure has seen quite a high rate of data movement, with regular peaks above 150 GB/s in the physics analysis infrastructure.

Another important milestone for the EOS service in 2019 was the successful ramp-down of the Wigner Data Centre in Budapest. With the conclusion of the contract with Wigner at the end of 2019, the EOS team followed up the emptying of 90 PB of storage capacity during the last year. All the storage servers were drained in several stages and the data was re-replicated in the Meyrin Data Centre at CERN. Almost half of the Wigner storage capacity was dismantled and shipped back to the CERN site, to then be reinjected into our storage systems. The entire operation was performed with minimal resource overhead while guaranteeing uninterrupted data availability. The monitoring infrastructure for all storage services has also been updated, and feedback provided for continuous improvement of the system.

The EOS technology is gaining traction outside CERN. To respond to this demand, the ST group organised the 3rd EOS workshop at CERN in February 2019 (https://indico.cern.ch/event/775181/timetable/#all.detailed). The workshop hosted 86 registered participants from 25 institutions, with 28 oral presentations and 4 interactive tutorials. For the first time the workshop was available via webcast and Vidyo, and two of the presentations were successfully given remotely. The workshop was very well received by the participants and got a lot of positive feedback.

The external AFS disconnection test was successful: no dependencies of critical workflows were made evident, and the test generated only a small number of inquiries from individual users. ST continued to shrink the AFS service with minimal disruption for the users.

Ceph Block Storage has provided a reliable infrastructure for OpenStack-based services: 4,900 system images and 6,300 attached volumes. The migration of FILER users to CephFS has continued (262 Manila shares in production).

The ST group organised the Ceph Day for Science (https://indico.cern.ch/e/ceph2019), which was hosted in the CERN Main Auditorium on 17 September 2019. The event was a meeting of scientific, governmental, and other non-profit users of Ceph, an open-source storage system. CERN is a major user of Ceph, and represents this community on the Ceph Foundation Board. The event attracted 153 attendees and featured 15 talks from SKA, NASA, the Flatiron Institute, the Interior Ministry of France, WLCG sites, and others.


In addition to the talks, the event provided ample time for attendees to meet each other and share their Ceph experiences. Scientific user interest in Ceph is still increasing; the user group meets for a monthly call to discuss Ceph operations at scale.

Last but not least, many attendees were able to profit from the CERN Open Days to learn more about the CERN facilities.

A second region for the S3 service is being set up in the Prévessin Network Hub for disk backup of critical IT-services data and for disaster recovery.
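
A minimal sketch of how a service might push a backup object to such a second S3 region, using boto3 with a placeholder endpoint, bucket and credentials:

    # Sketch: write a nightly backup object to the second S3 region.
    # Endpoint, bucket and credentials below are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3-prevessin.example.cern.ch",  # placeholder
        aws_access_key_id="BACKUP_KEY_ID",
        aws_secret_access_key="BACKUP_SECRET",
    )

    # Upload a database dump for disaster recovery.
    s3.upload_file("/backup/dump-2019-12-31.tar.gz",
                   "it-critical-backups", "db/dump-2019-12-31.tar.gz")

    # Verify the object landed in the second region.
    head = s3.head_object(Bucket="it-critical-backups",
                          Key="db/dump-2019-12-31.tar.gz")
    print("stored", head["ContentLength"], "bytes")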

The ST group provided extensive support for the MALT project. A dedicated Ceph storage cluster was provided to the MALT/Kopano team for mail attachment storage. The CERNBox and EOS teams provided support and collaborated on the DFS home directory migration and on enabling alternative collaborative applications in the CERNBox web platform. A new cluster of High Availability Samba gateways has been deployed to remove a single point of failure of the previous setup; it primarily targets access from Windows Terminal Servers and guest Windows PCs.

With the ever-increasing popularity of CERNBox (17k accounts, 120k shares and 440 project spaces), the team prepared several training sessions for CERN departments and user groups (e.g. HR benefits, DG secretariat, EN, IT secretariat).

The ST group coordinated the preparation of a new EC-funded project, CS3MESH4EOSC, to boost on-premise cloud storage technology and collaboration services in Europe. The project has been approved by the European Commission with a score of 14.5/15 points. The total envelope of the project is 6M EUR, of which CERN IT will receive 1.6M EUR over 36 months as project coordinator. The project consortium consists of 12 partners from the CS3 community. The project will integrate sync-and-share cloud storage services and applications into a pan-European network delivered as part of the European Open Science Cloud (EOSC). It will provide an interoperable, integrated research environment, including at its core CERN technology such as ScienceBox (CERNBox, EOS, SWAN, CVMFS, etc.), in close collaboration with industry and other partners.

Following requests from the community, the CERNBox team has proposed the CS3APIs, aimed at increasing interoperability across the entire CS3 community (software providers: ownCloud, Nextcloud, Seafile, Powerfolder, Pydio, etc.) and application extensions (Office, Jupyter, Filesender, etc.). The CS3APIs and a reference implementation based on gRPC and Golang (REVA) have been released under the Apache 2 license after a Technology Transfer request. This has been done in the framework provided by the Knowledge Transfer group.

The CS3APIs were presented at the 5th CS3 conference, which took place in Rome on 28-30 January (http://www.cs3community.org). The CS3 conference is co-organised by the IT-ST group. More than 150 participants, including around 20 companies, were present. Dropbox participated for the second time, and AWS was represented by a senior engineer from their R&D and HPC division.

Following the recent press articles on CERN's strategy towards Microsoft, ownCloud has expressed support for the MALT project. Nextcloud also contacted the group to show interest in implementing the REVA/CS3 API. In particular, at the TNC2019 conference in Tallinn and at the eResearch Australasia conference in Brisbane, the CS3MESH4EOSC project was discussed as a way to integrate all relevant software stacks (ownCloud, Nextcloud, etc.) and relevant MALT-like application components, to further streamline such integrations via an open community effort in the longer term. ST also organised a two-day seminar on storage and cloud technology, focusing on Ceph and MALT, with the French DSIC (Ministère de l'Intérieur), with discussions and presentations from nearly all IT groups.

The ST group co-organised the first SWAN user workshop, which took place in the IT Auditorium on Friday 11 October 2019. This event was organised jointly by EP-SFT, IT-ST and IT-DB. A total of 19 presentations (1 remote) evidenced a wide range of existing use cases for the SWAN service at CERN in several areas:

· End-user data analysis (ALICE, CMSSW, COMPASS, AWAKE)
· Experiment online operations (ATLAS TDAQ, CMS HGCAL)
· LHC beam operations and machine studies (NXCALS, signal monitoring, superconducting magnets simulation, etc.)
· IT systems analysis and monitoring (tape server logs, etc.)
· Documentation of statistical toolsets for Particle Physics & Machine Learning workshops
· Education & Outreach (Open Data, S'cool Lab)

Posters:

· J. Leduc, C. Caffy, E. Cano, G. Cancio Melia, M. Davis, S. Murray, V. Bahyl, System testing CERN physics archival software using Docker and Kubernetes, CHEP 2019, Adelaide, AU, 5/11/2019, https://indico.cern.ch/event/773049/contributions/3473307/

· A.J. Peters, E. Sindrilaru, G. Bitzes, F. Luchetti, M. Patrascoiu/CERN, Code health in EOS: Improving test infrastructure and overall service quality, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2019), Adelaide, Australia, 4-8/Nov/2019, https://indico.cern.ch/event/773049/contributions/3473251/

· A.J. Peters, G. Lo Presti, R. Toebbicke/CERN, Using the RichACL Standard for Access Control in EOS, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2019), Adelaide, Australia, 4-8/Nov/2019, https://indico.cern.ch/event/773049/contributions/3474476/

· A.J. Peters, G. Bitzes, M.K. Simon, R. Toebbicke/CERN, Evolution of the filesystem interface of the EOS Open Storage system, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2019), Adelaide, Australia, 4-8/Nov/2019, https://indico.cern.ch/event/773049/contributions/3474462/

· L. Mascetti, G. Lo Presti, S. Bukowiec, H.G. Labrador, V.N. Bippus, A. Smyrnakis, M. Kwiatek, J. Moscicki/CERN, CERNBox as the hyper-converged storage space at CERN: integrating DFS use-cases and home directories, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2019), Adelaide, Australia, 4-8/Nov/2019, https://indico.cern.ch/event/773049/contributions/3474470/

· L. Mascetti, M. Lamanna, E. Karavakis, J. Moscicki, H.G. Labrador, A.J. Peters/CERN, Migration of user and project spaces with EOS\CERNBox: experience on scaling and large-scale operations, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2019), Adelaide, Australia, 4-8/Nov/2019, https://indico.cern.ch/event/773049/contributions/3474465/

Presentations:


· J. T. Moscicki, Keynote Address: Data, storage & information management: collaboration at the frontiers of science and technology, eResearch Australasia, 22 October 2019, Brisbane; https://conference.eresearch.edu.au/2019-program; https://conference.eresearch.edu.au/wp-content/uploads/2019/10/2019-eResearch-Jakub-Moscicki.pdf

· J. T. Moscicki, G. Kennedy: Towards globally connected research data cloud infrastructure, eResearch Australasia 2019, 22 October 2019, Brisbane; https://conference.eresearch.edu.au/2019-program; https://conference.eresearch.edu.au/wp-content/uploads/2019/08/2019_eResearch_78_Towards-globally-connected-research-data-cloud-infrastructure.pdf

· J. T. Moscicki, G. Aben: Going FAIR at the rate of n × log n -- Interlinking Synch&Share stores to benefit from Metcalfe's Law; Opening Data Silos at TNC 2019, Tallinn, 17 June 2019, https://tnc19.geant.org/programme/#Monday

· J. T. Moscicki et al.: Open Data Science Mesh: friction-free collaboration for researchers bridging High-Energy Physics and European Open Science Cloud; CHEP 2019, Adelaide, 5 November 2019; https://indico.cern.ch/event/773049/contributions/3474850/

· J. T. Moscicki et al.: Evolution of web-based analysis for Machine Learning and LHC Experiments: power of integrating storage, interactivity and collaboration with JupyterLab in SWAN; CHEP 2019, Adelaide, 4 November 2019; https://indico.cern.ch/event/773049/contributions/3476172/

· J. T. Moscicki, CS3 Science Mesh and The future of Synchronization and Sharing infrastructures; DISSCO Round Table on Developing and implementing efficient Partnerships frameworks for European distributed research infrastructures; Brussels, 23 September 2019

· J. Collet, Cloud storage performance in a Ceph Cluster, Openlab Technical Workshop, CERN, Geneva, 24/01/2019, https://indico.cern.ch/event/755842/contributions/3243386/attachments/1784159/2904041/2019-jcollet-openlab.pdf

· J. Collet, Characterization of OSD Performance in a Ceph Cluster, Per3S (Performance and Scalability of Storage Systems), INRIA Bordeaux, Talence, France, 25/01/2019, https://per3s.sciencesconf.org/data/pages/2019_per3s_jcollet.pdf

· J. Collet, Ceph at CERN, French Ministère de l'Intérieur (DSIC) Visit: Workshop on storage and cloud technology, CERN, Geneva, 02/07/2019, https://indico.cern.ch/event/828727/

· G. Cancio, M. Gasthuber (DESY), K. Leffhalm (DESY), S. Misawa (BNL), H. Newman (Caltech), V. Sapunenko (INFN), L. Tortay (IN2P3), HEPiX TechWatch WG: Storage, 2019 Joint HSF/OSG/WLCG workshop, Jefferson National Accelerator Facility, Newport News VA, USA, 19/03/2019, https://indico.cern.ch/event/759388/contributions/3326348/

· V. Bahyl, C. Caffy, G. Cancio, E. Cano, M. Davis, D. Fernandez Alvarez, J. Leduc, S. Murray, Current status of tape storage at CERN, HEPiX Autumn 2019 workshop, NIKHEF, Amsterdam, NL, 17/10/2019, https://indico.cern.ch/event/810635/contributions/3592930/

· E. Cano, V. Bahyl, C. Caffy, G. Cancio, M. Davis, V. Kotlyar (IHEP-RU), J. Leduc, G. Lo Presti, Tao Lin (IHEP-CN), S. Murray: CERN Tape Archive: production status, migration from CASTOR and new features, CHEP 2019, Adelaide, AU, 7/11/2019, https://indico.cern.ch/event/773049/contributions/3474415/

· S. Berry, J. S. Bonilla (University of Oregon (US)), A. Chavez, L. Deparis, M. Gaillard, M. Garlasche, O. Keller (Institut fur Kernchemie (DE)), U. Kose (Universita e INFN, Bologna (IT)), J. Leduc, E. Özçeşmeci (Ankara Universitesi (TU)), M. Sharma, H. Toivonen (Helsinki University of Technology (FI)), Fluidic Data: When Art Meets CERN Data Flows, CHEP 2019, Adelaide, AU, 7/11/2019, https://indico.cern.ch/event/773049/contributions/3474847/


· H. Rousseau/CERN & C. Contescu/CERN, EOS operations @ CERN, EOS Workshop 2019, Geneva, Switzerland, 04/02/2019; https://indico.cern.ch/event/775181/contributions/3288739/attachments/1789795/2915516/EOS_Workshop_2019_-_EOS_operations.pdf; https://cds.cern.ch/record/2658172

· H. Labrador, A journey to the mesh: microservices-based sync and share using gRPC and Protobuf, CS3 2019, Rome; https://indico.cern.ch/event/726040/contributions/3252073/

· H. Labrador, CS3APIs and REVA: Cloud Interoperability and User Freedom; ownCloud Conference 2019; https://conference.owncloud.org/

· H. Labrador, Evolution of the CERNBox platform to support collaborative applications and MALT; CHEP 2019; https://indico.cern.ch/event/773049/contributions/3474851/

· Increasing inter-operability for research clouds: CS3APIs for connecting sync&share storage, applications and science environments; CHEP 2019; https://indico.cern.ch/event/773049/contributions/3473799

· G. Lo Presti, CERNBox as the CERN Apps Hub, CS3 Workshop, CNR, Rome, 27 Jan 2019; https://indico.cern.ch/event/726040

· G. Lo Presti, CERN Storage Evolution, HEPiX Fall 2019 Workshop, Science Park, Amsterdam, 16 Oct 2019; https://indico.cern.ch/event/726040

· R. Valverde/University of Oviedo, "EOS-Home Backup Prototype using restic", EOS Workshop, CERN, Geneva, Switzerland; cdsvideo: http://cds.cern.ch/record/2659420?ln=en

· D. Van der Ster/CERN, T. Mouratidis/CERN, R. Valverde/CERN, J. Blomer/CERN, "Applications of Ceph @ CERN", Ceph Day for Science, CERN, Geneva, Switzerland; https://indico.cern.ch/event/765214/

· D. Castro/CERN, "SWAN and its analysis ecosystem", CS3 2019, CNR, Rome, Italy, 29/01/2019, https://indico.cern.ch/event/726040/contributions/3252085/

· D. Castro/CERN, H. Gonzales/CERN, "Summary of CS3 Rome 2019 Workshop", ITTF, CERN, Geneva, Switzerland, 01/03/2019, https://indico.cern.ch/event/795790/

· D. Castro/CERN, "CERNBox: past, present and future", ASDF, CERN, Geneva, Switzerland, 04/07/2019, https://indico.cern.ch/event/827708/

· D. Castro/CERN, "SWAN: interactive data analysis on the web", PyHEP, Abingdon, U.K., 17/10/2019, https://indico.cern.ch/event/833895/contributions/3579240/

· D. Castro/CERN, "Jupyter on Earth: how SWAN is powering CERN use cases", ITTF, CERN, Geneva, Switzerland, 22/11/2019, https://indico.cern.ch/event/857270/

· E. Bocchi/CERN, Storage services at CERN, HEPiX Spring 2019 Workshop, San Diego Supercomputing Center (SDSC), San Diego, United States, 25/03/2019, https://indico.cern.ch/event/765497/contributions/3351198/

· E. Bocchi/CERN, Evolution of CernVM-FS Infrastructure at CERN, CernVM Users Workshop 2019, CERN, Geneva, Switzerland, 03/06/2019, https://indico.cern.ch/event/757415/contributions/3421573/

· E. Bocchi/CERN, ScienceBox, SWAN Users' Workshop, CERN, Geneva, Switzerland, 11/10/2019, https://indico.cern.ch/event/834069/contributions/3495324/

· E. Bocchi/CERN, Science Box: Converging to Kubernetes containers in production for on-premise and hybrid clouds for CERNBox, SWAN, and EOS, 24th International Conference on Computing in High-Energy and Nuclear Physics (CHEP 2019), Adelaide Convention Centre, Adelaide, Australia, 04/11/2019, https://indico.cern.ch/event/773049/contributions/3474412/


· E. Bocchi/CERN, Evolution of the S3 service at CERN as a storage backend for infrastructure services and software repositories, 24th International Conference on Computing in High-Energy and Nuclear Physics (CHEP 2019), Adelaide Convention Centre, Adelaide, Australia, 04/11/2019, https://indico.cern.ch/event/773049/contributions/3474412/

· D. van der Ster, Ceph@CERN 2019 Update, Openstack / Ceph at RAL Workshop, UK, 14-15 March 2019, https://indico.cern.ch/event/803456/

· D. van der Ster & T. Mouratidis, Ceph Operations at CERN: Where do we go from here?, Cephalocon 2019, Barcelona, 19-20 May 2019, https://sched.co/M7ih; https://youtu.be/0i7ew3XXb7Q

· A. Chiusole, D. van der Ster, et al., An I/O analysis of HPC workloads on CephFS and Lustre, HPC-IODC, Frankfurt, 20 June 2019, https://hps.vi4io.org/events/2019/iodc

· A.J. Peters/CERN, Erasure Coding for production in the EOS Open Storage system, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2019), Adelaide, Australia, 4-8/Nov/2019, https://indico.cern.ch/event/773049/contributions/3474421/

· A.J. Peters/CERN, EOS architectural evolution and strategic development directions, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2019), Adelaide, Australia, 4-8/Nov/2019, https://indico.cern.ch/event/773049/contributions/3474409/

· A.B. Hanushevsky/SLAC, M.K. Simon/CERN, XRootD 5.0.0: encryption and beyond, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2019), Adelaide, Australia, 4-8/Nov/2019, https://indico.cern.ch/event/773049/contributions/3474422/

· A.B. Hanushevsky/SLAC, M.K. Simon/CERN, EOS Erasure Coding plug-in as a case study for the XRootD client declarative API, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2019), Adelaide, Australia, 4-8/Nov/2019, https://indico.cern.ch/event/773049/contributions/3473276/

· E. Karavakis/CERN, FTS plans in 2020 for ATLAS, 64th ATLAS Software & Computing Week, CERN, Geneva, Switzerland, 05/Dec/2019, https://indico.cern.ch/event/823341/contributions/3650698/

· L. Mascetti/CERN, EOS Report, WLCG Operations Coordination, CERN, Geneva, Switzerland, 12/Dec/2019, https://indico.cern.ch/event/869667/contributions/3670019/

· L. Mascetti/CERN, COMTRADE EOS Productization, CERN openlab Technical Workshop, CERN, Switzerland, 23-24/Jan/2019, https://indico.cern.ch/event/755842/contributions/3242742/

· G. Bitzes/CERN, New namespace in production: An overview, and future plans, EOS Workshop 2019, CERN, Switzerland, 4-5/Feb/2019, https://indico.cern.ch/event/775181/contributions/3286834/

· G. Bitzes/CERN, Setting up and operating a new-namespace EOS instance, EOS Workshop 2019, CERN, Switzerland, 4-5/Feb/2019, https://indico.cern.ch/event/775181/contributions/3288760/

· E. Sindrilaru/CERN, EOS Citrine updates and developments, EOS Workshop 2019, CERN, Switzerland, 4-5/Feb/2019, https://indico.cern.ch/event/775181/contributions/3288737/

· A. J. Peters/CERN, Status report of eosxd as EOS filesystem interface, EOS Workshop 2019, CERN, Switzerland, 4-5/Feb/2019, https://indico.cern.ch/event/775181/contributions/3285864/

· R. Toebbicke/CERN, EOS ACL enhancements, EOS Workshop 2019, CERN, Switzerland, 4-5/Feb/2019, https://indico.cern.ch/event/775181/contributions/3286826/

· F. Lucchetti/CERN, EOS Testing Service development: leveraging CI + Kubernetes, EOS Workshop 2019, CERN, Switzerland, 4-5/Feb/2019, https://indico.cern.ch/event/775181/contributions/3292097/


· M. Patrascoiu/CERN, EOS XDC Developments, EOS Workshop 2019, CERN, Switzerland, 4-5/Feb/2019, https://indico.cern.ch/event/775181/contributions/3287634/

· M. K. Simon/CERN, XRootD - Releases, Status & Planning 2019, EOS Workshop 2019, CERN, Switzerland, 4-5/Feb/2019, https://indico.cern.ch/event/775181/contributions/3291703/

· L. Mascetti/CERN, Migration from EOSUSER to EOSHOME, EOS Workshop 2019, CERN, Switzerland, 4-5/Feb/2019, https://indico.cern.ch/event/775181/contributions/3288736/

· C. Contescu, H. Rousseau/CERN, EOSOps@CERN, EOS Workshop 2019, CERN, Switzerland, 4-5/Feb/2019, https://indico.cern.ch/event/775181/contributions/3288739/

· M. K. Simon/CERN, XRootD client status update: latest developments and plans, XRootD Workshop CC-IN2P3, Lyon, France, 11-12/Jun/19, https://indico.cern.ch/event/727208/contributions/3444599/

· M. K. Simon/CERN, Metalinks & XRootD, XRootD Workshop CC-IN2P3, Lyon, France, 11-12/Jun/19, https://indico.cern.ch/event/727208/contributions/3444612/

· M. K. Simon/CERN, Client side implementation of secure xroot/root protocol, XRootD Workshop CC-IN2P3, Lyon, France, 11-12/Jun/19, https://indico.cern.ch/event/727208/contributions/3444616/

· M. K. Simon/CERN, The new XRootD client declarative API, XRootD Workshop CC-IN2P3, Lyon, France, 11-12/Jun/19, https://indico.cern.ch/event/727208/contributions/3444629/

· L. Mascetti/CERN, Partnership CERN-COMTRADE and EOS at CERN, EOS COMTRADE Meeting, Ljubljana, Slovenia, 12/Sep/2019

· A. J. Peters/CERN, EOS Distributed Storage System, CERN Euclide Technical Meeting, CERN, Switzerland, 12/Sep/2019, https://indico.cern.ch/event/802498/

· A. J. Peters/CERN, Storage for High Energy Physics, Keynote at Ceph Day for Research and Non-Profits, CERN, Switzerland, 17/Sep/2019, https://indico.cern.ch/event/765214/contributions/3517139/

· E. Karavakis/CERN, FTS improvements for LHC Run-3 and beyond, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2019), Adelaide, Australia, 4-8/Nov/2019, https://indico.cern.ch/event/773049/contributions/3474419/

· L. Mascetti/CERN, CERN Disk Storage Services: report from last data taking, evolution and future outlook towards Exabyte-scale storage, 24th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2019), Adelaide, Australia, 4-8/Nov/2019, https://indico.cern.ch/event/773049/contributions/3474428/

Publications:

· H. G. Labrador et al., "CERNBox: the CERN cloud storage hub", EPJ Web of Conferences, Vol. 214, EDP Sciences, 2019; https://www.epj-conferences.org/articles/epjconf/pdf/2019/19/epjconf_chep2018_04038.pdf

· V. Avati et al., Big Data Tools and Cloud Services for High Energy Physics Analysis in TOTEM Experiment, 2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion (UCC Companion), Technopark Zurich, Zurich, Switzerland, 17/12/2018, https://ieeexplore.ieee.org/document/8605741

· V. Avati et al., Declarative Big Data Analysis for High-Energy Physics: TOTEM Use Case, Euro-Par 2019: Parallel Processing, Göttingen, Germany, 26/08/2019, https://www.springerprofessional.de/en/declarative-big-data-analysis-for-high-energy-physics-totem-use-/17080636


· A. Chiusole, D. van der Ster, et al., An I/O Analysis of HPC Workloads on CephFS and Lustre, ISC High Performance 2019: High Performance Computing, pp. 300-316, DOI: 10.1007/978-3-030-34356-9_24; https://link.springer.com/chapter/10.1007%2F978-3-030-34356-9_24

· G. Bitzes, E. Sindrilaru, A. Peters/CERN, Scaling the EOS namespace – new developments and performance optimizations, 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018), Sofia, Bulgaria, https://doi.org/10.1051/epjconf/201921404019

Reports:

· G. Cancio (CERN), M. Gasthuber (DESY), K. Leffhalm (DESY), S. Misawa (BNL), H. Newman (Caltech), V. Sapunenko (INFN), L. Tortay (IN2P3), HEPiX TechWatch WG: Storage, https://docs.google.com/document/d/1IS4_raw7PE0wVTNWDJmUGmneV1zpg9-29vz7XP4ChRA/edit


Activities and Projects Reports 2019


CERN OPENLAB

CERN openlab is a unique public-private partnership, through which CERN collaborates with leading ICT companies and other research organisations. Together we work to accelerate the development of cutting-edge ICT solutions for the research community.

2019 marked the second year of CERN openlab's sixth phase (2018-2020). While the projects that had started at the beginning of phase six were in full swing, CERN openlab began looking into new research and development topics. Additionally, it launched new projects beyond the domain of particle physics.

CERN STARTS INVESTIGATIONS IN THE FIELD OF QUANTUM COMPUTING

CERN openlab launched four quantum-computing projects in 2019, following up on its prolific quantum-computing workshop, which took place at the end of 2018. Quantum-computing technologies hold great potential for providing solutions to CERN's future ICT challenges. With big players in this promising field of research, such as Google and IBM, at its side, CERN openlab is assessing how CERN and the LHC experiments could benefit from these emerging technologies.

IBM and CERN are investigating the use of quantum support vector machines (QSVMs) for the classification of particle-collision events that produce a certain type of decay for the Higgs boson, which is rare and therefore challenging to understand.
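
As a rough classical analogue of this task (not the QSVM itself, which evaluates the kernel on quantum hardware), the sketch below trains an ordinary support vector machine to separate synthetic "signal" from "background" events described by a few toy features:

    # Classical stand-in sketch: an SVM separating signal from background
    # events. A QSVM keeps this workflow but computes the kernel on a
    # quantum device. The data here is synthetic, not real collision data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(42)
    background = rng.normal(0.0, 1.0, size=(500, 4))  # toy kinematic features
    signal = rng.normal(0.8, 1.0, size=(500, 4))
    X = np.vstack([background, signal])
    y = np.array([0] * 500 + [1] * 500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))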

The collaboration with Google aims to develop quantum algorithms to help optimise how data is distributed for storage in the Worldwide LHC Computing Grid (WLCG). This would improve resource allocation and usage, thus leading to increased efficiency in the broader data-handling workflow.

NEW MEMBERS JOIN TO DEVELOP MEDICAL APPLICATIONS

Samara University and be-studys joined CERN openlab in 2019. Supported by CERN's Knowledge Transfer group, CERN openlab works with both newcomers on projects related to medical applications, developing and implementing AI technologies.

Whereas the joint project with be-studys aims to develop a machine-learning platform for large-scale systems-biology studies, the project with Samara University focuses on improving medical and scientific linear accelerators, allowing anomaly detection and maintenance planning.

As of the end of 2019, the list of members was as follows:

Partners: Google, Intel, Micron, Oracle, Siemens

Contributors: be-studys, E4, IBM

Associates: Comtrade, Open Systems

Research members: Eindhoven University of Technology, Fermilab, the Italian National Institute for Nuclear Physics (INFN), King's College London, Newcastle University, Samara University, SCimPULSE Foundation


MAJOR EVENTS

CERN hosted a first-of-its-kind workshop on big data in medicine. The event, organised by CERN openlab, marked the conclusion of a two-year pilot investigation into how CERN-developed technologies and techniques related to computing and big data could potentially be used to address challenges faced in biomedicine.

Gul Rukh Khattak (CERN openlab) gives a talk at the Intel eXtreme Performance Users Group (IXPUG) Annual Conference 2019 about speeding up detector simulations using Generative Adversarial Networks (GANs).

The Intel eXtreme Performance Users Group (IXPUG) Annual Conference 2019 was hosted by CERN openlab. The conference addressed a wide array of topics related to the adoption and deployment of state-of-the-art data-processing technologies and techniques, with a view to achieving optimal application execution.

SUMMER-STUDENT SUCCESS

Also in 2019, the CERN openlab summer-student programme attracted numerous students from all over the world. A total of 1,660 students applied to the programme; the 40 students selected came from 19 different countries.


Participants of the 2019 CERN openlab summer-student programme.

They worked on hands-on projects with the latest ICT solutions, took part in a series of lectures given by CERN experts, developed innovative web applications during the Webfest hackathon, and visited sites at CERN and beyond. Find out more about CERN openlab's summer-student programme, its joint R&D projects, and other interesting initiatives on the collaboration's website.

Presentations:

· E. Grancher/CERN, M. Martin/CERN, S. Masson/CERN, Research Analytics at Scale: CERN's Experience with Oracle Cloud Solutions, Oracle OpenWorld 2019, London, UK, 16 January 2019

· A. Mendelsohn/Oracle, E. Grancher/CERN, M. Martin/CERN, Oracle Autonomous Database Keynote, Oracle OpenWorld 2019, London, UK, 16 January 2019

· M. Martin/CERN, J. Abel/Oracle, Enterprise Challenges and Outcomes, Oracle OpenWorld 2019, London, UK, 17 January 2019

· A. Tsouvelekakis/CERN, Oracle Management Cloud: A unified monitoring platform, CERN openlab Technical Workshop, Geneva, Switzerland, 23 January 2019

· F. Rademakers/CERN, BioDynaMo, CERN openlab workshop, Geneva, Switzerland, 23 January 2019

· L. Mascetti/CERN, EOS Productisation, CERN openlab workshop, Geneva, Switzerland, 23 January 2019

· F. M. Tilaro/CERN, R. Kulaga/CERN, Siemens Data Analytics and SCADA evolution status report, CERN openlab Technical Workshop, Geneva, Switzerland, 23 January 2019

· G. Molan/Comtrade, EOS Documentation and Tesla Box, EOS Workshop, Geneva, Switzerland, 4 February 2019


· F. Carminati/CERN, Quantum Computing for High Energy Physics Applications, PhD Course on Quantum Computing at University of Pavia, Pavia, Italy, 21 February 2019

· A. Tsouvelekakis/CERN, Enterprise Manager and Management Cloud CAB, Oracle Customer Advisory Board, Redwood Shores, USA, April 2019

· J. Radtke/Unlisted, A Key-Value Store for Data Acquisition Systems, SPDK, PMDK and VTune(tm) Summit 04'19, Santa Clara, USA, April 2019

· R. Rocha/CERN, L. Heinrich/CERN, Reperforming a Nobel Prize Discovery on Kubernetes, Kubecon Europe 2019, Barcelona, Spain, 21 May 2019

· F. Rademakers/CERN, BioDynaMo, Hôpitaux Universitaires de Genève, Geneva, Switzerland, 20 June 2019

· R. Rocha/CERN, L. Heinrich/CERN, Deep Dive into the Kubecon Higgs Analysis Demo, CERN IT Technical Forum, Geneva, Switzerland, 5 July 2019

· F. Fracas/CERN, From bit to qubit: the future of information technology also passes through CERN openlab, Italia Campus Party convention, Milan, Italy, 25 July 2019

· G. De Toni/University of Trento, Improvements on BioDynaMo Build System, CERN openlab summer student lightning talk session, Geneva, Switzerland, 13 August 2019

· Y. Boget/Université de Neuchâtel, ProGAN on Satellite Images, CERN openlab summer student lightning talk session, Geneva, Switzerland, 15 August 2019

· E. Grancher/CERN, V. Kozlovszky/CERN, 100% Oracle Cloud: Registering 90,000 People for CERN Open Days, Oracle OpenWorld 2019, San Francisco, USA, 16 September 2019

· M. Martin/CERN, R. Zimmermann/Oracle, J. Otto/IDS GmbH, Oracle Autonomous Data Warehouse: Customer Panel, Oracle OpenWorld 2019, San Francisco, USA, 17 September 2019

· D. Ebert/Oracle, M. Martin/CERN, A. Nappi/CERN, Advancing research with Oracle Cloud, Oracle OpenWorld 2019, San Francisco, USA, 17 September 2019

· S. Masson/CERN, M. Martin/CERN, Managing one of the largest IoT systems in the world with Oracle Autonomous Technologies, San Francisco, USA, 17 September 2019

· E. Grancher/CERN, 11 Months with Oracle Autonomous Transaction Processing, Oracle OpenWorld 2019, San Francisco, USA, 18 September 2019

· R. Rocha/CERN, L. Heinrich/CERN, Higgs Analysis on Kubernetes using GCP, Google Cloud Summit, Munich, Germany, 19 September 2019

· L. Canali/CERN, "Big Data In HEP" - Physics Data Analysis, Machine Learning and Data Reduction at Scale with Apache Spark, IXPUG 2019 Annual Conference, Geneva, Switzerland, 24 September 2019

· M. Maciejewski/Unlisted, Persistent Memory based Key-Value Store for Data Acquisition Systems, IXPUG 2019 Annual Conference, Geneva, Switzerland, 25 September 2019

· L. Breitwieser/CERN, BioDynaMo Project Update, CERN Medical Application Project Forum, Geneva, Switzerland, 26 September 2019

· A. Hesam/CERN, Simulation Master Class, CERN's Dutch Language Teachers Programme, Geneva, Switzerland, 26 September 2019

· A. Tsouvelekakis/CERN, CERN: Monitoring Infrastructure with Oracle Management Cloud, Oracle OpenWorld 2019, San Francisco, USA, September 2019

· Y. Donon/Samara University, Smart Anomaly Detection and Maintenance Planning Platform for Linear Accelerators, 27th International Symposium Nuclear Electronics and Computing (NEC'2019), Budva, Montenegro, 3 October 2019

· A. Hesam/CERN, Simulation Master Class, CERN's Dutch Language Students Programme (NVV Profielwerkstukreis), Geneva, Switzerland, 11 October 2019


· L. Canali/CERN, Deep Learning Pipelines for High Energy Physics using Apache Spark with Distributed Keras on Analytics Zoo, Spark Summit Europe, Amsterdam, Netherlands, 16 October 2019

· L. Breitwieser/CERN, A. Hesam/CERN, The BioDynaMo Project, EmLife Meeting, Geneva, Switzerland, 21 October 2019

· G. Jereczek/Unlisted, Let's get our hands dirty: a comprehensive evaluation of DAQDB, key-value store for petascale hot storage, 24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019), Adelaide, Australia, 5 November 2019

· R. Rocha/CERN, L. Heinrich/CERN, Reperforming a Nobel Prize Discovery on Kubernetes, 24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019), Adelaide, Australia, 7 November 2019

· F. Carminati/CERN, Particle Track Reconstruction with Quantum Algorithms, 24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019), Adelaide, Australia, 7 November 2019

· S. Masson/CERN, M. Martin/CERN, Oracle Autonomous Data Warehouse and CERN Accelerator Control Systems, Modern Cloud Day, Paris, France, 25 November 2019

· M. Martin/CERN, M. Connaughton/Oracle, Big Data Analytics and the Large Hadron Collider, Oracle Digital Days 2019, Dublin, Ireland, 26 November 2019

· M. Martin/CERN, M. Connaughton/Oracle, Big Data Analytics and the Large Hadron Collider, National Analytics Summit 2019, Dublin, Ireland, 27 November 2019

· M. Martin/CERN, Big Data, AI and Machine Learning at CERN, Trinity College Dublin and ADAPT Center, Dublin, Ireland, 27 November 2019

· L. Breitwieser/CERN, The BioDynaMo Software Part I, BioDynaMo Collaboration Meeting, Zurich, Switzerland, 2 December 2019

· A. Hesam/CERN, The BioDynaMo Software Part II, BioDynaMo Collaboration Meeting, Zurich, Switzerland, 2 December 2019

Publications

· M. Migliorini/CERN, R. Castellotti/CERN, L. Canali/CERN, M. Zanetti/CERN, Machine Learning Pipelines with Modern Big Data Tools for High Energy Physics, arXiv e-prints, 27 September 2019

· Y. Donon/Samara University, A. Kupriyanov/Samara University, D. Kirsh/Samara University, A. Di Meglio/CERN, R. Paringer/Samara University, P. Serafimovich/Samara University, S. Syomic/Samara University, Anomaly detection and breakdown prediction in RF power source output: a review of approaches, CEUR Workshop Proceedings, 27th Symposium on Nuclear Electronics and Computing, Budva, Montenegro, 3 October 2019

· V. Kozlovszky/CERN, Open Days reservation system's high-level overview – 2019, CERN DB-Blog, Geneva, Switzerland, 29 October 2019

· P. Golonka/CERN, F. Varela-Rodriguez/CERN, Consolidation and Redesign of CERN Industrial Controls Frameworks, Proc. 17th Biennial International Conference on Accelerator and Large Experimental Physics Control Systems, New York, USA, October 2019

· V. Kozlovszky/CERN, Internationalization of the 2019 Open Days reservation system, CERN DB-Blog, Geneva, Switzerland, 12 November 2019

· D. Cicalese/CERN, G. Jereczek/Unlisted, F. Le Goff/CERN, G. Lehmann Miotto/CERN, J. Love/Argonne, M. Maciejewski/Unlisted, R. K. Mommsen/Fermilab, J. Radtke/Unlisted, J. Schmiegel/Unlisted, M. Szychowska/Unlisted, The design of a distributed key-value store for petascale hot storage in data acquisition systems, EPJ Web Conf. 214, 19 November 2019

Reports

All reports from CERN openlab summer students 2019 are listed here: https://zenodo.org/communities/cernopenlab/ (reports uploaded starting from 22 November 2019)

Additional reports, publications, posters, and presentations related to the CERN openlab R&D projects active in 2019 can be found on the CERN openlab website: https://openlab.cern/


CERN SCHOOL OF COMPUTING (CSC)

The CERN School of Computing (CSC) fosters the dissemination of knowledge and learning in the field of scientific computing. Its mission is to create a common culture in scientific computing among scientists and engineers involved in particle physics and other sciences. The CSC, the CERN Schools of Physics and the CERN Accelerator School are the three schools that CERN has set up to help train the next generation of researchers across the laboratory's main scientific and technical domains. Since the first CSC in Italy in 1970, the school has visited 23 countries and been attended by more than 2800 students from 5 continents and 80 nationalities. Participants come from many different backgrounds, but all share a passion for computing and science.

The CSC is composed of three schools per year, each one characterized by its own flavour and specific goals: the Main School, the Thematic School, and the Inverted School.

The Main School takes place over two weeks in late summer, and usually welcomes 60 to 80 students. It is a veritable "Summer University", which offers both formal lectures and hands-on exercises. The practical part, where students work in pairs on common projects, is an important component of the learning process. Since 2002, the school has offered a CSC diploma upon successful completion of an optional exam. Since 2008, the university hosting the CSC audits its academic programme and awards successful students five or six ECTS (European Credit Transfer System) credit points that are recognised across Europe for any doctoral and master programme. Additionally, various social and sports activities are offered, helping students to socialise and establish lifelong links that will be useful throughout their careers. In particular, a daily optional sports programme helps maintain a healthy work-life balance, while offering additional opportunities for interactions between students, lecturers and organisers.

Students, lecturers and organisers of the CERN School of Computing 2019. (Photo: N. Kasioumis/CERN)


The 2019 CERN School of Computing (http://indico.cern.ch/e/CSC-2019) took place in Cluj-Napoca, Romania, on September 15-28, and was organised together with Babeș-Bolyai University (UBB). The school welcomed 70 students (11 of them female) from 44 different universities and institutes based in 24 countries worldwide. Representing 31 nationalities, these students were selected from a record number of 104 applicants. This year, the usual intensive academic programme (55 hours of lectures and exercises covering physics computing, software engineering, and data technologies) was complemented by a number of optional activities, such as scientific visits, guest lectures by Romanian speakers, and special evening lectures (History of the Internet; Future of Humanity and of the Universe). In addition, a rich social programme was offered – an excursion to the Turda salt mine and the town of Sighișoara was clearly its highlight. At the end of the school, 67 students passed the optional exam – 13 of them with distinction! – and were awarded 5 ECTS credits by UBB.

The Thematic School (tCSC), organized since 2013, is a smaller school (one week, 20-30 students) focused on a specific, more advanced topic. In 2019, this topic was "High Throughput Distributed Processing of Future HEP Data". The 2019 Thematic CSC (http://indico.cern.ch/e/tCSC-2019) was held on May 12-18 in Split, Croatia. 21 students from 16 different institutes followed a rich programme, consisting of 26 hours of lectures and exercises.

Focused students during one of the CSC lectures. (Photo: N. Kasioumis/CERN)

Finally, the Inverted School (iCSC) is a place where "students turn into lecturers": former CSC and tCSC attendees are invited to become teachers themselves and prepare series of lectures on topics within their respective domains of expertise. This event, held in winter at CERN, usually lasts two to four days. Attendance is open to anyone at CERN, and the lectures are also webcast. The 2019 Inverted CSC (http://indico.cern.ch/e/iCSC-2019) took place at CERN on March 4-7. As many as 417 people registered – an absolute record! The event was very well attended, with peaks well over 100 attendees (in person and via webcast). 8 lecturers delivered 21 hours of lectures and exercises on various topics, including Artificial Intelligence and Machine Learning, Big Data, Container Orchestration, Pattern Recognition, Tensor Networks, Computational Physics, Numerical Analysis, Track Finding, and more. During hands-on exercises, attendees worked with technologies such as Keras and TensorFlow, Hadoop and Spark, Kubernetes, and FPGAs.

The practical hands-on part, where students work in pairs on common projects, is an important component of the learning process. (Photo: N. Kasioumis/CERN)

In 2020, three CSC schools are planned:

· the main CERN School of Computing 2020 (August 23 - September 5 in Krakow, Poland), organized jointly with AGH University of Science and Technology (AGH) and the Institute of Nuclear Physics, Polish Academy of Sciences (IFJ PAN);

· Thematic CSC 2020 (June 7-13 in Split, Croatia), organized together with the University of Split; and

· Inverted CSC 2020 (March 16-19 at CERN).

More information at http://cern.ch/csc.


COMPUTER SECURITY

The mandate of the Computer Security Team is to "protect the operations and reputation of CERN against cyber-threats".

Apart from the dismissal of two students who deliberately ran so-called cryptocurrency-mining software on CERN's central batch computing resources, and the widespread attack of the "Rocke Group" against Jenkins and Redis instances worldwide, 2019 was a comparatively calm year for computer security at CERN, with no other major incidents but still plenty of proactive work. On the response side, many freshly published vulnerabilities of hardware and software products also used by CERN needed to be fixed or mitigated under the coordination of the CERN Computer Security Team. Thanks to the responsiveness of the corresponding service managers, any vulnerable exposure was kept to a minimum and fixes were applied within short delays. In particular, the need to fully reboot the CERN data centre to protect it against the newest security weakness in Intel chipsets (the so-called "MDS side-channel attack") was challenging. In parallel, the Computer Security Team continued raising awareness within the CERN user community and its staff, but also within the international collaborations of the Worldwide LHC Computing Grid and the High Energy Physics community. The annual "clicking campaign", intended to sensitise people to the potential risks of unsolicited emails, was very successful, as was the in-depth hands-on training for staff and users on becoming more experienced penetration testers and exploiters of vulnerabilities: today, more than 140 people use these skills to hack into their own systems in order to improve their security footprint. The particular highlight, however, was the team winning the "European CISO-of-the-year 2019" award, issued annually by the renowned SC Magazine.

Photo credit: SC Award 2019.

The overall computer security situation is well documented in the monthly reports of the Computer Security Team (see https://cern.ch/security/reports/en/monthly_reports.shtml).

Presentations:


§ "Why (Control System) Cyber-Security sucks" (Control System Cyber-Security Workshop,New York/USA, 2019/10/6, pptx/pdf)

§ "Finding the Balance" (CEA Saclay IT Seminar, Paris/FR 2019/9/30, pdf/pptx)§ "Why (Control System) Cyber-Security sucks" (CTA Control System Cyber-Security

workshop, Zeuthen/D, 2019/6/20, pptx/pdf)§ "CERN Computer Security" (Lectures at FH Nordwest Schweiz, Windisch/CH,

2019/6/17, Intro.pptx/Intro.pdf, WhiteHatChallenge.pptx/WhiteHatChallenge.pdf)§ "Computer Security in 2019" (CLUSIS conference, Lausanne/CH 2019/04/30, pdf/pptx)§ "Situational Awareness: Security (& Privacy)" (HEPix Spring 2019 workshop, La Jolla

CA/USA, 2019/3/25-29, pptx/pdf)§ "Finding the Balance" (Airbus Security Symposium, Munich/DE 2019/2/14, pdf/pptx)§ "Computer Security in 2019" (Cyber Security Day, Beirut/LB 2019/01/18, pdf/pptx)

Publications:

· The CERN Computer Security Team: "CERN Articles on Computer Security" (https://security.web.cern.ch/security/training/en/CERN%20Articles%20On%20Computer%20Security.pdf)

· "What do apartments and computers have in common?" (CERN Bulletin 42-43, 2019/10/17)

· "Computer security articles turn 200" (CERN Bulletin 40-41, 2019/10/3)

· "Un-confidentiality when using external e-mail" (CERN Bulletin 38-39, 2019/9/18)

· "Professional access to private devices" (CERN Bulletin 35-37, 2019/9/4)

· "Click me – NOT!" (CERN Bulletin 32-34, 2019/8/14)

· "When your mike spies on you" (CERN Bulletin 30-31, 2019/7/23)

· "Welcome Summer Students!" (CERN Bulletin 28-29, 2019/7/10)

· "Serious gaming… for your own good" (CERN Bulletin 26-27, 2019/6/26)

· "Software Bugs: What if?" (CERN Bulletin 24-25, 2019/6/12)

· "Go clever! Go central!" (CERN Bulletin 22-23, 2019/5/31)

· "Browsing securely and privately" (CERN Bulletin 20-21, 2019/5/14)

· "Computer Security vs Academic Freedom" (CERN Bulletin 17-19, 2019/4/30)

· "I love you" (CERN Bulletin 15-16, 2019/4/10)

· "Digital Broken Windows Theory" (CERN Bulletin 13-14, 2019/3/27)

· "A "file drop" for confidential data" (CERN Bulletin 11-12, 2019/3/13)

· "Fatal dependencies" (CERN Bulletin 9-10, 2019/2/27)

· "Negative legacy when moving on?" (CERN Bulletin 7-8, 2019/2/13)

· "When "free" gets even more restrictive" (CERN Bulletin 5-6, 2019/1/30)

· "Fun Facts: Did you know?" (CERN Bulletin 52-4, 2019/1/16)


DATA PRESERVATION

2019 was a special year in a number of senses regarding the long-term preservation of CERN (and High Energy Physics) data for future re-use and re-analysis. Notably, it marked 30 years since the start-up of the former Large Electron-Positron collider (LEP), whose data, software and documentation are not only (largely) still available but also still used for analyses and publications. The data volume is – by today's standards – rather small (some 100 TB per experiment), but this was definitely "big data" during the data-taking period (1989 - 2000). Some of the issues related to the now-legacy software of the LEP era, particularly pertaining to the CERN Program Library (CERNLIB), were presented at a workshop on Sustainable Software Sustainability (sic) in The Hague in April 2019.

The associated report and much background information can be found at https://indico.cern.ch/event/801649/.

A second important event was the Open Symposium held in Granada on the 2019/2020 update of the European Strategy for Particle Physics. Whereas the strategy itself still has to be finalised and approved, the overall outlook reaches out until almost the end of the current century and includes potential new machines that could be considered a "super-LEP" and/or a "super-LHC". Preserving the still-existing data (and software, documentation and knowledge) until such machines might enter operation would be an enormous challenge, which was summarised in a lightning talk at iPRES 2019.

At the symposium itself, the relevance of Data Management Plans (DMPs) – increasingly required by Funding Agencies – was presented, and the need for these as part of the update of the General Conditions for CERN Experiments began to be discussed. A paper summarising the goals and purpose of DMPs was written and is referenced in the draft update (10.5281/zenodo.3479129).

Finally, preparations for the 10th conference on Preserving and Adding Value to data – which will be held at CERN in May 2020 – were well under way. With one exception, this conference series has always been hosted by a "Space Agency", e.g. ESA, and the fact that it will be held at CERN is testament to how our data preservation activities are viewed in the wider world.


EXTERNALLY FUNDED PROJECTS

The Externally Funded Projects (EFP) section of the DI group hosts the IT EC Project Office, which performs the oversight and support functions necessary to coordinate the IT department's engagement in European Union projects. The objective of CERN's participation in the Horizon 2020 work programme is to develop policies, technologies and services that can support the Organization's scientific programme, promote open science and expand the impact of fundamental research on society and the economy. The IT department is actively engaged in 14 Horizon 2020 projects.

CERN collected the 2019 Procura+ Award for 'outstanding innovation procurement in ICT' for the Helix Nebula Science Cloud Pre-Commercial Procurement (HNSciCloud). In this EU-funded PCP, 10 public research entities from 7 EU countries kick-started the uptake of new cloud-based systems with the big-data storage and analysis tools needed by large scientific projects.

Photo credit: Procura+ Award 2019.

CERN is the coordinator of the CS3MESH project, which starts in January 2020. CS3MESH empowers service providers to deliver state-of-the-art, connected infrastructure that boosts effective scientific collaboration across the entire federation and data sharing according to FAIR principles. The project delivers the core of a scientific and educational infrastructure for cloud storage services in Europe through a lightweight federation of existing sync/share services (CS3) and integration with multidisciplinary application workflows.


All of these projects contribute to establishing the European Open Science Cloud (EOSC). The EOSC initiative was proposed in 2016 by the European Commission as part of the European Cloud Initiative to build a competitive data and knowledge economy in Europe. The EOSC will offer 1.7 million European researchers and 70 million professionals in science, technology, the humanities and social sciences a virtual environment with open and seamless services for the storage, management, analysis and re-use of research data, across borders and scientific disciplines, by federating existing scientific data infrastructures currently dispersed across disciplines and EU Member States.

Figure 1: The European Open Science Cloud


WORLDWIDE LHC COMPUTING GRID (WLCG)

This year was the first full year of Long Shutdown 2 (LS2) of the LHC. While not taking data, the four LHC experiments nevertheless continued to use all of the grid services and resources at full capacity. In fact, the levels of CPU time used reached new peaks during the year, as shown in Figure 1. The level of use shown in the figure is that of pledged resources, which together with an additional amount of opportunistic resources amounts to the equivalent continuous use of some 1 million CPU cores. The total amount of storage used globally is around 1 exabyte, split approximately equally between tape and disk media.

Figure 2 shows the overall data transfer rates across the WLCG infrastructure. The sustained level is above 50 GB/s, illustrating the high level of activity on the distributed computing service. The majority of the key services for WLCG are operated by CERN as part of the Tier 0 commitment, and the service provided was extremely stable and reliable, supporting the timely delivery of high-quality physics results by the experiment collaborations. The global operation of WLCG is coordinated by the operations team within the WLCG CERN team.

In preparing for the future, the WLCG team in IT has been working closely with the experiments to understand the computing and storage needs for both Run 3 and Run 4 (High-Luminosity LHC). While Run 3 will mostly be similar to Run 2 in terms of the computing models, the High-Luminosity LHC presents a significant increase in scale and the potential for adapting the computing to take advantage of technology changes of recent years. The WLCG team has led the DOMA³ (Data Organisation, Management, Access) project to prototype the concepts of a "data lake", where data can be streamed on demand to processing centres rather than pre-placed at those centres. A number of caching and networking technologies are being demonstrated to support this, and several related R&D activities are being managed by the team. The data management models of the LHC and these R&D activities are also of interest to several other communities in astronomy, astroparticle physics and related fields. The CERN team leads a work package in the EU-funded ESCAPE⁴ project to demonstrate this "data lake" technology for those sciences. That project started in early 2019 and is already operating a set of prototypes that will form the basis of future development together with those sciences.
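
To illustrate the "data lake" caching concept in miniature (a minimal, hypothetical sketch, not the DOMA prototype code): a processing site reads files through a local cache, fetching from remote lake storage only on a cache miss, instead of requiring full datasets to be pre-placed at the site.

    # Sketch of a read-through cache at a processing site (illustrative only).
    import pathlib
    import urllib.request

    CACHE_DIR = pathlib.Path("/tmp/datalake-cache")  # hypothetical local cache area

    def open_data(url: str) -> bytes:
        """Return file content, reading through the local cache."""
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        cached = CACHE_DIR / url.replace("/", "_").replace(":", "")
        if cached.exists():                      # cache hit: serve locally
            return cached.read_bytes()
        with urllib.request.urlopen(url) as r:   # cache miss: stream from the lake
            data = r.read()
        cached.write_bytes(data)                 # keep a copy for later jobs
        return data

In production settings this role is played by dedicated caching services (e.g. XCache in the xrootd ecosystem) rather than application code, but the access pattern is the same.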

The other aspect of preparation for the longer term addressed by the WLCG team is the performance of the software. In that area, a number of activities related to understanding the bottlenecks to performance gains, and to understanding how to benefit from new architectures, in particular GPUs and HPC systems⁵, have been the subject of several studies. A lot of the work in these forward-looking areas was presented in detail at the CHEP⁶ conference in November, at the large WLCG/HSF workshop⁷ in March at JLAB, and at a smaller workshop preceding CHEP. The IT team has been leading many of these activities.

3 https://twiki.cern.ch/twiki/bin/view/LCG/DomaActivities

4 https://projectescape.eu/

5 http://wlcg-docs.web.cern.ch/wlcg-docs/technical_documents/HPCIntegration.pdf

6 https://chep2019.org/index.html



Figure 1: CPU delivered across WLCG

Figure 2: Global LHC data transfer rates

Posters:

· D. Smith / CERN-IT-DI-LCG, “Test traffic for an xcache”, CHEP2019, Adelaide, Australia, 4-8 November 2019, https://indico.cern.ch/event/773049/contributions/3474478

7 https://indico.cern.ch/event/759388/overview

[Figure 1 chart data: CPU delivered per month in billion HS06-hours, January 2010 – September 2019, broken down by experiment (ALICE, ATLAS, CMS, LHCb).]


Presentations:

· A. Valassi / CERN-IT-DI-LCG, "LHCb standalone workload", Benchmarking WG meeting, CERN, Geneva, Switzerland, 1 February 2019, https://indico.cern.ch/e/782598

· A. Valassi / CERN-IT-DI-LCG, "LHCb standalone workload", Systems performance modelling WG meeting, CERN, Geneva, Switzerland, 6 February 2019, https://indico.cern.ch/e/772025

· D. Giordano / CERN-IT-CM, A. Valassi / CERN-IT-DI-LCG, "Using HEP workloads for benchmarking: Gauss", Computing session of LHCb week, CERN, Geneva, Switzerland, 4 March 2019, https://indico.cern.ch/event/802489

· S. Muralidharan / CERN-IT-DI-LCG, A. Valassi / CERN-IT-DI-LCG, "Trident analysis of LHCb GEN/SIM benchmarking workload", Systems performance modelling WG meeting, CERN, Geneva, Switzerland, 6 March 2019, https://indico.cern.ch/event/772026

· A. Valassi / CERN-IT-DI-LCG, "LHCb workloads on HPC", LHCb A&S week, CERN, Geneva, Switzerland, 9 May 2019, https://indico.cern.ch/event/818754

· A. Valassi / CERN-IT-DI-LCG, "GaussMP for CINECA HPC? - Status of multi-process LHCb simulation software", LHCb Simulation Meeting, CERN, Geneva, Switzerland, 27 August 2019, https://indico.cern.ch/event/803977/contributions/3542109

· A. Valassi / CERN-IT-DI-LCG, "Summary of some recent work on the benchmarking suite", Benchmarking Meeting, CERN, Geneva, Switzerland, 30 August 2019, https://indico.cern.ch/event/782626

· A. Valassi / CERN-IT-DI-LCG, "GaussMP for CINECA HPC? - Update on multi-process LHCb simulation software", LHCb Simulation Meeting, CERN, Geneva, Switzerland, 10 September 2019, https://indico.cern.ch/event/803979/contributions/3560285

· S. Roiser / CERN-IT-DI-LCG, I. Bird / CERN-IT-DI-LCG, S. Campana / CERN-IT-DI-LCG, P. Mato / CERN-EP-SFT, M. Schulz / CERN-IT-DI-LCG, G. Stewart / CERN-EP-SFT, A. Valassi / CERN-IT-DI-LCG, "The Setup of an Institute for Scientific Software", RSEConUK 2019, Birmingham, UK, 18 September 2019, https://rseconuk2019.sched.com/event/QSRK

· G. Corti / CERN-EP-LBD, D. Muller / CERN-EP-LBD, D. Popov / Birmingham, A. Valassi / CERN-IT-DI-LCG, "LHCb simulation performance", 24th Geant4 Collaboration Meeting, JLAB, USA, 26 September 2019, https://indico.cern.ch/event/825306/contributions/3565311

· A. Valassi / CERN-IT-DI-LCG, "Benchmarking WLCG resources using HEP experiment workflows", NEC 2019 Conference, Budva, Montenegro, 2 October 2019, https://indico.jinr.ru/event/738/session/2/contribution/164

· A. Valassi / CERN-IT-DI-LCG, "Infrastructure of container build", pre-GDB on benchmarking, CERN, Geneva, Switzerland, 8 October 2019, https://indico.cern.ch/event/739897/contributions/3558848

· A. Valassi / CERN-IT-DI-LCG, "LHCb workloads in the benchmarking suite", pre-GDB on benchmarking, CERN, Geneva, Switzerland, 8 October 2019, https://indico.cern.ch/event/739897/contributions/3558849/subcontributions/288483

· A. Valassi / CERN-IT-DI-LCG, "Overview of GPU efforts for WLCG production workloads", pre-GDB on benchmarking, CERN, Geneva, Switzerland, 8 October 2019, https://indico.cern.ch/event/739897/contributions/3559134

· F. Stagni / CERN-EP-LBD, A. Valassi / CERN-IT-DI-LCG, V. Romanovsky / Protvino, "Integrating LHCb workflows on HPC resources: status and strategies", CHEP2019, Adelaide, Australia, 4 November 2019, https://indico.cern.ch/event/773049/contributions/3474807

· T. Boccali / INFN, A. Valassi / CERN-IT-DI-LCG et al., "Extension of the INFN Tier-1 on an HPC system", CHEP2019, Adelaide, Australia, 4 November 2019, https://indico.cern.ch/event/773049/contributions/3474805

· A. Valassi / CERN-IT-DI-LCG, "Using HEP experiment workflows for the benchmarking and accounting of computing resources", CHEP2019, Adelaide, Australia, 4 November 2019, https://indico.cern.ch/event/773049/contributions/3473809


· A. Valassi / CERN-IT-DI-LCG, "Optimising HEP parameter fits through MC weight derivative regression", CHEP2019, Adelaide, Australia, 7 November 2019, https://indico.cern.ch/event/773049/contributions/3476059

· S. Roiser / CERN-IT-DI-LCG, I. Bird / CERN-IT-DI-LCG, S. Campana / CERN-IT-DI-LCG, P. Mato / CERN-EP-SFT, M. Schulz / CERN-IT-DI-LCG, G. Stewart / CERN-EP-SFT, A. Valassi / CERN-IT-DI-LCG, "SIDIS, an Institute for Scientific Software, bringing together Applied Computing and Data Intensive Science", CHEP2019, Adelaide, Australia, 7 November 2019, https://indico.cern.ch/event/773049/contributions/3474841

· A. Valassi / CERN-IT-DI-LCG, "HSF Event Generator WG activities", 12th LHCb Computing Workshop, CERN, Geneva, Switzerland, 20 November 2019, https://indico.cern.ch/event/831054/contributions/3647475/attachments/1948111/3232656/20191120-LHCb-HSFgenerators-v3.pdf

· A. Valassi / CERN-IT-DI-LCG, "HPCs and multi-process MonteCarlo jobs", 12th LHCb Computing Workshop, CERN, Geneva, Switzerland, 21 November 2019, https://indico.cern.ch/event/831054/contributions/3604477/attachments/1948700/3234204/20191120-LHCb-HPCs-ComputingWS-AV-v2a.pdf

· H. Meinhard / CERN-IT-DI-LCG, "The Technology Watch working group", HSF-OSG-WLCG workshop, Jefferson Lab, Newport News (VA), USA, 18-22 March 2019, https://indico.cern.ch/event/759388/contributions/3326337/attachments/1813920/2963870/2019-03-19-HOW2019-TechwatchWG.pdf

· H. Meinhard / CERN-IT-DI-LCG, "The Technology Watch working group", HEPiX workshop, San Diego Supercomputing Center at University of California in San Diego (CA), USA, 25-29 March 2019, https://indico.cern.ch/event/765497/contributions/3348835/attachments/1820004/2976502/2019-03-28-HEPiX-TechwatchWG.pdf

· H. Meinhard / CERN-IT-DI-LCG, "Trends in Computing Technologies and Markets: The HEPiX TechWatch WG", EGI Conference, Amsterdam, The Netherlands, 6-8 May 2019, https://indico.egi.eu/event/4431/contributions/10111/attachments/9846/11318/2019-05-08-EGIConf-HEPiXTechWatch.pdf

· H. Meinhard / CERN-IT-DI-LCG, "Procedures in CERN-IT: Selected topics", Journées du CC-IN2P3, Aix-les-Bains, France, 26-28 June 2019, https://cernbox.cern.ch/index.php/s/dXARrCZHWkVIHpG

· S. Campana / CERN-IT-DI-LCG, "Computing Challenges of the Future", Open Symposium for the European Particle Physics Strategy Update, Granada, Spain, 13-16 May 2019, https://indico.cern.ch/event/808335/

· S. Campana / CERN-IT-DI-LCG et al., "ESCAPE prototypes a Data Infrastructure for Open Science", CHEP2019, Adelaide, Australia, 4 November 2019, https://indico.cern.ch/event/773049/contributions/3474431/

· I. Bird / CERN-IT-DI-LCG, S. Campana / CERN-IT-DI-LCG, "Evolution of Scientific Computing in the Next Decade: HEP and Beyond", HOW2019 (WLCG/HSF Workshop), Jefferson Lab, Newport News, Virginia, USA, 18 March 2019, https://indico.cern.ch/event/759388/sessions/295225/attachments/1813716/2963439/WLCGEvolutionJLAB.pdf

· I. Bird / CERN-IT-DI-LCG, "WLCG: Evolving Distributed Computing for the LHC", keynote at the EGI Conference, Amsterdam, NL, 6 May 2019, https://indico.egi.eu/event/4431/contributions/10215/attachments/9796/11264/OP-02_IBird_LHC_Computing.pptx

· I. Bird / CERN-IT-DI-LCG, "Current HEP Computing Models", Open Symposium - Update of the European Strategy for Particle Physics, Granada, Spain, 15 May 2019, https://indico.cern.ch/event/808335/contributions/3365098/attachments/1842604/3024260/ESPP-CurrentComputingModels-150519.pdf

· I. Bird / CERN-IT-DI-LCG, "Computing Challenges of LHC", AENEAS All-Hands Meeting, Utrecht, NL, 11 November 2019, https://indico.astron.nl/getFile.py/access?contribId=5&resId=0&materialId=slides&confId=210

· I. Bird / CERN-IT-DI-LCG, "Evolving Data Management for the LHC", 2019 National e-Science Symposium, Amsterdam, NL, 21 November 2019, https://esciencesymposium2019.nl

· S. Roiser / CERN-IT-DI-LCG, I. Bird / CERN-IT-DI-LCG, S. Campana / CERN-IT-DI-LCG, P. Mato / CERN-EP-SFT, M. Schulz / CERN-IT-DI-LCG, G. Stewart / CERN-EP-SFT, A. Valassi / CERN-IT-DI-LCG, "The Setup of an Institute for Scientific Software, connecting Applied Computing and Data Intensive Sciences", RSEConUK 2019, Birmingham, UK, 17-19 September 2019

· M. Sharma / CERN-IT-DI-LCG et al., "Lightweight Sites - SIMPLE framework", HSF-OSG-WLCG workshop, Jefferson Lab, Newport News (VA), USA, 18-22 March 2019, https://indico.cern.ch/event/759388/contributions/3361776/

· L. Betev / EP-AIP-GTP, M. Litmaath / CERN-IT-DI-LCG, "ALICE Upgrade report", HSF-OSG-WLCG workshop, Jefferson Lab, Newport News (VA), USA, 18-22 March 2019, https://indico.cern.ch/event/759388/contributions/3302194/

· M. Litmaath / CERN-IT-DI-LCG, "ALICE WLCG operations report and middleware", ALICE Tier-1/Tier-2 Workshop, University Politehnica of Bucharest, Romania, 14-16 May 2019, https://indico.cern.ch/event/778465/contributions/3239157/

· J. Andreeva / CERN-IT-DI-LCG, "WLCG Information System Evolution", HSF-OSG-WLCG workshop, Jefferson Lab, Newport News (VA), USA, 18-22 March 2019, https://indico.cern.ch/event/759388/contributions/3322821/

· J. Andreeva / CERN-IT-DI-LCG, M. Sharma / CERN-IT-DI-LCG, M. Litmaath / CERN-IT-DI-LCG, "The SIMPLE Framework for deploying containerized grid services", CHEP 2019, 4-8 November 2019, Adelaide, Australia, https://indico.cern.ch/event/773049/contributions/3473817/

· J. Andreeva / CERN-IT-DI-LCG et al., "Evolution of the WLCG Information Infrastructure", CHEP 2019, 4-8 November 2019, Adelaide, Australia, https://indico.cern.ch/event/773049/contributions/3473370/

· A. Anisenkov / Budker Institute Novosibirsk, J. Andreeva / CERN-IT-DI-LCG, A. Di Girolamo / CERN-IT-DI-LCG, P. Paparrigopoulos / CERN-IT-DI-LCG, "CRIC: Computing Resource Information Catalogue as a unified topology system for a large-scale, heterogeneous and dynamic computing infrastructure", CHEP 2019, 4-8 November 2019, Adelaide, Australia, https://indico.cern.ch/event/773049/contributions/3473389/

· D. Smith / CERN-IT-DI-LCG, "XCache exploration", pre-GDB on XCache, CERN, Geneva, Switzerland, 8 July 2019, https://indico.cern.ch/event/827556/contributions/3480201

· D. Smith / CERN-IT-DI-LCG, "Xcache testing infrastructure", DOMA / ACCESS WG meeting, CERN, Geneva, Switzerland, 10 December 2019, https://indico.cern.ch/event/866486/contributions/3666092

· A. Di Girolamo / CERN-IT-DI-LCG, "Operational Intelligence", HSF-OSG-WLCG workshop, Jefferson Lab, Newport News (VA), USA, 18-22 March 2019, https://indico.cern.ch/event/759388/contributions/3322821/

Publications:

· A. Valassi / CERN-IT-DI-LCG, "Binary classifier metrics for optimizing HEP event selection", CHEP 2018 proceedings, September 2019, https://doi.org/10.1051/epjconf/201921406004

· S. Roiser / CERN-IT-DI-LCG, C. Bozzi / CERN-EP-ULB, "Towards a computing model for the LHCb Upgrade", CHEP 2018 proceedings, September 2019, https://doi.org/10.1051/epjconf/201921403045

· C. Bozzi / CERN-EP-ULB, S. Ponce / CERN-EP-LBC, S. Roiser / CERN-IT-DI-LCG, "The core software framework for the LHCb Upgrade", CHEP 2018 proceedings, September 2019, https://doi.org/10.1051/epjconf/201921405040

· M. Litmaath / CERN-IT-DI-LCG et al. (editors), "Proceedings, 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018): Sofia, Bulgaria, July 9-13, 2018", EPJ Web Conf. 214 (2019)

· J. Andreeva / CERN-IT-DI-LCG et al. (editors), "Proceedings, 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018): Sofia, Bulgaria, July 9-13, 2018", EPJ Web Conf. 214 (2019)

· M. Sharma / CERN-IT-DI-LCG, M. Litmaath / CERN-IT-DI-LCG et al., "Lightweight WLCG sites: The SIMPLE Grid Framework", EPJ Web Conf. 214 (2019) 07019

· M. Storetvedt / CERN-EP-AIP-GTP, M. Litmaath / CERN-IT-DI-LCG et al., "Grid services in a box: container management in ALICE", EPJ Web Conf. 214 (2019) 07018

· A. Anisenkov / Budker Institute Novosibirsk, J. Andreeva / CERN-IT-DI-LCG, A. Di Girolamo / CERN-IT-DI-LCG, P. Paparrigopoulos / CERN-IT-DI-LCG, "CRIC: A unified information system for WLCG and beyond", EPJ Web Conf. 214 (2019) 07019

Reports:

· R. Hawkings / CERN-EP-ADP, A. Valassi / CERN-IT-DI-LCG et al., "Report of the Follow-up Review ATLAS conditions data infrastructure for Run-3", ATL-COM-SOFT-2019-002, January 2019, https://cds.cern.ch/record/2655111

· A. Valassi / CERN-IT-DI-LCG (editor), "WLCG Status and Progress Report (October 2018 - March 2019)", April 2019, http://wlcg-docs.web.cern.ch/wlcg-docs/reporting/status_progress/Quarterly-Reports/2019/WLCG-Report-2019-H1.pdf

· T. Boccali / Pisa, C. Bozzi / CERN-EP-ULB, J. Catmore / CERN-EP-ADP, D. Costanzo / Sheffield, M. Klute / MIT, A. Valassi / CERN-IT-DI-LCG, "Summary of the May 10 cross-experiment meeting about HPCs", June 2019, https://indico.cern.ch/event/811997/attachments/1862943

· A. Valassi / CERN-IT-DI-LCG (editor), "WLCG Status and Progress Report (April 2019 - September 2019)", October 2019, http://wlcg-docs.web.cern.ch/wlcg-docs/reporting/status_progress/Quarterly-Reports/2019/WLCG-Report-2019-H2.pdf

· S. Roiser / CERN-IT-DI-LCG, I. Bird / CERN-IT-DI-LCG, S. Campana / CERN-IT-DI-LCG, P. Mato / CERN-EP-SFT, M. Schulz / CERN-IT-DI-LCG, G. Stewart / CERN-EP-SFT, A. Valassi / CERN-IT-DI-LCG, "A plan to set up an institute for software in data intensive sciences", October 2019, https://doi.org/10.5281/zenodo.3466586

· I. Bird / CERN-IT-DI-LCG, "Status of the WLCG project", LHC Computing Resources Review Board, CERN-RRB-2019-053, 17 April 2019, https://cds.cern.ch/record/2660332/files/CERN-RRB-2019-053.pdf

· I. Bird / CERN-IT-DI-LCG, "Status of the WLCG project", LHC Computing Resources Review Board, CERN-RRB-2019-122, 29 October 2019, https://cds.cern.ch/record/2687836/files/CERN-RRB-2019-122.pdf