Cracow Grid Workshop, November 5-6, 2001
Overview of the CrossGrid Project
Marian Bubak
Institute of Computer Science & ACC CYFRONET AGH, Kraków, Poland
and Michał Turała
Institute of Nuclear Physics, Cracow, Poland
Towards the CrossGrid
– 1st meeting January 24, 2001, to join DataGrid
– CPA9 Call
– Extended collaboration meeting at GGF1 (March 7)
  • 23 partners
  • new type of applications
– Proposal submitted April 22, 2001; 22 partners
– Comments of reviewers and PO
– Negotiations October 24, 2001; 21 partners
– ...
CrossGrid Collaboration
Poland: Cyfronet & INP Cracow; PSNC Poznan; ICM & IPJ Warsaw
Portugal: LIP Lisbon
Spain: CSIC Santander; Valencia & RedIris; UAB Barcelona; USC Santiago & CESGA
Ireland: TCD Dublin
Italy: DATAMAT
Netherlands: UvA Amsterdam
Germany: FZK Karlsruhe; TUM Munich; USTU Stuttgart
Slovakia: II SAS Bratislava
Greece: Algosystems; Demo Athens; AuTh Thessaloniki
Cyprus: UCY Nikosia
Austria: U. Linz
Main Objectives
– New category of Grid-enabled applications
  • computing and data intensive
  • distributed
  • near-real-time response (a person in the loop)
  • layered
– New programming tools
– A Grid that is more user friendly, secure and efficient
– Interoperability with other Grids
– Implementation of standards
CrossGrid Architecture

[Layered diagram, top to bottom:]

Interactive, Compute and Data Intensive Applications (WP1):
interactive simulation and visualisation of a biomedical system; flooding crisis team support; distributed data analysis in HEP; weather forecast and air pollution modelling

Grid Application Programming Environment (WP2):
MPI code debugging and verification; metrics and benchmarks; interactive and semiautomatic performance evaluation tools

Services: HLA Grid; Visualisation Kernel; Data Mining; DataGrid; GriPhyN; ...

New Grid Services and Tools (WP3):
portals and roaming access; Grid resource management; Grid monitoring; optimisation of data access

Globus Middleware

Fabric Infrastructure
Key functionalities of applications
– Data gathering
  • data generators and databases, geographically distributed
  • selected on demand
– Processing
  • needs large processing capacity on demand
  • interactive
– Presentation
  • complex data require versatile 3D visualisation
  • support interaction and feedback to other components
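The gather → process → present cycle with a person in the loop can be sketched as follows; all function names and the feedback format are illustrative assumptions, not CrossGrid APIs:

```python
def gather(sources):
    """Select data on demand from (possibly distributed) generators."""
    return [src() for src in sources]

def process(samples):
    """Compute-intensive step; here reduced to a trivial aggregate."""
    return sum(samples) / len(samples)

def present(result, threshold):
    """Visualisation placeholder; returns user feedback for the next cycle."""
    return {"refine": result > threshold}

def interactive_cycle(sources, threshold):
    """One pass of the gather -> process -> present loop."""
    result = process(gather(sources))
    feedback = present(result, threshold)
    return result, feedback

result, feedback = interactive_cycle([lambda: 1, lambda: 3], threshold=1.5)
print(result, feedback)  # 2.0 {'refine': True}
```

In the real applications the feedback would steer the next round of data selection and simulation, which is what distinguishes the interactive, near-real-time tasks from batch processing.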
Why Interactive Computing?
– Goal: from data, via information, to knowledge => planning and management
– Complexity: huge data sets, complex processes
– Approach: parametric exploration and sensitivity analyses
  • combine raw (sensory) data with simulation
  • person in the loop: sensory interaction, intelligent short-cuts
Common issues of applications
– Inherently distributed applications profit from the Grid approach
– All tasks require high performance & MPI
  • 1.1 and 1.2: interactive, near-real-time
  • 1.3 and 1.4: high throughput
– Data mining: 1.3 and 1.4
– Data discovery: 1.2 and 1.4
Complementarity with the DataGrid HEP application package:
• CrossGrid will develop the interactive end-user application for physics analysis, making use of the products of the non-interactive simulation and data-processing stages of DataGrid that precede it
• Beyond the file-level service offered by DataGrid, CrossGrid will offer an object-level service to optimise the use of distributed databases; two possible implementations (to be tested in running experiments):
  – a three-tier model accessing an OODBMS or O/R DBMS
  – a more HEP-specific solution such as ROOT
• User friendly thanks to dedicated portal tools
Distributed Data Analysis in HEP
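The object-level, three-tier access model mentioned above can be illustrated with a minimal sketch. All class and method names here are hypothetical; a real deployment would sit on an OODBMS, an O/R DBMS or ROOT rather than in-memory dictionaries:

```python
class ObjectStore:
    """Tier 3: the database holding event objects (illustrative)."""
    def __init__(self, events):
        self._events = events          # event id -> attribute dict
    def fetch(self, event_id):
        return self._events[event_id]

class AnalysisService:
    """Tier 2: mediates object-level requests, shipping only the
    matching objects to the client rather than whole files."""
    def __init__(self, store):
        self._store = store
    def select(self, predicate, ids):
        return [e for e in (self._store.fetch(i) for i in ids)
                if predicate(e)]

class AnalysisClient:
    """Tier 1: the physicist's interactive analysis session."""
    def __init__(self, service):
        self._service = service
    def high_energy_events(self, ids, threshold):
        return self._service.select(
            lambda e: e["energy"] > threshold, ids)

store = ObjectStore({1: {"energy": 40.0}, 2: {"energy": 120.0}})
client = AnalysisClient(AnalysisService(store))
print(client.high_energy_events([1, 2], 100.0))  # [{'energy': 120.0}]
```

The point of the object-level service is visible even in this toy: the selection runs next to the data, so only the surviving objects cross the network, instead of entire files as in a file-level service.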
• Several challenging points:
  – access to large distributed databases in the Grid
  – development of distributed data-mining techniques
  – definition of a layered application structure
  – integration of user-friendly interactive access
• Focus on LHC experiments (ALICE, ATLAS, CMS and LHCb)
Distributed Data Analysis in HEP
Objectives

• specify
• develop
• integrate
• test

tools that facilitate the development and tuning of parallel, distributed, high-performance and high-throughput computing applications on Grid infrastructures
WP2 - Grid Application Programming Environments
Six Tasks in WP2
2.0 Co-ordination and Management
2.1 Tools requirement definition
2.2 MPI code debugging and verification
2.3 Metrics and benchmarks
2.4 Interactive and semiautomatic performance evaluation tools
2.5 Integration, testing and refinement
WP2 - Grid Application Programming Environments
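As an illustration of the kind of static or trace-based check an MPI verification tool (Task 2.2) might perform, the following sketch pairs send and receive events from a hypothetical execution trace and reports unmatched messages. The trace format and the function name are invented for illustration; a real tool would work on MPI calls and match by source, tag and communicator:

```python
from collections import Counter

def unmatched_messages(trace):
    """trace: list of ("send" | "recv", src, dst, tag) events.
    Returns the (src, dst, tag) keys whose sends and receives
    do not balance -- a symptom of message leaks or deadlock."""
    pending = Counter()
    for op, src, dst, tag in trace:
        pending[(src, dst, tag)] += 1 if op == "send" else -1
    return {k: v for k, v in pending.items() if v != 0}

trace = [
    ("send", 0, 1, 7),
    ("recv", 0, 1, 7),
    ("send", 1, 0, 9),   # never received: flagged by the checker
]
print(unmatched_messages(trace))  # {(1, 0, 9): 1}
```

A positive count marks a message that was sent but never received; a negative count marks a receive with no matching send, the pattern behind many MPI deadlocks.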
WP2 - Components and relations to other WPs
[Diagram: the application source code — from WP1 applications running on the WP4 testbed — feeds MPI verification (2.2) and benchmarks (2.3); performance analysis (2.4) combines performance measurement, visualization and automatic analysis with an analytical model, drawing on Grid monitoring (3.3).]
WP3 Objectives
• Tools for the development of interactive compute- and data-intensive applications
• To provide user-friendly Grid environments
• To simplify application and Grid access by supporting the end user
• To achieve a reasonable trade-off between resource usage efficiency and application speedup
• To support management issues while accessing resources
[Diagram: WP3 components — Portals and Roaming Access (3.1), Grid Resource Management (3.2), Grid Monitoring (3.3), Optimisation of Data Access (3.4), Tests and Integration (3.5) — serving WP1 applications and end users, with links to WP1, WP2 and WP5, the WP4 testbed, and the performance evaluation tools (2.4).]
Testbed Organisation (WP4)
– Testbed setup and incremental evolution• from several local testbeds to fully integrated one
– Integration with DataGrid• common design, environment for HEP applications
– Infrastructure support• flexible fabric management tools and network support
– Verification and quality control• reliability of the middleware and network infrastructure
Partners in WP4
WP4 led by CSIC (Spain)

CrossGrid WP4 - International Testbed Organisation

Testbed sites: AuTh Thessaloniki; UvA Amsterdam; FZK Karlsruhe; TCD Dublin; UAB Barcelona; LIP Lisbon; CSIC Valencia; CSIC Madrid; USC Santiago; CSIC Santander; DEMO Athens; UCY Nikosia; CYFRONET Cracow; II SAS Bratislava; PSNC Poznan; ICM & IPJ Warsaw
Tasks in WP4
4.0 Coordination and management
(task leader: J. Marco, CSIC, Santander)
– coordination with WP1, 2, 3
– collaborative tools (web + videoconferencing + repository)
– Integration Team

4.1 Testbed setup & incremental evolution (task leader: R. Marco, CSIC, Santander)
– define installation
– deploy testbed releases
– trace security issues

WP4 - International Testbed Organisation

Testbed site responsibles:
– CYFRONET (Krakow): A. Ozieblo
– ICM (Warsaw): W. Wislicki
– IPJ (Warsaw): K. Nawrocki
– UvA (Amsterdam): D. van Albada
– FZK (Karlsruhe): M. Kunze
– II SAS (Bratislava): J. Astalos
– PSNC (Poznan): P. Wolniewicz
– UCY (Cyprus): M. Dikaiakos
– TCD (Dublin): B. Coghlan
– CSIC (Santander/Valencia): S. Gonzalez
– UAB (Barcelona): G. Merino
– USC (Santiago): A. Gomez
– UAM (Madrid): J. del Peso
– Demo (Athens): C. Markou
– AuTh (Thessaloniki): D. Sampsonidis
– LIP (Lisbon): J. Martins
Tasks in WP4
4.2 Integration with DataGrid (task leader: M. Kunze, FZK)
– coordination of testbed setup
– exchange of knowledge
– participation in WP meetings

4.3 Infrastructure support (task leader: J. Salt, CSIC, Valencia)
– fabric management
– HelpDesk
– provide Installation Kit
– network support

4.4 Verification & quality control (task leader: J. Gomes, LIP)
– feedback
– improve stability of the testbed
WP4 - International Testbed Organisation
Technical Coordination
– Merging of requirements
– Specification and refinement of the CrossGrid architecture (protocols, APIs; HLA, CCA ...)
– Establishing standard operational procedures
  • repository access procedures
  • problem reporting mechanism
  • mechanism for handling change requests
  • release preparation procedure
– Specification of the structure of deliverables
– Approach: rapid prototyping and iterative engineering
Project Phases
M 1 - 3: requirements definition and merging
M 4 - 12: first development phase: design, 1st prototypes, refinement of requirements
M 13 - 24: second development phase: integration of components, 2nd prototypes
M 25 - 32: third development phase: complete integration, final code versions
M 33 - 36: final phase: demonstration and documentation
Clustering with Other Projects
– Objective: exchange of
  • information
  • software components
– Our partners
  • DATAGRID
  • DATATAG
  • GRIDLAB
  • EUROGRID and GRIP
– GRIDSTART
– Participation in GGF
Expected Results of the CrossGrid
– Grid enabled interactive applications
– Elaborated methodology
– Generic application architecture
– New programming tools
– New Grid services
– Extension of the Grid in Europe and to new virtual organisations
Dissemination & Exploitation
– Methods & software developed will be available to the scientific community
– By each collaboration partner:
  • topical conferences, GGF, national Grid initiatives
  • MSc and PhD theses, lectures on Grid technology
– Centralised:
  • CrossGrid vortal
  • workshops, seminars, user/focus groups
  • newsletter, brochures
  • industrial deployment
Overall Links between WPs and Tasks

[Diagram: WP1 (1.0 Coordination; 1.1-1.4 Applications), WP2 (2.0 Coordination; 2.1 Requirements; 2.2-2.4 Tools; 2.5 Tests), WP3 (3.0 Coordination; 3.1-3.4 Services; 3.5 Tests), WP4 (4.0 Coordination; 4.1, 4.3, 4.4 Testbeds; 4.2 Integration with DataGrid), WP5 (5.1 Coordination & Management; 5.2 Architecture Team; 5.3 Dissemination & Exploitation), with external links to GGF, DataGrid and other Grid projects.]