ICALEPCS 2005 10/13/2005 M. Greenwald
Visions for Data Management and Remote Collaboration on ITER
M. Greenwald, D. Schissel, J. Burruss, T. Fredian, J. Lister, J. Stillerman (MIT, GA, CRPP)
Presented by Martin Greenwald
MIT – Plasma Science & Fusion Center
CRPP, Lausanne, 2005
ITER is Clearly the Next Big Thing in Magnetic Fusion Research
• It will be the largest and most expensive scientific instrument ever built for fusion research
• Built and operated as an international collaboration
• To ensure its scientific productivity, systems for data management and remote collaboration must be done well.
What Challenges Will ITER Present?
• Fusion experiments require extensive “near” real-time data visualization and analysis in support of between-shot decision making.
– For ITER, shots are ~400 seconds each, maybe 1 hour apart; 2,000 per year for 15 years
– Average cost per shot is very high (order $1M)
• Today, teams of ~30-100 work together closely during operation of experiments.
• Real-time remote participation is standard operating procedure.
Challenges: Experimental Fusion Science is a Demanding Real-Time Activity
• Run-time goals:
– Optimize fusion performance
– Ensure conditions are fully documented before moving on
• Drives need to assimilate, analyze and visualize large quantity of data between shots.
Challenge: Long Pulse Length
• Requires concurrent writing, reading, analysis
– (We don’t think this will be too hard)
• Data sets will be larger than they are today
– Perhaps 1 TB per shot, > 1 PB per year
– (We think this will be manageable when needed)
• More challenging – integration across time scales
• Data will span a range > 10⁹ in significant time scales
– From the fluctuation time scale to the pulse length
• Will require efficient tools
– To browse very long records
– To locate and describe specific events or intervals
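As a rough illustration of the browsing problem (not ITER software), here is a minimal sketch of a reader that returns a decimated view of a very long record at a requested resolution. The record layout and `browse` function are invented for illustration; a real system would likely store pre-computed multi-resolution summaries rather than decimating on the fly.

```python
# Hypothetical sketch: browsing a very long record at reduced resolution.
# The "record" here is an in-memory list standing in for a long signal.

def browse(record, t_start, t_end, max_points):
    """Return at most max_points samples covering [t_start, t_end).

    Decimates by simple striding; a production system might instead
    keep multi-resolution summaries alongside the raw data.
    """
    window = [(t, v) for t, v in record if t_start <= t < t_end]
    if len(window) <= max_points:
        return window
    stride = -(-len(window) // max_points)  # ceiling division
    return window[::stride]

# Toy record: one sample per "millisecond" over a long pulse.
record = [(t, t % 7) for t in range(100_000)]
view = browse(record, 10_000, 90_000, 1000)
```

The point of the sketch is only that a browse request is bounded by `max_points`, so the cost of looking at an interval is independent of the pulse length.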
Challenge: Long Life of Project
• 10 years construction; 15+ years operation
• Systems must adapt to decades of information technology evolution
– Software, protocols, hardware will all change.
– Think back 25 years!
• We need to anticipate a complete changeover in workforce.
• Backward compatibility must be maintained
Challenges: International, Remote Participation
• Scientists will want to participate in experiments from their home institutions dispersed around the world.
– View and analyze data during operations
– Manage ITER diagnostics
– Lead experimental sessions
– Participate in international task forces
• Collaborations span many administrative domains (more on this later)
• Cyber-security must be maintained; plant security must be inviolable.
We Are Beginning the Dialogue About How to Proceed
• This is not yet an “official” ITER activity
• What follows is our vision for data management and remote participation systems
• Opinions expressed here are the authors’ alone.
Strategy: Design, Prototype and Demo
• With 10 years before first operation, it is too early to choose specific implementations – software or hardware
• Begin now on enduring features
– Define requirements, scope of effort, approach
– Decide on general principles and features of architecture
• Within 2 years: start on prototypes, as part of conceptual design
• Within 4 years: demonstrate:
– Test, especially on current facilities
– Simulation codes could provide testbed for long-pulse features
• In 6 years: proven implementations expanded and elaborated to meet requirements
General Features
• Extensible, flexible, scalable
– We won’t be able to predict all future needs
– Capable of continuous and incremental improvement
– Requires robust underlying abstraction
• Data accessible from wide range of languages, software frameworks and hardware platforms
– The international collaboration will be heterogeneous
• Built-in security
– Must protect plant without endangering science mission
Proposed Top Level Data Architecture
[Diagram: data acquisition and control systems connect, through a service-oriented API, to a relational database (contains data searchable by their contents) and a main repository (contains multi-dimensional data indexed by their independent parameters), which in turn serve analysis and visualization applications.]
Data System – Contents & Structure
• Provide coherent, complete, integrated, self-descriptive view of all data through simple interfaces.
– All raw, processed, analyzed data; configuration, geometry, calibrations, data acquisition setup, code parameters, labels, comments…
– No data in applications or private files
• Metadata stored for each data element
• Logical relationships and associations among data elements are made explicit by structure (probably multiple hierarchies).
• Data structures can be traversed independent of reading data.
• Powerful data directories (10⁵ – 10⁶ named items)
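The structure described above can be sketched as a tree whose nodes carry metadata and defer the bulk data read. This is a minimal illustration of the idea, not the MDSplus or ITER data model; the `Node` class and the shot/signal names are invented.

```python
# Hypothetical sketch of a self-descriptive data tree: every node carries
# metadata, and the structure can be traversed without reading bulk data.

class Node:
    def __init__(self, name, meta=None, loader=None):
        self.name = name
        self.meta = meta or {}   # units, comments, provenance, ...
        self._loader = loader    # deferred read of the bulk data
        self.children = {}

    def add(self, child):
        self.children[child.name] = child
        return child

    def walk(self, prefix=""):
        """Yield full paths; touches structure only, never bulk data."""
        path = f"{prefix}/{self.name}"
        yield path
        for c in self.children.values():
            yield from c.walk(path)

    def data(self):
        return self._loader() if self._loader else None

root = Node("shot_12345")
diag = root.add(Node("magnetics", meta={"comment": "raw coil signals"}))
diag.add(Node("bp01", meta={"units": "T/s"}, loader=lambda: [0.1, 0.2, 0.3]))

paths = list(root.walk())
```

Because `walk` never calls the loaders, directory-style operations over 10⁵–10⁶ named items stay cheap regardless of how large the underlying signals are.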
Data System – Service Oriented Architectures
• Service oriented
– Loosely coupled applications, running on distributed servers
– Interfaces simple and generic, implementation details hidden
– Transparency and ease-of-use are crucial
– Applications specify what is to be done, not how
– Data structures shared
– Service discovery supported
• We’re already headed in this direction
– MDSplus
– TRANSP “FusionGrid” service
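The “specify what, not how” idea can be shown in miniature: clients name a service and a request, and a registry resolves it to whichever implementation is available. All names here are invented for illustration; this is not an actual MDSplus or FusionGrid API.

```python
# Hypothetical sketch of service-oriented access: clients describe *what*
# they want; a registry resolves the request to whichever server
# implements it. Names are invented, not an actual ITER API.

services = {}

def register(name, fn):
    services[name] = fn  # service discovery, in miniature

def call(name, **request):
    if name not in services:
        raise LookupError(f"no service {name!r}")
    return services[name](**request)

# One possible backend; the caller never sees how it works.
def fetch_signal(shot, signal):
    fake_store = {(12345, "ip"): [1.0, 1.5, 1.4]}
    return fake_store[(shot, signal)]

register("fetch_signal", fetch_signal)
result = call("fetch_signal", shot=12345, signal="ip")
```

The backend could be swapped for a remote server without the caller changing, which is the loose coupling the slide is arguing for.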
Resources Accessible Via Network Services
• Resources = computers, codes, data, analysis routines, visualization tools, experimental status and operations
• Access is stressed rather than portability
• Users are shielded from implementation details.
• Transparency and ease-of-use are crucial elements
• Shared toolset enables collaboration between sites and across sub-disciplines.
• Knowledge of relevant physics is still required, of course.
Case Study: TRANSP CODE – “FusionGrid Service”
• Over 20 years of development by PPPL (+ others)
– >1,000,000 lines of Fortran, C, C++
– >3,000 program modules
– 10,000s of lines of supporting script code: Perl, Python, shell script
– Used internationally for most tokamak experiments
– Local maintenance has been very manpower intensive
• Now fully integrated with MDSplus data system
– Standard data “trees” developed for MIT, GA, PPPL, JET, …
– Standard toolset for run preparation, visualization
Production TRANSP System
[Diagram: a user (anywhere) connects to the experimental site and to the TRANSP servers at PPPL; an authorization server may be consulted at each stage.]
TRANSP Service Has Had Immediate Payoff
• Remote sites avoid costly installation and code maintenance
– Was ~1 man-month per year, per site
• Users always have access to latest code version
• PPPL maintains and supports a single production version of code on a well-characterized platform
– Improves user support at reduced costs
• Users have access to high-performance production system
– 16-processor Linux cluster
– Dedicated PBS queue
– Tools developed for job submission, cancellation, monitoring
TRANSP Jobs Tracked by Fusion Grid Monitor
• Java Servlet derived from GA Data Analysis Monitor
• User presented with dynamic web display
• Sits on top of relational database – can feed accounting database
• Provides information on state of jobs, servers, logs, etc.
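The monitor’s core pattern can be sketched simply: job events accumulate in a log, and the current state of every job is derived from it, much as a relational table underneath a display would be. The event names and log layout below are invented for illustration, not the actual Fusion Grid Monitor schema.

```python
# Hypothetical sketch of job-state tracking of the kind a monitor might
# sit on: events are appended to a log, and the current state of every
# job is derived from it (as a relational table would hold it).

events = [
    ("job-1", "submitted"), ("job-2", "submitted"),
    ("job-1", "running"),   ("job-1", "finished"),
]

def current_states(log):
    states = {}
    for job, state in log:  # last event for a job wins
        states[job] = state
    return states

table = current_states(events)
```

Deriving state from an append-only event log is also what makes it easy to feed an accounting database later, since the full history is retained.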
Usage Continues to Grow
• As of July: 5,800 runs from 10 different experiments
This Is Related To, But Not The Same As “Grid” Computing
• Traditional computational grids
– Arrays of heterogeneous servers
– Machines can arrive and leave
– Adaptive discovery – problems find resources
– Workload balancing and cycle scavenging
– Bandwidth diversity – not all machines are well connected
• This model is not especially suited to our problems
• Instead, we are aiming to move high-performance distributed computing out onto the wide-area network
• We are not focusing on “traditional” grid applications – cycle scavenging and dynamically configured server farms
Putting Distributed Computing Applications out on the Wide Area Network Presents Significant Challenges
• Crosses administrative boundaries
– Increased concerns and complexity for security model (authentication, authorization, resource management)
• Resources not owned by a single project or program
– Distributed control of resources by owners is essential
• Needs for end-to-end application performance and problem resolution
– Resource monitoring, management and troubleshooting are not straightforward
• Higher latency challenges network throughput, interactivity
• People are not in one place for easy communication
Data Driven Applications
• Data driven
– All parameters in database, not embedded in applications
– Data structure, relations, associations are data themselves
– Processing “tree” maintained as data
• Enable “generic” applications
– Processing can be sensitive to data relationships and to position of data within structure
– Scope of applications can grow without modifying code
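A minimal sketch of the data-driven idea: the processing “tree” lives in data, so one generic engine can evaluate any chain of steps, and new chains can be added without touching the code. The node layout and signal names below are invented for illustration.

```python
# Hypothetical sketch of a data-driven application: the processing
# "tree" is itself data, evaluated by a generic engine. Adding a new
# processing chain means adding entries, not modifying the engine.

processing_tree = {
    "density_raw": {"input": None,          "op": None},
    "density_cal": {"input": "density_raw", "op": lambda x: [2.0 * v for v in x]},
    "density_avg": {"input": "density_cal", "op": lambda x: sum(x) / len(x)},
}

def evaluate(tree, name, raw):
    """Generic engine: walks the declared inputs back to the raw data."""
    node = tree[name]
    if node["input"] is None:
        return raw
    return node["op"](evaluate(tree, node["input"], raw))

avg = evaluate(processing_tree, "density_avg", [1.0, 2.0, 3.0])
```

In a real system the tree entries would come out of the database rather than a Python literal, but the engine stays the same either way, which is the point of the slide.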
Data System - Higher Level Organization
• All part of database
• All indexed into main data repository
• High level physics analysis
– Scalar and profile databases
• Event identification, logging & tracking
• Integrated and shared workspaces
– Electronic logbook
– Summaries and status (runs, task groups, campaigns)
– Presentations & publications
Remote Participation: Creating an Environment Which Is Equally Productive for Local and Remote Researchers
• Transparent remote access to data
– Secure and timely
• Real-time info
– Machine status
– Shot cycle
– Data acquisition and analysis monitoring
– Announcements
• Shared applications
• Provision for ad hoc interpersonal communications
• Provision for structured communications
Remote is Easy, Distributed is Hard
• Informal interactions in the control room are a crucial part of the research
• We must extend this into remote and distributed operations
• Fully engaging remote participants is challenging
• (Fortunately, we already have substantial experience)
Remote Participation
Ad Hoc Communications
• Exploit convergence of telecom and internet technologies (e.g. SIP)
• Deploy integrated communications
– Voice
– Video
– Messaging
– Data streaming
• Advanced directory services
– Identification, location, scheduling
– “Presence”
– Support for “roles”
Cyber-Security Needs to Be Built In
• Must protect plant without endangering science mission
• Employ best features of identity-based, application and perimeter security models
• Strong authentication mechanisms
• Single sign-on – a must if there are many distributed services
• Distributed authorization and resource management
– Allows stakeholders to control their own resources.
– Facility owners can protect computers, data and experiments
– Code developers can control intellectual property
– Fair use of shared resources can be demonstrated and controlled.
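The distributed-authorization point above can be sketched in a few lines: each resource owner maintains their own access list, and a request is granted only if that owner allows it. The resource and user names are invented; a real deployment would use certificate-based identities and signed policies rather than an in-memory dictionary.

```python
# Hypothetical sketch of distributed authorization: each resource owner
# maintains their own grants, and a request succeeds only if the owner
# of that specific resource allows the (user, action) pair.

acl = {
    # resource -> owner-maintained set of (user, action) grants
    "plant_control":  {("operator_a", "read")},
    "transp_service": {("scientist_b", "run"), ("scientist_b", "read")},
}

def authorize(resource, user, action):
    """Deny by default; only the owner's own grant list can allow."""
    grants = acl.get(resource, set())
    return (user, action) in grants

ok = authorize("transp_service", "scientist_b", "run")
denied = authorize("plant_control", "scientist_b", "read")
```

Deny-by-default per resource is what lets facility owners, code developers and service operators each protect their own stake without a central gatekeeper.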
Summary
• While ITER operation is many years in the future, work on the systems for data management and remote participation should begin now
We propose:
• All data into a single, coherent, self-descriptive structure
• Service-oriented access
• All applications data driven
• Remote participation fully supported
– Transparent, secure, timely remote data access
– Support for ad hoc interpersonal communications
– Shared applications enabled