DOECGF 2011: LLNL Site Report
Integrated Computing & Communications Department, Livermore Computing, Information Management & Graphics
Richard Cook



TRANSCRIPT

Page 1:

DOECGF 2011: LLNL Site Report
Integrated Computing & Communications Department

Livermore Computing Information Management & Graphics

Richard Cook

Page 2:

Where is Graphics Expertise at LLNL?

At the High-Performance Computing Center in the Information Management and Graphics (IMG) Group

In the Applications, Simulations, and Quality Division, in the Data Group (under Eric Brugger)

At the Center for Applied Scientific Computing in the Data Analysis Group (under Daniel Laney)

Page 3:

Who are our users and what are their requirements?

Who?
Physicists, chemists, biologists
Computer scientists
HPC users – novice to expert

Major science applications: ALE3d, ddcMD, pf3d, Miranda, CPMD, Qbox, MDCask, ParaDiS, climate, bio, …

What?
Need to analyze data, often interactively.
Need to visualize data for scientific insight, publication, and presentation, sometimes collaborating with vis specialists.
Need to interact with all or part of the data.
For the largest data sets, zero-copy access is a must and data management is key.

Page 4:

Information Management & Graphics Group

Data exploration of distributed, complex, unique data sets

Develops and supports tools for managing, visualizing, analyzing, and presenting scientific data

Multi-TB datasets with 10s of billions of zones (see the back-of-envelope sketch after this list)
1000s of files/timestep and 100s of timesteps
Using vis servers with high I/O rates

Graphics consulting and video production
Presentation support for PowerWalls
Visualization hardware procurement and support
Data management with Hopper file manager
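As a rough sense of that data scale, the back-of-envelope sketch below multiplies out assumed (not actual LLNL) numbers: 20 billion zones, 4 double-precision variables per zone, 200 timesteps, 2000 files per timestep.

# Back-of-envelope estimate of dataset scale; every input is an illustrative
# assumption chosen to match the "10s of billions of zones", "1000s of
# files/timestep", "100s of timesteps" bullets above, not a measured figure.
zones = 20e9               # tens of billions of zones
variables_per_zone = 4     # assumed number of fields written per zone
bytes_per_value = 8        # double precision
timesteps = 200            # hundreds of timesteps
files_per_timestep = 2000  # thousands of files per timestep

bytes_per_timestep = zones * variables_per_zone * bytes_per_value
total_bytes = bytes_per_timestep * timesteps
total_files = files_per_timestep * timesteps

print(f"per timestep: {bytes_per_timestep / 1e12:.2f} TB")  # 0.64 TB
print(f"whole run:    {total_bytes / 1e12:.0f} TB")         # 128 TB
print(f"files:        {total_files:,}")                     # 400,000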

Page 5:

LC Graphics Consulting

Support and maintain graphics packages
Tools and Libraries: VisIt, EnSight, AVS/Express, Tecplot, …
Everyday Utilities: ImageMagick, xv, xmgrace, gnuplot, …

Custom development and consulting
Custom scripts and compiled code to automate tool use (see the VisIt scripting sketch below)
Data conversion
Tool support in parallel environments
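A typical automation script in this vein drives VisIt's Python command-line interface in batch (e.g. visit -cli -nowin -s render_frames.py). The sketch below is illustrative only; the database path and variable name are hypothetical placeholders.

# Illustrative VisIt CLI script: open a time-varying database, add a
# Pseudocolor plot, and save one PNG per timestep. "run042.visit" and
# "density" are hypothetical placeholders, not a real dataset.
OpenDatabase("run042.visit")        # index file listing the per-timestep files
AddPlot("Pseudocolor", "density")
DrawPlots()

s = SaveWindowAttributes()
s.format = s.PNG
s.fileName = "frame_"
SetSaveWindowAttributes(s)

for state in range(TimeSliderGetNStates()):
    SetTimeSliderState(state)
    SaveWindow()                    # writes frame_0000.png, frame_0001.png, ...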

Page 6:

Visualization Theater Software Development

Blockbuster Movie Player
Distributed parallel design with streaming I/O system
Effective cache and I/O utilization for high frame rates (illustrated by the read-ahead sketch below)
Sidecar provides “movie cues” and remote control
Cross platform (Linux, Windows*, Mac OS) -- works on vis clusters and desktops with the same copy of the movie
Technologies: C++, Qt, OpenGL, MPI, pthreads
Blockbuster is open source: http://www.sourceforge.net/projects/blockbuster
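Blockbuster itself is C++/MPI, but the read-ahead idea behind its cache and I/O utilization can be sketched in a few lines of Python; this is only an illustration of the pattern, with a hypothetical load_frame reader, not Blockbuster's actual design.

# Sketch of a read-ahead frame cache: a worker thread keeps a bounded queue
# of frames decoded ahead of playback so disk I/O overlaps with display.
# Illustrative only; not Blockbuster code.
import threading
import queue

CACHE_DEPTH = 16   # how many frames to keep decoded ahead of playback

def load_frame(index):
    # Hypothetical reader: fetch and decode frame `index` from disk.
    with open(f"frame_{index:04d}.raw", "rb") as f:
        return f.read()

def prefetch(n_frames, cache):
    for i in range(n_frames):
        cache.put(load_frame(i))    # blocks once CACHE_DEPTH frames are queued

def play(n_frames, display):
    cache = queue.Queue(maxsize=CACHE_DEPTH)
    threading.Thread(target=prefetch, args=(n_frames, cache), daemon=True).start()
    for _ in range(n_frames):
        display(cache.get())        # frame is normally already in memory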

Telepath Session Manager
Simple interface to hide the complexity of an environment that includes vis servers, displays, switches, and software application layers including resource managers and X servers
Orchestrates vis sessions: allocates nodes, configures services, sets up environments, and manages the session (see the allocation sketch below)
Technologies: Python, Tkinter. Interfaces to DMX -- Distributed Multihead X (X Server of Servers) -- and SLURM
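In spirit, a session manager like this automates steps a user would otherwise type by hand. The sketch below shows only the node-allocation and launch step using standard SLURM commands; the partition name and launched application are assumptions, and the real tool coordinates much more (DMX, displays, environment setup).

# Minimal sketch of "allocate vis nodes, then start the tool" with SLURM.
# The partition name and application are hypothetical examples.
import subprocess

def launch_vis_session(app="visit", nodes=2, partition="pvis", minutes=120):
    # salloc obtains an interactive allocation and runs the given command
    # inside it; srun then starts the tool's processes on the allocated nodes.
    cmd = [
        "salloc",
        f"--nodes={nodes}",
        f"--partition={partition}",
        f"--time={minutes}",
        "srun", f"--ntasks={nodes}", app,
    ]
    return subprocess.run(cmd, check=True)

if __name__ == "__main__":
    launch_vis_session()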

Page 7:

Visualization Hardware Usage Model

When possible or necessary, users run vis tools on the HPC platforms where the data was generated and use “vis nodes”.

When advantageous or necessary, users run vis tools on interactive vis servers that share a file system with the compute platforms.

Small display clusters drive PowerWalls, removing the need for large vis servers to drive displays.

Some applications require graphics cards in vis servers; others benefit from high bandwidth to the file system and a large memory footprint without needing graphics cards.

Vis clusters are typically a fraction of the size of the other clusters.

See the next slide for a description of LC's current hardware.

Page 8:

Current LLNL Visualization Hardware

Two large servers, several small display clusters, all running Linux with same admin support as compute clusters. Four machine rooms.

Users access clusters over the network using diskless workstations on SCF and various workstations on the OCF. No RGB to offices.

[Diagram: Open and Secure Computing Facility vis resources -- vis servers Graph and Edge with Lustre and NAS file systems, and PowerWall display clusters Grant, Boole, Moebius, Stagg, and Thriller driving PowerWalls, including the TSF and B451 PowerWalls.]

Page 9:

Specs for two production vis servers and five wall drivers

PowerWall clusters are 6-10 nodes and all have Opteron CPUs with IB interconnect, with Quadro FX5600 for walls with stereo and FX 4600s for the two without stereo. The newest have 8 GB RAM per node and older have 4 GB RAM.

Page 10:

HPC at LLNL - Livermore Computing

Production vis systems: Edge and Graph

Page 11:

Our petascale driver - Sequoia

•We have a multi-PetaFlop machine arriving and going into production in 2012

•Furthers our ability to simulate complex phenomena “just like God does it – one atom at a time”

•Uncertainty quantification

•3D confirmations of 2D discoveries for more predictive models

•The success of Sequoia will depend on an enormous off-machine petascale storage infrastructure

Page 12:

More Information/Contacts

General LLNL Computing Information
http://computing.llnl.gov

DNT - B Division's Data and Vis Group
Eric Brugger: [email protected]

Information Management and Graphics Group
Becky Springmeyer: [email protected]
Rich Cook: [email protected]
https://computing.llnl.gov/vis

CASC Data Analysis Group
Daniel Laney: [email protected]
https://computation.llnl.gov/casc/

Scientific Data Management Project
Jeff Long: [email protected]
https://computing.llnl.gov/resources/hopper/