
Page 1: Isabelle d'Ast - CERFACS

Large Eddy Simulation of two-phase flow combustion in gas turbines:

Predicting extreme combustion processes in real engines

Isabelle d’Ast - CERFACS

Page 2: Isabelle d'Ast - CERFACS

CERFACS

• Around 130 people in Toulouse (South-West France).
• Goal: develop and improve numerical simulation methods and advanced scientific computing on real applications (CFD, climate, electromagnetism).
• 30 to 40 A-class publications per year (international journals).
• 10 PhDs per year.
• Collaborations with industry and academia (France, Germany, Spain, USA, Italy).

Page 3: Isabelle d'Ast - CERFACS

Scientific problem: the prediction of extinction in an industrial burner

• In an industrial burner, the fuel mass flux can vary much faster than the air mass flux during fast changes in operating conditions; the resulting engine extinction must be avoided.

• Extinction is an unsteady phenomenon that has been widely studied in academic configurations, but very little in complex industrial burners.

Purpose of the project: perform Large Eddy Simulation (LES) in an industrial combustion chamber to understand the mechanisms of extinction and to evaluate the capability of LES to predict extinction limits accurately.

[Figure: annular combustion chamber and one combustion-chamber sector, showing air + fuel injection, air injection and outlet; unstructured mesh of ~9M cells.]

Page 4: Isabelle d'Ast - CERFACS

B) Science Lesson

• The AVBP code:
  – 3D compressible Navier-Stokes equations
    • Two-phase flows (Eulerian or Lagrangian liquid phase)
    • Reactive flows
    • Real thermodynamics (perfect and transcritical gases)
    • Moving meshes (piston engines)
  – Large Eddy Simulation (see the formula below)
    • Scales larger than the mesh cells are fully resolved
    • Scales smaller than the mesh cells are modelled via a sub-grid stress tensor model (Smagorinsky / WALE)
  – Unstructured grids: important for complex geometries
  – Explicit schemes (Taylor-Galerkin / Lax-Wendroff)

• Contact: [email protected] for details
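
For reference, the classical Smagorinsky closure (WALE is an alternative with better near-wall behaviour) models the sub-grid viscosity from the resolved strain rate; this is the textbook form, not necessarily AVBP's exact implementation:

    \nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad
    |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
    \bar{S}_{ij} = \tfrac{1}{2}\left(\partial_j \bar{u}_i + \partial_i \bar{u}_j\right)

with \Delta the filter width (tied to the local cell size) and C_s \approx 0.18; the modelled deviatoric sub-grid stress is then \tau_{ij}^{sgs} = -2\,\nu_t\,\bar{S}_{ij}.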

Page 5: Isabelle d'Ast - CERFACS

C) Parallel Programming Model

• MPI code written in Fortran 77.
• Library requirements:
  – ParMETIS (partitioning)
  – (p)HDF5 (I/O)
  – LAPACK
• The code runs on any x86 / POWER / SPARC machine on the market so far (BullX, Blue Gene/P, Cray XT5, POWER6, SGI Altix).
• Currently migrating to Fortran 90 (validation underway).
• Introduction of OpenMP and OmpSs for fine-grain threading in progress (see the sketch below).
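
A minimal sketch of the flat-MPI + OpenMP hybrid structure being introduced (loop and variable names are illustrative assumptions, not AVBP source):

    ! Hybrid MPI + OpenMP skeleton: one MPI rank per partition,
    ! OpenMP threads over that partition's cells.
    program hybrid_sketch
      use mpi
      implicit none
      integer, parameter :: ncell = 10000
      integer :: ierr, provided, rank, i
      real(8) :: residual(ncell)

      ! FUNNELED: only the master thread makes MPI calls.
      call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

      ! Fine-grain threading over the cells of this rank's partition.
      !$omp parallel do
      do i = 1, ncell
         residual(i) = 0.0d0      ! cell-local work goes here
      end do
      !$omp end parallel do

      if (rank == 0) print *, 'hybrid iteration done'
      call MPI_Finalize(ierr)
    end program hybrid_sketch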

Page 6: Isabelle d'Ast - CERFACS

E) I/O Patterns and Strategy

• Two categories of I/O:
  – Small binary files (one file written by the master for progress monitoring).
  – Large HDF5 files, single file only:
    • Written by the master (HDF5 standard).
    • Collective pHDF5 file under study (parallel I/O handled via pHDF5 only); performance is erratic and variable.
    • Multiple master-slave I/O (a subset of ranks has I/O responsibilities), one file per master (1/100 of the core count in files) under study; sketch-code performance is encouraging (see the sketch below).
  – Average HDF5 file size: 2 GB, depending on the mesh size (max today 15 GB per file, one file per dumped time step, usually 200 for a converged simulation). Binary files: 100 MB.

• Input I/O: 2 large HDF5 files.
  – Sequential master read.
  – Buffered alltoall / alltoallv under validation.
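
A minimal sketch of the multiple master-slave I/O idea, assuming one I/O master per group of 100 ranks (the group size and all names are illustrative):

    ! Elect one I/O master per group of 100 ranks; each group gets its
    ! own communicator for gathering data to its master.
    program io_masters
      use mpi
      implicit none
      integer :: ierr, world_rank, color, io_comm, io_rank

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, world_rank, ierr)

      color = world_rank / 100          ! ranks 0-99 -> group 0, etc.
      call MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, io_comm, ierr)
      call MPI_Comm_rank(io_comm, io_rank, ierr)

      if (io_rank == 0) then
         ! This rank is its group's I/O master: gather the group's
         ! data (e.g. MPI_Gatherv on io_comm) and write one HDF5 file
         ! per group, i.e. ~1/100 of the core count in files.
      end if

      call MPI_Comm_free(io_comm, ierr)
      call MPI_Finalize(ierr)
    end program io_masters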

Page 7: Isabelle d'Ast - CERFACS

F) Visualization and Analysis

• Visualization uses 2 methods:
  – Translation of selected datasets to EnSight/FieldView/Tecplot formats; relies on the parallelisation of these tools.
  – XDMF format: XML indexing of the HDF5 file and direct read via ParaView/EnSight (no translation).

• 'Advanced user' methods available (not tested on INTREPID yet):
  – Single HDF5 file written in block format (per partition).
  – Indexed via XDMF.
  – Read and post-processed in parallel directly via pvserver (ParaView) on the cluster, automatically generating JPEG images.

• Full migration to XDMF planned for the 3rd quarter of 2012. Generalisation of pvserver.

Page 8: Isabelle d'Ast - CERFACS

G) Performance

• Performance analysis with:
  – Scalasca
  – TAU
  – Paraver / Dyninst

• Current bottlenecks:
  – Master/slave scheme.
  – Extreme usage of allreduce: over 100 calls per iteration.
  – Hand-coded collective communications instead of alltoall / broadcast.
  – Cache misses: adaptive cache loop not implemented for nodes (only for cells).
  – Pure MPI implementation (instead of hybrid mode).

• Current status and future plans for improving performance:
  – Parallelisation of the preprocessing task: sketch done, 2 h -> 3 min, max memory 15 GB versus 50 MB. Replacement of the current master/slave scheme in the 3rd quarter of 2012.
  – Buffered MPI_Reduce switch underway on the current version: 20% gain per iteration at 1024 cores (see the sketch below). Strong-scaling performance to be studied.
  – OpenMP / OmpSs implementation to reduce communications.
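
A minimal sketch of the buffered-reduce idea behind the 20% gain: pack the scalars that were previously reduced one by one into a single array and issue one collective call per iteration (counts and names are illustrative assumptions):

    ! Replace ~100 scalar allreduces per iteration with one packed call.
    program packed_allreduce
      use mpi
      implicit none
      integer, parameter :: nred = 100
      integer :: ierr
      real(8) :: local(nred), global(nred)

      call MPI_Init(ierr)

      ! Fill one buffer with every quantity to be summed this iteration
      ! (e.g. partial residuals, mass fluxes)...
      local = 1.0d0

      ! ...and reduce them all in a single collective call.
      call MPI_Allreduce(local, global, nred, MPI_DOUBLE_PRECISION, &
                         MPI_SUM, MPI_COMM_WORLD, ierr)

      call MPI_Finalize(ierr)
    end program packed_allreduce

Quantities needing a different operation (e.g. a global minimum time step) would go in a second packed call with MPI_MIN.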

Page 9: Isabelle d'Ast - CERFACS

H) Tools

• How do you debug your code?
  – Compiler flags: "-g -fbounds-check -Wuninitialized -O -ftrapv -fimplicit-none -fno-automatic -Wunused" (see the example below)
  – gdb / DDT

• Current status and future plans for improved tool integration and support:
  – Debug verbosity level included in the next code release.
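
As an illustration, the class of bug the bounds-checking flag traps at run time; this snippet is hypothetical, not AVBP source, and the flags shown on the slide appear to be gfortran's:

    ! Compiled with -g -fbounds-check, the out-of-bounds write below
    ! aborts with a precise array-name/line diagnostic instead of
    ! silently corrupting memory.
    program oob
      implicit none
      integer :: i
      real(8) :: u(10)
      do i = 1, 11          ! bug: loop runs one past the array bound
         u(i) = 0.0d0
      end do
      print *, u(1)
    end program oob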

Page 10: Isabelle d'Ast - CERFACS

I) Status and Scalability

• How does your application scale now?
  – 92% scalability up to 8 racks on BG/P (dual mode).

• Target 128k cores by the end of 2012:
  – Currently 60% on 64k cores.

Page 11: Isabelle d'Ast - CERFACS

I) Status and Scalability

• What are our top pains?
  – 1. Scalable I/O.
  – 2. Blocking allreduce.
  – 3. Scalable post-processing.

• What did you change to achieve current scalability?
  – Buffered asynchronous partition communications (Irecv/Isend); previously one Irecv/Send per dataset (see the sketch below).

• Current status and future plans for improving scalability:
  – Switch to ParMETIS 4 for improved performance and larger datasets.
  – PT-Scotch? (Zoltan?)
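
A minimal sketch of the buffering change: instead of one Irecv/Send per dataset, pack all datasets bound for a neighbouring partition into one buffer and post a single Irecv/Isend pair per neighbour (ring topology, sizes and names are illustrative assumptions):

    ! One packed non-blocking exchange per neighbour partition.
    program packed_exchange
      use mpi
      implicit none
      integer, parameter :: ndata = 5, nface = 1000
      integer :: ierr, rank, nprocs, right, left, reqs(2)
      integer :: stats(MPI_STATUS_SIZE, 2)
      real(8) :: sendbuf(ndata*nface), recvbuf(ndata*nface)

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

      right = mod(rank + 1, nprocs)            ! send to the right
      left  = mod(rank - 1 + nprocs, nprocs)   ! receive from the left

      sendbuf = real(rank, 8)   ! all ndata datasets packed into one buffer

      ! Single Irecv/Isend pair per neighbour; local work can overlap here.
      call MPI_Irecv(recvbuf, ndata*nface, MPI_DOUBLE_PRECISION, &
                     left, 0, MPI_COMM_WORLD, reqs(1), ierr)
      call MPI_Isend(sendbuf, ndata*nface, MPI_DOUBLE_PRECISION, &
                     right, 0, MPI_COMM_WORLD, reqs(2), ierr)
      call MPI_Waitall(2, reqs, stats, ierr)

      call MPI_Finalize(ierr)
    end program packed_exchange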

Page 12: Isabelle d'Ast - CERFACS

J) Roadmap

• Where will your science take you over the next 2 years?
  – Currently we are able to predict instabilities, extinction and ignition in gas turbines.
  – Switch to larger problems and safety concerns: fires in buildings (submitted for consideration for 2013).

• What do you hope to learn / discover?
  – Understanding flame propagation inside buildings/furnaces will greatly improve prediction models, and safety standards can be adapted accordingly.

• Even larger datasets: in 2013 the expected I/O is 40 GB per snapshot.
• Need to improve the workflow (fully parallel post-processing): scalable I/O.