
Some Coding Structure in WRF

Software Architecture

Fortran 90 with derived-type structures and dynamic memory allocation

Modules

Run-time configurable

Hierarchical Software Design

Features

Multi-level parallel decomposition

shared-memory, distributed-memory, and hybrid

Model domains are decomposed for parallelism on two levels

Patch: section of model domain allocated to a distributed memory node

Single version of code for efficient execution on: distributed-memory, shared-memory, and hybrid-memory systems

Logical domain: 1 patch, divided into multiple tiles


Tile: section of a patch allocated to a shared-memory processor within a node; this is also the scope of a model-layer subroutine. Distributed-memory parallelism is over patches; shared-memory parallelism is over tiles within patches.

Domain size: ids, ide, jds, jde, kds, kde

Memory size: ims, ime, jms, jme, kms, kme

Tile size: its, ite, jts, jte, kts, kte

Three Sets of Dimensions

template for model layer subroutine

SUBROUTINE model ( &
                   arg1, arg2, arg3, … , argn,      &
                   ids, ide, jds, jde, kds, kde,    &  ! Domain dims
                   ims, ime, jms, jme, kms, kme,    &  ! Memory dims
                   its, ite, jts, jte, kts, kte )      ! Tile dims

IMPLICIT NONE

! Define Arguments (S and I1) data
REAL, DIMENSION (ims:ime,kms:kme,jms:jme) :: arg1, . . .
REAL, DIMENSION (ims:ime,jms:jme)         :: arg7, . . .
. . .
! Define Local Data (I2)
REAL, DIMENSION (its:ite,kts:kte,jts:jte) :: loc1, . . .
. . .
! Executable code; loops run over tile dimensions
DO j = jts, jte
   DO k = kts, kte
      DO i = MAX(its,ids), MIN(ite,ide)
         loc1(i,k,j) = arg1(i,k,j) + …
      END DO
   END DO
END DO

• Domain dimensions: size of the logical domain; used for boundary tests, etc.

• Memory dimensions: used to dimension dummy arguments; do not use for local arrays

• Tile dimensions: local loop ranges; local array dimensions

[Figure: a logical patch on one distributed-memory node; the memory extent (ims:ime, jms:jme) includes the halo around the patch, and each tile (its:ite, jts:jte) lies within the patch]
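The mediation layer, not the model-layer subroutine, owns this decomposition: it loops over the tiles of its patch and passes the three index sets into each call. A rough sketch of that calling pattern (the grid%num_tiles and i_start/i_end/j_start/j_end names follow common WRF conventions but are assumptions here, not quoted from the source):

! Sketch only: shared-memory (OpenMP) parallelism over the tiles of this patch;
! distributed-memory parallelism is over patches and is handled outside this loop.
!$OMP PARALLEL DO PRIVATE ( ij )
DO ij = 1, grid%num_tiles
   CALL model ( arg1, arg2,                          &
                ids, ide, jds, jde, kds, kde,        &   ! domain dims
                ims, ime, jms, jme, kms, kme,        &   ! memory dims
                grid%i_start(ij), grid%i_end(ij),    &   ! tile dims for this tile
                grid%j_start(ij), grid%j_end(ij),    &
                kts, kte )
END DO
!$OMP END PARALLEL DO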

Distributed Memory Communications

(dyn_eh/module_diffusion.F)

SUBROUTINE horizontal_diffusion_s (tendency, rr, var, . . .
. . .
DO j = jts,jte
   DO k = kts,ktf
      DO i = its,ite
         mrdx=msft(i,j)*rdx
         mrdy=msft(i,j)*rdy
         tendency(i,k,j)=tendency(i,k,j)-                        &
              (mrdx*0.5*((rr(i+1,k,j)+rr(i,k,j))*H1(i+1,k,j)-    &
                         (rr(i-1,k,j)+rr(i,k,j))*H1(i  ,k,j))+   &
               mrdy*0.5*((rr(i,k,j+1)+rr(i,k,j))*H2(i,k,j+1)-    &
                         (rr(i,k,j-1)+rr(i,k,j))*H2(i,k,j  ))-   &
               msft(i,j)*(H1avg(i,k+1,j)-H1avg(i,k,j)+           &
                          H2avg(i,k+1,j)-H2avg(i,k,j)            &
                         )/dzetaw(k)                             &
              )
      ENDDO
   ENDDO
ENDDO
. . .

Example code fragment that requires communication between patches

Note the tell-tale +1 and –1 expressions in indices for rr and H1 arrays on right-hand side of assignment. These are horizontal data dependencies because the indexed operands may lie in the patch of a neighboring processor. That neighbor’s updates to that element of the array won’t be seen on this processor. We have to communicate.
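In WRF the communication itself is not coded in the model layer: a halo exchange defined in the Registry (such as the HALO_EH_A entry shown later) is invoked from the mediation layer before the model-layer call. A minimal sketch of that pattern, assuming the usual Registry-generated include-file convention rather than quoting the solve code verbatim:

! Sketch only: update the halo region of the patch, then call the model-layer routine
#ifdef DM_PARALLEL
#  include "HALO_EH_A.inc"     ! Registry-generated halo exchange
#endif
CALL horizontal_diffusion_s ( tendency, rr, var, ...,          &
                              ids, ide, jds, jde, kds, kde,    &
                              ims, ime, jms, jme, kms, kme,    &
                              its, ite, jts, jte, kts, kte )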


Data structure

WRF Data Taxonomy

State data

Intermediate data type 1 (L1)

Intermediate data type 2 (L2)

State data

Persist for the duration of a domain

Represented as fields in the domain data structure

Arrays are represented as dynamically allocated pointer arrays in the domain data structure

Declared in Registry using the state keyword

Always memory dimensioned; always thread shared

Only state arrays can be subject to I/O and interprocessor communication
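To picture what this means in code, here is a minimal sketch of the idea, assuming a heavily simplified domain type; the real declarations in frame/module_domain.F are Registry-generated and much larger:

TYPE domain
   ! each Registry "state" array becomes a dynamically allocated,
   ! memory-dimensioned pointer field of the domain structure
   REAL, POINTER :: ru_1(:,:,:)     ! time level 1
   REAL, POINTER :: ru_2(:,:,:)     ! time level 2
   ! ... many more state fields ...
END TYPE domain

! allocated over memory (patch plus halo) dimensions, e.g.
ALLOCATE( grid%ru_1(ims:ime, kms:kme, jms:jme) )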


L1 Data

Data that persists for the duration of one time step on a domain and is then released

Declared in Registry using the i1 keyword

Typically automatic storage (program stack) in the solve routine

Typical usage is for tendency or temporary arrays in the solver

Always memory dimensioned and thread shared

Typically not communicated or subject to I/O

L2 Data

L2 data are local arrays that exist only in model-layer subroutines, and only for the duration of the call to the subroutine

L2 data are not declared in the Registry, never communicated, and never input or output

L2 data are tile dimensioned and thread local; over-dimensioning within the routine for redundant computation is allowed, is the responsibility of the model-layer programmer, and should always be limited to thread-local data


The Registry

"Active data-dictionary” for managing WRF data structures Database describing attributes of model state, intermediate, and

configuration data Dimensionality, number of time levels, staggering Association with physics I/O classification (history, initial, restart, boundary) Communication points and patterns Configuration lists (e.g. namelists)

A program for auto-generating sections of WRF from the database (about 570 Registry entries, about 30 thousand lines of automatically generated WRF code):
  Allocation statements for state data and I1 data
  Argument lists for driver-layer/mediation-layer interfaces
  Interprocessor communications: halo and periodic boundary updates, transposes
  Code for defining and managing run-time configuration information
  Code for forcing, feedback, and interpolation of nest data

Automates time-consuming, repetitive, error-prone programming
Insulates programmers and code from package dependencies
Allows rapid development
Documents the data

Currently implemented as a text file: Registry/Registry. Types of entry:

State – Describes state variables and arrays in the domain structure

Dimspec – Describes dimensions that are used to define arrays in the model

L1 – Describes local variables and arrays in solve

Typedef – Describes derived types that are subtypes of the domain structure

Rconfig – Describes a configuration (e.g. namelist) variable or array

Package – Describes attributes of a package (e.g. physics)

Halo – Describes halo update interprocessor communications

Period – Describes communications for periodic boundary updates

Xpose – Describes communications for parallel matrix transposes

Registry database

State/L1 Entry (Registry)

# Type  Sym  Dims  Use  Tlev  Stag  IO  Dname  Descrip
# definition of a 3D, two-time-level, staggered state array

state real ru  ikj dyn_em 2 X irh "RHO_U" "X WIND COMPONENT"
i1    real ww1 ikj dyn_em 1 Z

Elements
  Entry: the keyword "state"
  Type: the type of the state variable or array (real, double, integer, logical, character, or derived)
  Sym: the symbolic name of the variable or array
  Dims: a string denoting the dimensionality of the array, or a hyphen (-)
  Use: a string denoting association with a solver or 4D scalar array, or a hyphen
  NumTLev: an integer indicating the number of time levels (for arrays) or a hyphen (for variables)
  Stagger: a string indicating the staggered dimensions of the variable (X, Y, Z, or hyphen)
  IO: a string indicating whether and how the variable is subject to I/O and nesting
  DName: metadata name for the variable
  Descrip: metadata description of the variable

Example

State Entry– different output times

Example

In Registry:
state real ru ikj dyn_em 2 X irh01 "RHO_U" "XX"

In namelist.input

auxhist1_outname      = 'pm_output_d<domain>_<date>'
auxhist1_interval     = 10000, 10000, 5
frames_per_auxhist1   = 30, 30, 24
auxhist1_begin_y      = 0
auxhist1_begin_mo     = 0
auxhist1_begin_d      = 1
auxhist1_begin_h      = 0
auxhist1_begin_m      = 0
auxhist1_begin_s      = 0
io_form_auxhist1      = 2,

This gives a five-minute output interval (the interval values are in minutes) on domain 3, starting after one day of simulation.

Dimspec entry

Elements
  Entry: the keyword "dimspec"
  DimName: the name of the dimension (single character)
  Order: the order of the dimension in the WRF framework (1, 2, 3, or '-')
  HowDefined: specification of how the range of the dimension is defined
  CoordAxis: which axis the dimension corresponds to, if any (X, Y, Z, or C)
  DatName: metadata name of the dimension

Example

#<Table>  <Dim>  <Order>  <How defined>             <Coord-axis>  <DatName>
dimspec    i      1        standard_domain           x             west_east
dimspec    j      3        standard_domain           y             south_north
dimspec    k      2        standard_domain           z             bottom_top
dimspec    l      2        namelist=num_soil_layers  z             soil_layers

Package Entry (Registry)

Elements
  Entry: the keyword "package"
  Package name: the name of the package, e.g. "kesslerscheme"
  Associated rconfig choice: the name of an rconfig variable and the value of that variable that chooses this package
  Package state vars: unused at present; specify a hyphen (-)
  Associated 4D scalars: the names of 4D scalar arrays and the fields within those arrays that this package uses

Example

# specification of microphysics options
package passiveqv     mp_physics==0 - moist:qv
package kesslerscheme mp_physics==1 - moist:qv,qc,qr
package linscheme     mp_physics==2 - moist:qv,qc,qr,qi,qs,qg
package ncepcloud3    mp_physics==3 - moist:qv,qc,qr
package ncepcloud5    mp_physics==4 - moist:qv,qc,qr,qi,qs

# namelist entry that controls microphysics option
rconfig integer mp_physics namelist,namelist_04 max_domains 0

Comm entries: halo and period

Elements
  Entry: the keyword "halo" or "period"
  Commname: name of the communication operation
  Description: defines the halo or period operation
    For halo:   npts:f1,f2,...[;npts:f1,f2,...]*
    For period: width:f1,f2,...[;width:f1,f2,...]*

Example

# first exchange in eh solver
halo HALO_EH_A dyn_em 24:u_2,v_2,ru_1,ru_2,rv_1,rv_2,w_2,t_2;4:pp,pip

# a periodic boundary update
period PERIOD_EH_A dyn_em 2:u_1,u_2,ru_1,ru_2,v_1,v_2,rv_1,rv_2,rw_1,rw_2

State arrays, used to store arrays of 3D fields such as moisture tracers, chemical species, ensemble members, etc.

The first 3 indices are over grid dimensions; the last dimension is the tracer index

Each tracer is declared in the Registry as a separate state array, but with f and optionally also t modifiers to the dimension field of the entry

The field is then added to the 4D array whose name is given by the use field of the Registry entry

4D Tracer Arrays


state real qv ikjft moist 2 - \
  i01rhusdf=(bdy_interp:dt,rqv_b,rqv_bt) "QVAPOR" "Water vapor mixing ratio" "kg kg-1"

state real qc ikjft moist 2 - \
  i01rhusdf=(bdy_interp:dt,rqc_b,rqc_bt) "QCLOUD" "Cloud water mixing ratio" "kg kg-1"

state real qr ikjft moist 2 - \
  i01rhusdf=(bdy_interp:dt,rqr_b,rqr_bt) "QRAIN" "Rain water mixing ratio" "kg kg-1"

state real qi ikjft moist 2 - \
  i01rhusdf=(bdy_interp:dt,rqi_b,rqi_bt) "QICE" "Ice mixing ratio" "kg kg-1"

state real qs ikjft moist 2 - \
  i01rhusdf=(bdy_interp:dt,rqs_b,rqs_bt) "QSNOW" "Snow mixing ratio" "kg kg-1"

state real qg ikjft moist 2 - \
  i01rhusdf=(bdy_interp:dt,rqg_b,rqg_bt) "QGRAUP" "Graupel mixing ratio" "kg kg-1"

The extent of the last dimension of a tracer array is from PARAM_FIRST_SCALAR to num_tracername

Both are defined in the Registry-generated frame/module_state_description.F

PARAM_FIRST_SCALAR is a defined constant (2)

num_tracername is computed at run time in set_scalar_indices_from_config (module_configure)

The calculation is based on which of the tracer arrays are associated with which specific packages in the Registry, and on which of those packages is active at run time (namelist.input)

4D Tracer Arrays

Each tracer index (e.g. P_QV) into the 4D array is also defined in module_state_description and set in set_scalar_indices_from_config

Code should always test that a tracer index is greater than or equal to PARAM_FIRST_SCALAR before referencing the tracer (inactive tracers have an index of 1)

Loops over tracer indices should always run from PARAM_FIRST_SCALAR to num_tracername (see the example below)

4D Tracer Arrays

• 4D moisture field: moist_1(i,k,j,?)

  ? = P_QV (water vapor)
      P_QC (cloud water)
      P_QI (cloud ice)
      P_QR (rain)
      P_QS (snow)
      P_QG (graupel)

4D Tracer Array Example

IF ( qi_flag ) THEN    ! the memory for cloud ice is allocated
   . . .
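A minimal sketch of the recommended pattern (the qten tendency array and loop variable n are illustrative; declarations are omitted):

! Reference a tracer only if its package is active
IF ( P_QI .GE. PARAM_FIRST_SCALAR ) THEN
   ! safe to use moist(:,:,:,P_QI) here
ENDIF

! Loop over all active species: start at PARAM_FIRST_SCALAR, never at 1
DO n = PARAM_FIRST_SCALAR, num_moist
   DO j = jts, jte
      DO k = kts, kte
         DO i = its, ite
            qten(i,k,j,n) = 0.0
         END DO
      END DO
   END DO
END DO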

Registry

Directory Structure

WRF Mass-Coordinate Model Integration Procedure (WRFV3/dyn_em/solve_em.F)

Begin time step
  Runge-Kutta loop (steps 1, 2, and 3)
    (i) advection, p-grad, buoyancy using …
    (ii) if step 1: physics (first_rh_part1/part2), save for steps 2 and 3
    (iii) assemble dynamics tendencies
    Acoustic step loop
      (i) advance U, V, then w, …
      (ii) time-average U, V
    End acoustic loop
    Advance scalars using time-averaged U, V
  End Runge-Kutta loop
  Other physics (currently microphysics)
End time step

[Call tree: WRF → solve_em (part 1) → phy_prep / moist_physics_prep, then the physics drivers (radiation_driver, surface_driver, pbl_driver, cumulus_driver, microphysics_driver) and DYNAMICS; phy_init is called at INIT time]

Calculate decoupled variable tendencies
Update decoupled variables directly

Physics
• Cumulus parameterization
• Boundary layer parameterization
• Radiation parameterization
• Microphysics

solve_em

Physics_driver:

SELECT CASE (CHOICE)
   CASE ( NOPHY )
   CASE ( SCHEME1 )
      CALL XXX
   CASE ( SCHEME2 )
      CALL YYY
   CASE DEFAULT
END SELECT

Individual physics scheme ( XXX )

Physics three-level structure

Rules for WRF physics

Naming rules

  module_yy_xxx.F (module)
    xxx = individual scheme (e.g., module_cu_grell.F)
    yy  = ra for radiation, bl for PBL, sf for surface and surface layer, cu for cumulus, mp for microphysics

  RXXYYTEN (tendencies)
    XX = variable (th, u, v, qv, qc, …)
    YY = ra for radiation, bl for PBL, cu for cumulus
    e.g., RTHBLTEN

Coding rules (later)

  One scheme, one module

WRF Physics Features

REAL , PARAMETER :: r_d = 287.
REAL , PARAMETER :: r_v = 461.6
REAL , PARAMETER :: cp  = 7.*r_d/2.
REAL , PARAMETER :: cv  = cp-r_d
. . .

• Unified global constants (module_model_constants.F)

• Vertical index (kms is at the bottom)

• Unified common calculations (saturation mixing ratio)

Implement a new physics scheme

Prepare your code
Create a new module
Declare new variables and a new package in the Registry
Modify solve_em.F
Do initialization
Modify namelist
Modify phy_prep
Modify cumulus_driver.F (using cumulus parameterization as the example)
Modify Makefile
Compile and test
Modify calculate_phy_ten
Modify phy_cu_ten (module_physics_addtendc.F)


Prepare your code

1. F90

a) Replace continuation characters in column 6 with the F90 continuation '&' at the end of the previous line

   F77:
         Subroutine kessler(QV, T,
        +     its,ite,jts,jte,kts,kte,
        +     ims,ime,jms,jme,kms,kme,
        +     ids,ide,jds,jde,kds,kde)

   F90:
         Subroutine kessler(QV, T, . . . &
                    its,ite,jts,jte,kts,kte, &
                    ims,ime,jms,jme,kms,kme, &
                    ids,ide,jds,jde,kds,kde )

b) Replace the 'C' in column 1 for comments with '!'

   F77:
   c This is a test

   F90:
   ! This is a test

2. No common block

   common /var1/ T, q, p, …

   WRF:

   Subroutine sub(T, q, p, ….)
      real, intent(out), &
           dimension(ims:ime,kms:kme,jms:jme) :: T, q, p

3. Use "implicit none"
4. Use "intent"

   Subroutine sub(T, q, p, ….)
      implicit none
      real, intent(out), &
           dimension(ims:ime,kms:kme,jms:jme) :: T
      real, intent(in), &
           dimension(ims:ime,kms:kme,jms:jme) :: q
      real, intent(inout), &
           dimension(ims:ime,kms:kme,jms:jme) :: p

5. Variable dimensions

   Subroutine sub(global, ….)
      implicit none
      real, intent(out), &
           dimension(ims:ime,kms:kme,jms:jme) :: global

      real, dimension(its:ite,kts:kte,jts:jte) :: local

6. Do loops

   do j = jts, jte
      do k = kts, kte
         do i = its, ite
            ...
         enddo
      enddo
   enddo

Create a new moduleCreate a new module

ex,ex, module_cu_exp.F (plug in all your codes)module_cu_exp.F (plug in all your codes)

Go Registry and declare a new package Go Registry and declare a new package (and new variables) (and new variables) (WRFV1/Registry)(WRFV1/Registry)

package expscheme cu_physics==3 - -package expscheme cu_physics==3 - -

package kfscheme cu_physics==1 - - package kfscheme cu_physics==1 - -

package bmjscheme cu_physics==2 - - package bmjscheme cu_physics==2 - -

Implement a new physics scheme
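As a reference point, a minimal skeleton of such a module, following the naming and coding rules above (the subroutine name, argument list, and placeholder tendencies are illustrative only):

MODULE module_cu_exp
CONTAINS
   SUBROUTINE expcps ( rthcuten, rqvcuten,                      &
                       ids, ide, jds, jde, kds, kde,            &
                       ims, ime, jms, jme, kms, kme,            &
                       its, ite, jts, jte, kts, kte )
      IMPLICIT NONE
      INTEGER, INTENT(IN) :: ids, ide, jds, jde, kds, kde,      &
                             ims, ime, jms, jme, kms, kme,      &
                             its, ite, jts, jte, kts, kte
      REAL, INTENT(INOUT), DIMENSION(ims:ime,kms:kme,jms:jme) :: rthcuten, rqvcuten
      INTEGER :: i, k, j
      ! scheme computations run over tile dimensions only
      DO j = jts, jte
         DO k = kts, kte
            DO i = its, ite
               rthcuten(i,k,j) = 0.0   ! placeholder for the real tendency calculation
               rqvcuten(i,k,j) = 0.0
            END DO
         END DO
      END DO
   END SUBROUTINE expcps
END MODULE module_cu_exp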

Cloud microphysics

  package kesslerscheme mp_physics==1 - moist:qv,qc,qr
  package linscheme     mp_physics==2 - moist:qv,qc,qr,qi,qs,qg
  package wsm3          mp_physics==3 - moist:qv,qc,qr
  package wsm5          mp_physics==4 - moist:qv,qc,qr,qi,qs

Implement a new physics scheme

Modify namelist.input and assign cu_physics = 3
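For example, the relevant record in namelist.input might look like the sketch below (one value per domain; only the new entry is shown, the rest of the &physics block is omitted):

&physics
 cu_physics = 3, 3, 3,     ! 3 selects the new expscheme package on each domain
/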

Implement a new physics scheme


[Init call tree: WRF → start_domain_em (dyn_em/start_em.F) → phy_init → cu_init (phys/module_physics_init.F)]

Pass new variables down to cu_init

phys/module_physics_init.F


Go to subroutine cu_init: include the new module and create a new SELECT CASE

phys/module_physics_init.F


Subroutine cu_init(…)
   USE module_cu_kf
   USE module_cu_bmj
   USE module_cu_exp               ! new: put expinit into module_cu_exp.F
   .
   cps_select: SELECT CASE(config_flags%cu_physics)
      CASE (KFSCHEME)
         CALL kfinit(...)
      CASE (BMJSCHEME)
         CALL bmjinit(...)
      CASE (EXPSCHEME)              ! match the package name in the Registry
         CALL expinit(...)
      CASE DEFAULT
   END SELECT cps_select

phys/module_physics_init.F


• Calculate required variables
• Convert variables from the C grid to the A grid

phy_prep/moist_physics_prep
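For orientation, the C-grid to A-grid conversion for the horizontal winds amounts to averaging the staggered values onto the mass (theta) points; a sketch of the idea, not the exact phy_prep code:

DO j = jts, jte
   DO k = kts, kte
      DO i = its, ite
         u_phy(i,k,j) = 0.5 * ( u(i,k,j) + u(i+1,k,j) )   ! staggered u averaged to the theta point
         v_phy(i,k,j) = 0.5 * ( v(i,k,j) + v(i,k,j+1) )   ! staggered v averaged to the theta point
      END DO
   END DO
END DO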


Go to the physics driver (cumulus_driver.F): include the new module and create a new SELECT CASE in the driver

Check the available variables in the drivers (variables are explained inside the drivers)

cumulus_driver.F

MODULE module_cumulus_driver
CONTAINS
   Subroutine cumulus_driver (….)
   .
   .
!-- RQICUTEN   Qi tendency due to cumulus scheme precipitation (kg/kg/s)
!-- RAINC      accumulated total cumulus scheme precipitation (mm)
!-- RAINCV     cumulus scheme precipitation (mm)
!-- NCA        counter of the cloud relaxation time in KF cumulus scheme (integer)
!-- u_phy      u-velocity interpolated to theta points (m/s)
!-- v_phy      v-velocity interpolated to theta points (m/s)
!-- th_phy     potential temperature (K)
!-- t_phy      temperature (K)
!-- w          vertical velocity (m/s)
!-- moist      moisture array (4D - last index is species) (kg/kg)
!-- dz8w       dz between full levels (m)
!-- p8w        pressure at full levels (Pa)

Module_cumulus_driver.F

MODULE module_cumulus_driver
CONTAINS
   Subroutine cumulus_driver
   .
   USE module_cu_kf
   USE module_cu_bmj
   USE module_cu_exp               ! new: put EXPCPS into module_cu_exp.F

   cps_select: SELECT CASE(config_flags%cu_physics)
      CASE (KFSCHEME)
         CALL KFCPS(...)
      CASE (BMJSCHEME)
         CALL BMJCPS(...)
      CASE (EXPSCHEME)              ! match the package name in the Registry
         CALL EXPCPS(...)
      CASE DEFAULT
   END SELECT cps_select

Module_cumulus_driver.F

[Call tree: solve_em → phy_prep → cumulus_driver (expcps) → DYNAMICS in part 1; calculate_phy_tend and update_phy_ten → phy_cu_ten apply the physics tendencies in part 2, with message passing in between as needed]

.
CASE (BMJSCHEME)
.
CASE (EXPSCHEME)
   CALL add_a2a  (rt_tendf, RTHCUTEN, … )
   CALL add_a2c_u(ru_tendf, RUBLTEN,  … )
   CALL add_a2c_v(rv_tendf, RVBLTEN,  … )
   .
   if ( QI_FLAG ) &
      CALL add_a2a( moist_tendf(ims,kms,jms,P_QV), RQVCUTEN, ..   &
                    ids, ide, jds, jde, kds, kde,                 &
                    ims, ime, jms, jme, kms, kme,                 &
                    its, ite, jts, jte, kts, kte )
   .

Subroutine phy_cu_ten (… )

phys/module_physics_addtendc.F