Using Python to Solve Partial Differential Equations

This article describes two Python modules for solving partial differential equations (PDEs): PyCC is designed as a Matlab-like environment for writing algorithms for solving PDEs, and SyFi creates matrices based on symbolic mathematics, code generation, and the finite element method.

Kent-Andre Mardal, Ola Skavhaug, Glenn T. Lines, Gunnar A. Staff, and Åsmund Ødegård
Simula Research Laboratory

Our work at the Simula Research Laboratory mostly focuses on computational applications in life sciences. Usually, this involves fairly typical partial differential equations such as the incompressible Navier-Stokes equations, elasticity equations, and parabolic and elliptic PDEs, but these PDEs are typically coupled either with each other or with ordinary differential equations (ODEs). Hence, even though the PDEs themselves are reasonably well understood, the couplings between them make the problems we study quite challenging.

Our design goals are therefore threefold. First, we want to easily define systems of PDEs. Second, we want it to be easy to play with different solution algorithms for systems of coupled PDEs. Finally, we want to reuse existing software to avoid reinventing the wheel.

We use many good and mature libraries from the Web, including Dolfin (www.fenics.org/dolfin/), GiNaC (www.ginac.de), MayaVi (http://mayavi.sourceforge.net), NumPy (http://numpy.scipy.org), PETSc (www.mcs.anl.gov/petsc/), SciPy (www.scipy.org), Trilinos (http://software.sandia.gov/trilinos/), and VTK (www.vtk.org). In fact, we're mixing these libraries with our own packages:

• Famms (verification based on the method of manufactured solutions),
• Instant (www.fenics.org/instant; inlining of C++ in Python),
• PyCC (http://folk.uio.no/skavhaug/heart_simulations.html; the underlying framework for gluing components together),
• PySE (http://pyfdm.sf.net; a finite difference toolbox),
• Swiginac (http://swiginac.berlios.de/; a Python interface to the symbolic mathematics engine GiNaC), and
• SyFi (www.fenics.org/syfi/; a finite element toolbox).

Some of these packages are Python modules, whereas the others—thanks to Python's popularity in scientific computing—are equipped with Python interfaces. By using Python, we don't have to mix these packages at the C level, which is a huge advantage.

Solving Systems of PDEs

Currently, our most important application is in cardiac electrophysiology.1 The central model here is the bidomain model,2 which is a system of two PDEs with the following form:

\frac{\partial v}{\partial t} = \nabla \cdot (M_i \nabla v) + \nabla \cdot (M_i \nabla u) - I_{\mathrm{ion}}(v, s),  (1)

0 = \nabla \cdot (M_i \nabla v) + \nabla \cdot ((M_i + M_e) \nabla u).  (2)

(The domain here is the same for both PDEs—that is, the heart—but it has two potentials, intra- and extracellular, which live inside and outside heart cells.) The primary unknowns here are the transmembrane potential v and the extracellular potential u. The function I_ion(v, s) describes the flow of ions across the cell membrane and can be quite complicated; M_i represents intracellular conductivity, and M_e is extracellular. The second argument of I_ion(v, s) is generally a vector of variables, governed by a set of ODEs:

\frac{\partial s}{\partial t} = F(s, t).  (3)

Note that s = s(x), so the ODE system is defined in each point (you can find examples of cell models at www.cellml.org). Hence, we must solve the system for each computational node.

The bidomain formulation gives an accurate description of the myocardial tissue's electrical conduction. Coupled with realistic ODE models of the ionic current, we can study a large range of phenomena, including conduction abnormalities due to ischemia or channel myopathies (genetic defects), fibrillation and defibrillation, and drug intervention. Figure 1, for example, shows a geometric model of the human heart's ventricles. The color represents the transmembrane potential's magnitude; Figure 1a shows normal activation, and Figure 1b shows chaotic behavior (which corresponds to a fibrillatory heart with critically reduced pumping ability).

We solve the bidomain model in Equations 1 through 3 by using an operator-splitting approach, in which we first solve the ODE systems in each computational node at each time step before we solve the PDE system. Here's a simple Python script we use for solving this problem:

from dolfin import Mesh
from pycc.MatSparse import *
import numpy
from pycc import MatFac
from pycc import ConjGrad
from pycc.BlockMatrix import *
from pycc.Functions import *
from pycc.ODESystem import *
from pycc.CondGen import *
from pycc.IonicODEs import *

mesh = Mesh("Heart.xml.gz")
matfac = MatFac.MatrixFactory(mesh)
M = matfac.computeMassMatrix()

pc = PyCond("Heart.axis")
pc.setconductances(3.0e-3, 3e-4)
ct = ConductivityTensorFunction(pc.conductivity)
Ai = matfac.computeStiffnessMatrix(ct)

pc.setconductances(5.0e-3, 1.6e-3)
ct = ConductivityTensorFunction(pc.conductivity)
Aie = matfac.computeStiffnessMatrix(ct)

# Construct compound matrices
dt = 0.1
A = M + dt*Ai
B = dt*Ai
Bt = dt*Ai
C = dt*Aie

# Create the block system
AA = BlockMatrix((A, B), (Bt, C))
prec = DiagBlockMatrix((MLPrec(A), MLPrec(C)))

v = numpy.zeros(A.n, dtype='d') - 45.0
u = numpy.zeros(A.n, dtype='d')
x = BlockVector(v, u)

# Create one ODE system for each vertex
odesys = Courtemanche_ODESystem()
ode_solver = RKF32(odesys)
ionic = IonicODEs(A.n, ode_solver, odesys)

Figure 1. Human heart ventricles. This model compares (a) normal activity and (b) chaotic behavior.


ionic.setState(odesys.getDefaultInitialCondition())

# Solve
z = numpy.zeros((A.n,), dtype='d')
for i in xrange(0, 10):
    t = i*dt
    ionic.forward(x[0], t, dt)
    ConjGrad.precondconjgrad(prec, AA, x, BlockVector(M*x[0], z))

Although the code seems clean and simple, its efficiency is due to a powerful combination of C/C++/Fortran and Python. The script runs on desktop computers with meshes that have millions of nodes and can solve complete problems within minutes or hours. All computer-intensive calculations such as computing matrices, solving linear systems (via algebraic multigrid and the conjugate gradient method), and solving ODE systems are done efficiently in C or C++.
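To give a flavor of what a call like ConjGrad.precondconjgrad does, here is a minimal preconditioned conjugate gradient sketch in plain NumPy. It is an illustration only, under the simplifying assumption of dense matrices; the function name and arguments are ours, not PyCC's actual interface.

import numpy

def precondconjgrad(prec, A, x, b, tol=1.0e-8, maxiter=200):
    # Solve A x = b; prec is an (approximate) inverse applied by matrix-vector product.
    r = b - numpy.dot(A, x)
    z = numpy.dot(prec, r)
    p = z.copy()
    rz = numpy.dot(r, z)
    for _ in range(maxiter):
        Ap = numpy.dot(A, p)
        alpha = rz / numpy.dot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if numpy.linalg.norm(r) < tol:
            break
        z = numpy.dot(prec, r)
        rz_new = numpy.dot(r, z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x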

Creating Matrices for Systems of PDEs

We created the tool SyFi to define finite elements and variational forms, as well as generate C++ code for finite element computations. It uses the symbolic mathematics engine GiNaC and its Python interface Swiginac for all its basic mathematical operations. SyFi enables polynomial differentiation and integration on polygonal domains. Furthermore, it uses the computed expressions, such as entries in an element matrix, to generate C++ code.

The following example demonstrates how to compute an element matrix for the Jacobian of an incompressible power-law fluid's (nonlinear) stationary Navier-Stokes equations. Let

F_i = \int u^T \nabla u \cdot N_i + \mu(u)\, \nabla u : \nabla N_i \, dx,

where

u = \sum_k u_k N_k \quad \text{and} \quad \mu(u) = \|\nabla u\|^{2n}.

Then,

J_{ij} = \frac{\partial F_i}{\partial u_j} = \frac{\partial}{\partial u_j} \int u^T \nabla u \cdot N_i + \mu(u)\, \nabla u : \nabla N_i \, dx.  (4)

Here’s the corresponding Python code for com-puting Equation 4 and generating the C code:

from swiginac import *
from SyFi import *

def sum(u_char, fe):
    ujs = symbolic_matrix(1, fe.nbf(), u_char)
    u = 0
    for j in range(0, fe.nbf()):
        u += ujs.op(j)*fe.N(j)
    u = u.evalm()
    return u, ujs

nsd = cvar.nsd = 3
polygon = ReferenceTetrahedron()
fe = VectorCrouzeixRaviart(polygon, 1)
fe.set_size(nsd)  # size of vector
fe.compute_basis_functions()

# create sum u_i N_i
u, ujs = sum("u", fe)
n = symbol("n")
mu = pow(inner(grad(u), grad(u)), n)

for i in range(0, fe.nbf()):
    # nonlinear power-law diffusion term
    fi_diffusion = mu*inner(grad(u), grad(fe.N(i)))
    # nonlinear convection term
    uxgradu = (u.transpose()*grad(u)).evalm()
    fi_convection = inner(uxgradu, fe.N(i), True)
    fi = fi_diffusion + fi_convection
    Fi = polygon.integrate(fi)

    for j in range(0, fe.nbf()):
        # differentiate to get the Jacobian
        uj = ujs.op(j)
        Jij = diff(Fi, uj)
        print "J[%d,%d]=%s\n" % (i, j, Jij.printc())

Note that both the differentiation and the integration are performed symbolically, exactly as we would have done by hand. This naturally leads to quite efficient code compared to the traditional way of implementing such integrals—namely, as quadrature loops that involve the evaluation of basis functions, their derivatives, and so on. The printc function generates C++ code for the expressions; so far, we've used this system to generate roughly 60,000 lines of C++ code for computing various matrices based on various finite elements and variational forms.

To ease the integration of the generated C++ code in Python, we developed an inlining tool called Instant, which lets us generate code, generate the corresponding wrapper code, compile and link it to an extension module, and then import the module on the fly. The following code demonstrates Instant with a simple example, in which we compute

y(x) = \frac{\partial}{\partial x} \sin(\cos(x))

symbolically, generate the corresponding C++ code, and inline the expression in Python with x as a NumPy array:

import swiginac as S
from Instant import inline_with_numpy
import numpy as N

x = S.symbol("x")
xi = S.symbol("x[i]")
f = S.sin(S.cos(x))
dfdx = S.diff(f, x)
print dfdx

string = """
void func (int n, double* x, int m, double* y) {
  if ( n != m ) {
    printf("Both arrays should be of the same size!");
    return;
  }
  for (int i=0; i<n; i++) {
    y[i] = %s;
  }
}
""" % dfdx.subs( x == xi ).printc()
print string

func = inline_with_numpy(string, arrays=[['n', 'x'], ['m', 'y']])

x = N.arange(100.0)
y = N.zeros(100, dtype='d')
func(x, y)

print x
print y

We’ve shown that it’s possible tosolve real-life problems in a user-friendly environment by combin-ing Python’s high-level syntax with

the efficiency of compiled languages. However, thisapproach opens up many new possibilities for com-bining symbolic mathematics and code generation,which is a largely overlooked alternative to traditionalapproaches in finite element simulations. We’re cur-rently implementing fairly advanced finite elementmethods such as the mixed elasticity method,3 andwe also want to simulate human tissue and bloodwith the most realistic models available today.

Acknowledgments
Mardal is supported by the Research Council of Norway under the grant ES254277.

References
1. J. Sundnes et al., Computing the Electrical Activity in the Human Heart, Monographs in Computational Science and Engineering, Springer-Verlag, 2006.
2. C.S. Henriquez, "Simulating the Electrical Behavior of Cardiac Tissue Using the Bidomain Model," Crit. Rev. Biomedical Eng., vol. 21, no. 1, 1993, pp. 1–77.
3. D.N. Arnold, R.S. Falk, and R. Winther, "Finite Element Exterior Calculus, Homological Techniques, and Applications," Acta Numerica, 2006, pp. 1–155.

Kent-Andre Mardal is a postdoc at the Simula Research Laboratory. His research interests include high-level numerical programming and solution of PDEs. Mardal has a PhD in scientific computing from the University of Oslo. Contact him at [email protected].

Ola Skavhaug is a research scientist at the Simula Research Laboratory. His research interests include high-level numerical programming, PDEs, and code verification. Skavhaug has a PhD in scientific computing from the University of Oslo. Contact him at [email protected].

Glenn T. Lines is a research scientist at the Simula Research Laboratory. His research interests include PDEs and computer simulation of cardiac electrophysiology. Lines has a PhD in scientific computing from the University of Oslo. Contact him at [email protected].

Gunnar A. Staff is a consultant at Scandpower Petroleum Technology. His research interests include software for scientific computing and initial value problems. Staff has a PhD in scientific computing from the University of Oslo. Contact him at [email protected].

Åsmund Ødegård is an IT manager and part-time research scientist at the Simula Research Laboratory. His research interests include high-level languages in scientific computing, object-oriented numerics, PDEs, and high-level parallelism. Ødegård has a PhD in computational science from the University of Oslo. Contact him at [email protected].


Analysis of Functional Magnetic Resonance Imaging in Python

The authors describe a package for analyzing magnetic resonance imaging (MRI) and functional MRI (fMRI) data, which is part of the Neuroimaging in Python (NIPY) project. An international group of leading statisticians, physicists, programmers, and neuroimaging methodologists are developing NIPY for wider use.

K. Jarrod Millman, University of California, Berkeley
Matthew Brett, MRC Cognitive Brain Science Unit, Cambridge, UK

Magnetic resonance imaging (MRI) measures induced magnetic properties of tissue. It has long been the chosen technique for creating high-resolution anatomical images of the human brain. Over the past decade, a new technique called functional MRI (fMRI) has become a powerful and widely used method for studying human brain function. fMRI measures regional blood flow changes in the brain, which can help researchers identify the most active brain areas during mental tasks such as memory and language.

Functional MRI Analysis

The first step of an fMRI analysis—image reconstruction—takes raw data from the scanner and performs a highly customized inverse Fourier transform to create a time series of 3D functional images. A typical next step is to estimate the movement between scans via an automated image-matching algorithm and then use that estimate to remove artifacts due to motion. Researchers commonly relate the activity detected in the low-resolution functional images to a high-resolution structural image of the same subject. However, to compare between subjects, the functional or structural data must be warped to match some standard brain, a process that requires sophisticated models of brain anatomy. Finally, statistical techniques are used to determine which brain regions are related to certain tasks or activities.

Clearly, fMRI data analysis comes with several challenges. First, it has a wide variety of computationally intensive spatial and statistical processing steps. Thus, an analysis software package must cover the range from file system and network interaction through complex image processing to advanced statistical inference and 3D visualization. Second, such analysis involves a massive volume of data, often reaching several hundred gigabytes.

The most common software package in use today is SPM (www.fil.ion.ucl.ac.uk/spm/), which is written in Matlab. Other common packages include FSL (www.fmrib.ox.ac.uk/fsl/) and AFNI (http://afni.nimh.nih.gov/afni/), written in C and C++, respectively. Although these packages are well designed and supported, an increasing number of imaging scientists have found that Matlab is not powerful enough to support the industrial level of code size and complexity that neuroimaging requires, and that C and C++ are too low-level for rapid development.



Neuroimaging and Python

Python has become a natural choice for neuroimaging because it is high level, object-oriented, and interactive. All these features make it particularly well suited to scientific programming. It also has robust libraries for system interaction, image processing, matrix algebra, and visualization. Moreover, Python has very good tools for providing scripting support to software written in other languages. Finally, because of Python's strong integration with C and C++, it is often used as a type of high-level language glue for calling routines in a huge array of high-quality C/C++ libraries.

Accordingly, several significant neuroimaging packages have already been written in Python. Led by Daniel Sheltraw at Berkeley, researchers recently developed a set of Python tools for MRI reconstruction (https://cirl.berkeley.edu/view/BIC/ReconTools). John Hunter, the author of the matplotlib Python plotting library, developed PBrain, a sophisticated application for analyzing and visualizing the data from electrical recordings on the brain surface of epileptic patients (see Figure 1 and p. 90).1 Finally, BrainVISA is a comprehensive pipeline-analysis tool for anatomical data, developed by a team of researchers in France (www.brainvisa.info).

Integrating Python Development in Neuroimaging

In this article, our main focus is on NIPY (http://neuroimaging.scipy.org), a new initiative to create a unified, open source, and open development environment for the analysis of neuroimaging data.2 In particular, we focus on the fMRI component of NIPY, which is based on the BrainSTAT package that Jonathan Taylor wrote at Stanford University.3 We are also working with Hunter and researchers at the University of Chicago to better integrate PBrain into the NIPY framework.

Image Model

An MRI scanner produces images that represent slices of brain tissue, and several of these slices together constitute a 3D whole-brain scan. The values in each voxel (volume element) in a 3D image slice represent measurements from a small volume of the brain.

A major problem in neuroimaging is that different analysis packages and scanners use different output image formats that are not readily compatible. To address this, NIPY provides read and write capabilities for all the popular image formats in neuroimaging, as well as access to binary data images and Python arrays in memory. Images can be compressed upon read or write, loaded from an arbitrary URL (with local caching), and managed by memory mapping where possible. Images also have iterators for reading data in slices and other subsets; Python iterators offer an elegant mechanism for constructing lazily instantiated sequential data structures—a perfect abstraction for intuitively representing sequential data (such as time series, spatial slices, or generally (n – 1)-d data bricks from an n-d data set) without sacrificing memory efficiency.
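As a rough illustration of that iterator idea (this is not NIPY's actual API; the function and toy array below are our own stand-ins), a generator can hand out one (n – 1)-d slab of an n-d array at a time:

import numpy as np

def slice_iterator(image, axis=0):
    # Yield one (n-1)-d slab at a time; nothing beyond the current
    # slab needs to be touched, which keeps memory use small.
    for i in range(image.shape[axis]):
        index = [slice(None)] * image.ndim
        index[axis] = i
        yield image[tuple(index)]

# Toy usage: iterate over the time points of a small 4D array.
data = np.zeros((16, 16, 8, 20))   # x, y, z, t
for volume in slice_iterator(data, axis=3):
    pass                            # process one 3D volume here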

Spatial transforms on images are fundamental to neuroimaging—for movement correction, warping to templates, and many other analysis steps. NIPY offers a general model of image spatial transforms that allows arbitrary combinations of linear and nonlinear transforms, volume-to-surface mappings, and flexible levels of image interpolation. A common analysis technique is to examine the signal from a restricted region of the brain, or region of interest (ROI). NIPY has several ROI objects and functions, including discrete (point-level) and continuous (sphere, ellipse) region definitions, region combination algebra, and region data extraction.

Figure 1. Python visualization widgets. Images from PBrain showing (a) the position of the electrode arrays on the surface of the brain superimposed with a surface reconstruction of the skull from a CT scan and (b) measures of frequency and coherence of electrical activity overlaid on an image of the electrode positions.

Data Diagnostics

Because the scanner's signal quality can vary from day to day, special attention is needed to ensure good data. Currently, NIPY offers several diagnostic tools to let researchers discover problems with their data before they analyze it.

Images that summarize data across a time series provide a straightforward way to examine fMRI data. Taking an image of the standard deviation of signal intensity across time, for example, can highlight problems in the time series acquisition that occur with subject movement or instabilities of the data acquisition over time (see Figure 2).
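In NumPy terms, that summary image is just a reduction over the time axis; here is a small sketch with a made-up array standing in for a real fMRI series:

import numpy as np

# Hypothetical 4D series ordered (x, y, z, time); real data would come
# from an image reader rather than random numbers.
series = np.random.rand(32, 32, 16, 100)

# One 3D diagnostic image: the standard deviation of each voxel's
# time course. Bright spots flag voxels whose signal drifts or jumps.
std_image = series.std(axis=-1)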

Figure 3 shows four plots that can help diagnose potential problems in the time series. The top plot displays the scaled variance from image to image; the second plot shows the scaled variance per slice; the third plot shows the scaled mean voxel intensity for each image; and the bottom one plots the maximum, mean, and minimum scaled slice variances per slice.

Another powerful technique for data diagnosis is principal components analysis (PCA), which is particularly useful for detecting outlying time points or unexpected sources of spatially coherent noise. PCA on an image time series takes the time points as rows and voxels as columns, and decomposes the data into components that can efficiently express the source of variance in the data. Each component consists of a characteristic time series and the voxel weights contributing to that component, and these weights are viewable as an image. Figure 4 shows images of the weights for the first four principal components of a functional data set.
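A bare-bones version of that decomposition can be written with an SVD in NumPy; the array sizes and variable names below are invented for illustration and ignore the masking and scaling a real analysis would apply:

import numpy as np

shape3d = (16, 16, 8)                              # toy voxel grid
n_time = 50
data = np.random.rand(n_time, np.prod(shape3d))    # time points x voxels

# Center each voxel's time course, then decompose.
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)

time_courses = u * s                          # characteristic time series
weight_images = vt.reshape((-1,) + shape3d)   # voxel weights as 3D images
first_four = weight_images[:4]                # cf. the four maps in Figure 4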

Statistical Processing

Subjects in a typical fMRI experiment perform some task or receive stimuli while being scanned; thus, areas activated by the task or stimulus exhibit voxel time series correlated with the experimental design. Unfortunately, noise in fMRI analysis can come from multiple sources, and the signal is relatively low. This problem with signal detection has led to several standard and more complex statistical methods.4 Partly for historical reasons, current analysis packages use nonstandard or low-level statistical terminology and interfaces, making them less accessible to scientists with general training in statistics.

At Stanford, Taylor developed a general set of statistical model objects in Python that use standard statistical terminology and allow flexible high-level statistical designs. Similar to the R statistical language, these objects implement a general series of statistical models and contrasts for data, including functional images. These models form the basis of NIPY statistical analysis and are now maintained as part of the SciPy distribution; we hope these will attract further development from researchers outside brain imaging.

Figure 2. Image of standard deviation across time points. By summarizing data, the standard deviation image can highlight problems of background noise and artifacts that vary over time.

Figure 3. Time-series diagnostics. Researchers can use these four plots to diagnose potential problems in the time-series data.

Python’s power and generality mean thatwe have many fruitful directions inwhich to take NIPY. At the most basic,its high-level language features make it

much easier to refactor the code into a well-patterned, high-level interface that scientific de-velopers can quickly pick up. Because Python hassuch good integration with C/C++, we also haveready access to very powerful visualization andimage-processing libraries. Two very important ex-amples are the Visualization Toolkit (VTK) and theInsight Toolkit (ITK). VTK provides high-quality3D graphical display and includes Python wrappersas part of its standard distribution, whereas ITK isa library of image registration and segmentationroutines originally developed for the Visible Hu-man Project. Like VTK, Python wrappers are partof the standard ITK distribution.

We intend NIPY to become the standard analysis library in neuroimaging in the medium term, which means we will need to provide the ability to call routines in other packages that are more familiar to researchers. Thankfully, this is a much easier task in Python than in many other languages because of its ability to interact with languages such as C.

Python’s language features can also help us tacklethe problem of provenance tracking. Because imag-ing analyses and data sets are very diverse, researchersuse a variety of analysis packages and rarely record alltheir data and analysis parameters, making it very dif-ficult for other people to reproduce published analy-

ses. Fortunately, Python has excellent support formetaprogramming techniques (including metaclassesand function decorators) that can transparentlychange object behavior. We can thus use these tech-niques with Python’s object introspection to capturethe data’s nature and processing in a way that can beclosely wedded to the analysis.5 Ultimately, thismeans that the analysis can become self-document-ing, making it far easier to reproduce.
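As a toy illustration of that decorator idea (none of this is NIPY code; the log format and names are invented), a wrapper can record each processing step and its parameters as it runs:

import functools
import json
import time

def record_provenance(log_path):
    # Append each call's name, arguments, and timestamp to a log file.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            entry = {"step": func.__name__,
                     "args": repr(args),
                     "kwargs": repr(kwargs),
                     "time": time.time()}
            with open(log_path, "a") as log:
                log.write(json.dumps(entry) + "\n")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@record_provenance("analysis_log.txt")
def smooth(image, fwhm=8.0):
    return image   # placeholder for a real processing step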

References
1. J.D. Hunter et al., "Locating Chronically Implanted Subdural Electrodes Using Surface Reconstruction," Clinical Neurophysiology, vol. 116, no. 8, 2005, pp. 1984–1987.
2. J.E. Taylor et al., "BrainPy: An Open Source Environment for the Analysis and Visualization of Human Brain Data," Neuroimage, vol. 26, no. 763, 2005, pp. T–AM.
3. J.E. Taylor and K.J. Worsley, "Inference for Magnitudes and Delays of Responses in the FIAC Data Using BRAINSTAT/FMRISTAT," Human Brain Mapping, vol. 27, no. 5, 2006, pp. 434–441.
4. A.W. Toga and J.C. Mazziotta, eds., Brain Mapping: The Methods, 2nd ed., Elsevier Science, 2002.
5. K.J. Millman and M. D'Esposito, "Data and Analysis Management for Functional Magnetic Resonance Imaging Studies," Proc. Int'l Advanced Database Conf., M. Amin et al., eds., US Education Service, 2006, pp. 24–28.

K. Jarrod Millman is the director of computing for the Helen Wills Neuroscience Institute at the University of California, Berkeley. His research interests include functional brain imaging, informatics, configuration management, and computer security. Millman has a BA in mathematics and computer science from Cornell University. Contact him at [email protected].

Matthew Brett is a senior investigator scientist at the Medical Research Council (MRC) Cognition and Brain Science Unit in Cambridge, UK. His research interests include functional brain imaging and localization of brain function. Brett has an MD from Cambridge University. Contact him at [email protected].

Figure 4. Principal components analysis (PCA). This type of analysis is particularly useful for detecting outlying time points or unexpected sources of spatially coherent noise.

Python for Internet GIS Applications

Python offers a unique capability in the field of geographic information system applications because it helps developers create multipurpose Internet maps. This article discusses PDF maps in particular.

Xuan Shi
West Virginia University

Researchers have used Python extensively in desktop geographic information system (GIS) applications for data processing, analysis, management, spatial statistical analysis, and so on since ESRI first released the ArcGIS suite, version 9.0.1, in January 2005. However, Python's full value and potential hasn't been fully explored or deployed in Internet-based GIS applications yet. This article describes how Python could offer a unique and easy approach for Internet GIS developers.

Interactive mapping is an important part of Internet-based GIS applications. Typically, Web users activate the print function in their browser's menu to print out a map on a particular Web page, but generating high-quality PDF map products has been a challenge for developers. Although the ArcMap Image Server in ESRI's ArcIMS has a function for printing PDF maps, it requires the ArcMap license, and the performance isn't very satisfactory because it takes so long to get the PDF product (see www.esri.com/software/arcgis/arcims/). Furthermore, this function doesn't support the fusion of ArcIMS image products with the Web Mapping Service's (WMS's) map images. Although PDF output support is part of the latest version of MapServer (an open source map software package for the Web; http://mapserver.gis.umn.edu/docs/howto/pdf-output), MapServer itself doesn't support WMS map images or the maps' surrounding components (the legend, scale, and so on) in its PDF map product.

Using some of the "batteries" included with the Python language, we can easily create PDF maps and reports, make composite images that integrate WMS map images with the map image output, and develop elevation profiles. Python's capability for multipurpose Internet mapping applications is unique yet easy to deploy.

Generating a Map Document with Python

Python uses the Reportlab and Image module libraries to make PDF map creation an easy process. Although Reportlab isn't included with Python, it's available as a free download at www.reportlab.org/downloads.html and can be integrated in the server machine's Python directory. Application developers must first determine the location of the map image product generated by the Internet GIS server, but creating a PDF map document with Python is extremely simple:



# Imports added for completeness; mapDir and pdfDir are defined below.
import Image                                   # Python Imaging Library (PIL)
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import A4, landscape

img = Image.open(mapDir)
c = canvas.Canvas(pdfDir, pagesize=landscape(A4))
# width and height are passed separately rather than as one tuple
c.drawInlineImage(img, 83, 80, img.size[0], img.size[1])
c.showPage()
c.save()

In this code snippet, mapDir = "C:\ArcIMS\output\WVBaseMap_14384368425964.png", whereas pdfDir = "C:\ArcIMS\output\WVBaseMap_14384368425964.pdf".

We can use the same approach to insert a logo, map legend, scale bar, arrows, and so forth in the PDF document, as Figure 1 shows. We can also easily add multiple pages to the PDF document by calling the showPage() and save() functions multiple times. With the Reportlab toolkits, developers can add text, graphics, lines, and polygons into the PDF map document. Because Python creates PDF map documents by directly processing the map image output generated by the Internet GIS server, all application developers can deploy this approach, regardless of their server environment.
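A small sketch of that idea follows; the file names, coordinates, and annotation text are invented, and note that in ReportLab showPage() ends each page while save() is typically called once at the end:

import Image
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import A4, landscape

c = canvas.Canvas("map_report.pdf", pagesize=landscape(A4))

basemap = Image.open("basemap.png")           # hypothetical map image
logo = Image.open("logo.png")                 # hypothetical logo image
c.drawInlineImage(basemap, 83, 80, basemap.size[0], basemap.size[1])
c.drawInlineImage(logo, 700, 500, logo.size[0], logo.size[1])
c.drawString(100, 60, "Scale 1:24,000")       # simple text annotation
c.showPage()                                  # finish page one

c.drawString(100, 500, "Legend and notes for page two")
c.showPage()                                  # finish page two
c.save()                                      # write the PDF once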

Image Fusion by Python

The Open Geospatial Consortium (OGC; www.opengeospatial.org) developed the WMS specification to facilitate geospatial data interoperability and reusability. In this way, users can access map and geospatial data without having to save the data set locally. Using a specially structured HTTP request, they can retrieve map images and integrate them into their own applications as background images. But when a user wants to create a PDF map document, we have to create a composite image that merges multiple images.

Generally, we can overlay and merge multiple WMS-compatible map images as a single map image in a Web browser because such images are normally transparent. The composite raster image in Figure 2a, for example, looks like one map image, but it's actually composed of two images, as Figure 2b (a WMS map generated by Microsoft's TerraServer) and Figure 2c (a transparent PNG map image generated by ArcIMS) show. In this case, the WMS image is served as the background image in a Web page that contains the ArcIMS map image output. Because TerraServer's WMS image is retrieved remotely by an HTTP GETMAP request and served as a background image on the Web browser, to create a PDF document with composite images, we must first save the background image into the file directory with the map output generated by the local Internet mapping server.

Figure 1. PDF map with varied surrounding components. Using Python, developers can add legends, scale bars, and arrows to maps.

Figure 2. Composite image overlay. What looks like (a) a composite raster image in a Web browser is actually (b) a WMS map image generated by TerraServer and (c) a PNG map output generated by ArcIMS.

The WMS specification provides a basic guide for image registration that helps developers retrieve the correct WMS map images with the exact same scale and extent of the corresponding map output generated by the local server. Whenever the user browses the Web, ArcIMS will generate four variables that define the bounding box (BBOX) of the map extent—MinX, MinY, MaxX, and MaxY—as well as the map output image's width and height (mWidth, mHeight). These variables can then be used to dynamically retrieve the background TerraServer Digital Orthophoto Quarter (DOQ) image via an HTTP request:

http://terraserver-usa.com/ogcmap.ashx?version=1.1.1&REQUEST=GetMap&BBOX=MinX,MinY,MaxX,MaxY&width=mWidth&height=mHeight&LAYERS=DOQ&styles=&SRS=EPSG:26917&format=PNG&transparent=true

When ArcIMS generates its map output image by default into the server output directory C:\ArcIMS\output\ with a specific code (such as WVBaseMap_14384368425964.png), we can also use such code to name the TerraServer image in the same file directory—for example, terraDOQDir = "C:\ArcIMS\output\WVBaseMap_14384368425964terraDOQ.png". In this way, we understand that we can use these two images to create the composite map image for the PDF document.

Python has library modules that provide an image-fusion function to generate a PDF map with composite images. If the user has requested a WMS map service such as Microsoft's TerraService, we first need to use the urllib and urllib2 modules in Python to retrieve the WMS image from the external server, and then save the TerraServer DOQ image to the local machine with the following code:

import urllib2

response = urllib2.urlopen(terraDOQurl)   # URL built as described above
mapImg = response.read()
fo = open(terraDOQDir, 'wb')
fo.write(mapImg)
fo.close()

To create a composite image that merges the ArcIMS transparent map image with the TerraServer background image, however, the Python function for image fusion has different behavior when the mask is defined differently in the function Image.composite(image1, image2, mask). By creating a composite image with the following Python script, the entire ArcIMS map image is transparent, with a white background in the composite image (see Figure 3):

import Image                      # Python Imaging Library (PIL)
import ImageDraw

# image1 and image2 are the ArcIMS and TerraServer images opened earlier.
w, h = image1.size
mask = Image.new("L", (w, h))
draw = ImageDraw.Draw(mask)
draw.rectangle([0, 0, w, h], fill="#FF0000")
img = Image.composite(image1, image2, mask)

Instead, we can set the ArcIMS image's transparent part as a transparent window by using the following Python script, which can also be implemented in other Internet GIS applications. In this way, the white background is removed, and the original image's transparent behavior is maintained. After we get the composite image, we can create a PDF map document, as in Figure 4:

imgTerraDOQ = Image.open(terraDOQDir)
imgWVmap = Image.open(mapDir)
white = Image.new("RGB", imgWVmap.size, (255, 255, 255))
imgWVmap = imgWVmap.convert("RGBA")
r, g, b, a = imgWVmap.split()
img = Image.composite(imgWVmap, imgTerraDOQ, a)

Figure 3. Composite image. The ArcIMS map is transparent with a white background.

Figure 4. Composite image with transparent ArcIMS map. After we get this image, we can create a PDF map document.

Generating Elevation Profiles with Python

Elevation profiles show the change in elevation along a line. They can help people assess a trail's difficulty, for example, or evaluate the feasibility of placing a road along a given route. Figure 5 shows a map composed of a colorful digital elevation model's (DEM's) raster data layer. When the user clicks on the Internet map viewer, he or she can retrieve the elevation value at that (x, y) coordinate. To generate an elevation profile, as in Figure 6, we retrieve multiple elevation values along the line by dividing it into multiple sections and then using Python to create an elevation profile map as a PDF.

Figure 6 shows two elevation profiles within a single graphic. The red line shows the true elevation information between the start and end points that the user provided in Figure 5. Because the horizontal distance is roughly 21.17 miles, the vertical elevation information is exaggerated. The brown line shows the true landscape along the line segment because its value is recalculated dynamically based on the horizontal distance and the ratio used to adjust the vertical elevation.
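The sampling-and-plotting step can be sketched as follows; get_elevation(), the endpoint coordinates, and the page layout numbers are all hypothetical placeholders for the application's own elevation lookup and styling:

from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import A4, landscape

def get_elevation(x, y):
    # Placeholder elevation query; the real application would ask the DEM.
    return 500.0 + 0.001 * (x + y)

start, end, steps = (0.0, 0.0), (21170.0, 0.0), 100
points = [(start[0] + (end[0] - start[0]) * i / steps,
           start[1] + (end[1] - start[1]) * i / steps) for i in range(steps + 1)]
elevations = [get_elevation(x, y) for x, y in points]

c = canvas.Canvas("profile.pdf", pagesize=landscape(A4))
x0, y0, width, height = 60, 60, 700, 400
emin, emax = min(elevations), max(elevations)
scale = height / (emax - emin) if emax > emin else 1.0
prev = None
for i, e in enumerate(elevations):
    px = x0 + width * i / steps
    py = y0 + (e - emin) * scale
    if prev:
        c.line(prev[0], prev[1], px, py)   # connect successive samples
    prev = (px, py)
c.save()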

Python’s large “batteries included” set oflibrary modules can help us easily per-form very sophisticated maneuvers withvery little programming. Although this

article uses ArcIMS as an example platform for In-ternet GIS application development, the same ap-proach I discussed can be used with MapServer andother platforms. Because Python offers strong sup-port for integration with other languages and tools,we can expect Internet GIS developers to continueexpanding the use and scope of Python applicationsin the coming months and years.

Acknowledgments
I gratefully acknowledge Paul Dubois's kind review and advice for this article.

Xuan Shi is a PhD candidate in the Department of Geology and Geography at West Virginia University. His research focuses on implementing Web services technology in GIS applications. Contact him at [email protected].

Figure 5. Elevation map. The digital elevation model's raster layer focuses on eastern West Virginia.

Figure 6. Elevation profile along the line shown in Figure 5. The red line shows the true elevation information, and the brown line shows the true landscape.


Quantum Chaos in Billiards

An important class of systems—billiards—can show a wide variety of dynamical behavior. Using tools developed in Python, researchers can interactively study the complexity of these dynamics. Such behavior is directly reflected in properties of the corresponding quantum systems, such as eigenvalue statistics or the structure of eigenfunctions.

Arnd Bäcker
Technische Universität Dresden

Chaotic behavior in dynamical systems is a well-studied phenomenon. A particularly illustrative class of such systems is the so-called billiard systems, in which a point particle moves freely along straight lines inside a two-dimensional domain Ω with elastic reflections at the boundary. Billiards are interesting because they're mathematically well studied with various rigorous results, so they provide a good basis for investigating the implications of classical chaos in quantum mechanical systems. Apart from the basic research aspect, though, possible applications range from the design of semiconductor nanostructures to optical microlasers.

In this article, I give a brief overview of some aspects of our research at the Technische Universität Dresden, where numerical computations play an important role and Python allows for an efficient way of implementation.

Billiard Dynamics

The boundary determines a billiard's dynamical properties. Figure 1 demonstrates this, showing 50 iterations of an initial point for two billiards, parametrized in polar coordinates by ρ(φ) = 1 + ε cos(φ) with φ ∈ [0, 2π] for parameters ε = 0 (circular billiard) and ε = 1 (cardioid billiard).1 The circular billiard is an example of an integrable system showing regular dynamics. The opposite extreme is the cardioid billiard, which is fully chaotic—this means that nearby trajectories separate exponentially as a function of time (hyperbolicity) and that a typical trajectory will uniformly fill out the available space (ergodicity).

Because the billiard’s motion follows a straightline, it’s convenient to use the boundary to define aPoincaré section,

P := {(s, p) | s � [0, |��|], p � [–1, 1]}, (1)

where s is the arc length along the boundary ���and p = �v, T(s)� is the projection of the unit veloc-ity vector v after the reflection onto the unit tan-gent vector T(s) in point s � ��. We get a Poincarémap P of a point � = (s, p) � P by considering theray starting at point r(s) ��� in the direction spec-ified with p and then determining the first inter-section with the boundary, which ultimately leadsto a new point, � � = (s�, p�). Explicitly, the velocityin the T, N coordinate system is given by

), so in Cartesian coordinates,

(2)

Numerically, the main task here is to find thenext intersection for a given starting point on theboundary and direction, specified as s and p. If animplicit equation,

v = =⎛

⎝⎜⎞

⎠⎟

= + −

( , ) ( , )

( ,

v vT NT N

p n

T p N p T

x yx x

y y

x x 1 2yy yp N p+ −1 2 ).

( ,p n p= −1 2



F(x, y) = 0, (3)

determines the boundary, we can determine the new point r′ by solving

F(x + t v_x, y + t v_y) = 0  (4)

for t > 0. In the case of the circular billiard, we can easily obtain an analytical solution that will lead to an explicit prescription for the billiard mapping. In general, however, only a numerical solution to Equation 4 is possible.

For nonconvex billiards, there are points ξ = (s, p) for which there is more than one solution of Equation 4 (apart from t = 0); obviously, we must choose the one with the smallest t > 0. We can sometimes use the condition in Equation 3 to remove the t = 0 solution analytically from Equation 4. If F is a polynomial in x and y, this reduces the order of Equation 4 by one—for example, for the cardioid billiard we get a cubic equation for t.2 From that solution, we can get the coordinates (x′, y′) = (x, y) + tv.

Numerically, though, we must find one or several solutions to Equation 4, depending on the type of boundary, so we should try to bracket the zero with the smallest t, such as by evaluating F(x + tv_x, y + tv_y) for sufficiently many values of t, and then use scipy.optimize.brentq to determine the zero as precisely as possible. We must pay special attention to glancing motion—when p is near ±1—because t can get very close to 0. Moreover, in nonconvex billiards, zeros of F(x + tv_x, y + tv_y) can get very close to each other and are therefore easily missed.
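A minimal sketch of that bracket-then-refine strategy might look as follows; the implicit form of the cardioid boundary and all function names here are our own illustration, and the special cases just mentioned (glancing motion, nearly coincident zeros) would need extra care:

import numpy as np
from scipy.optimize import brentq

def F(x, y):
    # Implicit form of the cardioid boundary r = 1 + cos(phi):
    # (x**2 + y**2 - x)**2 - (x**2 + y**2) = 0.
    r2 = x * x + y * y
    return (r2 - x) ** 2 - r2

def next_intersection(x, y, vx, vy, t_max=4.0, samples=400):
    # Return the smallest t > 0 with F(x + t*vx, y + t*vy) = 0.
    g = lambda t: F(x + t * vx, y + t * vy)
    ts = np.linspace(1e-6, t_max, samples)   # skip the trivial root at t = 0
    vals = [g(t) for t in ts]
    for a, b, fa, fb in zip(ts[:-1], ts[1:], vals[:-1], vals[1:]):
        if fa == 0.0:
            return a
        if fa * fb < 0.0:                    # sign change brackets a zero
            return brentq(g, a, b)
    raise ValueError("no intersection found; increase t_max or samples")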

Visualization with Python

Both for research and teaching, it's extremely useful to interactively explore the dynamics in billiards by using visualizations of the trajectories specified in the Poincaré section via the mouse. For this purpose, my students and I developed an application written in Python, using wxPython (www.wxpython.org) for the GUI and a special widget to quickly plot several points (www.physik.tu-dresden.de/~baecker/python/plot.html). Figure 2 shows a typical screenshot (ε = 0.3). In this case, the system shows both regular and irregular motion, depending on the initial point.

Quantum Billiards

Although classical mechanics can correctly describe macroscopic objects, a quantum mechanical description is necessary at small scales. Due to the Heisenberg uncertainty principle, it's impossible to simultaneously specify a particle's position and momentum—instead, the particle state is specified with a wavefunction whose absolute value squared is interpreted as the probability density. For quantum billiards, finding the stationary solutions of the Schrödinger equation reduces to the determination of eigenvalues and eigenfunctions of the Helmholtz equation,

-\Delta \psi_n(q) = E_n \psi_n(q), \quad q \in \Omega \subset \mathbb{R}^2,  (5)

with, for example, Dirichlet boundary conditions—that is, ψ_n(q) = 0 for q ∈ ∂Ω. Here, Δ denotes the Laplace operator, which reads in two dimensions

\Delta = \frac{\partial^2}{\partial q_1^2} + \frac{\partial^2}{\partial q_2^2}.

The interpretation of ψ is that ∫_D |ψ(q)|² d²q is the probability of finding the particle inside the domain D ⊂ Ω.

For some simple domains Ω, it's possible to solve Equation 5 analytically. For the billiard in a rectangle with sides a and b, for example, the (non-normalized) eigenfunctions are given by ψ_{n1,n2}(q) = sin(πn₁q₁/a) sin(πn₂q₂/b) with corresponding eigenvalues

E_{n_1, n_2} = \pi^2 (n_1^2/a^2 + n_2^2/b^2)

and (n₁, n₂) ∈ ℕ². For the billiard in a circle, the eigenfunctions are given in polar coordinates by ψ_{mn}(r, φ) = J_m(j_{mn} r) exp(imφ), where j_{mn} is the nth zero of the Bessel function J_m(x) and m ∈ ℤ, n ∈ ℕ. Using scipy.special.jn_zeros, we can easily obtain a given number of zeros of J_m(x) for fixed m.
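For instance, a quick check of the circle-billiard spectrum (a unit radius is assumed here; the snippet is ours, not from the article):

from scipy.special import jn_zeros

m = 3
zeros = jn_zeros(m, 5)      # first five zeros j_{3,n} of the Bessel function J_3
eigenvalues = zeros ** 2    # circle-billiard eigenvalues E_mn = j_mn**2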

Figure 1. Billiard dynamics. Note the difference between (a) the circular billiard and (b) the chaotic dynamics in the cardioid billiard.

In general, though, no analytical solutions of Equation 5 exist, so we must use numerical methods to compute eigenvalues and eigenfunctions.


Among the many different possibilities, the so-called boundary-integral method is popular.3 By using Green's theorem, we can transform the two-dimensional problem in Equation 5 into a one-dimensional integral equation. Discretization leads to a matrix equation for which we must determine the zeros of a determinant as a function of energy E. To detect nearly degenerate energy levels, it's most efficient4 to use a singular value decomposition. For the numerical implementation, this is the most time-consuming step, which involves running scipy.linalg.flapack.zgesdd. Using this approach, we can easily compute 2,000 to 10,000 (and more, if necessary) eigenvalues and eigenfunctions. In particular, due to the computation's independence at different E, we can parallelize this problem rather straightforwardly by using as many CPUs as available. Communication between different CPUs isn't needed; only the initial value of E must be transferred to each CPU (via the message-passing interface, for example).
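To illustrate how straightforwardly such an energy scan can be farmed out, here is a sketch using the standard-library multiprocessing module (the setup described above used MPI-style communication instead); build_boundary_matrix() is a toy stand-in for the boundary-integral discretization:

from multiprocessing import Pool
import numpy as np

def build_boundary_matrix(E, size=60):
    # Stand-in for the boundary-integral discretization at energy E.
    rng = np.random.RandomState(int(E))
    return rng.rand(size, size) - np.eye(size) * np.sin(E)

def smallest_singular_value(E):
    A = build_boundary_matrix(E)
    return E, np.linalg.svd(A, compute_uv=False)[-1]

if __name__ == "__main__":
    energies = np.linspace(1000.0, 1100.0, 256)
    pool = Pool()
    results = pool.map(smallest_singular_value, energies)
    pool.close()
    pool.join()
    # Eigenvalues sit where the smallest singular value dips toward zero.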

Quantum Chaos

A fundamental question in quantum chaos is the impact of the underlying classical dynamical properties on the statistical behavior of eigenvalues. It has been conjectured that the statistics of random matrices obeying appropriate symmetries can describe the eigenvalue statistics of fully chaotic systems.5 For generic integrable systems, we expect a Poissonian random process to describe the energy-level statistics.6

The simplest spectral statistic is the level-spacing distribution P(s) obtained from the histogram of the spacings

s_n := x_{n+1} - x_n,  (6)

where x_n are rescaled eigenvalues such that their average spacing is 1. Once we compute the eigenvalues, we can determine the level-spacing distribution as follows:

from pylab import *

# x: rescaled eigenvalues
spacings = x[1:] - x[0:-1]
hist(spacings, normed=True, bins=100)
show()

We compare the resulting distribution with the expectation for integrable systems,

Figure 2. Screenshot of a Python application. To interactively explore billiard systems, we can define initial conditions in the Poincaré section on the left, for which the iterates are computed and the corresponding trajectory is shown in position space on the right. Magnified views of parts of the Poincaré section are possible, as shown in the lower right window.


P_{\text{Poisson}}(s) = \exp(-s).  (7)

Because P(s) → 1 for s → 0, this behavior is called level attraction. We can use the Wigner distribution to describe the level-spacing distribution for the Gaussian orthogonal random matrix ensemble (GOE) in a very good approximation,7

P_{\text{GOE}}(s) \approx P_{\text{Wigner}}(s) = \frac{\pi s}{2} \exp\!\left(-\frac{\pi s^2}{4}\right).  (8)

In this case, we have P(s) → 0 for s → 0, which is called level repulsion. Figure 3 illustrates this behavior with a level-spacing distribution for the circle and the cardioid billiard, showing a good agreement with the expected distributions.

For the eigenfunctions of Equation 5, we would expect that they reflect the underlying classical dynamics. According to the semiclassical eigenfunction hypothesis, the eigenstates should concentrate on those regions where a generic orbit explores the long-time limit.8 For integrable systems, motion is restricted to invariant tori, whereas the whole energy surface is filled uniformly for ergodic systems. For ergodic systems, the semiclassical eigenfunction hypothesis is actually proven by the quantum ergodicity theorem,9 which states that almost all eigenfunctions become equidistributed in the semiclassical limit. Restricted to position space, we have

\lim_{j \to \infty} \int_D |\psi_{n_j}(q)|^2 \, d^2q = \frac{\mathrm{vol}(D)}{\mathrm{vol}(\Omega)}  (9)

for a subsequence {ψ_{n_j}} of density one. So, for almost all eigenfunctions, the probability of finding a particle in a certain region D of the position space Ω in the semiclassical limit is just the same as for the classical system.

Figure 4 illustrates this for an integrable circle billiard and a chaotic cardioid billiard. We can clearly see that in the former case, the probability is restricted to subregions of the billiard, whereas for the ergodic case, the probability density is uniformly distributed over the full billiard region (apart from the inevitable fluctuations).

For systems with a mixed phase space, the dynamics is more complicated because regular and chaotic motion coexist (as in Figure 2). This is also reflected in the structure of quantum eigenstates, which are either located in the regular islands or extend over the chaotic region (see Figure 5). Recent results show that in certain situations this simple picture doesn't hold.10

Our experience with using Python for our research purposes has been extremely positive. When people think of scientific computing, typically Fortran, C, or C++ immediately come to mind, but many tasks involve fairly small amounts of time-critical code. Due to Python's efficient use of numerical libraries, no significant speed reduction arose in our applications. Moreover, all the illustrations displayed here involve Python, via PyX (http://pyx.sourceforge.net) or MayaVi (http://mayavi.sourceforge.net). Clearly, it has come into its own for many different purposes.

Acknowledgments
I thank Lars Bittrich for his useful comments on the manuscript and Bittrich, Steffen Löck, Nikolai Hlubek, and Jan-Matthias Braun for their contributions to the Iterator in Figure 2.

Figure 3. Level-spacing distribution. For (a) the circle billiard (100,000 eigenvalues) and (b) the cardioid billiard (11,000 eigenvalues), we find good agreement with the expected behavior of a Poissonian random process and of the Gaussian orthogonal random matrix ensemble (GOE), respectively.


References
1. M. Robnik, "Classical Dynamics of a Family of Billiards with Analytic Boundaries," J. Physics A, vol. 16, 1983, pp. 3971–3986.
2. A. Bäcker and H.R. Dullin, "Symbolic Dynamics and Periodic Orbits for the Cardioid Billiard," J. Physics A, vol. 30, 1997, pp. 1991–2020.
3. A. Bäcker, "Numerical Aspects of Eigenvalue and Eigenfunction Computations for Chaotic Quantum Systems," The Mathematical Aspects of Quantum Maps, Lecture Notes in Physics 618, M. Degli Esposti and S. Graffi, eds., Springer-Verlag, 2003, pp. 91–144.
4. R. Aurich and F. Steiner, "Statistical Properties of Highly Excited Quantum Eigenstates of a Strongly Chaotic System," Physica D, vol. 64, 1993, pp. 185–214.
5. O. Bohigas, M.-J. Giannoni, and C. Schmit, "Characterization of Chaotic Quantum Spectra and Universality of Level Fluctuation Laws," Physical Rev. Letters, vol. 52, 1984, pp. 1–4.
6. M.V. Berry and M. Tabor, "Level Clustering in the Regular Spectrum," Proc. Royal Soc. London A, vol. 356, 1977, pp. 375–394.
7. M.L. Mehta, Random Matrices, Academic Press, 1991.
8. M.V. Berry, "Semiclassical Mechanics of Regular and Irregular Motion," Comportement Chaotique des Systèmes Déterministes: Chaotic Behaviour of Deterministic Systems, G. Iooss, R.H.G. Hellemann, and R. Stora, eds., North-Holland, 1983, pp. 171–271.
9. A. Bäcker, R. Schubert, and P. Stifter, "Rate of Quantum Ergodicity in Euclidean Billiards," Physical Rev. E, vol. 57, 1998, pp. 5425–5447.
10. A. Bäcker, R. Ketzmerick, and A. Monastra, "Flooding of Regular Islands by Chaotic States," Physical Rev. Letters, vol. 94, 2005, p. 054102.

Arnd Bäcker is a scientific researcher at the Technische Universität Dresden. His research interests include nonlinear dynamics, quantum chaos, and mesoscopic systems. Bäcker has a PhD in theoretical physics from the Universität Ulm. Contact him at [email protected].

Figure 4. Behavior of eigenstates. The eigenstates of the integrable circular billiard and the chaotic cardioid billiard reflect the structure of the corresponding classical dynamics.

Figure 5. Mixed phase space. Eigenstates in billiards with mixed phase space typically either concentrate in the regular islands (first two lines) or extend over the chaotic region (last line). This is most clearly seen in the quantum Poincaré Husimi representation displayed in the last column for each case.

An Ice-Free Arctic? Opportunities for Computational Science

The authors discuss modeling's role in understanding the ice-ocean system, as well as its importance in predicting the future state of Arctic sea ice. In doing so, this article presents results from a hierarchy of models of different complexity, their strengths and weaknesses, and how they could help forecast the future state of the ice-ocean system.

L. Bruno Tremblay, McGill University
Marika M. Holland, US National Center for Atmospheric Research
Irina V. Gorodetskaya, Columbia University
Gavin A. Schmidt, NASA Goddard Institute

A primary strength of 3D general circulation models (GCMs) is how well they simulate the coupled interactions between sea ice, the land surface, the atmosphere, and the ocean, all of which are essential for understanding the climate system's response to forcing perturbations. However, GCMs have limited spatial and temporal resolution (because of total integration time) and sometimes fail to capture the fundamentally important processes that affect climate variability. Moreover, the computational constraints on large models restrict the number and length of sensitivity experiments.

Component models, on the other hand, use specified forcing at the boundaries, and although they can't study the coupled system's response, they are easier to interpret and are useful for studying individual forcing parameters. Researchers can also use models of intermediate complexity, such as regional ice-ocean coupled models, to study certain processes in partially coupled modes. Perhaps the best option of all is to use a hierarchy of models—a combination of intermediate-complexity models, process models, and GCMs—to gain a clearer understanding of how multiple processes can affect, say, the high-latitude climate system.

The field of climate variability involves a wide range of spatial and temporal scales. Small spatial-scale processes such as turbulence, mixing, and convection, for example, affect large-scale ice-ocean-atmosphere circulation patterns, which determine the system's basic state, which in turn affects small-scale processes. Small spatial-scale processes also typically operate over shorter timescales. Resolving (or parameterizing) the climate system's smaller-scale features while performing long-term integrations on complex GCMs constitutes the principal challenge for computational scientists interested in the field.

In this article, the authors discuss future projections of the Arctic sea-ice cover from sophisticated GCMs, the uncertainties associated with these projections, and how the use of simpler component models can help in the interpretation of complex GCMs.

An Ice-Free Arctic? Opportunities for Computational Science

The authors discuss modeling's role in understanding the ice-ocean system, as well as its importance in predicting the future state of Arctic sea ice. In doing so, this article presents results from a hierarchy of models of different complexity, their strengths and weaknesses, and how they could help forecast the future state of the ice-ocean system.

L. BRUNO TREMBLAY
McGill University
MARIKA M. HOLLAND
US National Center for Atmospheric Research
IRINA V. GORODETSKAYA
Columbia University
GAVIN A. SCHMIDT
NASA Goddard Institute

1521-9615/07/$25.00 © 2007 IEEE
Copublished by the IEEE CS and the AIP


Arctic Sea Ice

Over the past few decades, the Arctic has witnessed large changes in its land, atmosphere, ice, and ocean components. These changes include a decrease in sea-ice extent and thickness,1,2 a warming of surface air temperatures,3 a decrease in the sea-level pressure,4 deeper penetration of storms in the eastern Arctic,5 a warming of the North Atlantic drift current and its flow at depth beneath the fresher Arctic surface waters (see Figure 1),6 the melting of the permafrost,7 increased river runoff,8 and changes in vegetation,9 among others. Of all these changes, the best documented is the decrease of the minimum sea-ice extent as observed by satellite (see Figure 2). Scientists have seen a decrease in September sea-ice extent of 8 percent per decade since the late 1970s,10 with three minimum ice records broken in the past four years.

All these observations are internally consistent with local feedbacks from the ice, clouds, and the surface energy budget (the balance of energy coming in and then leaving the surface). At high latitudes, the dominant feedback mechanism believed to be responsible for increased local warming is called ice-albedo feedback. If the climate warms, the sea ice in the polar seas retreats, and the fraction of solar radiation absorbed by the ice-ocean system increases (sea ice reflects most of the incoming solar radiation; the ocean absorbs most of it). This leads to further warming of the ocean surface and the overlying air, further retreat of the sea ice, further warming, and so on. This positive feedback can cause large and very rapid changes in surface conditions and local climate—early models based on sea-ice albedo feedback alone predicted several degree changes in the global mean temperature initiated by small changes in radiative forcing.11
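The amplifying effect behind this claim can be illustrated with a linearized energy balance: a forcing perturbation dF produces a warming dT = dF / (B - lambda), where B is the longwave damping of the climate system and lambda is the strength of the positive ice-albedo feedback. The short Python sketch below uses purely illustrative values (not taken from the article or from the cited early models); the point is only that as lambda approaches B, a small forcing yields a very large temperature change.

# Linearized global energy-balance response to a forcing perturbation.
# B is the longwave damping (W m-2 K-1); lam is the ice-albedo feedback strength.
# All numbers here are illustrative only.
B = 2.1
dF = 4.0  # radiative forcing perturbation, W m-2 (roughly a CO2-doubling-sized forcing)

for lam in [0.0, 0.5, 1.0, 1.5, 2.0]:
    dT = dF / (B - lam)  # equilibrium warming with feedback strength lam
    print(f"feedback {lam:.1f} W m-2 K-1 -> warming {dT:.1f} K")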

ICE-OCEAN MODELING

By Uma Bhatt and David Newman, University of Alaska Fairbanks

This issue's article for the International Polar Year focuses on various methods for modeling ice-ocean interactions. This is timely not just because of the IPY but also because of the much publicized shrinking Arctic ice cap and expected changes in climate due to shifts in ocean currents.

The following Web sites highlight different aspects of a changing Arctic, from satellite data to model projections and intercomparisons:

• NASA's Scientific Visualization Studio site is a great place to start because it plays movies created from satellite measurements as well as from model projections (http://svs.gsfc.nasa.gov/). To see beautiful animations of sea-ice changes, search for "sea ice" on this site; one of the best depicts the minimum ice concentration from 1979 to 2006 (http://svs.gsfc.nasa.gov/vis/a000000/a003300/a003378/). For a variety of other movies of Arctic data, go to http://svs.gsfc.nasa.gov/search/Keyword/Arctic.html.

• Scientists at the Geophysical Fluid Dynamics Laboratory (GFDL; www.gfdl.noaa.gov) are investigating climate variability and prediction from annual to centennial timescales. You can see one of their state-of-the-art models of the shrinking Arctic ice cap at www.gfdl.noaa.gov/research/climate/highlights/GFDL_V1N1_gallery.html.

• The US National Center for Atmospheric Research is home to the Community Climate System Model (www.ccsm.ucar.edu); as the name indicates, the climate community is heavily involved in the model's development. You can find an overview of the model at www.ucar.edu/communications/CCSM and more about high-latitude simulations at www.ccsm.ucar.edu/working_groups/Polar.
• To make more sense of the results of different models worldwide, Lawrence Livermore National Laboratory has established a program to facilitate model comparison (www-pcmdi.llnl.gov/projects/cmip/).

The article in this issue describes two extremes in the hierarchy of models used to investigate ice-ocean interactions—namely, large-scale global models and small-scale ice models. In between these is a class of models called regional models, which are typically forced with either real climate data or GCM data at their boundaries and can be run at higher resolution to investigate smaller-scale effects, such as local orography, smaller-scale weather forcing effects, and so on. You can find more information on the intercomparison of these Arctic regional ice-ocean models at the Arctic Ocean Model Intercomparison Project (AOMIP; http://fish.cims.nyu.edu/project_aomip/overview.html).

The next article in our series dedicated to the IPY will move onto land and provide insights into modeling high-latitude terrestrial vegetation dynamics, once again using a hierarchy of models of varying complexity. Of course, due to space constraints, we can't cover all the relevant topics in this series, and most notable among our omissions is coverage of biogeochemical processes. One of particular relevance for the polar oceans is the carbon cycle in the ocean; recent studies show that the acidification of the ocean due to enhanced carbon dioxide is particularly important in the cold polar waters. This acidification is expected to dissolve the calcium-based shells of small marine organisms, unleashing a major impact on the food chain. Models of these chemical-biological processes are at an early stage of development, although researchers expect biogeochemistry models to become an integral part of what are presently classified as climate models.


In the real world (and in more complex models), negative feedback mechanisms damp or delay the climate system's response to changes in forcing. One such mechanism operating at high latitudes is the cloud-albedo feedback. When sea ice retreats, more ocean water is exposed to the atmosphere. This leads to more evaporation (the overlying warmer air can hold more water vapor than the colder atmosphere) and potentially more clouds, which are highly reflective of solar radiation. In effect, we've replaced a highly reflective material at the surface (sea ice) with an equally reflective surface up in the atmosphere (the clouds), but they don't cancel each other out entirely. Instead, the combined effect of changes in cloud and sea ice has a reduced but still significant effect on top-of-atmosphere (TOA) albedo, which is important to global temperature (see Figure 3). In fact, in a cloudier Arctic, increased longwave radiation reaching the surface can intensify sea-ice melt, especially during the spring.12

In the late 1980s and early 1990s, more storms than usual penetrated deep into the eastern Arctic, a phenomenon that became part of a trend in the North Atlantic Oscillation (NAO). During a positive NAO phase, storms preferentially move northward in the Icelandic and Barents Seas (rather than across the Atlantic or Baffin Bay), with the sea-level pressure in the northern part of the North Atlantic relatively lower. These storms carry sensible and latent heat north, create wind patterns that blow ice away from the coastlines of the Kara and Laptev Seas,13 and export thick multiyear ice from the central Arctic through the Fram Strait,14 which thins the ice in the peripheral seas. The associated heat flux from the relatively warm ocean through the thin ice cover keeps the overlying atmosphere warmer. These storms also result in a greater poleward heat transport (both in the ocean and in the atmosphere), warmer surface air temperature in the eastern Arctic, deeper penetration of North Atlantic drift waters along the continental shelf, less multiyear ice in the central Arctic, and increased precipitation and runoff from the Eurasian continent.

All the feedback mechanisms we mentioned earlier lead to a larger warming signal—called polar amplification—in the high latitudes, particularly in the northern hemisphere, which has a perennial sea-ice cover and the potential for a stronger ice-albedo feedback signal. As a result, although climate models predict a global mean warming of 3 to 5 degrees Celsius by the end of the 21st century (assuming a continued increase in greenhouse gas15), the same models predict a warming of 10 to 15 degrees Celsius and a much reduced sea-ice cover in the Arctic for the same time frame.16 Because of polar amplification, the Arctic region could be a place where scientists can more clearly separate a warming signal associated with human activity from naturally occurring climate variability. These are among the reasons why climate scientists are interested in monitoring and modeling the high latitudes, and why they do field work in such remote and harsh environments.

Figure 1. Arctic Ocean surface circulation. Red arrows indicate warm Atlantic Ocean currents and blue arrows indicate cold Arctic surface currents. North Atlantic drift waters entering the Arctic west of Svalbard flow counterclockwise at depth (the warm core is at roughly 300 meters) and exit through the Fram Strait (not shown).

Figure 2. Satellite observation. (a) The trend in September sea-ice extent in the Arctic (in millions of square kilometers), and (b) sea-ice extent anomalies for 2002, 2003, 2004, and 2005. The pink line represents the mean ice-edge position averaged over the satellite era (the 1979–2000 mean minimum sea-ice edge).



The Ice-Ocean System's Mean State

Sea ice and oceans are present in both hemispheres at high latitudes. Yet, the two systems' natures and behaviors are very different.

Sea ice forms when surface waters reach their freezing points (roughly –1.8° Celsius for typical ocean waters), but for this to occur, the surface ocean must be stratified.17 When seawater cools, it becomes denser—the heavier surface waters sink (convect) and mix with deeper waters. In a stratified ocean, the depth to which the water convects is limited to the surface layer, so only the first few tens of meters (roughly 40 meters, for the Arctic) must cool to the freezing point for sea ice to form. In an unstratified ocean, the entire water column (roughly 4,000 meters) would need to cool before ice could form on the surface.

In the Arctic, surface stratification mainly comes from the input of fresh water from river runoff. The Arctic Ocean constitutes 2 percent of the Earth's total ocean volume, yet it receives approximately 10 percent of total continental runoff. The cold and relatively fresh surface waters (the mixed layer) sit above an equally cold but somewhat saltier layer of water called the cold halocline layer (CHL). Beneath this layer are warmer and saltier waters from the North Atlantic. The CHL's presence in the Arctic buffers the cold surface waters from the warmer Atlantic waters by limiting the ocean heat flux into the mixed layer during the winter growth season, which in turn helps the buildup of a perennial sea-ice cover. Two different mechanisms explain the CHL's formation. In the first, shelf-water advection feeds the cold halocline waters: when ice forms at the beginning of the cold season it rejects salt, making the relatively fresh shelf waters saltier. These waters are advected offshore and find their level of equilibrium between the lighter (fresher) surface waters and the heavier (saltier) Atlantic waters. In the second, deep-ocean convection feeds the cold halocline waters; the relatively fresh shelf waters are advected offshore and remain near the surface (http://psc.apl.washington.edu/HLD/Lomo/Lomonosov.html).

In the Southern Ocean, surface stratification is much weaker and is mainly due to melting ice shelves, melting sea ice, and the runoff from the continental ice sheet. The mixed layer sits directly atop the warmer and saltier pycnocline waters. In early winter, once the shallower seasonal pycnocline (from the previous summer's sea-ice melt) is eliminated as ice grows in fall, further ice formation (and salt release) later in the winter causes convection and entrainment of the warmer sub-pycnocline waters to the surface. This melts, or prevents from forming, approximately 1.5 meters of ice each winter.

A Seasonally Ice-Free Arctic

When scientists talk about an ice-free Arctic, they're generally referring to a summer ice-free Arctic Ocean—that is, one that has lost its perennial sea-ice cover, a situation that's sometimes called the Antarctic analogue.18 In winter, no model projects a complete disappearance of the sea-ice cover until at least the end of this century.

In the Arctic Ocean, approximately 1 meter of ice forms each year during winter, 0.7 meters melt during the summer, and an equivalent of 0.3 meters is exported south to the North Atlantic where it melts. We could achieve a seasonally ice-free Arctic through a sustained increase in sea-ice export out of the Arctic via the Fram Strait,14 a decline in winter sea-ice production, or an increase in summer sea-ice melt.
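As a rough check on these numbers, here is a minimal back-of-the-envelope sketch of the annual ice budget and of the three routes just listed. The baseline values are the approximate figures quoted in the text; the size of each perturbation is purely illustrative and is not from the article.

# Annual Arctic sea-ice mass budget, expressed as ice thickness per year (m/yr).
growth, melt, export = 1.0, 0.7, 0.3           # winter growth, summer melt, export
baseline = growth - melt - export              # ~0: a roughly balanced perennial cover

# Three hypothetical perturbations, applied one at a time (values illustrative only)
more_export = growth - melt - (export + 0.2)
less_growth = (growth - 0.2) - melt - export
more_melt   = growth - (melt + 0.2) - export

for name, net in [("baseline", baseline), ("more export", more_export),
                  ("less winter growth", less_growth), ("more summer melt", more_melt)]:
    print(f"{name:20s} net change: {net:+.1f} m/yr")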

Figure 3. Monthly mean top-of-atmosphere (TOA) albedo (green dots) against northern hemisphere sea-ice concentrations (SICs). Purple dots connected with a line represent area-weighted TOA albedo averages for each 10 percent bin of SIC from 0 to 100 percent. The dashed lines are standard deviations, and the thin red lines connect maximum and minimum observed surface albedo values for both the open ocean and sea ice, corresponding to the TOA albedo envelope in the absence of an atmosphere. (The TOA albedo is from the Earth Radiation Budget Experiment's [http://asd-www.larc.nasa.gov/erbe/ASDerbe.html] data, and the SICs are from the Hadley Centre's sea-ice and sea-surface temperature data set [http://hadobs.metoffice.com].23)



Anomalous Ice Export

The mean time sea ice resides in the Arctic is approximately seven years. For the sea-ice export to have a significant impact on the volume of ice remaining in the Arctic, anomalous wind patterns must be maintained for at least this amount of time. However, as we mentioned earlier, a strong negative feedback limits the impact of enhanced sea-ice export on Arctic ice volume.14

When export is anomalously high, the volume (or mean thickness) of ice left behind decreases, and the heat lost from the ocean to the atmosphere (and concomitant sea-ice formation) increases. In the late 1980s and early 1990s, researchers observed a trend toward a more positive NAO index, with deeper penetrations of storms in the eastern Arctic and winds blowing the thick multiyear ice from the central Arctic out through the Fram Strait. Some scientists have hypothesized that the very low ice observed in subsequent years is the result of this trend.3 However, since the mid-1990s, the NAO index hasn't been as positive, yet the system hasn't recovered.

Anomalous Winter Sea-Ice Growth

The typical heat loss from the Arctic Ocean to the atmosphere is 15 Watts per square meter (W m–2), which is equivalent to a 1-meter ice growth over an 8-month growing season. In winter, the dominant factors in the surface heat balance are upwelling and downwelling longwave radiation and the conductive heat flux through the sea ice.19 On a typical clear-sky day, the net longwave radiation emitted from the surface is approximately 30 W m–2, whereas the net longwave radiative flux drops to almost zero during cloudy skies.
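The equivalence between a 15 W m–2 heat loss and roughly 1 meter of ice growth per season follows from the latent heat of fusion. The short Python sketch below reproduces that arithmetic; the density and latent-heat values are standard textbook numbers, not values taken from the article.

RHO_ICE = 917.0      # sea-ice density, kg m-3 (nominal textbook value)
L_FUSION = 3.34e5    # latent heat of fusion, J kg-1 (nominal textbook value)

def ice_growth(heat_loss_w_m2, months):
    """Ice thickness (m) grown when a given heat flux is extracted for `months`."""
    seconds = months * 30.44 * 86400.0            # average month length in seconds
    energy_per_area = heat_loss_w_m2 * seconds    # J m-2 removed from the ocean
    return energy_per_area / (RHO_ICE * L_FUSION)

# 15 W m-2 sustained over an 8-month growing season gives roughly 1 meter of ice
print(f"{ice_growth(15.0, 8):.2f} m")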

The expected increase in downwelling longwave radiation by 2050, as predicted by the latest generation of GCMs from the Intergovernmental Panel on Climate Change's 4th Assessment (IPCC-AR4), ranges from roughly 10 to 25 W m–2, depending on the model used and the future CO2 increase scenario considered. This is significantly larger than the same models' global average, which ranges from 3 to 15 W m–2, and is of the same order of magnitude as the net heat loss to the atmosphere during the winter months. An increase in the downwelling longwave radiation will result in a warmer surface-ice temperature, a reduced temperature gradient from the ice base to its surface, and a reduced winter ice growth. These changes would gradually decrease the winter sea ice's growth over time, if no other feedback mechanisms were present. Of all the IPCC models participating in the 4th assessment that have a realistic seasonal ice extent cycle, 40 percent display this gradual decrease.20

Anomalous Summer Sea-Ice Melt

In summer, the main balance in the Arctic sea ice's surface heat budget is between the net shortwave radiation absorbed at the surface, the energy required to melt the sea ice, and to a lesser extent the net longwave radiation lost by the surface.19 Clouds have a large impact on surface melt as well. Whereas winter clouds have a warming effect (increased downwelling longwave radiation), summer clouds reduce the amount of shortwave radiation that reaches the surface and typically have a cooling effect (the increased downwelling longwave radiation associated with clouds doesn't compensate for the decreased downwelling shortwave radiation21). Depending on microphysical properties (such as cloud-particle radius and ice versus liquid), clouds can affect the surface radiation balance differently in winter and summer.22,23

How the summer melt will change in response to future greenhouse gas production depends largely on the projected changes in Arctic cloud cover and type. At present, satellite observations show an increase in the melt season by a few weeks, associated with the NAO's more positive phase,10 and possibly with an increase in the downwelling longwave radiation reaching the surface.12

Feedback Mechanisms

Increased ice export, decreased winter growth, and increased summer melt will all result in a gradual change in sea-ice conditions in the Arctic Ocean. Let's examine how slowly varying CO2 increases in the atmosphere could lead to a rapid decline in Arctic sea-ice volume if we reach certain thresholds in sea-ice thickness or surface ocean temperature and salinity structure.

Dynamic Feedback

Energy input by the wind dissipates due to both bottom friction between the ice base and ocean surface and lateral friction between ice floes rubbing against one another along shear lines. When local convergence is present,24 ridges form, leading to an increase in the system's potential energy. A thinner sea-ice cover has a lower mechanical strength and deforms more easily (sea-ice compressive and shear strengths are functions of thickness, but sea-ice tensile strength is invariably much lower because the pack ice is a highly fractured material that can't sustain large tensile stresses before deforming). Moreover, faster-moving ice yields more mixing at the surface.



Localized high shear deformation can also raise the pycnocline depth (due to Ekman upwelling) and increase turbulent heat fluxes in the surface ocean boundary layer.25 Figure 4 shows typical lead (a narrow opening in the sea-ice pack that exposes the open ocean) patterns as well as the spatial distribution of leads from three years of Radarsat Geophysical Processor System (RGPS) data. A faster-moving sea-ice cover has a shorter life span in the Arctic (with wind forcing remaining the same), which results in decreased thermodynamic growth.

Loss of the Cold Halocline Layer

As discussed earlier, the CHL buffers the Arctic's cold surface water from the warm Atlantic layer beneath. In the early 1990s, Michael Steele and Timothy Boyd26 showed that the eastern-central Arctic CHL was weak in 1993 and completely absent in 1995. Without a CHL, the Arctic Ocean assumes characteristics of the Antarctic water column, and with that, presumably an increased ice-ocean heat exchange and a behavior similar to the Antarctic ocean-ice system (that is, a seasonal ice cover). In the late 1990s, the CHL returned.27 Researchers argued that this excursion was due to a change in the large-scale atmospheric circulation and a concomitant change in river inflow paths along the Eurasian shelf.28 During that time, the river runoff from Eurasia formed an eastward-flowing coastal current in response to large-scale wind pattern changes (as opposed to flowing off the shelf and along the Lomonosov ridge in the central Arctic).

It's unclear how the CHL will respond to reductions in ice cover. Weakening or loss of the CHL would constitute a large positive feedback mechanism accelerating the decline of Arctic sea-ice cover. To quantify the amount of heat brought up to the surface when sea ice forms, rejects salt, and enables surface convection, Douglas Martinson and Richard Iannuzzi29 developed a simple bulk model based on the upper ocean's temperature and salinity profile. Douglas Martinson and Michael Steele18 later calculated (using all available temperature and salinity profiles from the Arctic) the latent ocean heat fluxes that would be released in the event of a CHL loss. The values of heat fluxes ranged from 17 W m–2 north of Greenland in the Amundsen Basin to approximately 9 W m–2 in the Canadian Basin. Given that the CHL's presence is linked with river runoff paths into the Arctic Ocean and shelf water's hydrographic properties, whether we could lose the CHL over the entire Arctic Ocean at once remains an open question. However, even a partial loss over a limited region of the Arctic would significantly impact the sea-ice cover's thinning.
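The bulk calculations cited above are not reproduced here, but their basic ingredient is an integral of the sensible heat stored above the in situ freezing point in the water below the mixed layer. The sketch below illustrates only that kind of integral; the profile values, the linear freezing-point approximation, and the choice of spreading the heat over one winter are all illustrative assumptions, not the published method or data.

import numpy as np

# Hypothetical upper-ocean profile: depth (m), temperature (deg C), salinity (psu)
z = np.array([0.0, 20.0, 40.0, 80.0, 150.0, 250.0])
T = np.array([-1.8, -1.8, -1.6, -0.5, 1.0, 1.5])
S = np.array([31.0, 31.5, 32.5, 33.8, 34.5, 34.8])

RHO_SW = 1025.0   # seawater density, kg m-3 (nominal)
CP_SW = 3985.0    # seawater specific heat, J kg-1 K-1 (nominal)

T_freeze = -0.054 * S                    # simple linear freezing-point approximation
excess = np.maximum(T - T_freeze, 0.0)   # heat available above the freezing point

# Column-integrated sensible heat per unit area (J m-2)
heat_content = np.trapz(RHO_SW * CP_SW * excess, z)

# Expressed as an average upward flux if that heat reached the surface over one
# 8-month ice-growth season (W m-2); the magnitude depends entirely on the profile chosen
season_seconds = 8 * 30.44 * 86400.0
print(f"{heat_content / season_seconds:.1f} W m-2")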

Ice-Ocean-Albedo Feedback

Another possible mechanism for the rapid decline in Arctic sea-ice cover is linked with ice-ocean-albedo feedback. As the sea ice gradually thins due to increased greenhouse gases in the atmosphere, we'll reach a threshold when an anomalously warm year (associated with natural interannual variability) causes a significant increase in open water. This will be followed by increased absorption of solar radiation in the mixed layer and an increase in basal melt along with the usual surface melt. The natural variability that can trigger these events includes higher than normal atmospheric and oceanic heat fluxes to the Arctic.20 Observations taken along the Eurasian continental shelf show a pulse-like warming of the Atlantic water circulating cyclonically along the shelf in the Arctic Ocean, and scientists also recorded a few warm events of roughly 1° Celsius in 199030 and 2004.6 Researchers simulated similar warming events with a regional ice-ocean model forced with specified atmospheric forcing.31

Figure 4. Lead pattern in Arctic sea ice. (a) Shear deformation from the Radarsat Geophysical Processor System (RGPS; www-radar.jpl.nasa.gov/rgps/radarsat.html) for 15–18 January 1997; (b) mean spatial distribution of open water over the Arctic Ocean for the winters of 1996–1999, derived from RGPS thin ice thickness data. Open water is defined as the ice surface area covered by ice thinner than 10 cm.



The Arctic Sea-Ice Cover's Future

The models participating in the IPCC's 4th assessment represent the state of the art in global climate modeling. They incorporate ocean, atmosphere, terrestrial, and sea-ice components as well as the linkages among them.

IPCC-AR4 models consistently exhibit a decrease in Arctic sea-ice cover in response to increasing greenhouse gases, but this retreat ranges widely in its rate and magnitude. Some of this scatter is related to the simulated present-day climate conditions; models with more extensive ice cover in the current climate tend to be less sensitive. An analysis of 14 IPCC-AR4 models shows that the retreat of sea ice by the mid-21st century is correlated to late 20th century conditions, with a correlation coefficient of R = 0.42 between the simulated sea-ice extents in the middle of the 21st century and those of the late 20th century. By the end of the 21st century, however, the correlation degrades to R = 0.09, such that, although initial ice conditions are important, other processes dominate. These processes affect climate-feedback strength and can include simulated cloud cover, atmospheric circulation's meridionality (the north–south heat-moisture transport), and the mean and variability of ocean heat transport to the Arctic.
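As a concrete illustration of this kind of across-ensemble analysis, the Pearson correlation can be computed directly with NumPy. The extents below are made up for the example; the article does not publish the per-model values.

import numpy as np

# Hypothetical September sea-ice extents (10^6 km^2) for an ensemble of models:
# late 20th-century mean vs. mid-21st-century mean. Values are illustrative only.
extent_late_20th = np.array([7.1, 6.5, 8.2, 5.9, 7.8, 6.9, 7.4, 6.2])
extent_mid_21st  = np.array([4.0, 3.1, 5.5, 2.2, 4.8, 3.0, 4.4, 2.9])

# Correlation across the ensemble: do models with more ice now retain more ice later?
r = np.corrcoef(extent_late_20th, extent_mid_21st)[0, 1]
print(f"R = {r:.2f}")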

The mechanism for losing perennial sea-ice cover is thermodynamic in nature—that is, it's due to an increased net surface and basal heat budget and melt. For the multimodel ensemble mean, the ice melt for the 2040–2060 average increases over that of the 1980–2000 average by 1.2 meters, whereas the net ice growth increases by 0.2 meters and the ice export decreases by 0.1 meters. This clearly shows that the dominant term for the ensemble mean is sea-ice melt, with ice export and winter growth acting as negative feedbacks. However, the variability in winter growth and ice export is larger, and, for some models, they act as positive feedbacks. Of all the processes that could be responsible for Arctic sea-ice decline, increased summer melt is thus the main player.

Limitations of Current GCMs

We noticed major improvements in Arctic simulation quality from the latest generation of models participating in the IPCC's 4th assessment when compared to the previous generation of models. These include the representation of sea-ice thickness distribution, the simulation of sea-level pressure at high latitude, the atmospheric circulation pattern's meridionality, and precipitation patterns at high latitudes (a discussion of Arctic climate biases associated with the previous generation of GCMs appears elsewhere32). Important issues remain, however, and they'll require further attention before we can rely entirely on model predictions of the Arctic's future climate.

Clouds, for example, are inherently difficult to model because of the small-scale nature of the processes that govern their formation. Moreover, Arctic clouds are difficult to measure remotely because they are hard to distinguish from the sea-ice surface in infrared and visible satellite images. The lack of data and the difficulty in collecting it also pose a challenge to the study of Arctic clouds. Fortunately, several cloud detection algorithms specific to the polar regions have been developed recently and validated against ground-based campaigns, whereas the newly launched satellite programs have much improved capabilities in differentiating clouds from sea ice and quantifying the clouds' microphysical properties (such as cloud ice and liquid water content and particle size).33,34

Measurements conducted during the Surface Heat Budget of the Arctic (SHEBA; http://sheba.apl.washington.edu) experiment showed that liquid water dominates cloud water content over the ice phase in summer, and even winter clouds contain significant amounts of liquid water.21 Moreover, liquid clouds reflect more shortwave radiation, whereas ice clouds are relatively more transparent. The model parameterizations that researchers use to decide whether a cloud is liquid or solid are simple and often based on relatively few field campaigns that aren't always applicable to Arctic conditions. Figure 5 shows the partitioning between liquid and ice in the Arctic from SHEBA observations and three coupled models participating in the IPCC's 4th assessment report. Models with the largest liquid water content show the smallest downwelling shortwave flux during the summer, which is partly compensated for with an increased longwave flux. During the winter months, models with liquid-dominated clouds have higher downwelling longwave radiation compared to models with ice-dominated clouds.

Another small-scale process that isn't well resolved in current GCMs is linked with determining the upper ocean's vertical structure—in particular, the CHL. GCMs often have problems when resolving the sharp salinity gradients at the base of the mixed layer; instead, they tend to produce saltier surface waters and fresher pycnocline waters, which results in a warm Atlantic layer that isn't buffered from the surface. However, the warm Atlantic layer is often deeper than observed because of the upper water column's salinity structure (see Figure 6). The two effects compensate for each other, often giving realistic sea-ice heat and mass balance. Whether the variability around the mean or the response in a changing climate is realistic remains an open question.



A Simple Modeling Approach

To separate the effect of anomalous atmospheric circulation, increases in the downwelling longwave radiation associated with increased greenhouse gas, and the loss of the CHL in creating a summer ice-free Arctic, we can use a simple stand-alone viscous plastic sea-ice model coupled to a slab ocean and atmospheric energy balance model (a detailed description of the model appears elsewhere35). To this end, we ran the model continuously with 1989 wind forcing because that year had an anomalously high NAO index and sea-ice export; with an increased downwelling longwave radiation of 20 W m–2, a typical value simulated by IPCC-AR4 models for 2050; and with a specified ocean heat flux of 20 W m–2, mimicking the loss of the CHL in the Arctic Ocean. Figure 7 shows a present-day climate simulation. We forced the model run with atmospheric forcing fields for the 1949–2005 time period. In our sensitivity studies, we modified only one forcing field at a time; the other fields remained the same as for the present climate run. In all cases, we used a 10-year mean sea-ice thickness field, calculated from the past 10 years of a 50-year run.
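This experimental design (perturb one forcing field at a time, hold everything else at the control values, and average the last 10 years of a 50-year run) maps naturally onto a small driver script. The sketch below assumes a hypothetical run_sea_ice_model function standing in for the stand-alone model; it is not part of the article's code, and the stub here just returns random fields so the loop runs end to end.

import numpy as np

def run_sea_ice_model(years=50, **forcing_overrides):
    """Hypothetical stand-in for the stand-alone sea-ice model.

    Returns an annual September ice-thickness field of shape (years, ny, nx).
    The forcing overrides are accepted but ignored by this placeholder.
    """
    rng = np.random.default_rng(0)
    return rng.uniform(0.5, 5.0, size=(years, 60, 60))

control = {}  # present-day forcing (the 1949-2005 fields in the real setup)
experiments = {
    "control": {},
    "winds_1989": {"winds": "1989_repeated"},
    "longwave_plus_20": {"downwelling_lw_anomaly": 20.0},   # W m-2
    "ocean_heat_flux_20": {"ocean_heat_flux": 20.0},        # W m-2
}

results = {}
for name, override in experiments.items():
    thickness = run_sea_ice_model(years=50, **{**control, **override})
    # 10-year mean thickness field from the last decade of the run
    results[name] = thickness[-10:].mean(axis=0)

for name, field in results.items():
    print(f"{name:20s} mean thickness: {field.mean():.2f} m")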

The main features of the present-day climate stand-alone model simulation include thicker ice north of the Canadian Arctic Archipelago (5 to 6 meters) and thinner ice along the Eurasian Basin (1 meter), in good agreement with observations from submarine and satellite altimeter estimates (http://nsidc.org). This sea-ice thickness pattern is due to the dominant winter winds that tend to push ice from the Eurasian continent toward North America as well as the longer life span of sea ice caught in the Beaufort Gyre. The asymmetry in ice thickness is particularly interesting: ice is thicker in the Beaufort Gyre than in the Lincoln Sea (north of Greenland) because of the advection by the Beaufort Gyre of thick multiyear ice westward in front of the Canadian coastline.

When forcing the model with continuous 1989 wind forcing (and keeping everything else the same), the steady-state response (achieved after seven years) results in a change in sea-ice thickness, with thinner ice primarily in the East Siberian Sea and the Beaufort Gyre (see Figure 7c). The export of thick multiyear ice from the Lincoln Sea is also clearly visible in the Fram Strait and along the East Greenland coastline. In contrast, the simulation with increased downwelling longwave radiation results in much thinner ice over the entire Arctic Ocean (see Figure 7b). Of the three effects that scientists believe have the biggest impact on sea-ice cover, the loss of the CHL has the largest effect because it reduces winter ice growth and contributes to a thinner end-of-winter sea-ice thickness that's more prone to substantial summer melt.

The timescale associated with the decline of sea-ice cover in these simulations also differs. Changes in the longwave forcing are gradual and occur over longer timescales than the ice-surface-ocean system's steady-state response (roughly seven to eight years). The sea-ice cover's response in this simulation is therefore in equilibrium with the forcing. On the other hand, both the changes in large-scale atmospheric circulation and in the CHL can occur on much shorter timescales, so the ice-surface-ocean response time governs the system's response, which in turn leads to rapid changes in sea-ice conditions. For this reason, although the surface forcing from the increased downwelling longwave radiation and from the loss of the CHL is of the same order of magnitude, a loss of the CHL could lead to a much more rapid decline of the sea-ice cover.

Figure 5. Liquid vs. ice clouds. In comparing the May–September cloud ice vs. liquid water paths from SHEBA's ground-based observations and the GISS-ER, HadCM3, and CCSM3 models, we see a dominance of liquid water clouds in summer, which is sometimes underestimated in some models. The data are averaged from 1959–1998; the numbers above each bar indicate the total cloud water paths (g m–2), and the percentages show the partitioning into liquid and ice phases.



The study of polar oceans, sea ice, and the high-latitude climate relies heavily on regional models and GCMs that incorporate several critical Earth system components. Climate models suggest a transition to ice-free Arctic conditions in the summer in the near future (in 50 to 100 years). This represents an unprecedented change in the Arctic climate, with potentially far-reaching effects.

Fortunately, several institutions, including national research centers and universities, have groups of researchers working on the development, numerical implementation, and coupling of new and improved climate models. These group efforts provide a unique opportunity for scientists in the computational sciences to tackle important climate issues in a stimulating multidisciplinary research environment.

Acknowledgments

This survey was supported by the Natural Sciences and Engineering Research Council of Canada's Discovery Grant program and the US National Science Foundation under grants OPP-0230264, OPP-0230325, and ARC-05-20496. We thank the modeling groups for providing their data for analysis, the Program for Climate Model Diagnosis and Intercomparison (PCMDI) for collecting and archiving the model output, and the Joint Scientific Committee/Climate Variability and Predictability Working Group on Coupled Modeling (WGCM) for organizing the model data analysis activity. The US Department of Energy's Office of Science supports the data archive.

Figure 6. Vertical ocean temperature and salinity structure. Four models give different (a) simulated March temperatures and (b) vertical salinity profiles (shown for 87°N, 0°W; the curves are GFDL, GISS, MIROC, CCSM3, and PHC). Of particular interest is the fact that most models have a reduced stratification in the upper ocean and a warm Atlantic layer deeper than observed. The exception is CCSM3, which has a good representation of surface stratification, even though it shows Atlantic waters as 1° Celsius too warm and somewhat saltier.

Figure 7. Sensitivity studies using the stand-alone model. We simulated the 10-year mean September sea-ice thickness distribution from a stand-alone granular sea-ice model forced with (a) climatological forcing, (b) increased downwelling longwave radiation, (c) continuous daily varying 1989 wind forcing, and (d) an increased ocean heat flux (17 W m–2). The black arrows show the eastern Arctic's dominant winter wind pattern in 1989.

References

1. D.J. Cavalieri et al., "Observed Hemispheric Asymmetry in Global Sea Ice Changes," Science, vol. 278, no. 5340, 1997, pp. 1104–1106.
2. D. Rothrock, Y. Yu, and G. Maykut, "Thinning of the Arctic Sea-Ice Cover," Geophysical Research Letters, vol. 26, no. 23, 2000, pp. 3469–3472.
3. I.G. Rigor et al., "Variations in Surface Air Temperature Observations in the Arctic," J. Climate, vol. 13, no. 5, 2000, pp. 896–914.
4. J.M. Walsh, W.L. Chapman, and T.L. Shy, "Recent Decrease of Sea Level Pressure in the Central Arctic," J. Climate, vol. 9, no. 2, 1996, pp. 480–488.

5. G.J. McCabe, M.P. Clark, and M.C. Serreze, "Trends in Northern Hemisphere Surface Cyclone Frequency and Intensity," J. Climate, vol. 14, no. 12, 2001, pp. 2763–2768.
6. I.V. Polyakov et al., "One More Step Toward a Warmer Arctic," Geophysical Research Letters, vol. 32, 2005; doi:10.1029/2005GL023740.
7. L. Hinzman et al., "Evidence and Implications of Recent Climate Change in Northern Alaska and Other Arctic Regions," Climatic Change, vol. 72, no. 3, 2005, pp. 251–298.
8. B.J. Peterson et al., "Increasing River Discharge to the Arctic Ocean," Science, vol. 298, no. 5601, 2002, pp. 2171–2173.
9. M. Sturm, C. Racine, and K. Tape, "Climate Change: Increasing Shrub Abundance in the Arctic," Nature, vol. 411, no. 6837, 2001, pp. 546–547.
10. J.C. Stroeve et al., "Tracking the Arctic's Shrinking Ice Cover: Another Extreme September Minimum in 2005," Geophysical Research Letters, vol. 32, no. 4, 2004; doi:10.1029/2004GL021810.
11. M.I. Budyko, "The Effects of Solar Radiation on the Climate of the Earth," Tellus, vol. 21, 1969, pp. 611–619.
12. J.A. Francis et al., "Clues to Variability in Arctic Sea Ice Minimum Extent," Geophysical Research Letters, vol. 32, no. 21, 2005.
13. A.E. Armstrong, L.-B. Tremblay, and L.A. Mysak, "A Data-Model Intercomparison Study of Arctic Sea-Ice Variability," Climate Dynamics, vol. 20, no. 5, 2003, pp. 465–476.
14. G. Arfeuille, L.A. Mysak, and L.-B. Tremblay, "Simulation of the Interannual Variability of the Wind-Driven Arctic Sea-Ice Cover during 1958–1998," Climate Dynamics, vol. 16, Feb. 2000, pp. 107–121.
15. J.T. Houghton et al., Climate Change 2001: The Scientific Basis, Cambridge Univ. Press, 2001.
16. M.M. Holland and C. Bitz, "Polar Amplification of Climate Change in Coupled Models," Climate Dynamics, vol. 21, Sept. 2003, pp. 221–232.
17. K. Aagaard and E.C. Carmack, "The Role of Sea Ice and Other Fresh Water in the Arctic Circulation," J. Geophysical Research, vol. 94, Oct. 1989, pp. 14485–14498.
18. D.G. Martinson and M. Steele, "Loss of the Arctic Cold Halocline: Is the Arctic Becoming More Like the Antarctic?," Geophysical Research Letters, vol. 28, no. 2, 2001, pp. 307–310.
19. H. Huwald, L.-B. Tremblay, and B. Blatter, "Reconciling Different Datasets from the Surface Heat Budget of the Arctic (SHEBA)," J. Geophysical Research, vol. 110, May 2005; doi:10.1029/2003JC002221.
20. M.M. Holland, C. Bitz, and L.B. Tremblay, "Future Abrupt Reductions in the Summer Arctic Sea Ice," Geophysical Research Letters, vol. 7, Dec. 2006.
21. J.M. Intrieri et al., "An Annual Cycle of Arctic Surface Cloud Forcing at SHEBA," Annals of Glaciology, vol. 107, no. 10, 2002; doi:10.1029/2000JC000439.
22. Y. Chen et al., "Observed Relationships between Arctic Longwave Cloud Forcing and Cloud Parameters Using a Neural Network," J. Climate, vol. 19, no. 16, 2006, pp. 4087–4104.
23. I. Gorodetskaya et al., "The Influence of Cloud and Surface Properties on the Arctic Ocean Short Wave Radiation Budget in Coupled Models," to be published in J. Climate, 2007.
24. H.L. Stern, D.A. Rothrock, and R. Kwok, "Open Water Production in Arctic Sea Ice: Satellite Measurements and Model Parameterizations," J. Geophysical Research, vol. 100, Oct. 1995, pp. 20601–20612.
25. M.G. McPhee et al., "Upwelling of Arctic Pycnocline Associated with Shear Motion of Sea Ice," Geophysical Research Letters, vol. 32, May 2005; doi:10.1029/2004GL021819.
26. M. Steele and T. Boyd, "Retreat of the Cold Halocline Layer in the Arctic Ocean," J. Geophysical Research, vol. 103, May 1998, pp. 10419–10435.
27. G. Bjork et al., "Return of the Cold Halocline Layer to the Amundsen Basin of the Arctic Ocean: Implications for the Sea Ice Mass Balance," Geophysical Research Letters, vol. 29, June 2002; doi:10.1029/2001GL014157.
28. P. Schlosser et al., "Decrease of River Runoff in the Upper Waters of the Eurasian Basin, Arctic Ocean, between 1991 and 1996: Evidence from Delta O-18 Data," Geophysical Research Letters, vol. 29, no. 9, 2002.
29. D.G. Martinson and R. Iannuzzi, "Antarctic Ocean-Ice Interactions: Implication from Ocean Bulk Property Distributions in the Weddell Gyre," Antarctic Sea Ice: Physical Processes, Interactions, and Variability, Antarctic Research Series, vol. 74, 1998, pp. 243–271.
30. D. Quadfasel et al., "Warming in the Arctic," Nature, vol. 350, no. 6317, 1991, p. 385.
31. M.J. Karcher et al., "Arctic Warming: Evolution and Spreading of the 1990s Warm Event in the Nordic Seas and the Arctic Ocean," J. Geophysical Research, vol. 108, June 2003; doi:10.1029/2001JC001265.
32. C. Bitz, J.C. Fyfe, and G.M. Flato, "Sea Ice Response to Wind Forcing from AMIP Models," J. Climate, vol. 15, no. 5, 2002, pp. 522–536.
33. G.L. Stephens et al., "The CloudSat Mission and the A-Train: A New Dimension of Space-Based Observations of Clouds and Precipitation," Bulletin Am. Meteorological Soc., vol. 83, no. 12, 2002, pp. 1771–1790.
34. S. Kato and N.G. Loeb, "Top-of-Atmosphere Shortwave Broadband Observed Radiance and Estimated Irradiance over Polar Regions from Clouds and the Earth's Radiant Energy System (CERES) Instruments on Terra," J. Geophysical Research, vol. 110, Apr. 2005; doi:10.1029/2004JD005308.
35. L.-B. Tremblay and L.A. Mysak, "Modeling Sea Ice as a Granular Material, Including the Dilatancy Effect," J. Physical Oceanography, vol. 27, no. 11, 1997, pp. 2342–2360.

L. Bruno Tremblay is an assistant professor at McGill University and an Adjunct Doherty Research Scientist at the Lamont-Doherty Earth Observatory of Columbia University. His research interests include Arctic climate and climate change, Arctic hydrology, sea-ice dynamics, and thermodynamic modeling. Tremblay has a PhD in atmospheric and oceanic sciences from McGill University. Contact him at [email protected].

Marika M. Holland is a climate scientist at the US National Center for Atmospheric Research (NCAR). Her research interests include high-latitude climate variability with a particular focus on the role of sea ice in the climate system. Holland has a PhD in atmosphere and ocean sciences from the University of Colorado. Contact her at [email protected].

Irina V. Gorodetskaya is a PhD candidate in the Department of Earth and Environmental Sciences at Columbia University. Her interests include clouds and ice in the Arctic, large-scale data set analysis, and process modeling. Contact her at [email protected].

Gavin A. Schmidt is a climate scientist at the NASA Goddard Institute for Space Studies in New York, where he also works on climate-model development and evaluation. Schmidt has a PhD in applied mathematics from the University of London. Contact him at [email protected].