

    26th Biennial Conference on

    Numerical Analysis

    23 – 26 June, 2015

    Contents

    Introduction 1
    Invited Speakers 3
    Abstracts of Invited Talks 4
    Abstracts of Minisymposia 8
    Abstracts of Contributed Talks 38


  • Introduction

    Dear Participant,

    On behalf of the Strathclyde Numerical Analysis and Scientific Computing Group, it is our pleasure to welcome you to the 26th Biennial Numerical Analysis conference. This is the fourth time the meeting has been held at Strathclyde, continuing the long series of conferences originally hosted in Dundee. This year we are particularly delighted to celebrate the 50th anniversary of the very first meeting, which took place at the University of St Andrews in 1965, with four plenary speakers (D. Kershaw, J.D. Lambert, A.R. Mitchell and M.R. Osborne) and around 25 participants from the UK. The conference has since gone from strength to strength, with this year's meeting involving over 200 participants from 22 different countries across the world.

    The conference is rather unusual in the sense that it seeks to encompass all areas of numerical analysis, and the list of invited speakers reflects this aim. We have once again been extremely fortunate in securing a top line-up of plenary speakers, and we very much hope that you enjoy sampling the wide range of interesting topics which their presentations will cover.

    The meeting is funded almost entirely from the registration fees of the participants. Additional financial support for some overseas participants has come from the Dundee Numerical Analysis Fund, started by Professor Gene Golub from Stanford University in 2007. We are also indebted to the City of Glasgow for once again generously sponsoring a wine reception at the City Chambers on Tuesday evening, to which you are all invited.

    We hope you will enjoy both the scientific and social aspects of the meeting, and look forward in particular to celebrating our Golden Anniversary in style at Wednesday evening's ceilidh dance and Thursday's Conference Dinner!

    Philip Knight
    John Mackenzie
    Alison Ramage

    Conference Organising Committee


  • Information for participants

    • General. There will be a registration desk in the foyer of the John Anderson building (building 16 on the campus map, entry on Level 4 from Taylor Street as indicated). The organisers can be contacted there during tea and coffee breaks.

    • Accommodation. All rooms are in the Campus Village. Check-out time is 10:00 on day of departure. On Friday morning, luggage may be left in room JA3.27.

    • Meals. Most meals will be served in the Aroma Dining Room in the Lord Todd building (building 26 on the campus map, entry as indicated). Breakfast is available from 07.30 until 09.00. The times of lunches and dinners are as indicated in the conference programme. A buffet lunch will be served on Friday in the foyer outside JA3.25. Coffee and tea will be provided at the advertised times in the foyer outside JA3.25.

    • Lecture rooms. These are in the John Anderson building (building 16, enter on Level 4 from Taylor Street). The main auditorium (JA3.25) is down one floor from the main entrance, along with rooms JA3.14, JA3.17, JA3.26 and JA3.27. The additional rooms for parallel sessions are JA4.12 (on the entrance level of the John Anderson building near the registration desk), and JA5.05 and JA5.07 (on level 5 of the John Anderson building).

    • Chairing sessions. It is hoped that if you are listed as chairing a session, you will be willing to help in this way. Minisymposium organisers should organise chairpeople for their own sessions (including any contributed talks which follow) as appropriate. A break of 5 minutes has been allowed for moving between rooms. Please keep speakers to the timetable!

    • Book displays. There will be books on display for the duration of the conference in room JA3.26.

    • Reception. A reception for all participants hosted by Glasgow City Council will be held in the City Chambers on Tuesday 23rd June from 20.00 to 21.00. The City Chambers is marked on the campus map: entry is from George Square.

    • Ceilidh dance. A celebration to mark 50 years since the first Biennial Numerical Analysis conference will be held on Wednesday 24th June at 18:30 (for 19:00 dinner) in the Barony Hall (building 30 on the campus map). This will take the form of a buffet meal, followed by some Scottish dancing.

    • Conference dinner. The conference dinner will be held in the Auditorium of Òran Mór on Thursday 25th June at 19.30 (for 20:00 dinner). The venue is located at the top of Byres Road (G12 8QX): directions on how to travel there are provided in your conference folder. The guest speaker will be Professor Nick Higham, University of Manchester.

    • Internet Access. Wireless access is available in all of the meeting rooms and in the Lord Todd bar/restaurant. If you require access to a fixed terminal, please contact the organisers to obtain a username/password.

    • Bar. There is a bar in the Lord Todd building (building 26) next to the dining room.

    • Sports facilities. Conference delegates can use the University sports facilities (building 3) by obtaining a card from the Student Village Office. The cost of the various facilities varies.


  • Invited Speakers

    Folkmar Bornemann TU München [email protected]

    Susanne Brenner Louisiana State University [email protected]

    Albert Cohen Université Pierre et Marie Curie [email protected]

    Charlie Elliott University of Warwick [email protected]

    Martin Gander Université de Genève [email protected]

    Mike Giles University of Oxford [email protected]

    Jan Hesthaven EPFL [email protected]

    Tamara Kolda Sandia National Laboratories [email protected]

    Cleve Moler MathWorks [email protected]

    Michael Saunders Stanford University [email protected]

    Rob Scheichl University of Bath [email protected]

    Karen Willcox MIT [email protected]


  • Abstracts of Invited Talks

    Random Matrix Distributions, Operator Determinants, and Numerical Noise

    Folkmar Bornemann (Technische Universität München, Germany)

    Because of universal scaling laws, distributions and correlation functions of classical random matrix ensembles and combinatorial growth processes in the large size limit have become increasingly important in physics and statistics. Their effective numerical computation has been made possible by evaluating higher derivatives of operator determinants. We review the underlying mathematical ideas and demonstrate how numerical explorations have led to new formulae, to new numerical algorithms, and finally allowed us to exhibit universal scaling in some concrete physical experiments. Special attention is given to the sharp assessment of numerical errors: we relate them to robust statistics of numerical noise in the tail of Chebyshev expansions.

    Finite Element Methods for Fourth Order Elliptic Variational Inequalities

    Susanne C. Brenner (Louisiana State University)

    Fourth order elliptic variational inequalities appear in obstacle problems for Kirchhoff plates and optimal control problems constrained by second order elliptic partial differential equations. The numerical analysis of these variational inequalities is more challenging than the analysis in the second order case because the complementarity forms of fourth order variational inequalities only exist in a weak sense. In this talk we will present a new approach to the analysis of finite element methods for fourth order elliptic variational inequalities that is applicable to C1 finite element methods, classical nonconforming finite element methods, and discontinuous Galerkin methods.

    Adaptive algorithms for high dimensional interpolation

    Albert Cohen (Université Pierre et Marie Curie, Paris)

    There exist many classical methods for interpolating a function of one or several variables. Practically all of these methods however face difficulties when considering functions of a large number of variables. We shall discuss and compare two approaches that can handle high dimensional data, the first based on Gaussian processes and the second based on sparse polynomial expansions. Both approaches give rise to adaptive greedy algorithms in which the interpolation points are chosen in a sequential manner, and which are still not well understood from a theoretical point of view. Our main motivation for this work is the non-intrusive treatment of parametric PDEs.

    Parabolic PDEs on evolving domains

    Charles Elliott (University of Warwick)

    I will discuss recent progress on the numerical analysis of PDEs on moving domains and in particular on the use of evolving finite element spaces.

    Linear and Non-Linear Preconditioning

    Martin J. Gander (Université de Genève)

    The idea of preconditioning iterative methods for the solution of linear systems goes back to Jacobi (1845), who used rotations to obtain a system with more diagonal dominance, before he applied what is now called Jacobi's method. The preconditioning of linear systems for their solution by Krylov methods has become a major field of research over the past decades, and there are two main approaches for constructing preconditioners: either one has very good intuition and can propose directly a preconditioner which leads to a favorable spectrum of the preconditioned system, or one uses the splitting matrix of an effective stationary iterative method like multigrid or domain decomposition as the preconditioner.

    Much less is known about the preconditioning of non-linear systems of equations. The standard iterative solver in that case is Newton's method (1671) or a variant thereof, but what would it mean to precondition the non-linear problem? An important contribution in this field is ASPIN (Additive Schwarz Preconditioned Inexact Newton) by Cai and Keyes (2002), where the authors use their intuition about domain decomposition methods to propose a transformation of the non-linear equations before solving them by an inexact Newton method. Using the relation between stationary iterative methods and preconditioning for linear systems, we show in this presentation how one can systematically obtain a non-linear preconditioner from classical fixed point iterations, and present as an example a new two-level non-linear preconditioner called RASPEN (Restricted Additive Schwarz Preconditioned Exact Newton) with substantially improved convergence properties compared to ASPIN.
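As a minimal illustration of the linear case described in the abstract (not the RASPEN method itself), the sketch below uses the splitting matrix of Jacobi's method, M = diag(A), as the preconditioner of a stationary iteration; the matrix, right-hand side and tolerances are invented for the example.

```python
import numpy as np

def jacobi_preconditioned_richardson(A, b, tol=1e-12, maxit=1000):
    """Stationary iteration x <- x + M^{-1} (b - A x), where the
    splitting matrix M = diag(A) of Jacobi's method acts as the
    preconditioner. Converges for strictly diagonally dominant A."""
    Minv = 1.0 / np.diag(A)          # inverse of the diagonal splitting matrix
    x = np.zeros_like(b, dtype=float)
    for _ in range(maxit):
        r = b - A @ x                # residual of the original system
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x = x + Minv * r             # apply M^{-1} to the residual
    return x

# Hypothetical diagonally dominant test system
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi_preconditioned_richardson(A, b)
```

The same matrix M could equally be handed to a Krylov solver as a preconditioner, which is exactly the correspondence between stationary iterations and preconditioning that the abstract exploits in the non-linear setting.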


  • Multilevel Monte Carlo methods

    Mike Giles (University of Oxford)

    Monte Carlo methods are a standard approach for the estimation of the expected value of functions of random input parameters. However, to achieve improved accuracy one often requires more expensive sampling (such as a finer timestep discretisation of a stochastic differential equation) in addition to more samples. Multilevel Monte Carlo methods aim to avoid this by combining simulations with different levels of accuracy. In the best cases, the average cost of each sample is independent of the overall target accuracy, leading to very large computational savings.

    The lecture will emphasise the simplicity of the approach, give an overview of the range of applications being worked on by various researchers, and also mention some recent extensions. Applications to be discussed will include financial modelling, engineering uncertainty quantification, stochastic chemical reactions, and the Feynman-Kac formula for high-dimensional parabolic PDEs.

    Further information can be obtained from http://people.maths.ox.ac.uk/gilesm/acta/
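The identity behind the approach, E[P_fine] = E[P_coarse] + E[P_fine - P_coarse], can be sketched with a two-level toy estimator (a hypothetical example, not taken from the lecture): many cheap coarse samples estimate the first term, and a few correlated fine/coarse pairs correct the bias.

```python
import math
import random

def two_level_mc(sample_coarse, sample_pair, n_coarse, n_fine, seed=0):
    """Two-level Monte Carlo: E[Pf] = E[Pc] + E[Pf - Pc].
    The correction term has small variance because Pf and Pc
    are computed from the same underlying random input."""
    rng = random.Random(seed)
    coarse = sum(sample_coarse(rng) for _ in range(n_coarse)) / n_coarse
    corr = sum(pf - pc for pf, pc in
               (sample_pair(rng) for _ in range(n_fine))) / n_fine
    return coarse + corr

# Toy problem: estimate E[exp(X)] for X ~ N(0,1); the exact value is e^0.5.
# "Coarse" model: second-order Taylor expansion of exp; "fine" model: exp itself.
def coarse_payoff(rng):
    x = rng.gauss(0.0, 1.0)
    return 1.0 + x + 0.5 * x * x

def pair_payoff(rng):
    x = rng.gauss(0.0, 1.0)            # the same sample drives both levels
    return math.exp(x), 1.0 + x + 0.5 * x * x

estimate = two_level_mc(coarse_payoff, pair_payoff, n_coarse=200000, n_fine=5000)
```

Because the fine model is only sampled 5000 times while the cheap coarse model absorbs the bulk of the variance, the cost profile mimics the multilevel savings described above, albeit on a deliberately trivial problem.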

    High-order methods for fractional differential equations

    Jan S Hesthaven (Ecole Polytechnique Fédérale de Lausanne)

    During the last few years, fractional calculus has emerged as an interesting tool to enable the modeling of a variety of problems often thought to be poorly modeled with classic calculus. Prominent examples suggesting a fractional model can be found in porous and granular flows, highly anisotropic problems, and problems with inherent memory effects, and fractional models can even emerge as a result of homogenization in multiscale problems.

    While the idea of fractional calculus is as old as classic calculus, its theoretical and computational foundation is considerably less developed. Answers to seemingly simple questions of boundary conditions for fractional partial differential equations are often unknown. Furthermore, the nature of the fractional operators makes the development of computational techniques challenging because of the global and often singular nature of the solutions.

    In this presentation we begin by providing some general background on fractional models and then discuss a few examples, including both fractional differential equations and fractional partial differential equations, of different high-order accurate computational methods, their analysis, and use. We shall conclude with a brief discussion of some of the many open problems.
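To make the "global nature" of fractional operators concrete, a basic (first-order, so emphatically not high-order) scheme is the Grünwald-Letnikov approximation; the sketch below is a generic textbook illustration, not a method from the talk.

```python
import math

def gl_fractional_derivative(f, x, alpha, h=1e-3):
    """Grünwald-Letnikov approximation of the order-alpha
    Riemann-Liouville derivative of f on [0, x]:
        D^a f(x) ~ h^{-a} * sum_k w_k f(x - k h),
    with binomial weights w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k.
    Note that every grid value of f contributes: the operator is global,
    unlike a classical finite difference stencil."""
    n = int(round(x / h))
    total, w = 0.0, 1.0
    for k in range(n + 1):
        total += w * f(x - k * h)
        w *= (k - alpha) / (k + 1)   # recurrence for (-1)^k * C(alpha, k)
    return total / h ** alpha

# Check against the exact half-derivative of f(t) = t, which is
# t^{1/2} / Gamma(3/2); at t = 1 this equals 1 / Gamma(1.5).
approx = gl_fractional_derivative(lambda t: t, x=1.0, alpha=0.5)
exact = 1.0 / math.gamma(1.5)
```

The O(n) work per evaluation point, and hence O(n^2) work overall on a grid, is one reason the abstract calls the development of efficient computational techniques challenging.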

    A Survey of Optimization Challenges in Tensor Decomposition

    Tamara G. Kolda (Sandia National Laboratories)

    Tensors are multiway arrays, and tensor decomposition is a powerful tool for compression and data interpretation. In this talk, we survey the optimization approaches that have proved useful as well as open questions. We focus primarily on the canonical polyadic (CP) decomposition, also known as CANDECOMP or PARAFAC, which decomposes a tensor into a sum of rank-1 tensors. The standard approach to fitting a CP factorization is based on alternating least squares, but we also consider all-at-once optimization methods and discuss the special structure of the Hessian. For many real-world problems, the data is incomplete, so we have to reformulate the optimization problem to account for the lack of data. A major open question is how to determine the number of components in the decomposition, and we present new results based on statistical cross-validation. We also consider special formulations of the CP problem, including symmetry constraints, coupled decompositions, alternate objective functions, and the best rank-one approximation (which finds a "tensor eigenpair"). We'll also briefly discuss optimization methods for the Tucker decomposition, which requires optimization on a manifold, and special methods for symmetric Tucker decompositions.
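The alternating least squares approach mentioned above can be sketched for a third-order tensor as follows (a generic textbook formulation, not code from the talk; the dimensions, rank and iteration count are arbitrary choices for the example).

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product: row (j*K + k) equals X[j, :] * Y[k, :]."""
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def cp_als(T, rank, n_iter=200, seed=0):
    """Fit a rank-R CP model T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r] by
    alternating least squares: each factor matrix solves a linear
    least-squares problem while the other two are held fixed."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, -1)                     # mode-1 unfolding
    T2 = T.transpose(1, 0, 2).reshape(J, -1)  # mode-2 unfolding
    T3 = T.transpose(2, 0, 1).reshape(K, -1)  # mode-3 unfolding
    for _ in range(n_iter):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Synthetic noiseless rank-2 tensor: ALS should recover it accurately.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
rel_err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T)
```

Each sweep is a sequence of linear least-squares solves, which is what makes ALS the standard workhorse; the open questions in the abstract (rank selection, missing data, all-at-once methods) all start from this baseline.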

    Evolution of MATLAB

    Cleve Moler (MathWorks)

    We show how MATLAB has evolved over more than 30 years from a simple matrix calculator to a powerful technical computing environment. We demonstrate several examples of MATLAB applications. We conclude with a discussion of current developments, including Parallel MATLAB for multicore and multicomputer systems.

    Cleve Moler is the original author of MATLAB and one of the founders of the MathWorks. He is currently chairman and chief scientist of the company, as well as a member of the National Academy of Engineering and past president of the Society for Industrial and Applied Mathematics.

    See http://www.mathworks.com/company/aboutus/founders/clevemoler.html.


  • Experiments with linear and nonlinear optimization using Quad precision

    Michael Saunders & Ding Ma (Stanford University)

    For challenging numerical problems, William Kahan has said that “default evaluation in Quad is the humane option” for reducing the risk of embarrassment due to rounding errors. Fortunately the gfortran compiler now has a real(16) datatype. This is the humane option for producing Quad-precision software. It has enabled us to build a Quad version of MINOS.

    The motivating influence has been increasingly large LP and NLP problems arising in systems biology. Flux balance analysis (FBA) models of metabolic networks generate multiscale problems involving some large data values in the constraints (stoichiometric coefficients of order 10,000) and some very small values in the solution (chemical fluxes of order 10⁻¹⁰). Standard solvers are not sufficiently accurate, and exact simplex solvers are extremely slow. Quad precision offers a reliable and practical compromise even via software. On a range of multiscale LP examples we find that 34-digit Quad floating-point achieves primal and dual infeasibilities of order 10⁻³⁰ when “only” 10⁻¹⁵ is requested.
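The effect of 34-digit arithmetic on an ill-scaled system can be mimicked with Python's standard decimal module (a toy stand-in for real(16), not the authors' MINOS code; the 2x2 system below is invented to mix coefficient scales of order 1e4 and 1e-10).

```python
from decimal import Decimal, getcontext

getcontext().prec = 34   # roughly IEEE quad: 34 significant decimal digits

def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Cramer's rule, evaluated in the current decimal precision."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# Invented ill-scaled system, built so the exact solution is x = (1e-10, 1):
# this imitates an FBA-style gap between constraint data and solution fluxes.
a11, a12 = Decimal('1E4'), Decimal('1')
a21, a22 = Decimal('1'), Decimal('1E-10')
b1, b2 = Decimal('1.000001'), Decimal('2E-10')
x1, x2 = solve_2x2(a11, a12, a21, a22, b1, b2)
```

With 34 digits the tiny component x1 = 1e-10 survives the subtraction in the numerator intact; in double precision the same scale gap is exactly where accuracy is lost.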

    Multilevel Uncertainty Quantification

    Robert Scheichl (University of Bath)

    The term Uncertainty Quantification is as old as the disciplines of probability and statistics, but as a field of study it is newly emerging. It combines probability and statistics with mathematical and numerical analysis, large-scale scientific computing, experimental data, model development and application sciences to provide a computational framework for quantifying input and response uncertainties which ultimately can be used for more meaningful predictions with quantified and reduced uncertainty. We will motivate the central questions in computational uncertainty quantification through some illustrative examples from subsurface flow, weather and climate prediction and material science. The key challenge that we face in all those applications is the need for fast (tractable) computational tools for high-dimensional quadrature. Due to their tractability, Monte Carlo and other ensemble-type methods are the most widely used techniques, but especially when we want to combine input and output data in a Bayesian approach to inference they very quickly become too costly to be practically useful for large-scale scientific or engineering applications.

    I will focus on multilevel (or multiscale) Monte Carlo methods that exploit the natural model hierarchies in numerical methods for differential equations to overcome this difficulty with a rigorous theoretical and practical control over bias and sampling errors. Most importantly, this approach provides the genuine possibility to apply powerful, but typically expensive, statistical tools, such as Metropolis-Hastings Markov Chain Monte Carlo (MCMC) or sequential Monte Carlo methods, in actual large-scale applications. In particular, I will present a multilevel MCMC algorithm with a computational cost that scales (at least in practice) optimally with respect to the required accuracy for a model problem in subsurface flow, as well as a complete numerical analysis of the method. I will finish my talk by pointing to some possible future research directions and potential applications for this promising new technology.

    The work for this talk is based on collaborations with a large number of people, most notably my young colleagues Julia Charrier (Marseille), Tim Dodwell (Bath) and Aretha Teckentrup (Warwick).

    Data-Driven Model Reduction to Support Decision Under Uncertainty

    Karen Willcox (Massachusetts Institute of Technology)

    Model reduction has become a powerful approach for extracting low-order, cheap surrogate models from high-fidelity simulation models, such as those arising from discretization of systems of partial differential equations. Recent developments in combining traditional projection-based model reduction methods with machine learning methods have led to new approaches that tune and adapt reduced models in the face of changing system properties and dynamic data. This provides new opportunities to discover and learn models, informed by physics-based first principles but guided by data. Our target is the next generation of complex engineered systems, which will be endowed with sensors and computing capabilities that enable new modes of decision-making. For example, new sensing capabilities on aircraft will be exploited to assimilate data on system state, make inferences about system health, and issue predictions on future vehicle behavior, with quantified uncertainties, to support critical operational decisions. Model reduction is one way to achieve this challenging task, in particular through data-driven reduced models that exploit the synergies of physics-based computational modeling and physical data.

    This talk will provide a brief overview of the basics of model reduction and discuss some of our recent work in data-driven reduced model localization and adaptation. I will also offer some future directions in multi-information-source approaches that aim to create a principled strategy for managing high-fidelity models, reduced models, and data in a design setting.


  • Minisymposia abstracts

    Minisymposium M1

    Stable and accurate discretisations for convection-dominated problems

    Organisers: Gabriel Barrenechea and Natalia Kopteva

    Adaptive time step control with variational time stepping schemes for convection-diffusion-reaction equations

    Naveed Ahmed & Volker John (Weierstrass Institute for Applied Analysis and Stochastics (WIAS), Germany)

    Higher order variational time stepping schemes allow an efficient post-processing for computing a higher order solution. In this talk, we present an adaptive algorithm whose time step control utilizes the post-processed solution. As a model problem, we will consider the convection-dominated convection-diffusion-reaction equations. In particular, for the space discretization of convection-diffusion-reaction equations, we will consider the streamline upwind Petrov-Galerkin method as stabilization. Moreover, the continuous Galerkin-Petrov (cGP) and discontinuous Galerkin (dG) schemes will be used as time discretization.

    In a first step, for a fixed time step and a smooth solution, we will show that the cGP(k) and dG(k) methods, with k the polynomial degree with respect to time, are of order k + 1 in the integrated L2(L2) time-space norm, and that the post-processed solution is of order k + 2 in the corresponding norm. Furthermore, at discrete time points, we obtain super-convergence of order 2k and 2k + 1 for the cGP(k) and dG(k) methods, respectively.

    The availability of two solutions with different orders also enables the application of well understood techniques from the numerical analysis of ordinary differential equations for controlling the adaptive time step. We will apply the adaptive algorithm based on the PID and PC11 controllers. The performance of the cGP and dG methods will be compared with the performance of (a slight modification of) the previously proposed adaptive Crank-Nicolson scheme, which is based on controlling the time step with the solution obtained with the Adams-Bashforth scheme. Several numerical tests will be presented to compare the performance of the different time stepping schemes.
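One common form of the PID step-size controller mentioned above can be sketched as follows; the coefficient values are typical choices from the adaptive time stepping literature, not necessarily those used by the authors, and the limiter bounds are invented for the example.

```python
def pid_step(h, err, err_prev, err_prev2, tol,
             k_p=0.075, k_i=0.175, k_d=0.01):
    """PID time step control: the new step is the old step scaled by
    three factors that react to the trend of the error estimate, its
    distance from the tolerance, and its second difference."""
    factor = ((err_prev / err) ** k_p
              * (tol / err) ** k_i
              * (err_prev ** 2 / (err * err_prev2)) ** k_d)
    # Limit the change so one noisy error estimate cannot derail the run.
    return h * min(max(factor, 0.1), 5.0)
```

When the error estimate sits exactly on the tolerance and is not changing, all three factors equal one and the step is kept; an error above the tolerance shrinks the step, one below it allows the step to grow.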

    A robust numerical method for a control problem of singularly perturbed equations

    Alejandro Allendes & Erwin Hernández & Enrique Otarola (Universidad Técnica Federico Santa María, Chile; University of Maryland and George Mason University, USA)

    We consider a linear–quadratic optimal control problem governed by a singularly perturbed convection–reaction–diffusion equation. Since we do not consider constraints on the control, the optimality system associated with the optimal control problem consists of a coupled system involving both the state and adjoint equations. We analyze and discretize the optimality system by using standard bilinear finite elements on the graded meshes introduced by Durán and Lombardi [1,2]. We derive quasi-optimal a priori error estimates for the optimal variables on anisotropic meshes. Finally, we present several numerical experiments which reveal a competitive performance of our method compared with adaptive stabilized schemes.

    References

    [1] R.G. Durán and A.L. Lombardi, Error estimates on anisotropic Q1 elements for functions in weighted Sobolev spaces. Math. Comp., 74(252):1679–1706 (electronic), 2005.

    [2] R.G. Durán and A.L. Lombardi, Finite element approximation of convection diffusion problems using graded meshes. Appl. Numer. Math., 56(10-11):1314–1325, 2006.

    Stability and error analysis of algebraic flux correction schemes

    Gabriel R. Barrenechea & Volker John & Petr Knobloch (University of Strathclyde; Weierstrass Institute for Applied Analysis and Stochastics (WIAS), Germany; Charles University in Prague)

    A family of algebraic flux correction schemes for linear boundary value problems in any space dimension is studied. The main feature of these methods is that they limit the fluxes along each one of the edges of the triangulation, and we suppose that the limiters used are symmetric. For an abstract problem, the existence of a solution, the existence and uniqueness of the solution of a linearized problem, and an a priori error estimate are proved under rather general assumptions on the limiters. For a particular (but standard in practice) choice of the limiters, it is shown that a local discrete maximum principle holds. The theory developed for the abstract problem is applied to convection–diffusion–reaction equations, where in particular an error estimate is derived. Numerical studies show its sharpness.

    hp–Version discontinuous Galerkin methods on polytopic meshes

    Andrea Cangiani, Zhaonan Dong, Emmanuil Georgoulis (University of Leicester) & Paul Houston (University of Nottingham)

    We present recent work on hp–version interior penalty discontinuous Galerkin finite element methods (DGFEM) for advection–diffusion–reaction equations on general computational meshes consisting of polygonal/polyhedral (polytopic) elements [2, 1]. The proposed methods employ elemental polynomial bases of total degree p (Pp–basis) defined in the physical coordinate system, without requiring the mapping from a given reference or canonical frame. New hp–version a priori error bounds are derived based on a specific choice of the interior penalty parameter which allows for edge/face degeneration. Numerical experiments highlighting the performance of the proposed DGFEM are presented.

    References

    [1] A. Cangiani, Z. Dong, E.H. Georgoulis, and P. Houston. hp–version discontinuous Galerkin methods for advection–diffusion–reaction problems on polytopic meshes. Submitted.

    [2] A. Cangiani, E.H. Georgoulis, and P. Houston. hp–version discontinuous Galerkin methods on polygonal and polyhedral meshes. Math. Models Methods Appl. Sci., 24(10):2009–2041, 2014.

    Monotonicity Preserving Techniques for Continuous and Discontinuous Galerkin Methods

    Alba Hierro & Santiago Badia (UPC, CIMNE)

    It is well known that spurious oscillations may appear when using finite elements for solving problems that present shocks or sharp layers. In particular, this might happen when solving the time-dependent transport equation or convection-dominated convection-diffusion problems. There are several ways to avoid this behaviour; in particular, it is quite common to add artificial diffusion in the area where the shocks are detected.

    A particularly desirable feature for such methods is to enjoy the discrete maximum principle (DMP). The DMP ensures that the discrete solution inherits the monotonicity of the continuous problem. The authors propose a monotonicity preserving shock detector that might be used for continuous and discontinuous Galerkin methods.

    Moreover, for the continuous case, the nonlinear artificial diffusion is usually combined with a linear stabilization term in order to improve the results. The authors have proposed a symmetric nodal projection stabilization together with a blending factor that keeps the monotonicity of the method. On the other hand, the Interior Penalty discontinuous Galerkin terms have also been arranged in order for the method to be monotonic in 1D.

    Currently the authors are working on the implementation of such features on hp-adaptive meshes. In particular, it is necessary to properly detect the shock regions to be able to impose linear order on the elements.

    Local projection type stabilisation applied to inf-sup stable discretisations of the Oseen problem

    Gunar Matthies & Lutz Tobiska (Technische Universität Dresden)

    We consider the Oseen problem

    −νΔu + (b · ∇)u + σu + ∇p = f in Ω,

    div u = 0 in Ω, u = 0 on ∂Ω,

    where ν > 0 and σ ≥ 0 are constants and b ∈ (W^{1,∞}(Ω))^d with div b = 0 is a given velocity field. The Oseen problem can be considered as a linearisation of the steady (σ = 0) and the time-discretised non-steady (0 < σ ∼ 1/Δt) Navier–Stokes equations, respectively.

    Inf-sup stable finite element discretisations of the Oseen problem are considered. Hence, no additional pressure stabilisation is needed. However, the standard Galerkin method still suffers in general from global spurious oscillations in the velocity field which are caused by the dominating convection.

    Local projection stabilisation methods will be used to overcome this difficulty. The stabilisation is based on a projection from the underlying approximation space onto a discontinuous projection space. Stabilisation is derived from additional weighted L2-control on the fluctuation of the gradient of the velocity, or only parts of it such as the divergence and/or the derivative in streamline direction.

    The convergence analysis for both the one-level and the two-level local projection stabilisation applied to inf-sup stable discretisations of the Oseen problem will be presented in a unified framework.

    Two different stabilisation terms are considered. The first stabilisation introduces control over the fluctuations of the derivative in streamline direction and over the fluctuations of the divergence separately, whereas the second stabilisation controls the fluctuations of the gradient.

    We will propose new inf-sup stable pairs of finite element spaces which approximate both velocity and pressure by elements of order r. In contrast to the classical equal order interpolation, the velocity components and the pressure are discretised by different elements. We will show the discrete inf-sup condition for these pairs of finite element spaces. For the case of small viscosity, a uniform error estimate of order r + 1/2 will be proved. In the case of discontinuous pressure approximations, an additional term controlling the jumps of the pressure across inner cell faces becomes necessary.

    Numerical tests which confirm the theoretical results will be given.

    Local error estimates for the SUPG method applied to evolutionary convection-reaction-diffusion equations

    Javier de Frutos, Bosco García-Archilla & Julia Novo (University of Seville)

    Local error estimates for the SUPG method applied to evolutionary convection-reaction-diffusion equations are considered. The steady case is reviewed and local error bounds are obtained for general order finite element methods. For the evolutionary problem, local bounds are obtained when the SUPG method is combined with the backward Euler scheme. The arguments used in the proof lead to estimates for the stabilization parameter that depend on the length of the time step. The numerical experiments show that local bounds seem to hold true both with a stabilization parameter depending only on the spatial mesh grid and with other time integrators.

    Multiscale Hybrid-Mixed Method for Advective-Reactive Dominated Problems with Heterogeneous Coefficients

    Rodolfo Araya & Christopher Harder & Diego Paredes & Frédéric Valentin (Universidad de Concepción, Chile; Metropolitan State University of Denver; Pontifical Catholic University of Valparaíso - IMA/PUCV; Laboratório Nacional de Computação Científica - LNCC, Brazil)

    A new family of finite element methods, named Multiscale Hybrid-Mixed methods (or MHM for short), aims to solve reactive-advective dominated problems with multiscale coefficients on coarse meshes. The underlying upscaling procedure transfers to the basis functions the responsibility of achieving high orders of accuracy. The upscaling is built inside the general framework of hybridization, in which the continuity of the solution is relaxed a priori and imposed weakly through the action of Lagrange multipliers. This characterizes the unknowns as the solutions of local problems with Robin boundary conditions driven by the multipliers. Such local problems are independent of one another, yielding a process naturally shaped for parallelization and adaptivity. Moreover, the multiscale decomposition indicates a new adaptive algorithm to set up local spaces defined using a face-based a posteriori error estimator. Interestingly, it also embeds a postprocessing of the dual variable (flux) which preserves local conservation properties of the exact solution. Extensive numerical validations assess the claimed optimal rates of convergence, the robustness of the method with respect to the model's coefficients, and the adaptivity algorithm.

    Keywords: reaction-advection-diffusion, multiscale, mixed-hybrid, adaptivity

    References

    [1] C. Harder, D. Paredes and F. Valentin, A Family of Multiscale Hybrid-Mixed Finite Element Methods for the Darcy Equation with Rough Coefficients, Journal of Computational Physics, Vol. 245, pp. 107-130, 2013.
    [2] R. Araya, C. Harder, D. Paredes and F. Valentin, Multiscale Hybrid-Mixed Methods, SIAM Journal on Numerical Analysis, Vol. 51(6), pp. 3505-3531, 2013.
    [3] C. Harder, D. Paredes and F. Valentin, On a Multiscale Hybrid-Mixed Method for Advective-Reactive Dominated Problems with Heterogeneous Coefficients, Multiscale Modeling & Simulation, Vol. 13(2), pp. 491-518, 2015.

    Virtual Element Methods for Elliptic Problems

    Oliver Sutton & Andrea Cangiani (University of Leicester) & Gianmarco Manzini (LANL)

    The virtual element method is a recent generalisation of the standard conforming finite element method to meshes consisting of arbitrary (convex or non-convex) polygonal elements, and may be viewed as the variational analogue of the mimetic finite difference method. Despite being only recently introduced, the virtual element method has already been applied to a wide variety of different problems, including the Poisson problem, plate-bending and linear elasticity problems, the Stokes problem, the Steklov eigenvalue problem and the simulation of discrete fracture networks. However, until recently an exposition of the method for linear elliptic problems with non-constant coefficients has been lacking. In this talk, we will discuss ongoing work to extend the virtual element methodology to this class of problems and the modifications to the original virtual element framework which were necessary to produce a practical method.

    Minisymposium M2

    Kernel Methods in Numerical Analysis and Learning Theory

    Organisers
    Jeremy Levesley and Holger Wendland

    Learning functions on data defined manifolds

    Frank Filbir (Helmholtz Zentrum München)

    Many practical applications, for example, document analysis, face recognition, semi-supervised learning, and image processing, involve a large amount of very high dimensional data. Typically, this data has a lower intrinsic dimensionality; for example, one may assume that it belongs to a low dimensional manifold in a high dimensional, ambient Euclidean space. The desire to take advantage of this low intrinsic dimensionality has recently prompted a great deal of research on diffusion geometry techniques. In this talk we will demonstrate how kernel techniques can be linked with this approach in order to learn functions on such data defined manifolds.

    Scaling inference for Gaussian processes using stochastic linear algebra techniques

    Maurizio Filippone (University of Glasgow)

    Probabilistic kernel machines based on Gaussian Processes (GPs) are popular in several applied domains due to their flexible modelling capabilities and interpretability. In applications where quantification of uncertainty is of primary interest, it is necessary to carry out Bayesian inference of GP covariance parameters (kernel parameters), and this would require repeatedly calculating the marginal likelihood of GP models. The formidable computational challenge associated with this is that the marginal likelihood is only computable in the case of GP models applied to regression problems. Even then, for large datasets, it is not possible to compute the marginal likelihood exactly. This is because computing the marginal likelihood entails storing and factorising the kernel matrix, which is generally dense, requiring O(n²) storage and O(n³) computations, n being the number of input data. This has motivated the research community to develop a variety of approximation techniques. Even though such approximations make it possible to recover computational tractability, it is not possible to determine to what extent they affect the performance of GP-based models.
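    The O(n³) bottleneck referred to here is the Cholesky factorisation used to evaluate the exact log-marginal likelihood log p(y) = -½ yᵀK⁻¹y - ½ log det K - (n/2) log 2π. The following is a minimal sketch, not the authors' code; the squared-exponential kernel and all parameter values are purely illustrative:

    ```python
    import numpy as np

    def rbf_kernel(X, lengthscale=1.0, variance=1.0):
        # Dense squared-exponential kernel matrix: O(n^2) storage.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return variance * np.exp(-0.5 * d2 / lengthscale**2)

    def log_marginal_likelihood(X, y, noise=0.1):
        n = len(y)
        K = rbf_kernel(X) + noise * np.eye(n)
        L = np.linalg.cholesky(K)            # O(n^3): the bottleneck
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return (-0.5 * y @ alpha
                - np.log(np.diag(L)).sum()   # equals -0.5 * log det(K)
                - 0.5 * n * np.log(2 * np.pi))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
    print(log_marginal_likelihood(X, y))
    ```
    
    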

    A recent trend in the area of scalable Bayesian inference is developing inference techniques based on “mini-batches”, namely methods that combine computations carried out on subsets of the data without introducing any bias in the inference process. The Stochastic Gradient Langevin Dynamics (SGLD) algorithm [Welling and Teh, ICML, 2011], for example, uses mini-batches to estimate the gradient of the log-likelihood unbiasedly. The appeal of SGLD is that, at the expense of introducing an asymptotically vanishing amount of bias, inference can be carried out without having to compute the likelihood and without the need to compute the gradient of the log-likelihood exactly, resulting in tremendous computational savings.

    The applicability of SGLD and many other inference methods based on mini-batches hinges on the factorisation properties of the likelihood. When this is not the case, these methods cannot be directly applied. GP-based statistical models offer a perfect example of a class of models where the marginal likelihood does not factorise, but reducing computational complexity is highly desirable due to the poor scalability of traditional likelihood-based inference methods.

    This work proposes Bayesian inference for GP regression using an adaptation of SGLD where an unbiased estimate of the gradient of the log-marginal likelihood is calculated employing stochastic linear algebra techniques, leading to an expression involving dense linear systems only. This has the enormous advantage that dense linear systems can be solved iterating matrix-vector products that (i) are highly parallelisable, (ii) do not require storing the kernel matrix, leading to O(n) storage, and (iii) require O(n²) computations. Furthermore, a novel unbiased linear systems solver is developed to accelerate the unbiased estimation of the gradient of the log-marginal likelihood needed by SGLD. The results demonstrate the possibility to enable scalable and exact (in a Monte Carlo sense) Bayesian inference for Gaussian processes.
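    The gradient of the log-marginal likelihood with respect to a kernel parameter involves a trace term tr(K⁻¹ ∂K/∂θ), and a standard stochastic linear algebra device for estimating it unbiasedly using only matrix-vector products is Hutchinson's estimator combined with an iterative solver. The sketch below illustrates that general idea; it is not the authors' algorithm (which additionally uses a novel unbiased linear solver), and the matrices are toy examples:

    ```python
    import numpy as np
    from scipy.sparse.linalg import cg, LinearOperator

    def hutchinson_trace_term(K_op, dK, n, n_probes=10, rng=None):
        # Unbiased estimate of tr(K^{-1} dK) using Rademacher probe
        # vectors: E[r^T K^{-1} dK r] = tr(K^{-1} dK). Only matrix-vector
        # products and CG solves are needed, so the kernel matrix never
        # has to be stored or factorised.
        rng = rng or np.random.default_rng()
        est = 0.0
        for _ in range(n_probes):
            r = rng.choice([-1.0, 1.0], size=n)
            z, info = cg(K_op, dK @ r)      # z = K^{-1} (dK r)
            assert info == 0                 # CG converged
            est += r @ z
        return est / n_probes

    # Toy check on a small dense SPD matrix (illustrative only).
    rng = np.random.default_rng(1)
    A = rng.normal(size=(30, 30))
    K = A @ A.T + 30 * np.eye(30)            # SPD "kernel" matrix
    dK = np.eye(30)                          # e.g. derivative w.r.t. noise
    K_op = LinearOperator((30, 30), matvec=lambda v: K @ v)
    exact = np.trace(np.linalg.solve(K, dK))
    approx = hutchinson_trace_term(K_op, dK, 30, n_probes=200, rng=rng)
    print(exact, approx)
    ```
    
    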

    A High-Order, Analytically Divergence-free Approximation Method for the Navier-Stokes Equations

    Christopher Keim (University of Bayreuth)


    We present a new high-order approximation method for the incompressible Navier-Stokes equations using collocation in space and a method of lines approach to split variables. Our spatial approximation space consists of smooth, analytically divergence-free functions based on specifically designed matrix-valued kernels. Besides yielding analytically divergence-free approximations of the velocity, our approximation scheme has the additional advantages that no mesh has to be generated, no numerical integration is required and that the velocity and pressure part of the approximate solution are computed simultaneously. Neither an inf-sup condition nor the solution of an additional Poisson problem is required.

    For simplicity, our analysis will be restricted to the case of periodic boundary conditions. We will give exact error bounds for the semi-discrete scheme and also for the fully discretised system.

    Filtering and parameter estimation of partially observed diffusion processes using Gaussian RBFs

    Elisabeth Larsson, Josef Höök, Erik Lindström & Lina von Sydow (Uppsala University)

    Financial asset prices can be modeled as stochastic diffusion processes involving a number of parameters. Based on market observations over time, we want to estimate these parameters. However, due to the so-called ask-bid spread, there is an uncertainty in the observed data. We model the spread as additive noise, and show that using Gaussian radial basis functions (RBFs) leads to a convenient mathematical representation. Furthermore, substantial parts of the computations can be performed analytically if RBFs are used for approximating transition densities. We present numerical results for a short term interest rate model showing that we can generate a smooth likelihood surface.

    Grid Refinement in the Construction of Lyapunov Functions Using Radial Basis Functions

    Najila Mohammed (Sussex University)

    Meshless collocation based on radial basis functions (kernel functions) is an effective tool to solve linear PDEs. Moreover, RBFs can be used to construct Lyapunov functions, whose existence guarantees determining subsets of the domain of attraction of an equilibrium of an ODE.

    In this talk, a new grid refinement strategy associated with this construction method will be presented and illustrated in examples.

    Approximation of Lyapunov Functions from Data

    Kevin Webster, Peter Giesl, Boumediene Hamzi & Martin Rasmussen (Imperial College London)

    Methods have previously been developed for the approximation of Lyapunov functions using radial basis functions. However, these methods assume that the evolution equations are known. We consider the problem of approximating a given Lyapunov function using radial basis functions where the evolution equations are not known, but we instead have sampled data which is contaminated with noise. Our approach is to first approximate the underlying vector field, and use this approximation to then approximate the Lyapunov function. Our approach combines elements of machine learning/statistical learning theory with the existing theory of Lyapunov function approximation. Error estimates are provided for our algorithm.

    A high-order, analytically divergence-free approximation method for the time-dependent Stokes problem

    Holger Wendland & Christopher Keim (University of Bayreuth)

    In this talk, I will present and analyse a new high-order approximation method for the time-dependent Stokes equation. The new method is based upon an analytically divergence-free approximation space using smooth, matrix-valued kernels. This method has several advantages. It is a truly meshfree method. It allows us to reconstruct the velocity and the pressure part of the solution simultaneously, without solving an additional Poisson problem and without the requirement of an inf-sup condition. Moreover, since we use collocation in space, no numerical integration is required. I will give explicit error bounds for the semi-discrete approximation as well as for the fully discretised system.

    Radial Basis Function Interpolation with Error Indicator

    Qi Zhang & Jeremy Levesley (University of Leicester)

    Radial Basis Function (RBF) methods have been gaining attention for their simplicity and ease of implementation in multivariate scattered data approximation. My study focuses on applying RBF interpolation to build a surrogate model for an unknown target function f within limited resources while providing a sufficient accuracy level.

    In order to achieve satisfying accuracy levels of the model, the resources should be carefully selected and used. In this case, the resources are the scattered nodes measured from the target function. Compared to using all the scattered nodes at once to build a whole domain interpolant, domain decomposition methods give better accuracy and fewer conditioning problems.

    In my model, a domain decomposition method is applied. Firstly, a whole domain interpolant S based on well distributed nodes is established which approximates the target function. Secondly, an error indicator is used to detect the error of S; it should be able to detect the sub-domains where the error is large. Thirdly, local error correction interpolants are used to correct the error on the previously identified sub-domains. By repeating the second and third steps, the model can reach its pre-set accuracy level.

    The error indicator uses polyharmonic spline basis functions to reconstruct a local approximation S_local without using extra nodes. The deviation between S_local and f indicates the error level in this sub-domain. By continuously applying the error indicator, an adaptive node distribution is obtained.
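    The refinement loop described above can be sketched in one dimension as follows. This is a minimal illustration only: the cubic polyharmonic spline matches the abstract, but the target function, node counts and refinement rule are invented, and for simplicity the indicator here compares the interpolant against the target on a fine grid rather than reconstructing a node-free local approximation:

    ```python
    import numpy as np

    def phs_interpolant(x_nodes, f_nodes):
        # Cubic polyharmonic spline s(x) = sum_j c_j |x - x_j|^3 + d0 + d1*x,
        # with the usual linear polynomial augmentation.
        n = len(x_nodes)
        A = np.abs(x_nodes[:, None] - x_nodes[None, :]) ** 3
        P = np.column_stack([np.ones(n), x_nodes])
        M = np.block([[A, P], [P.T, np.zeros((2, 2))]])
        coef = np.linalg.solve(M, np.concatenate([f_nodes, np.zeros(2)]))
        c, d = coef[:n], coef[n:]
        return lambda x: ((np.abs(x[:, None] - x_nodes[None, :]) ** 3) @ c
                          + d[0] + d[1] * x)

    f = lambda x: np.tanh(20 * (x - 0.5))    # target with a steep layer
    x_nodes = np.linspace(0.0, 1.0, 9)
    s = phs_interpolant(x_nodes, f(x_nodes))
    x_test = np.linspace(0.0, 1.0, 501)
    err_before = np.max(np.abs(s(x_test) - f(x_test)))

    # Refine: find the sub-interval with the largest indicated error and
    # insert its midpoint as a new node, then rebuild the interpolant.
    for _ in range(8):
        errs = [np.max(np.abs(s(np.linspace(a, b, 11)) - f(np.linspace(a, b, 11))))
                for a, b in zip(x_nodes[:-1], x_nodes[1:])]
        worst = int(np.argmax(errs))
        mid = 0.5 * (x_nodes[worst] + x_nodes[worst + 1])
        x_nodes = np.sort(np.append(x_nodes, mid))
        s = phs_interpolant(x_nodes, f(x_nodes))

    err_after = np.max(np.abs(s(x_test) - f(x_test)))
    print(err_before, err_after)
    ```

    The new nodes cluster in the steep layer around x = 0.5, which is exactly the adaptive node distribution the indicator is designed to produce.
    
    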

    Analogues of Classical Results on Radial Basis Functions for Zonal Basis Functions on the Sphere

    Wolfgang zu Castell & Rick Beatson, Yuan Xu (Helmholtz Zentrum München)

    Several properties of basis functions used for interpolation and approximation in Euclidean spaces can be derived from Bochner's theorem characterising radial positive definite functions. The same type of equivalence has been stated by Schoenberg for zonal basis functions on the sphere.

    The spherical functions characterising this geometric set-up are given by Gegenbauer polynomials. Aiming at transferring well-established results from radial basis function approximation to the geometric setting of the sphere, certain properties of Gegenbauer polynomials are needed. While some of these properties are already known, others lead to interesting problems dealing with this family of orthogonal polynomials.
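    For context, Schoenberg's characterisation (stated here from the classical literature rather than from the talk) says that a continuous zonal function g is positive definite on the sphere S^{d-1} ⊂ R^d if and only if it has a Gegenbauer expansion with nonnegative coefficients,

    ```latex
    g(\cos\theta) \;=\; \sum_{k=0}^{\infty} a_k\, C_k^{\lambda}(\cos\theta),
    \qquad a_k \ge 0,\quad
    \sum_{k=0}^{\infty} a_k\, C_k^{\lambda}(1) < \infty,
    \qquad \lambda = \tfrac{d-2}{2},
    ```

    where the C_k^λ are the Gegenbauer (ultraspherical) polynomials; for d = 3 these reduce to the Legendre polynomials.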

    Minisymposium M3

    Recent developments of mathematical aspects of computational chemistry

    Organiser
    Benjamin Stamm

    Computational methods for the dynamics of the Gross-Pitaevskii/nonlinear Schrödinger equation with rotation and dipole-dipole interaction

    Weizhu Bao (National University of Singapore)

    In this talk, I begin with the Gross-Pitaevskii/nonlinear Schrödinger equation (GPE/NLSE) with an angular momentum rotation and/or a dipole-dipole interaction for modeling rotating and/or dipolar Bose-Einstein condensation (BEC), and review some dynamical properties of the GPE/NLSE including conserved quantities and center-of-mass dynamics. Different numerical methods will be presented and compared. Applications to simulate the dynamics of rotating BEC and/or dipolar BEC will be reported. Finally, extension to coupled GPEs with a spin-orbit coupling will be discussed.

    References

    [1] W. Bao and Y. Cai, Ground states and dynamics of spin-orbit-coupled Bose-Einstein condensates, SIAM J. Appl. Math., 75 (2015), pp. 492-517.
    [2] S. Jiang, L. Greengard and W. Bao, Fast and accurate evaluation of nonlocal Coulomb and dipole-dipole interactions via the nonuniform FFT, SIAM J. Sci. Comput., 36 (2014), pp. B777-B794.
    [3] W. Bao, D. Marahrens, Q. Tang and Y. Zhang, A simple and efficient numerical method for computing the dynamics of rotating Bose-Einstein condensates via a rotating Lagrangian coordinate, SIAM J. Sci. Comput., 35 (2013), pp. A2671-A2695.
    [4] X. Antoine, W. Bao and C. Besse, Computational methods for the dynamics of the nonlinear Schrödinger/Gross-Pitaevskii equations, Comput. Phys. Commun., 184 (2013), pp. 2621-2633.
    [5] W. Bao and Y. Cai, Mathematical theory and numerical methods for Bose-Einstein condensation, Kinet. Relat. Mod., 6 (2013), pp. 1-135.

    Greedy algorithms for electronic structure calculations for molecules

    Eric Cancès & Virginie Ehrlacher & Majdi Hochlaf & Tony Lelièvre (CERMICS, Ecole des Ponts ParisTech & INRIA)


    In this talk, a greedy algorithm will be presented in order to compute the lowest eigenvalue and an associated eigenstate for high-dimensional problems, and its numerical behaviour will be illustrated for the computation of the ground-state electronic wavefunction of a molecule, solution of the many-body Schrödinger equation. Usually, these algorithms are implemented in practice using the Alternating Least-Squares algorithm, which leads to some computational difficulties in this particular situation due to the antisymmetry of the ground state wavefunction. A computational strategy to overcome this difficulty will be presented and illustrated on several molecules.

    Large-scale real-space electronic structure calculations

    Vikram Gavini & Phani Motamarri (University of Michigan, Ann Arbor)

    In this talk, the development of a real-space formulation for Kohn-Sham density functional theory (DFT) and a finite-element discretization of this formulation [1], which can handle arbitrary boundary conditions and is amenable to adaptive coarse-graining, will be presented. In particular, the accuracy afforded by using higher-order finite-element discretizations, and the efficiency and scalability of the Chebyshev filtering algorithm in pseudopotential and all-electron Kohn-Sham DFT calculations will be demonstrated. Further, the development of a subquadratic-scaling approach (in the number of electrons) based on a subspace projection and Fermi-operator expansion will be discussed [2], which will be the basis for the future development of coarse-graining techniques for Kohn-Sham DFT. The developed techniques have enabled, to date, pseudopotential calculations on non-periodic and periodic systems containing ∼10,000 atoms, as well as all-electron calculations on systems containing ∼5,000 electrons.

    [1] P. Motamarri, M.R. Nowak, K. Leiter, J. Knap and V. Gavini, Higher-order adaptive finite-element methods for Kohn-Sham density functional theory, J. Comp. Phys. 253, 308-343 (2013).
    [2] P. Motamarri and V. Gavini, A subquadratic-scaling subspace projection method for large-scale Kohn-Sham DFT calculations using spectral finite-element discretization, Phys. Rev. B 90, 115127 (2014).

    A posteriori error estimates for Discontinuous Galerkin methods using non-polynomial basis functions with applications to solving Kohn-Sham density functional theory

    Lin Lin & Benjamin Stamm (University of California, Berkeley and Lawrence Berkeley National Laboratory)

    Recently we have developed adaptive local basis functions in a discontinuous Galerkin framework for reducing the dimension of the discretized system for solving Kohn-Sham density functional theory. The discontinuous Galerkin method provides a natural framework for reducing the dimension of the system obtained by discretizing a PDE, due to its ability to incorporate an efficient non-polynomial basis set, without the need to match the value of such basis functions at the boundary. However, the use of non-polynomial basis functions leads to major difficulties in error analysis, both in the a priori and the a posteriori sense. We present, to the extent of our knowledge, the first systematic work deriving a posteriori error estimates for general non-polynomial basis functions in an interior penalty discontinuous Galerkin (DG) formulation for solving second order linear PDEs. Our residual-type upper and lower bound error estimates measure the error in the energy norm. The main merit of our method is that it is parameter-free, in the sense that all but one of the solution-dependent constants appearing in the upper and lower bound estimates are explicitly computable, and the only non-computable constant can be reasonably approximated by a computable one without affecting the overall effectiveness of the estimates in practice. We develop an efficient numerical procedure to compute the error estimators. Numerical results for a variety of problems in 1D and 2D demonstrate that both the upper bound and lower bound are effective. We demonstrate some practical use of a posteriori error estimators in performing three-dimensional Kohn-Sham density functional theory calculations for quasi-2D aluminum surfaces and single-layer graphene oxide in water.

    Fast domain decomposition methods for continuum solvation models

    Filippo Lipparini & Benjamin Stamm, Eric Cancès, Yvon Maday & Benedetta Mennucci (Johannes Gutenberg Universität, Mainz)

    Continuum solvation models are very popular tools in computational chemistry. In the last three decades, they have allowed computational chemists to reproduce environmental effects on molecular structures and properties in quantum-mechanical (QM) simulations performed with various levels of theory in a cost-effective, but fairly accurate, way. However, such models were originally formulated thinking of a pure QM description of the solute: due to the high computational cost associated with solving the QM equations, the systems treated have always been medium-sized or small, which made the cost of solving the linear equations associated to the continuum solvation models negligible with respect to the effort required by the overall computation. Recently, thanks to the increase in computer power, to the development of state-of-the-art linear scaling algorithms and, most important, to the development of hybrid quantum-classical methods, large and very large molecular systems have become accessible: for systems as large as a protein (beyond one thousand atoms), the continuum solvation equations with existing implementations can become prohibitive.

    In this contribution, we present a new discretization for COSMO (Conductor-like Screening Model), a very popular continuum solvation model, based on a domain-decomposition approach. We refer to our new algorithm as ddCOSMO. After summarizing some mathematical aspects of the procedure, we will describe the algorithm and focus on the numerical details. We will show, including some examples, that our implementation scales linearly in computational cost and memory requirements with respect to the size of the solute and allows us to treat large and very large systems with a cost which is a small fraction of the cost of the simulation in vacuo. Compared to the most efficient existing implementations, ddCOSMO is two to three orders of magnitude faster, more robust and defined by a smaller number of discretization parameters: these features allow us to extend the applicability of continuum solvation models to large and complex biological and chemical systems in combination with classical, QM and hybrid descriptions of the solute.

    Quantum Calculations in Solution for Large to Very Large Molecules: presentation of the mathematical algorithm

    Yvon Maday & Filippo Lipparini, Louis Lagardère, Giovanni Scalmani, Benjamin Stamm, Eric Cancès, Jean-Philip Piquemal, Michael J. Frisch and Benedetta Mennucci (Sorbonne Universités, UPMC Univ Paris 06, Univ. Paris Diderot, Sorbonne Paris Cité, CNRS, Institut Universitaire de France, Laboratoire Jacques-Louis Lions, France and Division of Applied Mathematics, Brown University)

    We present a new algorithm of continuum solvation models for semiempirical Hamiltonians that allows the description of environmental effects on large to very large molecular systems. In this approach, based on a domain decomposition strategy for the COSMO model (ddCOSMO), the solution of the COSMO equations is no longer the computational bottleneck but becomes a negligible part of the overall computation time. In this presentation, we present the algorithm in light of existing works in the field of domain decomposition for elliptic problems, and we analyze the computational impact of COSMO on the solution of the SCF equations for large to very large molecules, using semiempirical Hamiltonians, for both the new ddCOSMO implementation and the most recent linear scaling one, based on the fast multipole method. The approach is illustrated with a few examples that will be extended to larger problems of interest in the chemical community in the talk of F. Lipparini.

    A perturbation-method-based post-processing of planewave approximations for nonlinear Schrödinger equations

    Benjamin Stamm & Eric Cancès, Geneviève Dusson, Yvon Maday, Martin Vohralík (Sorbonne Université UPMC Paris 6 and CNRS)

    In this talk we consider a post-processing of planewave approximations for nonlinear Schrödinger equations by considering the exact solution as a perturbation of the discrete, computable solution. Applying Kato's perturbation theory then leads to computable corrections with a provable increase of the convergence rate in the asymptotic range, for very little computational overhead. We illustrate the key features of this post-processing for the Gross-Pitaevskii equation, which serves as a toy problem for DFT Kohn-Sham models. Finally some numerical illustrations in the context of DFT Kohn-Sham models are presented.

    Absorption Spectrum Estimation via Linear Response Time-dependent Density Functional Theory

    Chao Yang & Jiri Brabec & Lin Lin (Lawrence Berkeley National Laboratory) & Yousef Saad (University of Minnesota)

    The absorption spectrum of a molecular system can be estimated from the dynamic dipole polarizability associated with the linear response of a molecular system (at its ground state) subject to an external perturbation. Although an accurate description of the absorption spectrum requires the diagonalization of the so-called Casida Hamiltonian, there are more efficient ways to obtain a good approximation to the general profile of the absorption spectrum without computing eigenvalues and eigenvectors. We describe these methods, which only require multiplying the Casida Hamiltonian with a number of vectors. When a highly accurate oscillator strength is required for a few selected excitation energies, we can use a special iterative method to obtain the eigenvalues and eigenvectors associated with these energies efficiently.


    Minisymposium M4

    Numerical Methods in Stochastic Problems in Biology

    Organisers
    S. Cotter and K. Zygalakis

    A constrained approach to the simulation of multiscale chemical kinetics

    Simon Cotter (University of Manchester)

    In many applications in cell biology, the inherent underlying stochasticity and discrete nature of individual reactions can play a very important part in the dynamics. The Gillespie algorithm, which has been around since the 1970s, allows us to simulate trajectories from these systems by simulating each reaction in turn, giving us a Markov jump process. However, in multiscale systems, where some reactions occur many times on a timescale over which others are unlikely to happen at all, this approach can be computationally intractable. Several approaches exist for the efficient approximation of the dynamics of the “slow” reactions, many of which rely on the “quasi-steady state assumption” (QSSA). In this talk, we will present the Constrained Multiscale Algorithm, a method which was previously used to construct diffusion approximations of the slowly changing quantities in the system, but which does not rely on the QSSA. We will show how this method can be used to approximate an effective generator for the slow variables in the system, and quantify the errors in that approximation. If time permits, we will show how these generators can then be used to sample approximate paths conditioned on the values of their endpoints.
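    As background, the Gillespie (direct method) algorithm samples an exponential waiting time from the total propensity and then selects which reaction fired. A minimal sketch for a birth-death process (the model and rate constants are invented purely for illustration):

    ```python
    import numpy as np

    def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0,
                              t_end=100.0, seed=0):
        # Direct-method SSA for 0 -> X (rate k_birth) and X -> 0
        # (rate k_death * x). Returns the jump times and states.
        rng = np.random.default_rng(seed)
        t, x = 0.0, x0
        times, states = [t], [x]
        while t < t_end:
            a = np.array([k_birth, k_death * x])  # propensities
            a0 = a.sum()
            t += rng.exponential(1.0 / a0)        # time to next reaction
            if rng.random() < a[0] / a0:          # which reaction fired?
                x += 1
            else:
                x -= 1
            times.append(t)
            states.append(x)
        return np.array(times), np.array(states)

    times, states = gillespie_birth_death()
    print(states[-1])  # stationary distribution has mean k_birth/k_death = 100
    ```

    Note that every single reaction event costs one loop iteration, which is exactly why the cost blows up when some propensities are large; the Constrained Multiscale Algorithm discussed in the talk addresses this regime.
    
    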

    A comparison of approximation methods for stochastic biochemical networks

    Ramon Grima (University of Edinburgh)

    Exact solutions of the chemical master equation are only known for a handful of simple chemical systems, and stochastic simulations are typically computationally expensive. In the past decade, the linear-noise approximation (LNA), the chemical Fokker-Planck equation (CFPE) and moment-closure approximations (MA) have become a popular means to efficiently approximate the master equation and to hence obtain insight into the effect of noise on the dynamics of biochemical systems. However, these approximations are plagued by a number of problems: in particular, their accuracy and their relationship to one another remain unclear; furthermore, the most popular of these methods, the LNA, can only be applied to a subset of reaction systems, namely those characterised by a unimodal probability distribution, and provided the molecule numbers are quite large. In this talk, I will present an overview of our work over the past few years which clarifies the accuracy of the LNA, CFPE and MA approximations. I will also present modifications to the LNA which enable its application to multimodal systems and those in which one or more species are present in small molecule numbers. The usefulness of these methods in obtaining a more complete picture of stochastic biochemical dynamics will be showcased on various biochemical systems involving gene expression, feedback control, enzyme-mediated catalysis and circadian rhythms.

    Hybrid modelling of stochastic chemical kinetics

    Kostas Zygalakis (University of Southampton)

    It is well known that stochasticity can play a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be adequately modelled by Markov processes and, for such systems, methods such as Gillespie's algorithm are typically employed. While such schemes are easy to implement and are exact, the computational cost of simulating such systems can become prohibitive as the frequency of the reaction events increases. This has motivated numerous coarse-grained schemes, where the “fast” reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation for systems where all reactants are present in large concentrations, the approximation breaks down when the fast chemical species exist in small concentrations, giving rise to significant errors in the simulation. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as computing observables of cell cycle models. In this talk, we present a hybrid scheme for simulating well-mixed stochastic kinetics, using Gillespie-type dynamics to simulate the network in regions of low reactant concentration, and chemical Langevin dynamics when the concentrations of all species are large. These two regimes are coupled via an intermediate region in which a “blended” jump-diffusion model is introduced. Examples of gene regulatory networks involving reactions occurring at multiple scales, as well as a cell-cycle model, are simulated using the exact and hybrid schemes and compared, both in terms of weak error and computational cost.


    Minisymposium M5

    City Analytics

    Organisers
    Des Higham and Jeremy Levesley

    Communicability Angles and the Spatial Efficiency of City Networks

    Ernesto Estrada & Naomichi Hatano (University of Strathclyde, University of Tokyo, Japan)

    We introduce the concept of communicability angles between a pair of nodes in a graph. We provide strong analytical and empirical evidence that the average communicability angle for a given network accounts for its spatial efficiency on the basis of the effectivity of communication among the nodes in the network. We determine the spatial efficiency characteristics of more than 100 real-world complex networks representing complex systems arising in a diverse set of scenarios. In particular, we illustrate our results with the study of a few urban street networks for different cities around the world. We finally show how the spatial efficiency of a network can be modulated by tuning the weights of the edges of the network. This allows us to predict the effects of external stress on the spatial efficiency of a network, as well as to design strategies to improve that important parameter in real-world complex systems.
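    The quantity involved can be sketched as follows, assuming the definition used in the related communicability literature (communicability matrix G = e^A for adjacency matrix A, with the angle between nodes p and q given by arccos(G_pq / √(G_pp G_qq))); the example graph is invented for illustration:

    ```python
    import numpy as np
    from scipy.linalg import expm

    def communicability_angles(A):
        # G = e^A is the communicability matrix; the angle between nodes
        # p and q is arccos(G_pq / sqrt(G_pp * G_qq)), in degrees here.
        G = expm(A)
        d = np.sqrt(np.diag(G))
        C = np.clip(G / np.outer(d, d), -1.0, 1.0)
        return np.degrees(np.arccos(C))

    # Path graph on 4 nodes: angles grow with graph distance.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    theta = communicability_angles(A)
    print(theta[0, 1], theta[0, 3])  # nearby pair vs distant pair
    ```

    Averaging theta over all pairs gives a single per-network number, which is the kind of summary statistic the talk relates to spatial efficiency.
    
    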

    Numerical Analyticity

    Desmond J. Higham, University of Strathclyde

    We may view the city as a living lab, where human activity is now generating vast arrays of data. For example, we can make observations around online social media, telecommunication, geolocation, crime, health, transport, air quality, energy, utilities, weather, CCTV, wi-fi usage, retail footfall and satellite imaging. These emerging data sets, and accompanying research questions, have the potential to motivate new and useful mathematics. I will give a personal overview of some of the key challenges for applied and computational mathematicians. In particular, I will discuss activity taking place in Glasgow, which was chosen as the UK's Smart Demonstrator City, and in the University of Strathclyde's Institute for Future Cities.

    Urban Living: Towards a Comparison of City-based Digital Social Networks and of Individual Demand Behaviour (Part 1 and Part 2)

    Peter Grindrod & Tamsin Lee (University of Oxford)

    Urban living will unceasingly be examined under the digital microscope of Future City programmes. As digital communication platforms become more ubiquitous and the digital monitoring of demand behaviour for consumer and lifestyle products and services becomes more prevalent, we shall be able to move away from a "one shoe fits all" approach to anticipating and fulfilling citizens' demands, needs, and aspirations. A crucial step is to allow the human behaviour, and behavioural data, of individuals, communities, boroughs and whole cities to speak for itself. Here we shall consider what analytics is available in order to extract insights from two distinct types of behavioural data: peer-to-peer (P2P) communication within online social media, indicating hidden social structures; and business-to-customer (B2C) interactions, here regarding domestic and light commercial energy demand behaviour. In the first case we shall consider ten large British cities as they are represented by social networks: our analysis of these networks deploys a range of approaches, and indicates that these P2P communities are not necessarily the same; urban societies within cities may fall into classes, and this should evolve in time. In the latter case it is individuals' behaviour (private consumption) that is diverse, and our analysis suggests that behavioural segmentations are necessary to anticipate and manage smallish aggregates (100 homes or less) on local distribution networks. Looking to the future we must develop operational algorithms to forecast and manage behaviour that can scale up in size (up to full penetration of our largest cities) and scale efficiencies to make 24/7 management and response a reality, including within emergency scenarios, such as power outages where energy distribution and intelligence sourced from social media intersect. We shall focus on the wide range of mathematical methods underpinning such analytics.

    Sparse interpolation and quasi-interpolation using Gaussians

    Jeremy Levesley (University of Leicester)

    Sparse grid approximation provides a technique for approximation of functions in moderate dimensions, 5-10, say. In this talk we will discuss sparse approximation with smooth kernels, which give potential for faster than polynomial convergence for smooth input. We describe both interpolation and quasi-interpolation and describe multi-level versions of these algorithms.

    Parallel eigensolvers for electronic structure computations

    Antoine Levitt (Universite Pierre et Marie Curie, France)

    Density functional theory (DFT) aims to solve the Schrödinger equation by modelling electronic correlation as a function of density. Its relatively modest O(N³) scaling makes it the standard method in electronic structure computation for condensed phases containing up to thousands of atoms. Computationally, its bottleneck is the partial diagonalisation of a Hamiltonian operator, which is usually not formed explicitly.

    Using the example of the Abinit code, I will discuss the challenges involved in scaling plane-wave DFT computations to petascale supercomputers, and show how the implementation of a new method results in good parallel behaviour up to tens of thousands of processors. I will also discuss some open problems in the numerical analysis of eigensolvers and extrapolation methods used to accelerate the convergence of fixed point iterations.

    Minisymposium M6

    Recent advances in numerical methods for hyperbolic conservation laws

    Organiser

    Tristan Pryer

    Conservation Based Moving-Mesh Methods for Conservation Laws

    N. Arthurs, M. Baines, T. Pryer & P. Sweby (University of Reading)

    Hyperbolic conservation laws arise in many areas of physics such as fluid dynamics and traffic flow. The existence of shocks within entropy solutions to conservation laws leads to a need for mesh refinement for an accurate solution. We present an r-refinement method for numerically solving conservation laws and show how such methods may be analysed. The method relies on local conservation to determine the velocity of nodes and is derived from the Lagrangian formulation of the PDE.

    Entropy based error estimates for fully discrete schemes for hyperbolic conservation laws

    Jan Giesselmann & Andreas Dedner (University of Stuttgart)

    This talk is concerned with nonlinear systems of hyperbolic conservation laws

    ∂tu +∇ · (f(u)) = 0 (1)

    endowed with a strictly convex entropy/entropy flux

    pair, and their numerical approximation via Runge-Kutta discontinuous Galerkin schemes. We are particularly interested in the derivation of a posteriori error estimates. Their derivation is based on combining a reconstruction approach with the relative entropy stability framework. The idea of reconstruction, in this context, is to compute a Lipschitz continuous function û from the numerical solution u_h such that the residual

    R := ∂tû +∇ · (f(û))

    is computable. Then, the relative entropy can be used to bound the difference between û and the exact (entropy) solution u of (1) in terms of R. Reconstructions for spatially semi-discrete schemes in one space dimension have been suggested in previous works. The focus of this talk is reconstruction in time, allowing for numerical solutions obtained by explicit Runge-Kutta time integration schemes.
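For readers unfamiliar with the framework: given a convex entropy η, the relative entropy of u with respect to the reconstruction û is the standard functional (a sketch of the usual definition, with notation assumed rather than taken from the talk):

```latex
\eta(u \mid \hat{u}) \;:=\; \eta(u) - \eta(\hat{u}) - \mathrm{D}\eta(\hat{u}) \cdot (u - \hat{u}).
```

Strict convexity of η makes η(u | û) comparable to |u − û|², so a Gronwall-type argument driven by the computable residual R yields an a posteriori bound on the error.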

    Multi-level Monte-Carlo methods for entropy measure valued solutions of hyperbolic conservation laws

    Kjetil Olsen Lye & Siddhartha Mishra (ETH Zurich)

    A major difficulty in the theory of multi-dimensional systems of conservation laws is the question of well-posedness. In a recent paper, Fjordholm et al. [1] advocated that the notion of an entropy measure valued solution (EMVS) is a suitable notion of solution for multi-dimensional systems. In the same paper, a numerical framework for computing the EMVS was developed using Monte-Carlo sampling.

    We briefly review the theory of entropy measure valued solutions. We show how the numerical framework can be improved by utilizing Multi-level Monte-Carlo (MLMC) approximations, obtaining a speedup compared to the previously proposed Monte-Carlo sampling procedure, even in the setting where we do not have convergence of single samples. We show numerical experiments using the Euler equations with unstable initial data, and compare the MLMC method with the traditional MC method for EMVS.
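The telescoping identity behind MLMC, E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l − P_{l−1}], can be sketched on a toy problem. The example below estimates E[S_T] for a geometric Brownian motion with Euler-Maruyama, coupling coarse and fine paths on each level by shared Brownian increments; this is a generic illustration of the estimator, not the Euler-equations solver of the talk.

```python
import math
import random

def mlmc_gbm_mean(S0, mu, sigma, T, levels, samples, rng):
    """Toy MLMC estimate of E[S_T] for dS = mu*S dt + sigma*S dW.

    Level l uses 2**l Euler steps; on each level > 0 the coarse path
    reuses the fine path's Brownian increments (pairwise-summed), so
    Var[P_l - P_{l-1}] shrinks as the levels are refined.
    """
    def euler_pair(nf, dWf):
        # fine path with nf steps, coarse path with nf // 2 steps
        dt = T / nf
        Sf = Sc = S0
        for i in range(0, nf, 2):
            for dW in (dWf[i], dWf[i + 1]):
                Sf += mu * Sf * dt + sigma * Sf * dW
            Sc += mu * Sc * (2 * dt) + sigma * Sc * (dWf[i] + dWf[i + 1])
        return Sf, Sc

    est = 0.0
    for l in range(levels + 1):
        n = 2 ** l
        acc = 0.0
        for _ in range(samples[l]):
            dW = [rng.gauss(0.0, math.sqrt(T / n)) for _ in range(n)]
            if l == 0:
                acc += S0 + mu * S0 * T + sigma * S0 * dW[0]
            else:
                Sf, Sc = euler_pair(n, dW)
                acc += Sf - Sc
        est += acc / samples[l]
    return est

rng = random.Random(7)
est = mlmc_gbm_mean(1.0, 0.05, 0.2, 1.0, 4,
                    [20000, 10000, 5000, 2500, 1250], rng)
```

The per-level sample counts shrink geometrically because the variance of the level differences decays; this rebalancing of work across levels is the source of the MLMC speedup over plain Monte-Carlo.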

    References

    [1] U. S. Fjordholm, R. Käppeli, S. Mishra, and E. Tadmor. Construction of approximate entropy measure valued solutions for hyperbolic systems of conservation laws. ArXiv e-prints, February 2014.

    Embedded Boundary Methods for flow in complex geometries

    Sandra May & Marsha Berger (ETH Zurich, New York University)

    Cut cell methods have been developed in recent years for computing flow around bodies with complicated geometries. They are an alternative to body-fitted or unstructured grids, which may be harder to generate and more complex in the bulk of the flowfield. Cut cell methods "cut" the flow body out of a regular Cartesian grid. Most of the grid is regular. Special methods must be developed for the "cut cells", which are cells that intersect the boundary. Cut cells can have irregular shapes and may be very small.

    We present a mixed explicit implicit time stepping scheme for solving the advection equation on a cut cell mesh. The scheme represents a new approach for overcoming the small cell problem: namely, that explicit time stepping schemes are not stable on the arbitrarily small cut cells. Instead, we use an implicit scheme near the embedded boundary, and couple it to a standard explicit scheme used over most of the mesh. We compare several ways of coupling the explicit and implicit schemes, and prove a TVD result for one of them, which we call "flux bounding". We present numerical results in one and more dimensions. These results show second-order accuracy in the L1 norm and between first- and second-order accuracy in the L∞ norm.

    Optimal error estimates for discontinuous Galerkin methods based on upwind-biased fluxes for linear hyperbolic equations

    Xiong Meng, Chi-Wang Shu & Boying Wu (University of East Anglia)

    In this talk, we analyze discontinuous Galerkin methods using upwind-biased numerical fluxes for time-dependent linear conservation laws. In one dimension, optimal a priori error estimates of order k + 1 are obtained when piecewise polynomials of degree at most k (k ≥ 0) are used. Our analysis is valid for arbitrary nonuniform regular meshes and for both periodic boundary conditions and initial-boundary value problems. We extend the analysis to the multi-dimensional case on Cartesian meshes when piecewise tensor product polynomials are used. Numerical experiments are shown to demonstrate the theoretical results.

    Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering for Discontinuous Galerkin Solutions over Nonuniform Meshes: Superconvergence and Optimal Accuracy

    Xiaozhou Li (Delft University of Technology) & Jennifer K. Ryan (University of East Anglia)

    Smoothness-Increasing Accuracy-Conserving (SIAC) filtering is an area of rising interest, in view of the fact that it can extract the hidden accuracy in discontinuous Galerkin (DG) solutions. It has been proven that by applying the SIAC filter to a DG solution, the accuracy order of the DG solution is raised from order k + 1 to order 2k + 1 for linear hyperbolic equations over uniform meshes. However, applying the SIAC filter over nonuniform meshes is challenging, and the quality of filtered solutions is usually unsatisfactory. The applicability for handling nonuniform meshes has already become the biggest obstacle to the development of the SIAC filter. In this talk we discuss a relation between the filtered solutions and the unstructuredness of nonuniform meshes. Further, we demonstrate that there exists an optimal accuracy of the filtered solution for a given nonuniform mesh, and that it is possible to approximate the optimal accuracy by designing an appropriate filter scaling. By applying the newly designed SIAC filter over nonuniform meshes, the filtered solution has improved accuracy order as well as improved quality of the numerical solution.

    Convergence of a numerical scheme for a mixed hyperbolic-parabolic system in two space dimensions

    Veronika Schleper & Elena Rossi (University of Stuttgart)

    We discuss the convergence of a numerical scheme for a mixed hyperbolic-parabolic system, modeling the behaviour of predators and prey, given by

    ∂tu + div(f(u)v(w)) = (αw − β)u
    ∂tw − µ∆w = (γ − δu)w
    v(w) = v(η ∗ w)     (2)

    for a sufficiently smooth kernel function η.

    This system is an extension of the classical Lotka-Volterra ordinary differential equations for predator-prey systems that is able to include also the spatial variation of such systems. More precisely, we assume that the predators with density u can feel the presence of prey in a (small) radius around them, such that the velocity field of predator propagation depends on a convolution of the prey density w with a compactly supported kernel function η. Prey are assumed to diffuse independently of the predator density around them. The source terms account for birth and death of predators and prey, respectively, and are taken from the standard Lotka-Volterra equations.

    The numerical scheme for (2) is obtained by a Lax-Friedrichs-like finite volume discretization for the hyperbolic equation in combination with a standard explicit first order finite difference discretization for the parabolic equation. In both equations, the source terms are included by operator splitting, which allows a simple proof of positivity of u and w.

    After a short discussion of all relevant assumptions on the system parameters, we will focus on the modifications of the standard Lax-Friedrichs flux for (2) that allow us to obtain L1-, L∞- and spatial TV-bounds for u and w. Due to the parabolic character of the second equation in (2), no uniform L1-Lipschitz continuity of w in time can be expected. To obtain convergence of the scheme, we show how we can exploit the special structure of the hyperbolic part. Hereby, the convolution η ∗ w in the velocity field plays an important role in obtaining strong convergence of u in L1, while, as a consequence of the missing L1-Lipschitz continuity, convergence of w can only be obtained weakly* in L∞.

    To conclude the presentation, we show some numerical results that confirm the convergence in L1 and show that system (2) yields solutions with the characteristic Lotka-Volterra structure.

    On the entropy dissipation of adaptive mesh reconstruction techniques

    Nikolaos Sfakianakis & Maria Lukacova (Johannes-Gutenberg University of Mainz)

    Classical methods for the analysis of adaptive mesh reconstruction (r-AMR) techniques address only the time evolution step of the procedure. To gain a deeper insight into the behaviour of r-AMR methods, however, the reconstruction of the mesh should also be taken into account.

    To this end, we first present numerical evidence indicating that r-AMR methods (and in particular the mesh reconstruction step) can have strong entropy stabilization properties [3]. Further, we provide an analytical framework to study the entropy dissipation that a particular class of r-AMR methods exhibits, including the reconstruction of the mesh and the update of the solution [1], [2].

    References

    [1] Lukacova-Medvidova, M. and Sfakianakis, N., Entropy dissipation of moving mesh adaptation, J. Hyp. Diff. Eq., 2014.
    [2] Sfakianakis, N., Adaptive mesh reconstruction for hyperbolic conservation laws with total variation bound, Math. Comput., 2013.
    [3] Arvanitis, Ch., Makridakis, Ch. and Sfakianakis, N., Entropy conservative schemes and adaptive mesh selection for hyperbolic conservation laws, J. Hyp. Diff. Eq., 2010.

    A well-balanced kinetic scheme for the shallow water equations with rain

    Philip Townsend, Mehmet Ersoy & Omar Lakkis (University of Sussex)

    The flow of water in rivers and oceans can, under certain assumptions, be efficiently modelled using the shallow water equations, a system of hyperbolic conservation laws which can be derived from a starting point of incompressible Navier-Stokes. Work by Perthame et al. in the late nineties developed a kinetic scheme for this hyperbolic system which can be shown to have certain desirable, well-balanced features. In flood risk assessment models, these properties are additionally desirable, but such a fully integrated model must account for the sources of flooding, an addition which no longer guarantees that such a kinetic scheme is applicable. We present here an extension of the standard shallow water system which incorporates an additional rain term on the surface of the water and infiltration of the water into the ground below, and show how this system can be modelled numerically through a careful extension of the formulation in the kinetic schemes mentioned above.

    Automated parameters for troubled-cell indicators using outlier detection

    Mathea J. Vuik & Jennifer K. Ryan (Delft University of Technology)

    In general, solutions of nonlinear hyperbolic PDEs contain shocks or develop discontinuities. One option for improving the numerical treatment of the spurious oscillations that occur near these artifacts is through the application of a limiter. The cells where such treatment is necessary are referred to as troubled cells, which are selected using a troubled-cell indicator. Examples are the KXRCF shock detector, the minmod-based TVB indicator, and the modified multiwavelet troubled-cell indicator.

    The current troubled-cell indicators perform well as long as a suitable, problem-dependent parameter is chosen. An inappropriate choice of the parameter will result in detection of too few or too many elements. Detection of too few elements leads to spurious oscillations, since not enough elements are limited. If too many elements are detected, then the limiter is applied too often, and therefore the method is more costly and the approximation smooths out after a long time. The optimal parameter is chosen such that the minimal number of troubled cells is detected and the resulting approximation is free of spurious oscillations. In general, many tests are required to obtain this optimal parameter for each problem.

    In this presentation, we will see that the sudden increase or decrease of the indicator value with respect to the neighboring values is important for detection. Indication basically reduces to detecting the outliers of a vector (one dimension) or matrix (two dimensions). This is done using Tukey's boxplot approach to detect which coefficients in a vector are straying far beyond others [2].
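Tukey's fences can be written down in a few lines. The sketch below flags entries of an indicator vector lying outside the quartile-based fences [Q1 − c·IQR, Q3 + c·IQR]; the indicator values and the whisker length c = 3.0 are hypothetical choices for illustration, not the settings used in the talk.

```python
def tukey_outliers(values, whisker=3.0):
    """Indices of entries beyond Tukey's boxplot fences.

    An entry is an outlier if it lies outside
    [Q1 - whisker*IQR, Q3 + whisker*IQR], where Q1, Q3 are the first
    and third quartiles and IQR = Q3 - Q1.  The classical boxplot uses
    whisker = 1.5; 3.0 flags only extreme outliers.
    """
    def quantile(s, q):
        # linear interpolation between order statistics of sorted s
        pos = q * (len(s) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])

    s = sorted(values)
    q1, q3 = quantile(s, 0.25), quantile(s, 0.75)
    iqr = q3 - q1
    lo, hi = q1 - whisker * iqr, q3 + whisker * iqr
    return [i for i, v in enumerate(values) if v < lo or v > hi]

# Hypothetical per-cell indicator values: cell 3 sits near a shock.
ind = [0.01, 0.02, 0.015, 5.0, 0.012, 0.018, 0.011, 0.02]
troubled = tukey_outliers(ind)   # -> [3]
```

Because the fences adapt to the bulk of the data, no problem-dependent threshold has to be tuned by hand, which is exactly the point of the automated indicator.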

    We provide an algorithm that can be applied to various troubled-cell indication variables. Using this technique, the problem-dependent parameter that the original indicator requires is no longer necessary, as the parameter will be chosen automatically.

    We will apply this technique to the modified multiwavelet troubled-cell indicator [3, 4], which can be used to detect discontinuities in (the derivatives of) the DG approximation. Here, Alpert's multiwavelet basis is used [1]. We will use either the original indicator (with an optimal parameter) or the outlier-detection technique. In that way, the performance of the new technique can be easily compared to the current method.

    References

    [1] B.K. Alpert. A Class of Bases in L2 for the Sparse Representation of Integral Operators. SIAM Journal on Mathematical Analysis 24: 246–262, 1993.
    [2] J.W. Tukey. Exploratory Data Analysis. Addison-Wesley Publishing Company, 1977.
    [3] M.J. Vuik, J.K. Ryan. Multiwavelet troubled-cell indicator for discontinuity detection of discontinuous Galerkin schemes. Journal of Computational Physics 270: 138–160, 2014.
    [4] M.J. Vuik, J.K. Ryan. Multiwavelets and jumps in DG approximations. In: Proceedings of ICOSAHOM 2014, to appear.

    Minisymposium M7

    Chebfun: new developments, cool applications and on the horizon

    Organiser

    Nick Trefethen

    Krylov methods for operators

    Jared L. Aurentz (University of Oxford)

    In this talk we will explore the convergence of Krylov methods when used to solve Lu = f where L is an unbounded linear operator. We will show that for certain problems, methods like Conjugate Gradients and GMRES still converge even though the spectrum of L is unbounded. A theoretical justification for this behavior is given in terms of polynomial approximation on unbounded domains.

    High-Accuracy Chebyshev Coefficients via Contour Integrals

    Anthony Austin (University of Oxford)

    Following Bornemann's work on computing Taylor coefficients to high precision by contour integrals over circles of large radius, Wang and Huybrechs have recently published a paper about computing Chebyshev coefficients to high precision by contour integrals over Bernstein ellipses of large parameter. Under certain circumstances, these methods make it possible to compute coefficients in ordinary floating-point arithmetic down at the level of 10^-100 or below. We investigate the use of such methods for general-purpose computation with functions as in Chebfun.
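The underlying identity is b_k = (1/(πi)) ∮_{|w|=ρ} f((w + 1/w)/2) w^(−k−1) dw (halved for k = 0), where the Joukowski map x = (w + 1/w)/2 sends the circle |w| = ρ to the Bernstein ellipse of parameter ρ. Below is a trapezoid-rule sketch of this idea in ordinary double precision; it illustrates the mechanism only, not the high-precision algorithm of Wang and Huybrechs.

```python
import cmath
import math

def cheb_coeffs_contour(f, nmax, rho=1.0, N=256):
    """Chebyshev coefficients b_0..b_nmax of f (f = sum_k b_k T_k on [-1,1])
    via an N-point trapezoid rule on the circle |w| = rho, which the
    Joukowski map x = (w + 1/w)/2 sends to a Bernstein ellipse.
    """
    ws = [rho * cmath.exp(2j * math.pi * jj / N) for jj in range(N)]
    Fs = [f((w + 1.0 / w) / 2.0) for w in ws]
    coeffs = []
    for k in range(nmax + 1):
        s = sum(F * cmath.exp(-2j * math.pi * jj * k / N)
                for jj, F in enumerate(Fs))
        bk = 2.0 * (s / N).real / rho ** k   # Laurent coefficient, rescaled
        coeffs.append(bk / 2.0 if k == 0 else bk)
    return coeffs

# x^2 = (T_0 + T_2)/2, so the coefficients are [0.5, 0, 0.5, 0, 0].
b = cheb_coeffs_contour(lambda z: z * z, 4, rho=1.5)
```

Taking ρ large (within the region of analyticity) makes each computed b_k carry the damping factor ρ^(−k) explicitly, which is what allows tiny trailing coefficients to be resolved accurately.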

    Computing distinct solutions of nonlinear ODEs with Chebfun

    Asgeir Birkisson (University of Oxford)

    Like nonlinear systems of equations, nonlinear differential equations may permit several nontrivial distinct solutions. This talk describes how distinct solutions of nonlinear boundary-value problems can be computed in Chebfun. The first technique presented is called deflation, which modifies the residual of the original differential equation so that Newton's method no longer converges to currently known roots. The second technique discussed is path-following, where the introduction of a continuation parameter enables tracing curves in solution spaces to yield further solutions, starting from a previously known solution.
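The deflation idea can be illustrated on a scalar equation. In this hypothetical toy (x² = 1, rather than a Chebfun boundary-value problem), once Newton has found the root x = 1 the residual is multiplied by a deflation factor 1/|x − 1|^p + 1, after which the same Newton iteration from the same initial guess converges to the other root.

```python
def newton(f, df, x0, tol=1e-12, maxit=100):
    """Plain Newton iteration for a scalar equation f(x) = 0."""
    x = x0
    for _ in range(maxit):
        x = x - f(x) / df(x)
        if abs(f(x)) < tol:
            return x
    raise RuntimeError("Newton did not converge")

def deflate(f, df, root, p=2, shift=1.0):
    """Deflate a known root: multiply the residual by 1/|x-root|**p + shift,
    so Newton can no longer converge to it.  A scalar sketch of the
    deflation idea (in the operator setting, norms replace |.|).
    """
    def M(x):
        return 1.0 / abs(x - root) ** p + shift
    def dM(x):
        sign = 1.0 if x > root else -1.0
        return -p * sign / abs(x - root) ** (p + 1)
    g = lambda x: M(x) * f(x)
    dg = lambda x: dM(x) * f(x) + M(x) * df(x)
    return g, dg

# Toy two-root example: x^2 - 1 = 0 has roots +1 and -1.
f = lambda x: x * x - 1.0
df = lambda x: 2.0 * x
r1 = newton(f, df, 2.0)        # finds the root near +1
g, dg = deflate(f, df, r1)
r2 = newton(g, dg, 2.0)        # same initial guess now finds -1
```

The shift keeps the deflated residual from vanishing at infinity, so the iteration cannot "escape" by drifting away instead of finding a genuinely new solution.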

    From 2D to 3D

    Behnam Hashemi (University of Oxford)

    Let f : [a, b] × [c, d] × [e, g] → C be a trivariate continuous function. If f is sufficiently smooth then it has a trivariate Chebyshev expansion

    f(x, y, z) = Σ_{i,j,k=0}^∞ c_{ijk} T_k(z) T_j(y) T_i(x).
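As a generic NumPy illustration of such a tensor of coefficients (not the chebfun3t implementation itself, which is MATLAB/Chebfun code), a truncated triple Chebyshev sum can be stored as an order-3 array and evaluated directly:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# A truncated tensor of Chebyshev coefficients: in numpy's convention,
# c[i, j, k] multiplies T_i(x) * T_j(y) * T_k(z).  Setting c[1, 1, 1] = 1
# encodes f(x, y, z) = T_1(x) T_1(y) T_1(z) = x * y * z.
c = np.zeros((2, 2, 2))
c[1, 1, 1] = 1.0

val = C.chebval3d(0.5, 0.3, 0.2, c)   # = 0.5 * 0.3 * 0.2 = 0.03
```

An automatic constructor in the chebfun3t spirit would sample f on a tensor Chebyshev grid and fill such a coefficient tensor adaptively until the trailing coefficients fall below a tolerance.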

    In this talk, we show how to create chebfun3 objects in different ways. The first approach is an automatic tensor technique that relies on 1D Chebyshev mathematics only. This is implemented in the new "chebfun3t" class. Given a smooth trivariate function f, it computes an order-3 tensor containing coefficients of the Chebyshev expansion of f. The second approach is based on low-rank approximations, which is the idea behind Chebfun2. This is implemented in the new "chebfun3" class and creates what we call a "slice decomposition" of f. We discuss a few other methods, describe some details of our current implementations, try to compare the approaches and summarize challenges ahead.

    Analytic continuation, phase portraits, and the zeta function

    Mohsin Javed (University of Oxford)

    The aim of this talk is to learn something new about analytic continuation via numerical experiments. Using Chebfun, we will interpolate the Riemann zeta function on a vertical line segment in the half plane Re z > 1 and then find and compare the roots of this interpolant with the roots of the zeta function. Similar numerical experiments for rational interpolants of the zeta function will also be discussed. We will also talk about phase portraits and how they help us visualize analytic functions in the complex plane.

    Computing choreographies

    Hadrien Montanelli & Nikola I. Gushterov (University of Oxford)

    Choreographies are periodic solutions of the n-body problem in which all of the bodies have unit masses, share a common orbit and are uniformly spread along it. In this talk, I will present an algorithm for numerical computation of choreographies in the plane in a Newtonian potential and on the sphere in a cotangent potential. It is based on stereographic projection, approximations by trigonometric polynomials, minimization of the action functional using a closed-form expression for the gradient, and quasi-Newton methods.

    A fast and well-conditioned spectral method for singular integral equations

    Richard Mikaël Slevinsky (University of Oxford) & Sheehan Olver (The University of Sydney)

    We develop a spectral method for solving univariate singular integral equations over unions of intervals and circles, by utilizing Chebyshev, ultraspherical and Laurent polynomials to reformulate the equations as banded infinite-dimensional systems. Low rank approximations are used for efficient representations of the bivariate kernels. The resulting system can be solved in O(n_opt) operations using an adaptive QR factorization, where n_opt is the optimal number of unknowns needed to resolve the true solution. Applications considered include fracture mechanics, the Faraday cage, and acoustic scattering. The Julia software package SIE.jl implements our method with a convenient, user-friendly interface.

    Initial value problems and a new ODE textbook

    Nick Trefethen (University of Oxford)

    ODEs are generally either initial-value or boundary-value problems, and both the behaviour and the appropriate numerical methods differ greatly. In the past year, however, Asgeir Birkisson has modified Chebfun so that it solves both kinds of problems by a unified syntax: to solve a linear or nonlinear problem L(u) = f together with initial or boundary conditions, one can type u = L\f. The underlying algorithms involve automatic Chebyshev spectral discretization for BVPs and marching via ODE113 for IVPs. Taking advantage of this framework, a new textbook is about half-drafted whose subject is ODEs, not numerics, even t