Stationary space-time Gaussian fields and their time autoregressive representation

Geir Storvik, Arnoldo Frigessi and David Hirst

Statistical Modelling 2002; 2: 139-161. DOI: 10.1191/1471082x02st029oa

The online version of this article can be found at: http://smj.sagepub.com/cgi/content/abstract/2/2/139
Published by SAGE Publications (http://www.sagepublications.com)



Stationary space-time Gaussian fields and their time autoregressive representation

Geir Storvik(1,2), Arnoldo Frigessi(1,2) and David Hirst(2)

(1) Department of Mathematics, University of Oslo, Norway
(2) Norwegian Computing Center, Oslo, Norway

Abstract: We compare two different modelling strategies for continuous space, discrete time data. The first strategy is in the spirit of Gaussian kriging. The model is a general stationary space-time Gaussian field, where the key point is the choice of a parametric form for the covariance function. Most covariance functions in use are separable in space and time. Nonseparable covariance functions are useful in many applications, but their construction is not easy.

The second strategy is to model the time evolution of the process more directly. We consider models of the autoregressive type, where the process at time t is obtained by convolving the process at time t - 1 and adding spatially correlated noise.

Under specific conditions, the two strategies describe two different formulations of the same stochastic process. We show how the two representations look in different cases. Furthermore, by transforming time-dynamic convolution models to Gaussian fields we can obtain new covariance functions, and by writing a Gaussian field as a time-dynamic convolution model, interesting properties are discovered.

The computational aspects of the two strategies are discussed through experiments on a dataset of daily UK temperatures. Although algorithms for estimation, simulation and so on are easy to implement for the first strategy, more computer-efficient algorithms can be constructed based on the second strategy.

Key words: convolution models; Kalman filter; nonseparable; space-time covariance structure; spectral representation; stationary Gaussian fields

Data link available from: http://stat.uibk.ac.at/SMIJ
Received August 2000; revised November 2001 and April 2002; accepted May 2002

1 Introduction

Gaussian random fields are the most common spatial models. They are either used for modelling the observed data directly (Ripley, 1981; Cressie, 1991) or as building blocks in hierarchical models (Diggle et al., 1998; Knorr-Held and Besag, 1998). Recently, there has been a lot of interest in extending these models to spatio-temporal processes, where one of the main difficulties lies in specifying the space-time covariance structure. In this paper, we compare two different approaches. The first is similar to strategies used for spatial processes, in that parametric forms for the covariance functions are defined directly (Carroll et al., 1997; Jones and Zhang, 1997; Cressie and Huang,

Address for correspondence: Geir Storvik, Dept. of Mathematics, Statistics Division, University of Oslo, PO Box 1053 Blindern, N-0316 Oslo, Norway. Tel.: +47 2285 5894; Fax: +47 2285 4349; E-mail: [email protected]

Statistical Modelling 2002; 2: 139-161
(c) Arnold 2002. 10.1191/1471082x02st029oa

1999). The second strategy is to introduce latent structures that generate space-time correlation (Haas, 1995; Høst et al., 1995; Sølna and Switzer, 1996; Wikle and Cressie, 1999), and to assume that the residual noise has a much simpler coloring than the original process.

To be more precise, assume

$$ y(t, x) = m(t, x) + z(t, x) $$

where t is time and x is the space variable. We assume x ∈ R². Here m(t, x) is some deterministic regression part, which we equate to zero for simplicity. z(t, x) is a zero-mean, time- and space-stationary stochastic field with covariance function c(s, u), where s and u are the temporal and spatial lags, respectively.

In the first modelling strategy, a specific parametric form for c(·, ·) is assumed. Care has to be taken in order to derive a non-negative definite function. Different classes of such functions have been presented in the literature. These are mostly either isotropic in all dimensions, making no distinction between time and space, or separable (i.e., c(s, u) = c₁(s)c₂(u)). Recently Cressie and Huang (1999) have presented some nonseparable space-time covariance functions. In practice, it is always difficult to justify any particular form, especially with nonequally spaced data.

In the second modelling strategy the stochastic dependence is defined through latent variables that have a simpler dependence structure than z(t, x) itself. One possibility is to make an explicit assumption about the time evolution:

$$ z(t, x) = g(z(t - 1, \cdot),\ e(t, x)) \qquad (1.1) $$

where z(t - 1, ·) = {z(t - 1, x), x ∈ R²} is the spatial process at time t - 1 and {e(t, x)} is a further stationary process. This approach can be natural if data are available at discrete, equidistant time points. If we further assume g to be linear and e(t, x) to be Gaussian, then {z(t, x)} will be a Gaussian field with some covariance function. This will be the case considered here. Note, however, that this second type of model can be extended to non-Gaussian noise, higher order temporal dependence, and so on.

In this paper we compare these two strategies. In particular, we consider interpretability, model checking, inference and computational difficulties.

In what follows we compute the covariance function of Equation (1.1) explicitly for the linear-Gaussian case. We are then able to say what type of (nonseparable) covariance we are implicitly assuming while using model (1.1). This provides us with a way to build nonseparable covariance functions that are interpretable in terms of the time-dynamic representation. We also proceed in the opposite direction. We take a Gaussian field with a nonseparable covariance function (as proposed by Cressie and Huang (1999), for example) and see first if it can be represented as a linear-Gaussian version of the time-dynamic model (1.1) and second, if it can, we compute the covariance function of the noise e and the linear transform from time t - 1 to t. Such a representation can be used for interpretation of the Gaussian field.

Whenever there are two different modelling strategies that lead to the same model for the data, the following question arises: in practice, can we consider only one of the two approaches (i.e., covariance or latent variable modelling)? Our findings clearly lead to


a suggestive answer since each of the two modelling strategies can lead to models that would not likely be proposed when following the other approach.

We also note that there are Gaussian models that cannot be represented in the form (1.1), and in this case we explain why.

The outline of the paper is as follows. In Section 2 we define our fields and fix notation. Section 3 considers the linear-Gaussian version of the time-dynamic model (1.1). In Section 4, conditions for equality between the two representations are derived, while Section 5 gives illustrative examples. A discussion of the two representations related to numerical computation is given in Section 6.

2 Gaussian fields

A stationary, zero-mean space-time Gaussian field z(t, x) is specified by the stationary covariance function

$$ c(s, u) = \mathrm{cov}[z(t, x),\ z(t + s, x + u)] $$

where s and u are, respectively, the time and spatial lags. Here we assume that t is continuous and real and x is, say, in R². Covariance functions need to be positive definite. For this reason valid space-time covariance functions are not easy to construct. Assuming the covariance function is continuous, positive definiteness is equivalent to the process having a spectral distribution function (Matérn, 1986, p. 12). Assuming further the covariance function to be isotropic in the full spatio-temporal domain, all valid functions can be represented as (possibly infinite) mixtures of Bessel functions (Yaglom, 1987, p. 106). Such representations have been used for defining nonparametric covariance functions (see Ecker and Gelfand, 1997, and the references therein). Because the time dimension has a different interpretation compared to the spatial ones, an isotropic assumption is often unrealistic. Other approaches are therefore needed for specifying covariance functions in space and time.

One simple way to build such a covariance function is to multiply a time-stationary covariance function c₁(s) with a space-stationary covariance function c₂(u):

$$ c(s, u) = c_1(s)\, c_2(u) $$

Such a space-time covariance function is called separable. As discussed in Cressie and Huang (1999), the class of separable space-time covariance functions is quite limited. In many applications we need more general types of correlations. Jones and Zhang (1997) and Cressie and Huang (1999) introduce new classes of covariance functions for stationary spatio-temporal processes, but the tool box is still limited.
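A separable covariance, evaluated jointly over a grid of times and a set of sites, is simply a Kronecker product of a temporal and a spatial covariance matrix. The following minimal sketch illustrates this; the exponential c₁ and Gaussian c₂ (and the grids) are illustrative choices, not taken from the paper.

```python
import numpy as np

# Separable space-time covariance c(s, u) = c1(s) * c2(u) on a grid of
# times and sites. The exponential c1 and Gaussian c2 are illustrative.
def c1(s, phi=0.5):
    return np.exp(-phi * np.abs(s))                 # temporal covariance

def c2(u, rho=1.0):
    return np.exp(-(u / rho) ** 2)                  # spatial covariance

times = np.arange(5.0)                              # 5 time points
sites = np.linspace(0.0, 3.0, 4)                    # 4 sites on a transect

Ct = c1(times[:, None] - times[None, :])            # 5 x 5 temporal matrix
Cs = c2(sites[:, None] - sites[None, :])            # 4 x 4 spatial matrix

# Separability means the joint covariance over all (t, x) pairs factorizes
# into a Kronecker product; positive definiteness is inherited from the factors.
C = np.kron(Ct, Cs)                                 # 20 x 20 joint covariance
assert np.linalg.eigvalsh(C).min() > 0.0
```

The Kronecker structure is also what makes separable models computationally cheap, since the joint matrix never needs to be factorized as a whole.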

The Cressie-Huang class will be of particular interest in the following. For every fixed s ∈ R let G(s, w) be a function over the (spatial) frequency domain. If one can write

$$ G(s, w) = \rho(s, w)\, k(w) $$

Stationary space–time Gaussian �elds 141

at Universitet I Oslo on May 10, 2010 http://smj.sagepub.comDownloaded from

Page 5: Stationary space-time Gaussian fields and their time autoregressive

and if the two following conditions are satisfied:

(CH1): For each w ∈ R², ρ(·, w) is a continuous autocorrelation function and $\int \rho(s, w)\, ds < \infty$.

(CH2): k(w) > 0 for all w and $\int k(w)\, dw < \infty$,

then G(s, w) can be Fourier transformed (over the spatial frequency w) to produce a valid covariance function c(s, u).
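The two conditions can be checked numerically for a candidate factorization. The sketch below does this for a Matérn-type G(s, w) = π θ^|s| (1 + ‖w‖²)^(−(ν₂+(ν₁+1)|s|)−1) of the kind derived later in Example 1, factored as G = ρ(s, w) k(w); the parameter values are illustrative, and the closed-form reference values used in the assertions are standard integrals, not results from the paper.

```python
import numpy as np
from scipy.integrate import quad

# Check (CH1)-(CH2) numerically for G(s, w) = rho(s, w) * k(w) with
# rho(s, w) = [theta (1 + w^2)^(-(nu1+1))]^|s| (geometric in |s|) and
# k(w) = pi (1 + w^2)^(-nu2 - 1). Parameter values are illustrative.
theta, nu1, nu2 = 0.5, 1.0, 1.0

def rho(s, w2):      # autocorrelation in s for fixed squared frequency w2
    return (theta * (1.0 + w2) ** (-(nu1 + 1.0))) ** abs(s)

def k(w2):           # positive spectral factor
    return np.pi * (1.0 + w2) ** (-nu2 - 1.0)

# (CH1): rho(0, w) = 1 and the integral over s is finite; since
# rho(s, w) = exp(-lam |s|), the exact value is 2 / lam.
for w2 in [0.0, 1.0, 4.0]:
    assert rho(0.0, w2) == 1.0
    integral, _ = quad(lambda s: rho(s, w2), -np.inf, np.inf)
    lam = -np.log(theta * (1.0 + w2) ** (-(nu1 + 1.0)))
    assert np.isclose(integral, 2.0 / lam)

# (CH2): integral of k over R^2 in polar coordinates equals pi^2 / nu2.
integral_k, _ = quad(lambda r: k(r * r) * 2.0 * np.pi * r, 0.0, np.inf)
assert np.isclose(integral_k, np.pi ** 2 / nu2)
```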

3 Time autoregressive spatial models

A different way to introduce space-time correlation is by means of time-dynamical hierarchical models as defined by (1.1). We will consider the special case

$$ z(t, x) = \int_{R^2} h(v)\, z(t - 1, x + v)\, dv + e(t, x), \qquad x \in R^2,\ t \in Z \qquad (3.1) $$

We name this a time autoregressive spatial model (of order one). The spatial convolution kernel h(v) has to be chosen to guarantee stationarity of z(t, x). It decays rapidly from its mode located at the origin. Often it is assumed that h(v) = h(‖v‖). For instance it could be a Gaussian kernel. The noise process {e(t, x)} is assumed to be a stationary Gaussian field, uncorrelated in time but colored in space, with spatial covariance function r(u), so that

$$ \mathrm{cov}[e(t, x),\ e(t', x + u)] = \begin{cases} 0 & \text{if } t \neq t' \\ r(u) & \text{otherwise} \end{cases} $$
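Model (3.1) is straightforward to simulate on a discretized, periodic domain, because the spatial convolution becomes a product in the Fourier domain. The following sketch does this on an n × n grid; the Gaussian kernel, the noise coloring filter and all parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

# Simulate the autoregressive model (3.1) on a periodic n x n grid, using
# FFTs for the spatial convolution. Kernel and noise filter are illustrative.
rng = np.random.default_rng(0)
n, dx = 64, 0.25
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)

theta = 0.8                                # overall autoregression weight
h = np.exp(-(X**2 + Y**2))                 # Gaussian convolution kernel
h *= theta / (h.sum() * dx * dx)           # scale so that H(0) = theta < 1
H = np.fft.fft2(np.fft.ifftshift(h)) * dx * dx   # transfer function on the grid

g = np.exp(-(X**2 + Y**2) / 0.5)           # smoothing filter that colors the noise
G = np.fft.fft2(np.fft.ifftshift(g)) * dx * dx

def step(z):
    """One time step of (3.1): convolve with h, add spatially colored noise."""
    conv = np.real(np.fft.ifft2(H * np.fft.fft2(z)))
    eps = np.real(np.fft.ifft2(G * np.fft.fft2(rng.standard_normal((n, n)))))
    return conv + eps

z = np.zeros((n, n))
for _ in range(200):                       # iterate towards the limit distribution
    z = step(z)

assert np.abs(H).max() < 1                 # stationarity condition of Theorem 1
assert np.isfinite(z).all()
```

Because |H(w)| < 1 for every grid frequency, each Fourier mode is a stable scalar AR(1) process, and the field converges to its stationary distribution regardless of the initial state.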

Define H(w) to be the inverse Fourier transform of h(v). The following theorem states conditions under which the process (3.1) is stationary in time.

Theorem 1
Assume that |H(w)| < 1 for all w. Assume further that var[z(0, x)] < K for all x. Then, considering {z(t, x)} as a process evolving in time, a unique limit distribution exists. Further, this distribution is stationary in space.

Proof
See Appendix A.

A corollary, which will be important for performing numerical computation, is


Corollary 1
Given that the process {z(t, x)} is in its limit distribution, an alternative representation of the process is

$$ z(t, x) = \int_{w} e^{i w' x}\, dZ(t, w) \qquad (3.2) $$

where

$$ dZ(t, w) = H(w)\, dZ(t - 1, w) + dE(t, w) \qquad (3.3) $$

and {E(t, w)} is, for fixed t, a process with the following properties:

1) E[dE(t, w)] = 0, for all w,
2) E[|dE(t, w)|²] = dR(w), for all w, where R(w) is the integrated spectrum of {e(t, ·)},
3) For any two distinct frequencies w₁, w₂,

$$ E[dE^{*}(t, w_1)\, dE(t, w_2)] = 0. $$

Further, {E(t, w)} is uncorrelated in time.

Proof
Follows directly from the proof of Theorem 1 (Appendix A).

Assuming the process defined by Equation (3.1) is stationary with finite first and second order moments, its covariance function c(s, u) is defined for all integers s, and all u ∈ R². It must satisfy (see Appendix B)

$$ c(0, u) = \int_{R^2} \int_{R^2} h(v)\, h(w)\, c(0, u + w - v)\, dv\, dw + r(u) \qquad \forall u \qquad (3.4) $$

$$ c(s, u) = \int_{R^2} h(v)\, c(s - 1, u - v)\, dv, \qquad \forall u,\ s = 1, 2, \ldots \qquad (3.5) $$

Assume that the following functions are all well defined: the inverse Fourier transform H(w) of the kernel function h(·); the inverse Fourier transform R(w) of r(·); and the inverse Fourier transform G(s, w) of c(s, ·). Note that all (inverse) Fourier transforms are taken in the spatial domain. Then, by taking inverse Fourier transforms on both sides of Equation (3.4), we obtain

$$ G(0, w) = H(w)\, H(-w)\, G(0, w) + R(w) $$


This gives, for H(w)H(-w) ≠ 1,

$$ G(0, w) = \frac{R(w)}{1 - H(w)\, H(-w)} \qquad (3.6) $$

Similarly, by taking inverse Fourier transforms on both sides of Equation (3.5), we obtain

$$ G(s, w) = G(0, w)\, H(-w)^{|s|} \qquad (3.7) $$

(Note that this equation is also valid for negative s since c(-s, u) = c(s, u).) The covariance function can then be obtained as the Fourier transform of G(s, w) given by Equations (3.6) and (3.7).

Under mild assumptions on h(·) and r(·), discussed in Theorem 2 (later), the inverse Fourier transform G(s, w) exists, and the covariance function of Equation (3.1) can be computed at least numerically. In some cases it is possible to compute c(s, u) analytically, as in the following example.
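The numerical route from (3.6)-(3.7) back to c(s, u) is a direct spectral computation. The sketch below illustrates it in one spatial dimension for brevity, using the separable case H(w) = θ with Gaussian noise covariance r(u) = exp(-u²); in that case c(s, u) = θ^|s| r(u)/(1 - θ²) is known exactly and gives a check. All concrete choices are illustrative.

```python
import numpy as np

# Recover c(s, u) numerically from (3.6)-(3.7), in 1-D space for brevity.
# Separable test case: H(w) = theta constant, r(u) = exp(-u^2), whose
# spectrum is R(w) = exp(-w^2 / 4) / (2 sqrt(pi)).
theta = 0.6
w = np.linspace(-40.0, 40.0, 4001)        # frequency grid
dw = w[1] - w[0]
R = np.exp(-w**2 / 4) / (2 * np.sqrt(np.pi))

G0 = R / (1 - theta**2)                   # equation (3.6) with H(w) = theta

def c(s, u):
    """Fourier transform of G(s, w) = G(0, w) theta^|s|, equation (3.7)."""
    Gs = G0 * theta ** abs(s)
    return np.real(np.sum(Gs * np.exp(1j * w * u)) * dw)  # Riemann sum

# Compare against the exact separable answer theta^|s| exp(-u^2)/(1 - theta^2)
for s in [0, 1, 3]:
    for u in [0.0, 0.5, 1.5]:
        exact = theta ** abs(s) * np.exp(-u**2) / (1 - theta**2)
        assert np.isclose(c(s, u), exact, atol=1e-6)
```

The same loop works unchanged for nonseparable H(w): only G0 and the per-lag factor change, which is what makes the spectral representation convenient for numerical work.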

Example 1
Assume in Equation (3.1) an isotropic Matérn function for h(·):

$$ h(u) = \pi^{-1} \theta\, a(\nu_1)\, \|u\|^{\nu_1} K_{\nu_1}(\|u\|), \qquad \nu_1 > 0,\ 0 < \theta < 1 $$

where K_ν(·) is the modified Bessel function of the third kind of order ν, while

$$ a(\nu) = \frac{1}{2^{\nu - 1} \Gamma(\nu)} $$

Assume further a difference between two Matérn functions for r(·):

$$ r(u) = a(\nu_2)\, \|u\|^{\nu_2} K_{\nu_2}(\|u\|) - \theta^2 a(\nu_2 + 2\nu_1 + 2)\, \|u\|^{\nu_2 + 2\nu_1 + 2} K_{\nu_2 + 2\nu_1 + 2}(\|u\|) $$

where ν₂ > 0. Fourier transforms of many types of functions, including Bessel functions, are given in Oberhettinger (1973). From this, we obtain

$$ H(w) = \theta (1 + \|w\|^2)^{-\nu_1 - 1} $$

$$ R(w) = \pi (1 + \|w\|^2)^{-\nu_2 - 1} - \pi \theta^2 (1 + \|w\|^2)^{-(\nu_2 + 2\nu_1 + 2) - 1} = \pi (1 + \|w\|^2)^{-\nu_2 - 1} \left[ 1 - \theta^2 (1 + \|w\|^2)^{-2(\nu_1 + 1)} \right] $$

which gives

$$ G(s, w) = \pi \theta^{|s|} (1 + \|w\|^2)^{-(\nu_2 + (\nu_1 + 1)|s|) - 1} \qquad (3.8) $$


for s = 0, ±1, ±2, …. This can be Fourier transformed so that we obtain

$$ c(s, u) = \theta^{|s|} a(\nu_2 + (\nu_1 + 1)|s|)\, \|u\|^{\nu_2 + (\nu_1 + 1)|s|} K_{\nu_2 + (\nu_1 + 1)|s|}(\|u\|) \qquad (3.9) $$

For every fixed s this is a Matérn covariance function.
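The closed form (3.9) can be evaluated directly with a Bessel function library. The sketch below does so and checks two properties implied by the formula: as ‖u‖ → 0, ‖u‖ⁿ Kₙ(‖u‖) → 2^(n−1) Γ(n), so c(s, 0+) = θ^|s|; and for fixed s the function is a decreasing Matérn curve in ‖u‖. Parameter values are illustrative.

```python
import numpy as np
from scipy.special import kv, gammaln

# Evaluate the closed-form covariance (3.9) of Example 1,
# c(s, u) = theta^|s| a(n) |u|^n K_n(|u|), n = nu2 + (nu1 + 1)|s|.
theta, nu1, nu2 = 0.5, 1.0, 1.0

def a(n):
    # 1 / (2^(n-1) Gamma(n)), computed on the log scale for stability
    return np.exp(-((n - 1) * np.log(2.0) + gammaln(n)))

def c(s, u):
    n = nu2 + (nu1 + 1) * abs(s)
    return theta ** abs(s) * a(n) * u ** n * kv(n, u)

# Small-lag limit: c(s, 0+) = theta^|s|
for s in [0, 1, 2]:
    assert np.isclose(c(s, 1e-6), theta ** abs(s), atol=1e-4)

# For fixed s the curve is a Matern function: positive and decreasing in |u|
u = np.linspace(0.1, 5.0, 50)
vals = c(1, u)
assert (vals > 0).all() and (np.diff(vals) < 0).all()
```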

The example above illustrates an important issue. The time lag one autoregressive model (3.1) is defined only for s ∈ Z. However, the covariance function c(s, u) in Equation (3.9) can be extended to continuous values s ∈ R. It is a valid covariance function for a Gaussian space-time field defined in continuous time. This can be seen by noting that the covariance function is a member of the Cressie and Huang (1999) class (i.e., G(s, w) given by Equation (3.8) can be seen to satisfy the conditions (CH1) and (CH2)).

4 A double representation

We assume that the data z(t, x) are available only at discrete, say integer, time points. One possible model is a time-continuous spatial Gaussian field with a specific covariance function c(s, u). Owing to the discrete nature of the data, the covariance function will appear in the likelihood function only at integer time lags.

A second possible model is (3.1), for some kernel function h(·) and spatial covariance function r(·). This model is defined only in discrete time. However, as we saw in the example of the previous section, its space-time covariance function can sometimes be extended to continuous time. In these cases, there exists a continuous time spatial Gaussian field. The likelihood function of the data assuming model (3.1) or assuming such a corresponding continuous time spatial Gaussian field would be identical.

Hence there are cases in which the same statistical model can be described in two rather different forms: by means of a space-time covariance function c(s, u), or through a spatial kernel h(x) and a spatial covariance function r(u). In this section we characterize precisely when two such representations are available by answering the following two questions:

1) When can the discrete time spatial covariance function of model (3.1) be extended to continuous time?

2) When can the likelihood of discrete time data, modelled as a continuous time spatial Gaussian field, be written using the time lag one autoregressive representation (3.1)?

The following theorem answers the first question.

Theorem 2
Assume the process defined by (3.1) is stationary in time. Let the inverse Fourier transforms H(w) of h(·) and R(w) of r(·) be well defined. For s ∈ Z, let

$$ G(s, w) = \frac{R(w)}{1 - H(w)\, H(-w)}\, H(-w)^{|s|} $$


If |H(w)| < 1 for all w and $\int G(0, w)\, dw < \infty$, then the Fourier transform c(s, u) of G(s, w) exists for all s ∈ Z and can be extended to s ∈ R. c(s, u) is the covariance function of the stationary solution of (3.1). Such a space-(continuous) time covariance function defines a space-(continuous) time Gaussian field.

Proof
See Appendix A.

Remarks.
1) Concerning the integrability of G(0, w), let h(x) be symmetric and unimodal with mode at x = 0. This is a common situation. Then H(w) ≤ H(0) for all w, so that

$$ \int G(0, w)\, dw \le \frac{1}{1 - H(0)^2} \int R(w)\, dw = \frac{r(0)}{1 - H(0)^2} < \infty. $$

This shows that this condition is also often fulfilled.
2) Note that none of the calculations actually involves the assumption of a Gaussian process. Spatial and spatio-temporal covariance functions are, however, mostly used in connection with Gaussian processes.

The next theorem answers the second question.

Theorem 3
Assume c(s, u) is a valid covariance function for s ∈ R and u ∈ R². Assume that for fixed s the (spatial) Fourier transform G(s, w) of c(s, ·) exists. Assume one can write

$$ G(s, w) = G(0, w)\, H(-w)^{|s|} \qquad (4.1) $$

for some function H(w) with the following properties:

a) The Fourier transform h(·) of H(·) exists and |H(w)| < 1 for all w
b) G(0, w) > 0 for all w, and $\int G(0, w)\, dw < \infty$

Then there exists a stochastic process {z(t, x)}, defined for s ∈ Z and for all x ∈ R², which follows model (3.1) with covariance structure equal to c(s, u). For this process h(·) and r(·) are uniquely defined as the Fourier transforms of H(·) and

$$ R(w) = G(0, w)\,(1 - H(w)\, H(-w)) \qquad (4.2) $$

Proof
See Appendix A.
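The construction in Theorem 3 is easy to carry out on a frequency grid: H(-w) is read off as the ratio G(1, w)/G(0, w), and R(w) then follows from (4.2). The sketch below does this for a G that is Gaussian in w and geometric in s (of the kind appearing in Example 3 below), on one frequency axis for brevity; the parameter values are illustrative.

```python
import numpy as np

# Theorem 3 in practice: given G(s, w) on a grid, recover H(-w) as
# G(1, w) / G(0, w) and R(w) from (4.2). One frequency axis shown.
theta, c0 = 0.7, 1.0
w = np.linspace(-10.0, 10.0, 2001)

def G(s):
    return (4 * np.pi) ** -1 * np.exp(-w**2 / 4) * \
           (theta * np.exp(-w**2 / (4 * c0))) ** abs(s)

H = G(1) / G(0)                         # candidate H(-w); symmetric here
R = G(0) * (1 - H * H)                  # equation (4.2)

assert np.abs(H).max() < 1              # condition (a) of Theorem 3
assert (G(0) > 0).all()                 # condition (b)
assert (R > 0).all()                    # a valid (positive) noise spectrum
assert np.allclose(G(2), G(0) * H**2)   # consistency with (4.1)
```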

Note that the uniqueness of h(·) and r(·) is established for the lag one autoregressive model (3.1) for the given time discretization. If the continuous time scale were to be discretized with different time steps, say Δ ≠ 1, then different h(·) and r(·) functions


would be needed. More precisely, from Equations (4.1) and (4.2) we may write for an arbitrary time scale Δ and integer s

$$ G(s\Delta, w) = G(0, w)\, H_{\Delta}(-w)^{|s|} $$

where H_Δ(-w) = H(-w)^Δ. Furthermore, we can now write

$$ z(t, x) = \int_{R^2} h_{\Delta}(u)\, z(t - \Delta, x + u)\, du + e_{\Delta}(t, x) \qquad (4.3) $$

where h_Δ is the inverse Fourier transform of H_Δ and the spatial covariance function r_Δ(u) of {e_Δ(t, x)} is defined through its spectral density

$$ R_{\Delta}(w) = G(0, w)\left[1 - H_{\Delta}(w)\, H_{\Delta}(-w)\right] $$

Note, however, that h_Δ may not exist for small Δ even though it does for Δ = 1. A very important result is given by Brown et al. (2000). They show that if h_Δ(x) exists for all Δ, if h_Δ(x) > 0 for all x and Δ, and if the h_Δ(·) all have finite second moments, then the kernel function h_Δ(x) has to be Gaussian. This means that if other convolution functions are to be used, not all of the conditions above can be satisfied. As we will see in the examples of the following section, typically h_Δ will be negative in some areas for small values of Δ, or it does not exist at all.
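The Δ-dependence is easy to explore numerically for the Matérn transfer function of Example 4 below: H_Δ(w) = H(w)^Δ keeps the same form with shape parameter ν_Δ = Δ(ν₁ + 1) − 1, and h_Δ stops existing once ν_Δ ≤ −0.5. The sketch below evaluates H_Δ on one frequency axis, with parameter values matching Figure 3; everything else is an illustrative computation, not from the paper.

```python
import numpy as np

# H_Delta(w) = H(w)^Delta for the Matern kernel of Example 4, and the
# existence threshold Delta <= 1 / (2 (nu1 + 1)) below which nu_Delta <= -0.5.
nu1, a1 = 1.0, 2.0
theta = 0.9 / (np.pi * a1**2)              # as in Figure 3, so H(0) = 0.9
w = np.linspace(-20.0, 20.0, 4001)

def H_delta(delta):
    base = (theta * np.pi * a1**2) ** delta
    return base * (1 + 0.25 * a1**2 / nu1 * w**2) ** (-delta * (nu1 + 1))

for delta in [1.0, 0.7, 0.4]:
    Hd = H_delta(delta)
    nud = delta * (nu1 + 1) - 1            # shape parameter of H_Delta
    assert nud > -0.5                      # above the threshold: h_Delta exists
    assert np.isclose(Hd.max(), 0.9 ** delta)   # peak at w = 0 is H(0)^Delta
    assert np.abs(Hd).max() < 1            # stationarity kept for each Delta

# Delta = 0.2 falls below the threshold: nu_Delta <= -0.5, no Fourier transform
assert 0.2 * (nu1 + 1) - 1 <= -0.5
```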

5 Examples

We have seen that in certain situations there are two different modelling strategies that lead to the same likelihood function. In this section we will see some examples. They show that both modelling approaches are important: sometimes models seem natural if expressed with one type of parameterization, and less so once written in the other way.

We begin with a model that has a simple separable space-time covariance function c(s, u) and look at its representation (3.1).

Example 2 (Separable covariance functions)
Assume

$$ c(s, u) = c_1(s)\, c_2(u) $$

Then

$$ G(s, w) = c_1(s)\, C_2(w) $$

where C₂(w) is the spectral density of c₂(u). In order to represent this process as (3.1), by Theorem 3, since c₁(s) does not depend on w, it must be of the form θ^|s| with |θ| < 1. Because H(w) = θ, it follows that h(x) = θδ(x), where δ(x) is Dirac's delta. Hence separable covariance functions correspond to the simple time autoregressive model

$$ z(t, x) = \theta z(t - 1, x) + e(t, x) $$


where {e(t, x)} is spatially correlated with covariance function given by r(u) = (1 - θ²) c(0, u) = c₁(0)(1 - θ²) c₂(u).
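At a finite set of sites this separable case can be verified directly: for z_t = θ z_{t−1} + e_t with noise covariance matrix R, iterating the variance recursion Σ_t = θ² Σ_{t−1} + R converges to the stationary covariance R/(1 − θ²), which equals c₂ when r(u) = (1 − θ²) c₂(u). The site layout and the exponential c₂ below are illustrative choices.

```python
import numpy as np

# Example 2 at a finite set of sites: the variance recursion of the
# AR(1) model converges to the stationary covariance R / (1 - theta^2).
theta = 0.6
sites = np.array([0.0, 1.0, 2.5, 4.0])
C2 = np.exp(-np.abs(sites[:, None] - sites[None, :]))   # c2(u) = exp(-|u|)

R = (1 - theta**2) * C2                  # noise covariance r(u)

Sigma = np.zeros_like(C2)
for _ in range(200):                     # variance recursion Sigma <- th^2 Sigma + R
    Sigma = theta**2 * Sigma + R

assert np.allclose(Sigma, R / (1 - theta**2))   # geometric-series limit
assert np.allclose(Sigma, C2)            # stationary covariance is c2(u)
```

From the recursion, the lag-s cross-covariance is θ^|s| Σ, which is exactly the separable structure c(s, u) = θ^|s| c₂(u).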

Example 3
We shall use Theorem 3 to obtain the time autoregressive representation of a slightly modified version of Example 2 of Cressie and Huang (1999). Let

$$ c(s, u) = \theta^{|s|} \frac{c_0}{|s| + c_0}\, e^{-\|u\|^2 c_0 / (|s| + c_0)} \qquad (5.1) $$

Cressie and Huang (1999) considered the case θ = 1 (with a slightly different parameterization). Its inverse (spatial) Fourier transform is given by

$$ G(s, w) = (4\pi)^{-1} e^{-\|w\|^2 / 4} \left[ \theta e^{-\|w\|^2 / 4 c_0} \right]^{|s|} $$

The space-time covariance function (5.1) is nonseparable. We take

$$ H(w) = \theta e^{-\|w\|^2 / 4 c_0} \qquad (5.2) $$

so that the conditions in Theorem 3 are fulfilled for θ < 1. From Equation (4.2) we have that

$$ R(w) = (4\pi)^{-1} e^{-\|w\|^2 / 4} \left[ 1 - \theta^2 e^{-\|w\|^2 / 2 c_0} \right] \qquad (5.3) $$

giving

$$ h(u) = 4\pi c_0 \theta\, e^{-c_0 \|u\|^2} $$

and

$$ r(u) = e^{-\|u\|^2} - \theta^2 \frac{c_0}{2 + c_0}\, e^{-\|u\|^2 c_0 / (2 + c_0)} $$

The kernel function h(·) turns out to be Gaussian, which is often the natural choice. However, the corresponding spatial covariance function r(·) is rather unusual. In Figure 1, r(u) is plotted for c₀ = 1 and different values of θ. The covariance function r(·) becomes negative for larger spatial lags. This is the case for any choice of c₀ and θ. The value c₀ = ∞ corresponds to a separable model.
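The sign change in r(·) is easy to reproduce numerically from the two-term form of r(u) given above. The sketch below evaluates it for c₀ = 1 and the θ values used in Figure 1; it assumes that form of r(u), and the grid is an illustrative choice.

```python
import numpy as np

# Noise covariance of Example 3,
# r(u) = exp(-|u|^2) - theta^2 c0/(2 + c0) exp(-|u|^2 c0/(2 + c0)),
# for c0 = 1 and the theta values of Figure 1.
c0 = 1.0
u = np.linspace(0.0, 4.0, 401)

def r(u, theta):
    return np.exp(-u**2) - theta**2 * c0 / (2 + c0) * \
           np.exp(-u**2 * c0 / (2 + c0))

for theta in [0.5, 0.7, 0.9, 0.99]:
    vals = r(u, theta)
    assert vals[0] > 0                 # positive variance at u = 0
    assert vals.min() < 0              # negative covariance at larger lags
```

The second term decays more slowly in ‖u‖ than the first (since c₀/(2 + c₀) < 1), so it eventually dominates and forces r(u) below zero for any θ > 0.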

We interpret the findings of this example in the following way. While it can be appropriate and natural to consider a likelihood built on a Gaussian field model with covariance function (5.1), it would be rather unlikely that a modeller would end up with the same likelihood if he started from a parameterization of type (3.1), since this would require considering an unnatural function r(·).

The spatial covariance function r(·) is forced to become negative probably because the Gaussian field model with covariance function given by (5.1) is not a natural candidate for the parameterization (3.1). The form of the kernel h(·) induces large


positive correlations at large distances, which need to be compensated for by the error term e(t, x).

We make a further remark on this example: limits of covariance functions are also valid covariance functions (Matérn, 1986). Letting θ → 1, we obtain the covariance function

$$ c(s, u) = \frac{1}{|s| + c_0}\, e^{-\|u\|^2 / (|s| + c_0)} $$

which is the one considered by Cressie and Huang (1999). Considering the representation (3.1), we see from (5.2) and (5.3) that H(0) → 1 while R(0) → 0. Note that R(0) can be interpreted as the variance of the 'spatial average' of e(t, x) (properly scaled), meaning that at each time point a noise process with average value equal to zero is added. In most cases, such a process will not be very plausible. However, there might be situations where the average value is constant (energy equilibrium or mass balance, for example), but where the values at specific sites change randomly. Note that in such cases, negative correlations at some sites are necessary.

Example 4 (Matérn kernel and Matérn spatial covariance function)
We now start with a plausible model (3.1), given by a Matérn kernel and spatial covariance function

$$ h(u) = \frac{\theta}{2^{\nu_1 - 1} \Gamma(\nu_1)} \left( 2 a_1^{-1} \sqrt{\nu_1}\, \|u\| \right)^{\nu_1} K_{\nu_1}\!\left( 2 a_1^{-1} \sqrt{\nu_1}\, \|u\| \right) $$

$$ r(u) = \frac{1}{2^{\nu_2 - 1} \Gamma(\nu_2)} \left( 2 a_2^{-1} \sqrt{\nu_2}\, \|u\| \right)^{\nu_2} K_{\nu_2}\!\left( 2 a_2^{-1} \sqrt{\nu_2}\, \|u\| \right) $$

Figure 1 Plot of r(u) for θ = 0.5, 0.7, 0.9 and 0.99. The upper curve is for the lowest value of θ, and so on. In all cases c₀ = 1


The aᵢ are (positive) spatial scaling parameters while the νᵢ are shape parameters, both greater than 0.5. These Matérn functions are very flexible and include as special cases the exponential (νᵢ = 0.5) and the Gaussian function (νᵢ = ∞). We obtain

$$ H(w) = \theta \pi a_1^2 \left( 1 + \tfrac{1}{4} a_1^2 \nu_1^{-1} \|w\|^2 \right)^{-\nu_1 - 1} $$

$$ R(w) = \pi a_2^2 \left( 1 + \tfrac{1}{4} a_2^2 \nu_2^{-1} \|w\|^2 \right)^{-\nu_2 - 1} $$

showing that |H(w)| < 1 when θ < π⁻¹a₁⁻². Since h(x) in this case is real, symmetric and unimodal, the integrability of G(0, w) is directly obtained (see Remark 1 after Theorem 2).

From Equations (3.6) and (3.7) we have that

$$ G(s, w) = \frac{\pi a_2^2 \left( 1 + \tfrac14 a_2^2 \nu_2^{-1} \|w\|^2 \right)^{-\nu_2 - 1}}{1 - (\theta \pi a_1^2)^2 \left( 1 + \tfrac14 a_1^2 \nu_1^{-1} \|w\|^2 \right)^{-2(\nu_1 + 1)}}\; (\theta \pi a_1^2)^{|s|} \left( 1 + \tfrac14 a_1^2 \nu_1^{-1} \|w\|^2 \right)^{-(\nu_1 + 1)|s|} \qquad (5.4) $$

The Fourier transform c(s, x) of G(s, w) is a valid covariance function. However, no analytical expression is available, except when ν₁ and ν₂ are integers (Vecchia, 1985). Even in the integer cases, complex expressions are involved. This shows that in this

Figure 2 Plot of the covariance function c(s, ‖u‖) for Example 4 with ν₁ = ν₂ = 1, a₁ = a₂ = 2 and θ = 0.07


example it is extremely unlikely that a modeller designing parametric space-time covariance functions c(s, x) for Gaussian fields would consider such a model and the corresponding likelihood. This shows that modelling within framework (3.1) can be useful and sometimes unavoidable.

It is possible to compute c(s, x) numerically, given values of all the parameters. This allows us to investigate the main features of c(s, x).

Figure 2 is a plot of c(s, d), where d = sign(u)‖u‖, for a₁ = a₂ = 2 and ν₁ = ν₂ = 1. We notice the sharp discontinuity of the derivative with respect to s, and a somewhat slow decay.

In Figure 3 we plot some projections to emphasize the properties of the covariance function c(s, u) = c(s, ‖u‖). In Figure 3(a) we see that c(s = 0, u) (plotted with a solid line) has heavier tails in u compared to r(u) (dashed). This is due to the fact that the autoregressive term in (3.1) contributes positively to the covariance in space. The exponentially shaped temporal correlation function c(s, u = 0), plotted in Figure 3(b), represents the time autoregressive structure.

Figure 3 Example 4 with ν₁ = ν₂ = 1, a₁ = a₂ = 2 and θ = 0.9/(πa₁²). (a) Plot of c(0, ‖u‖) (solid line) and r(‖u‖) (dashed line); (b) plot of c(s, 0); (c) plot of r_Δ(‖u‖) for Δ = 1, 0.7, 0.4 (scaled such that the maximum value is equal to one); (d) plot of h_Δ(‖u‖) for Δ = 1, 0.7, 0.4 (scaled such that the maximum value is equal to one)


It is possible to approximate c(s, u), given through Equation (5.4), by means of a separable space-time covariance function. One approach is the following:

$$ \tilde{c}(s, u) = \frac{c(s, 0)\, c(0, u)}{c(0, 0)} $$

The covariance function c̃(s, u) is now separable with c̃(s, 0) = c(s, 0) and c̃(0, u) = c(0, u). In Figure 4 the ratio c(s, ‖u‖)/c̃(s, ‖u‖) is plotted. As expected, due to the structure of model (3.1), the correlations along the space-time 'diagonals' are considerably larger than those we would have obtained from a separable covariance function. It remains to be seen whether the separable approximation would still work well enough in a practical setting despite its significantly different theoretical properties.

In Figure 3(c) the correlation function r̄_Δ(u) = r_Δ(u)/r_Δ(0) is plotted for the different time discretizations Δ = 1.0, 0.7, 0.4. Although slowly changing, the spatial correlations become smaller for decreasing Δ. Also the variance decreases (not shown).

In Figure 3(d) the convolution function h_Δ is plotted (scaled such that the maximum value is equal to one). This function changes much faster and becomes closer to a Dirac delta function for decreasing Δ. Both the variance and the spatial correlation of the noise term decrease when Δ gets smaller, although the changes are slower than those observed for the convolution function.

Figure 4 Plot of c(s, ‖u‖)/c̃(s, ‖u‖) for Example 4 with ν₁ = ν₂ = 1 and a₁ = a₂ = 2


For Δ small, the representation (3.1) is no longer valid. This can be seen by noting that

$$ H_{\Delta}(w) = (\theta \pi a_1^2)^{\Delta} \left( 1 + \tfrac14 a_1^2 \nu_1^{-1} \|w\|^2 \right)^{-\Delta(\nu_1 + 1)} = (\theta \pi a_1^2)^{\Delta} \left( 1 + \tfrac14 a_1^2 \nu_1^{-1} \|w\|^2 \right)^{-\nu_{\Delta} - 1} $$

where ν_Δ = Δ(ν₁ + 1) − 1. For ν_Δ ≤ −0.5 (corresponding to Δ ≤ 1/(2(1 + ν₁))), the function is not integrable, and so the Fourier transform does not exist. Note that for a Gaussian convolution function (ν₁ = ∞), the representation (3.1) is valid for all Δ, which is in agreement with the results of Brown et al. (2000).

6 Experimental results and computational considerations

In the previous sections, we have seen that time autoregressive spatial models of type (3.1) are Gaussian processes with a covariance structure that can be found at least numerically, and that some models defined by their covariance structure correspond to time autoregressive spatial models. If we have a model that can be described in both ways, we have, by reparameterization, two main options for performing numerical computation. In this section we discuss the advantages and disadvantages of working with the time autoregressive spatial structure or with the covariance directly.

In order to illustrate the two modelling approaches, we will consider a dataset of daily UK temperatures from $n = 17$ measurement stations in the time period 1959–1995. Although an important part of an analysis of such data is the specification of a trend model, a nonparametric estimate of the trend was removed from the data in order to concentrate on the covariance structure. QQ-plots of the residual process indicated that a Gaussian distribution is reasonable for these data. Figure 5 (left panel) shows empirical estimates of correlation against distance, showing a clear spatial structure. To make a visual check for nonseparability, nonparametric estimates (using the S-PLUS function lowess) of $c(s,\|u\|)/c(0,\|u\|)$ are plotted in the right panel of Figure 5 (normalized to have value equal to one for $\|u\| = 0$). With a separable covariance structure these lines should be horizontal, while departure from constancy indicates non-separability. An increase in $c(s,\|u\|)/c(0,\|u\|)$ for small values of $\|u\|$, which is clearly present for these data, indicates spatial convolution over time.
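The visual check above can be sketched numerically. The fragment below is a minimal illustration on synthetic data, not the paper's computation: the station layout, AR coefficient and covariance range are made-up values, and pairwise empirical ratios are used directly instead of the S-PLUS lowess smoother. For a separable process the estimated ratios $c(s,\|u\|)/c(0,\|u\|)$ should be roughly constant in distance (here approximately $\phi^s$).

```python
import numpy as np

def corr_ratio(z, coords, lags=(1, 2, 3)):
    """Empirical c(s,||u||)/c(0,||u||) for all station pairs.
    Under a separable covariance these ratios do not depend on distance."""
    T, n = z.shape
    z = z - z.mean(axis=0)                       # remove station means
    dists, ratios = [], {s: [] for s in lags}
    for i in range(n):
        for j in range(n):
            c0 = np.mean(z[:, i] * z[:, j])      # estimate of c(0, u_ij)
            if i == j or c0 <= 0:
                continue
            dists.append(np.linalg.norm(coords[i] - coords[j]))
            for s in lags:
                cs = np.mean(z[s:, i] * z[:-s, j])   # estimate of c(s, u_ij)
                ratios[s].append(cs / c0)
    return np.array(dists), {s: np.array(r) for s, r in ratios.items()}

# Synthetic *separable* example: z(t,x) = phi*z(t-1,x) + spatially correlated noise.
rng = np.random.default_rng(0)
n, T, phi = 10, 5000, 0.7
coords = rng.uniform(0, 10, size=(n, 2))
D = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
L = np.linalg.cholesky(np.exp(-D / 10.0) + 1e-8 * np.eye(n))
z = np.zeros((T, n))
for t in range(1, T):
    z[t] = phi * z[t - 1] + L @ rng.standard_normal(n)

d, r = corr_ratio(z, coords)
# For this separable model the ratio c(s,u)/c(0,u) is approximately phi**s,
# independent of the distance d; a trend in d would signal non-separability.
```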

Consider the calculation of likelihoods for such data. For simplicity we will assume the data (after removing the temporal trend) have expectation zero. One approach is to calculate the full covariance matrix for the given data and use the definition of the multivariate normal density. Such an approach is easy to implement and will work well for small datasets. With large datasets, such an approach will be impractical (in general the computational complexity, dominated by calculation of the Cholesky decomposition of the covariance matrix, is of order $N^3$ where $N$ is the number of observations). Approximations can be applied (and have been used in the literature) utilizing the fact that for large distances in space or time correlations will be negligible.


Such approaches will, however, be somewhat ad hoc and also require a more complex implementation.
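As a concrete sketch of the direct method described above, the zero-mean multivariate normal log-likelihood can be computed through the Cholesky factor; the $O(N^3)$ factorization is the dominating cost. This is generic numpy code for illustration, not the authors' implementation:

```python
import numpy as np

def mvn_loglik_chol(y, Sigma):
    """log N(y; 0, Sigma) computed from the Cholesky factor L (Sigma = L L')."""
    N = len(y)
    L = np.linalg.cholesky(Sigma)       # O(N^3): the dominating step
    alpha = np.linalg.solve(L, y)       # solve L alpha = y
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (N * np.log(2 * np.pi) + logdet + alpha @ alpha)

# Check against the textbook formula on a small example.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
Sigma = A @ A.T + 5 * np.eye(5)
y = rng.standard_normal(5)
sign, logdet = np.linalg.slogdet(Sigma)
direct = -0.5 * (5 * np.log(2 * np.pi) + logdet
                 + y @ np.linalg.solve(Sigma, y))
assert np.isclose(mvn_loglik_chol(y, Sigma), direct)
```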

An alternative approach is based on the spectral representation of the time autoregressive spatial model given in Equations (3.2) and (3.3). Together, these two equations define a dynamic state space model (West and Harrison, 1997) with an infinite dimensional state vector. Discretizing $dZ(t,\omega)$ to a finite grid in frequency space makes Kalman filtering techniques applicable (the full details of this computational procedure will be reported elsewhere).
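The structure of such a filter can be illustrated on a simplified, hypothetical state space model: a small number of independent AR(1) "spectral" coefficients observed through a linear map $C$. All parameter values below are invented for illustration; the paper's actual discretization of $dZ(t,\omega)$ is more elaborate. The point of the check mirrors the comparison made in this section: the Kalman recursion reproduces the likelihood obtained from the full covariance matrix, at a cost linear in $T$ rather than cubic in $Tn$.

```python
import numpy as np

def kalman_loglik(y, phi, q, C, r):
    """Exact Gaussian log-likelihood of y (T x n) under the state space model
       x_t[k] = phi[k] x_{t-1}[k] + N(0, q[k])   (independent AR(1) states)
       y_t    = C x_t + N(0, r I)                (observations at n stations)
    started from the stationary state distribution."""
    T, n = y.shape
    m = len(phi)
    x = np.zeros(m)
    P = np.diag(q / (1 - phi**2))          # stationary initial covariance
    Phi = np.diag(phi)
    ll = 0.0
    for t in range(T):
        x = phi * x                        # predict
        P = Phi @ P @ Phi.T + np.diag(q)
        S = C @ P @ C.T + r * np.eye(n)    # innovation covariance
        e = y[t] - C @ x                   # innovation
        ll += -0.5 * (n * np.log(2 * np.pi)
                      + np.linalg.slogdet(S)[1] + e @ np.linalg.solve(S, e))
        K = P @ C.T @ np.linalg.inv(S)     # Kalman gain, then update
        x = x + K @ e
        P = P - K @ S @ K.T
    return ll

# Direct check: build the full (Tn x Tn) covariance and compare likelihoods.
rng = np.random.default_rng(2)
m, n, T = 4, 3, 6
phi = rng.uniform(0.3, 0.9, m)
q = rng.uniform(0.5, 1.5, m)
C = rng.standard_normal((n, m))
r = 0.3
gamma0 = q / (1 - phi**2)                  # stationary AR(1) variances
y = rng.standard_normal((T, n))            # any data: the likelihoods must agree
Sig = np.zeros((T * n, T * n))
for t in range(T):
    for s in range(T):
        G = np.diag(gamma0 * phi**abs(t - s))
        Sig[t*n:(t+1)*n, s*n:(s+1)*n] = C @ G @ C.T + (r * np.eye(n) if t == s else 0)
yv = y.ravel()
sign, logdet = np.linalg.slogdet(Sig)
direct = -0.5 * (T * n * np.log(2 * np.pi) + logdet + yv @ np.linalg.solve(Sig, yv))
assert np.isclose(kalman_loglik(y, phi, q, C, r), direct)
```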

The two computational approaches are compared on the given dataset. An extension of the covariance structure of Example 3 was used, incorporating a variance term and a scaling in space, i.e.,

$$c_e(s,u) = \begin{cases} \sigma_0^2 + \sigma^2 c(0,0) & \text{if } s = u = 0 \\ \sigma^2 c(s, \alpha u) & \text{otherwise} \end{cases}$$

with $\sigma_0^2$, $\sigma^2$ and $\alpha$ all positive. Likelihoods were calculated using parameters obtained by maximum likelihood based on data from the first 100 time points. The maximum likelihood values were $\psi = 0.99$, $\alpha = 0.46$, $c_0 = 9.79$, $\sigma_0^2 = 99.48$ and $\sigma^2 = 432.54$.

In the left panel of Figure 6, the likelihood is calculated based on data from the first $t$ days for $t = 1, \ldots, 100$. The scale on the x-axis is the number of observations on which the likelihood is based (because of some missing data, this is a bit smaller than $t \times n$). The scale on the y-axis is the log-likelihood divided by the number of observations; this division is made in order to put the likelihoods on approximately the same scale. The solid curve is the log-likelihood based on direct calculation of the complete covariance matrix, while the dashed and dotted lines are based on the approximate state space model combined with Kalman filtering (using a 12 × 6 and an 8 × 4 grid, respectively) for approximating $dZ(t,\omega)$. We see that the approximations perform very well (it is actually not possible to see any difference between the lines). We also see that errors do

Figure 5  Left panel: empirical estimates of correlation against distance. Each point corresponds to the correlation between observations from two measurement stations. Right panel: nonparametric estimates of $c(s,\|u\|)/c(0,\|u\|)$ for $s = 1, 2, 3$


not accumulate in time, which can be shown theoretically. This is in contrast to an approach where the process $\{z(t,x)\}$ itself is discretized in space.

In the right-hand panel, the computer time used for each likelihood calculation is shown. The linear increase for the Kalman-filtering-based likelihood calculations is clearly seen, while a much more dramatic increase in computer time is seen for the covariance-matrix-based method.

Figure 7 gives profile likelihoods based on 100 time points with varying $\alpha$, while all other parameters are fixed at their maximum likelihood estimates. The solid line shows the likelihood function calculated from the full covariance function, while the other lines show the results based on Kalman filtering using a 12 × 6 grid (dashed) and an 8 × 4 grid (dotted). With 72 grid points, the two approaches are indistinguishable, but even with 32 grid points the approximation is very good.

In the case of no missing data, the $N \times N$ covariance matrix of the complete dataset will have a block Toeplitz form. Such structure can be utilized to construct algorithms requiring $N^2$ (or even $N(\log N)^2$) operations (Lin, 2001), compared to $N^3$ operations in the general case. Writing $N = Tn$, where $T$ is the number of time points and $n$ is the number of spatial locations, the Kalman filter approach requires of the order of $Tn^3$

Figure 6  Left panel: likelihoods as functions of the number of observations. Right panel: computer times as functions of the number of observations. Solid line: calculations based on the full covariance matrix. Dashed line: Kalman filter with a 12 × 6 grid in the spectral domain. Dotted line: Kalman filter with an 8 × 4 grid

Figure 7  Profile likelihood as a function of $\alpha$. All other parameters fixed at their maximum likelihood values. Solid line: calculations based on the full covariance matrix. Dashed line: Kalman filter with a 12 × 6 grid in the spectral domain. Dotted line: Kalman filter with an 8 × 4 grid


operations, which will still be beneficial when $T$ is large. Further, the Kalman filter approach can easily handle missing data, which is not the case for the approach based on the full covariance matrix.

The properties of the two modelling strategies for numerical calculation of likelihoods can be transferred to prediction and conditional simulation, because Cholesky decomposition of the covariance matrix and running the Kalman filter will also be the main computational tasks in these cases. In particular, conditional simulation based on the spectral state space representation can be performed by the simulation smoother (De Jong and Shephard, 1995).

7 Discussion

In this paper we have discussed two alternative formulations of a spatio-temporal model. We have shown how in some circumstances they can lead to the same process, and how in this case one form can be translated into the other. We have considered the computational and interpretational advantages and disadvantages of both representations.

One aim has been to make it easier to choose a model that can represent a plausible physical process. Often a parametric form for a covariance function is chosen without regard to any interpretation. In some circumstances informative empirical covariance functions can be calculated and used to inform this choice. In most applications, however, either space or time is sparsely sampled, making the empirical covariance function much less informative. In this situation it may be very difficult to justify a particular parametric form, but because of the physical interpretation of the time autoregressive spatial model, specification of the convolution function $h(\cdot)$ and the spatial covariance function $r(\cdot)$ may be easier. Equally, the fact that some models, particularly but not exclusively separable ones, cannot be interpreted as time autoregressive spatial models may itself help modelling.

Our discussion has concentrated on stationary processes. Recently there has been interest in the problem of spatial heterogeneity. Two main approaches have been applied. In Sampson and Guttorp (1992), heterogeneity was modelled through deformation of the spatial domain, using stationary covariance structures on the deformed space. Higdon et al. (1998) model spatial processes through local convolutions of stationary processes, heterogeneity being incorporated by allowing the convolution kernels to vary smoothly over space (a parametric version of this approach was used in Hirst et al., 2001). In each case, the spatial heterogeneity is modelled through transformation of stationary spatial processes. The stationary processes discussed in this paper can be used as building blocks for modelling spatial heterogeneity using either of the two representations, with temporal dependence being modelled through time autoregressive spatial convolution of the underlying stationary processes.

Acknowledgements

We are grateful to the British Atmospheric Data Center, which provided us with access to the Met Office Land Surface Observation Stations Data. The authors also wish to


thank the anonymous referees for valuable comments on this work. The authors are supported by grant no. 121144/420 of the Norwegian Research Council, by the ESF HSSS programme, and by the EU-TMR programme ERB-FMRX-CT96-0095.

References

Bouleau N, Lepingle D (1994) Numerical methods for stochastic processes. Wiley Series in Probability and Mathematical Statistics. New York: John Wiley & Sons.

Brown PE, Kåresen KF, Roberts GO, Tonellato S (2000) Blur-generated non-separable space–time models. Journal of the Royal Statistical Society, Series B, 62, 847–60.

Carroll RJ, Chen R, George EI, Li TH et al. (1997) Ozone exposure and population density in Harris county, Texas. Journal of the American Statistical Association, 92, 392–415, with discussion.

Cressie N (1991) Statistics for spatial data. New York: Wiley.

Cressie N, Huang H (1999) Classes of nonseparable spatio-temporal stationary covariance functions. Journal of the American Statistical Association, 94, 1330–40.

De Jong P, Shephard N (1995) The simulation smoother for time series models. Biometrika, 82, 339–50.

Diggle PJ, Tawn JA, Moyeed RA (1998) Model-based geostatistics. Journal of the Royal Statistical Society, Series C (Applied Statistics), 47, 299–326.

Ecker MD, Gelfand AE (1997) Bayesian variogram modeling for an isotropic spatial process. Journal of Agricultural, Biological, and Environmental Statistics, 2, 347–69.

Haas TC (1995) Local prediction of a spatio-temporal process with an application to wet sulfate deposition. Journal of the American Statistical Association, 90, 1189–99.

Higdon D, Swall J, Kern J (1998) Non-stationary spatial modeling. In Bernardo JM, Berger JO, Dawid AP, Smith AFM, eds., Bayesian statistics 6. Oxford: Oxford University Press, 761–68.

Hirst D, Storvik G, Syversveen AR (2001) A hierarchical modelling approach to combining environmental data at different scales. Submitted for publication.

Høst G, Omre H, Switzer P (1995) Spatial interpolation errors for monitoring data. Journal of the American Statistical Association, 90, 853–61.

Jones RH, Zhang Y (1997) Models for continuous stationary space–time processes. In Gregoire TG, Brillinger DR, Diggle PJ et al., eds., Modelling longitudinal and spatially correlated data, Lecture Notes in Statistics, Vol. 122. New York: Springer-Verlag, 289–98.

Knorr-Held L, Besag J (1998) Modelling risk from a disease in time and space. Statistics in Medicine, 17, 2045–60.

Lin F (2001) Preconditioners for block Toeplitz systems based on circulant preconditioners. Numerical Algorithms, 26, 365–79.

Matérn B (1986) Spatial variation. Number 36 in Lecture Notes in Statistics. New York: Springer-Verlag.

Oberhettinger F (1973) Fourier transforms of distributions and their inverses. Probability and Mathematical Statistics. New York: Academic Press.

Ripley BD (1981) Spatial statistics. Wiley Series in Probability and Mathematical Statistics. New York: John Wiley & Sons.

Sampson PD, Guttorp P (1992) Nonparametric estimation of nonstationary spatial covariance structure. Journal of the American Statistical Association, 87, 108–19.

Sølna K, Switzer P (1996) Time trend estimation for a geographic region. Journal of the American Statistical Association, 91, 577–89.

Vecchia AV (1985) A general class of models for stationary two-dimensional random processes. Biometrika, 72, 281–91.

West M, Harrison J (1997) Bayesian forecasting and dynamic models, 2nd ed. Springer Series in Statistics. New York: Springer-Verlag.

Wikle CK, Cressie N (1999) A dimension-reduced approach to space–time Kalman filtering. Biometrika, 86, 815–29.

Yaglom A (1987) Correlation theory of stationary and related random functions I. New York: Springer-Verlag.


Appendix A: Proofs of Theorem 1, Theorem 2 and Theorem 3

Proof of Theorem 1
Using Equation (3.1) recursively, we can write

$$z(t,x) = b_t(x) + \tilde{z}(t,x)$$

where

$$b_t(x) = \int_{u_1,\ldots,u_t} z\Big(0,\, x + \sum_{s=1}^t u_s\Big) \prod_{s=1}^t h(u_s)\, du_s$$

while $\{\tilde{z}(t,x)\}$ is a process defined by

$$\tilde{z}(0,x) = 0 \quad \forall x \qquad (A.1)$$

$$\tilde{z}(t,x) = \int_u h(u)\, \tilde{z}(t-1, x+u)\, du + e(t,x), \quad t > 0 \qquad (A.2)$$

We will show that $b_t(x)$ converges towards zero in probability while $\{\tilde{z}(t,x)\}$ converges towards a unique stationary distribution.

Consider first $b_t(x)$. Now

$$\mathrm{var}[b_1(x)] = \mathrm{var}\left[\int_u h(u)\, z(0,x+u)\, du\right] = \int_u \int_v h(u)h(v)\, \mathrm{cov}[z(0,x+u),\, z(0,x+v)]\, du\, dv \le \int_u \int_v h(u)h(v)\, K\, du\, dv = K H(0)^2$$

Repeating this argument recursively, we obtain

$$\mathrm{var}[b_t(x)] \le K H(0)^{2t}$$

Using $|H(0)| < 1$, we see that the variance converges towards zero, showing that the process itself converges towards zero in probability.

Consider next the process $\{\tilde{z}(t,x)\}$. Because $\{e(t,x)\}$ is stationary in space, by the spectral representation theorem (Bouleau and Lepingle, 1994, Theorem B.2.4) there exist processes $\{E(t,\omega)\}$ such that

$$e(t,x) = \int_\omega e^{i\omega' x}\, dE(t,\omega)$$


where $E[dE(t,\omega)] = 0$ and the processes are orthogonal, i.e.,

$$\mathrm{cov}[dE(t,\omega_1),\, dE(t,\omega_2)] = 0$$

for $\omega_1 \ne \omega_2$. Now $\tilde{z}(1,x) = e(1,x)$, giving $d\tilde{Z}(1,\omega) = dE(1,\omega)$. Using Equation (A.2), we obtain

$$\begin{aligned}
\tilde{z}(2,x) &= \int_v h(v) \int_\omega e^{i\omega'(x+v)}\, d\tilde{Z}(1,\omega)\, dv + \int_\omega e^{i\omega' x}\, dE(2,\omega) \\
&= \int_\omega \left[\int_v h(v) e^{i\omega' v}\, dv\right] e^{i\omega' x}\, d\tilde{Z}(1,\omega) + \int_\omega e^{i\omega' x}\, dE(2,\omega) \\
&= \int_\omega H(\omega)\, e^{i\omega' x}\, d\tilde{Z}(1,\omega) + \int_\omega e^{i\omega' x}\, dE(2,\omega) \\
&= \int_\omega e^{i\omega' x}\, [H(\omega)\, d\tilde{Z}(1,\omega) + dE(2,\omega)]
\end{aligned}$$

showing that $\{\tilde{z}(2,x)\}$ is a process with spectral representation

$$\tilde{z}(2,x) = \int_\omega e^{i\omega' x}\, d\tilde{Z}(2,\omega)$$

where $d\tilde{Z}(2,\omega) = H(\omega)\, d\tilde{Z}(1,\omega) + dE(2,\omega)$. The zero-mean and orthogonality properties of $\{d\tilde{Z}(1,\omega)\}$ and $\{dE(2,\omega)\}$ imply that $\{d\tilde{Z}(2,\omega)\}$ also has zero mean and is orthogonal, resulting in $\{\tilde{z}(2,x)\}$ being a process stationary in space (Bouleau and Lepingle, 1994, Proposition B.2.3). By induction we have

$$\tilde{z}(t,x) = \int_\omega e^{i\omega' x}\, d\tilde{Z}(t,\omega) \qquad (A.3)$$

where

$$d\tilde{Z}(t,\omega) = H(\omega)\, d\tilde{Z}(t-1,\omega) + dE(t,\omega) \qquad (A.4)$$

with similar orthogonality properties. The process is therefore stationary in space for all $t \ge 0$.

Equations (A.3) and (A.4) are an alternative representation of the model (A.2). For each $\omega$, (A.4) defines an ordinary AR(1) process, which has a unique stationary distribution provided $|H(\omega)| < 1$. Because $\{\tilde{z}(t,x)\}$ is completely defined by $\{d\tilde{Z}(t,\omega)\}$, a unique stationary distribution for $\{z(t,x)\}$ must exist. Further, this unique stationary distribution must be stationary in space since $\{\tilde{z}(t,x)\}$ is stationary in space for all $t$. □
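The final step can be illustrated numerically: for a fixed frequency $\omega$, (A.4) is a scalar AR(1), and its variance recursion is a contraction whenever $|H(\omega)| < 1$, converging to the same stationary value from any starting point. The values of $H$ and $R = \mathrm{var}\, dE$ below are arbitrary illustrative choices:

```python
import numpy as np

# For fixed w, (A.4) reads Z_t = H Z_{t-1} + E_t with var(E_t) = R and |H| < 1.
# The variance recursion v_t = H^2 v_{t-1} + R then converges to the unique
# stationary value R / (1 - H^2), regardless of the initial variance v_0.
H, R = 0.8, 0.5
for v0 in (0.0, 10.0):
    v = v0
    for _ in range(200):
        v = H**2 * v + R
    assert np.isclose(v, R / (1 - H**2))   # same limit from both starts
```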


Proof of Theorem 2
Note that since $R(\omega)$ is the spectral density of the stationary process $\{e(t,\cdot)\}$, $R(\omega) > 0$ for all $\omega$. Combined with the condition that $|H(\omega)| < 1$, this implies that $G(0,\omega) > 0$ for all $\omega$. Further, since $|H(\omega)| < 1$,

$$\int |G(s,\omega)|\, d\omega \le \int |G(0,\omega)|\, d\omega = \int G(0,\omega)\, d\omega < \infty$$

showing that the Fourier transform of $G(s,\omega)$ exists for all integer $s$. To show that this extends to $s \in \mathbb{R}$, we use the main result in Cressie and Huang (1999), which specifies a sufficient condition for a function $c(s,u)$ with $s \in \mathbb{R}$ to be positive definite. In our case we take $k(\omega) = G(0,\omega)$ and $\rho(s,\omega) = H(-\omega)^{|s|}$. Since $|H(-\omega)| < 1$, $\rho(s,\omega)$ is integrable, implying that it is an autocorrelation function, making the (CH1) criterion satisfied. In fact it is the autocorrelation function of the time-continuous autoregressive model of order one. Also $k(\omega) = G(0,\omega) > 0$. Finally, $\int k(\omega)\, d\omega < \infty$ is directly assumed. This also makes (CH2) satisfied, implying that $c$ is a valid covariance function. □

Proof of Theorem 3
Since $|H(\omega)| < 1$, by Equation (4.3) $|R(\omega)| < G(0,\omega)$. Furthermore, since $\int G(0,\omega)\, d\omega < \infty$, a similar property is valid for $R(\omega)$, which implies the existence of the Fourier transform $r(x)$ of $R(\omega)$.

Now define the process $\{z(t,x)\}$ for integer time points through model (3.1) using the given choices of $h(\cdot)$ and $r(\cdot)$. By Equations (3.6) and (3.7), we easily see that the defined process has covariance function $c(s,u)$.

In order to prove uniqueness, assume there exist some other functions $\tilde{h}(x)$ and $\tilde{r}(x)$ which, through (3.1), define a new process $\tilde{z}$ with the same covariance function $c$. Then $G(s,\omega)$ must also be common to the two processes. Define $\tilde{H}(\omega)$ and $\tilde{R}(\omega)$ to be the inverse Fourier transforms of $\tilde{h}(x)$ and $\tilde{r}(x)$, respectively. By Equation (3.7),

$$G(0,\omega)\, H(-\omega)^{|s|} = G(0,\omega)\, \tilde{H}(-\omega)^{|s|}$$

showing that $\tilde{H}(\omega) = H(\omega)$. Further, by Equation (3.6),

$$\frac{\tilde{R}(\omega)}{1 - \tilde{H}(\omega)\tilde{H}(-\omega)} = \frac{R(\omega)}{1 - H(\omega)H(-\omega)}$$

Hence

$$\tilde{R}(\omega) = \frac{1 - \tilde{H}(\omega)\tilde{H}(-\omega)}{1 - H(\omega)H(-\omega)}\, R(\omega) = R(\omega)$$

completing the proof. □


Appendix B: Proof of Equations (3.4) and (3.5)

We have from Equation (3.1)

$$\begin{aligned}
c(0,u) &= \mathrm{cov}[z(t,x),\, z(t,x+u)] \\
&= \mathrm{cov}\left[\int_{\mathbb{R}^2} h(v)\, z(t-1,x+v)\, dv + e(t,x),\ \int_{\mathbb{R}^2} h(w)\, z(t-1,x+u+w)\, dw + e(t,x+u)\right] \\
&= \int_{\mathbb{R}^2}\int_{\mathbb{R}^2} h(v)h(w)\, \mathrm{cov}[z(t-1,x+v),\, z(t-1,x+u+w)]\, dv\, dw + \mathrm{cov}[e(t,x),\, e(t,x+u)] \\
&= \int_{\mathbb{R}^2}\int_{\mathbb{R}^2} h(v)h(w)\, c(0, u+w-v)\, dv\, dw + r(u)
\end{aligned}$$

where we have used the independence between $\{e(t,x)\}$ and the past. This demonstrates Equation (3.4). Similarly,

$$\begin{aligned}
c(s,u) &= \mathrm{cov}[z(t,x),\, z(t-s,x+u)] \\
&= \int_{\mathbb{R}^2} h(v)\, \mathrm{cov}[z(t-1,x+v),\, z(t-s,x+u)]\, dv + \mathrm{cov}[e(t,x),\, z(t-s,x+u)] \\
&= \int_{\mathbb{R}^2} h(v)\, c(s-1, u-v)\, dv + 0
\end{aligned}$$

demonstrating Equation (3.5).
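Both recursions can be checked numerically by working on a one-dimensional circular grid instead of $\mathbb{R}^2$ (a deliberate simplification: the integrals become circular convolutions that the FFT evaluates exactly). The kernel $h$ and noise covariance $r$ below are arbitrary illustrative choices with $\sum h < 1$, and the stationary spectrum is constructed via (3.6):

```python
import numpy as np

M = 64
x = np.arange(M)
d = np.minimum(x, M - x)                  # circular distance, so h and r are symmetric
h = np.exp(-d**2 / 8.0)
h *= 0.9 / h.sum()                        # sum(h) = 0.9 < 1, hence |H(w)| < 1
r = np.exp(-d**2 / 2.0)                   # noise covariance (positive spectrum)

H = np.fft.fft(h).real                    # symmetric real h -> real transform
R = np.fft.fft(r).real
G0 = R / (1 - H**2)                       # stationary spectrum, as in (3.6)

def c(s):
    """Stationary covariance c(s, u) on the grid, via G(s, w) = H(w)^|s| G(0, w)."""
    return np.fft.ifft(H**abs(s) * G0).real

# (3.5): c(s, u) = sum_v h(v) c(s-1, u-v)   (circular convolution)
conv = np.fft.ifft(np.fft.fft(h) * np.fft.fft(c(0))).real
assert np.allclose(c(1), conv)

# (3.4): c(0, u) = sum_{v,w} h(v) h(w) c(0, u+w-v) + r(u);
# the double sum is the autocorrelation of h convolved with c(0, .).
hh = np.fft.ifft(np.fft.fft(h) * np.conj(np.fft.fft(h))).real
rhs = np.fft.ifft(np.fft.fft(hh) * np.fft.fft(c(0))).real + r
assert np.allclose(c(0), rhs)
```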
