
Journal of Hydrology 183 (1996) 23-35

Conditional simulation of geologically averaged block permeabilities

A.G. Journel

Petroleum Engineering Department, Green Earth Sciences Building, School of Earth Sciences, Stanford University, Stanford, CA 94305-2220, USA

Received 23 June 1994; revision accepted 15 November 1994

Abstract

Currently available hardware and software for flow simulation can handle up to hundreds of thousands of blocks, or more comfortably tens of thousands of blocks. This limits the discretization of the reservoir model to an extremely coarse grid, say 200 × 200 × 25 for 10^6 blocks. Such a coarse grid cannot represent the structural and petrophysical variability at the resolution provided to geologists by well logs and outcrops. Thus there is no alternative to averaging the impact of all small-scale, within-block, heterogeneities into block 'pseudos' or average values. The flow simulator will account for the geological description only through those pseudos; hence detailed modelling of geological heterogeneity should not go beyond the information that block pseudos can carry, at least for flow simulation purposes. It is suggested that the present drive in outcrop sampling be clearly redirected at evaluating 'geopseudos', i.e. at evaluating how small-scale variability (both structural and petrophysical) of typical depositional units averages out into large blocks' effective transmissivities and relative permeabilities. Outcrop data would allow the building of generic, high-resolution, numerical models of the geo-variability within a typical depositional unit: this is where geology intervenes. Then, this numerical model would be input into a generic flow simulator, single or multiphase, yielding generic block averages for blocks of various sizes and geometries: this is where the reservoir engineer intervenes. Next, the spatial statistics of these block averages (histograms, variograms, ...) would be inferred: this is where the geostatistician intervenes. Last comes the problem of filling in the actual reservoir volume with simulated block averages specific to each depositional unit. Because each reservoir is unique, random drawing of block average values from the previously inferred generic distributions would not be enough. The placement of block average values in the specific reservoir volume must be made conditional on local data, whether well log, seismic or production-derived. This non-trivial task of 'conditional simulation' of block averages is the challenge of both the reservoir geologist and the geostatistician. This paper proposes an avenue of approach that draws from the pioneering works of Steve Begg at BP-Alaska (1992, 1994) and Jaime Gomez-Hernandez at the Universidad de Valencia (1990, 1991).



1. Upscaling and the support effect

The statistical and visual characteristics of the distribution in space of a variable depend on the volume (size and geometry) on which it is defined. In geostatistical jargon, this volume is called the 'support' and its influence on the spatial distribution of the variable is called the 'support effect' (Journel and Huijbregts, 1978, p. 77).

Let v(u) be a support volume v centred at location u, and z_v(u) the variable of interest defined on that volume. z_v(u) represents some averaging of properties defined on supports smaller than v. In the simplest case, that averaging involves only the same attribute z at locations within v(u); for example, the average porosity z_v(u) can be seen as the linear average of the L values z_c(u'_i) defined on smaller supports c(u'_i) ⊂ v(u):

z_v(u) = \frac{1}{L} \sum_{i=1}^{L} z_c(u'_i), \quad \text{with } v(u) = \bigcup_{i=1}^{L} c(u'_i)        (1)

Fig. 1(a) gives the distribution in space of 36 elementary values z_c(u'_i), i = 1, ..., 36, with mean m = 7.8 and variance σ² = 14.5. Figs. 1(b)-1(e) give the spatial distributions of linear averages of type (1) over increasingly larger supports, the v(u)s.

[Fig. 1 panel statistics recovered from the figure: the mean m = 7.8 is common to panels (b)-(e); the variance decreases with increasing support size (σ² ≈ 13.1 in (b), 4.9 in (c), 0.04 in (e)) while the geometric mean increases (gm ≈ 6.60 in (b), 7.53 in (c), 7.59 in (d), 7.83 in (e)).]

Fig. 1. Statistics and visual trends of an image are scale dependent. (Note the variability of the ‘ten’-contour line and the geometric average as the support size increases.) gm, Geometric mean; m, mean.


It should be noted how the variance and visual trends of the averaged values z_v(u) change with the v-support size and also its geometry (compare Figs. 1(b) and 1(c)). Because the averaging is linear, the mean (7.8) remains constant; however, were the averaging non-linear, that mean would also change with the support size and geometry (see the gm values in Fig. 1 for geometric means).

The example of Fig. 1 intends to show that the statistics and visual representation of cell- or block-averaged values are different from those of data defined on a different, and usually smaller, support. The 'image' of the reservoir perceived by the flow simulator run on large blocks need not be the same as that 'seen' by the geologist from much smaller-scale data. In the practice of reservoir modelling the problem is compounded by several distressing facts: (1) data are extremely sparse, as opposed to being exhaustive as in the example of Fig. 1; (2) the ratio of support sizes between the building block of reservoir modelling and core data is huge, around 10^12 instead of two or nine as in Figs. 1(b) and 1(e); (3) the averaging process is not congenially linear as in Fig. 1; worse, it is not even known analytically, nor is it limited to points within the averaging volume v(u) (see the later discussion on boundary conditions). We will make the case that a numerical modelling of block average values z_v(u), particularly those related to hydrodynamic properties such as permeability, calls for much more than the ordinarily available subsurface well data.
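As a concrete companion to Fig. 1, the following minimal sketch (the 6 × 6 quasi-point values are arbitrary lognormal draws, not the Fig. 1 data) computes linear averages of type (1) over increasingly large supports and prints their mean, variance and geometric mean.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.lognormal(mean=2.0, sigma=0.5, size=(6, 6))  # 36 arbitrary quasi-point values

def block_average(field, by, bx):
    """Linear (arithmetic) average of type (1) over non-overlapping by x bx blocks."""
    ny, nx = field.shape
    return field.reshape(ny // by, by, nx // bx, bx).mean(axis=(1, 3))

for by, bx in [(1, 1), (1, 2), (2, 2), (3, 3), (6, 6)]:
    zb = block_average(z, by, bx)
    gm = np.exp(np.mean(np.log(zb)))          # geometric mean of the block values
    print(f"support {by}x{bx}: mean={zb.mean():.2f} "
          f"var={zb.var():.2f} geometric mean={gm:.2f}")
# The arithmetic mean is invariant under linear averaging; the variance shrinks
# and the geometric mean drifts upward as the support grows, as in Fig. 1.
```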

2. Averaging hydrodynamic properties

The linear averaging expression (1) represents an extremely congenial case, applicable to scalar variables such as absolute porosity and saturations, whose averaging is solely controlled by the law of conservation of mass. This is not the case for effective porosity or saturations associated with notions of connectivity, capillary retention and recovery (Corbett et al., 1992). Most importantly, this is not the case for permeability, whether seen as a scalar or as a tensor, and whether related to single-phase or multiphase flow.

Let us consider, for example, the case of absolute permeability (single phase) measured, and hence defined, on non-directional core support. We denote this variable k(u'), omitting the reference to the core support c(u') assimilated to a quasi-point. When this non-directional permeability k(u') is averaged over a medium with anisotropic heterogeneities, or over a volume v(u) that is not a perfect sphere, it becomes a symmetric tensor:

K_v(u) = \begin{bmatrix} K_{v,x}(u) & K_{v,xy}(u) & K_{v,xz}(u) \\ K_{v,xy}(u) & K_{v,y}(u) & K_{v,yz}(u) \\ K_{v,xz}(u) & K_{v,yz}(u) & K_{v,z}(u) \end{bmatrix}        (2)

with K_{v,x}(u), K_{v,y}(u), K_{v,z}(u) being the diagonal components of that tensor in the three coordinate directions x, y, z. It should be noted that u = (x, y, z) is the coordinates' vector.


Each of the six independent terms of the tensor (2) is a complex non-linear averaging of the elementary quasi-point permeability values k(u'), u' ∈ v(u); worse, that average depends on the usually unknown boundary conditions (pressure) that prevail at the surface borders of the volume v(u). That is, the permeability tensor K_v(u) is not intrinsic to the volume v(u); it depends also, and sometimes critically, on the pressure field, itself dependent on permeability values k(u') outside v(u): u' ∉ v(u).

In practice, full tensors such as (2) are rarely considered in flow simulations, for inference reasons (six terms!) but also because of software limitations. Assuming that the rectangular block coordinate directions are aligned with the major tensor directions, one can approximate the full tensor (2) by a diagonal tensor with all cross-terms set to zero:

K_v(u) = \begin{bmatrix} K_{v,x}(u) & 0 & 0 \\ 0 & K_{v,y}(u) & 0 \\ 0 & 0 & K_{v,z}(u) \end{bmatrix}        (3)

It still remains to evaluate three diagonal terms, generally unequal even though the measured quasi-point permeability k(u') is a non-directional scalar. Indeed, the rectangular block v(u) is usually not a cube; it is made longer in the plane and direction of principal flow. Even if v(u) were a cube, any anisotropy of the heterogeneities of the petrophysical media embedding v(u) would generate (after averaging) anisotropic, i.e. different, diagonal values: K_{v,x}(u) ≠ K_{v,y}(u) ≠ K_{v,z}(u).

Let us consider now a particular diagonal component, say K_{v,x}(u) in the direction of principal flow (see Fig. 2). The tensor (2) or (3), and hence all of its components including K_{v,x}(u), depends on the specific boundary conditions assumed to prevail on the outer faces of the rectangular volume v(u). Typically, for the calculation of K_{v,x}(u), the two faces (x, y) and the two faces (x, z) parallel to the direction x are considered no-flow boundaries; for example, flow in the direction z is allowed only within the block v(u) but not across the two (x, y) faces. Notwithstanding all these approximations, the directional average value K_{v,x}(u) remains a complex non-linear function φ of all elementary scalar values k(u') within v(u):

K_{v,x}(u) = \varphi[k(u'),\ u' \in v(u)]        (4)

Except for trivial oversimplified cases, the complex function φ is not known analytically. Typically, a numerical flow simulator would be applied to a grid of values k(u') discretizing the volume v(u) under the boundary conditions of Fig. 2, yielding a numerical approximation for K_{v,x}(u) (Aziz and Settari, 1979). However, such a numerical approach would have to be repeated for each different block v(u). Also, each block v(u) must be discretized by a grid fine enough for each nodal value k(u') to be representative of its cell of influence; indeed, the flow equations assimilate each value k(u') to the corresponding cell average permeability or, more precisely, its face transmissivity. Rarely are subsurface data dense enough to allow such a high-resolution description of the heterogeneities specific to any one particular block v(u).
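To illustrate the kind of numerical averaging φ of relation (4), the sketch below (a toy 2D example; the lognormal cell permeabilities, grid dimensions and finite-difference discretization are assumptions for illustration, not the author's simulator) computes an effective K_{v,x} for one block under boundary conditions of the type of Fig. 2: fixed pressures on the two faces orthogonal to x, no flow across the faces parallel to x.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def effective_kx(k, p1=1.0, p2=0.0):
    """Effective x-permeability of a 2D block of unit cells with permeabilities k[ny, nx],
    pressures p1/p2 imposed on the left/right faces and no-flow top/bottom faces."""
    ny, nx = k.shape
    idx = np.arange(ny * nx).reshape(ny, nx)
    rows, cols, vals = [], [], []
    b = np.zeros(ny * nx)

    def add(i, j, t):                       # accumulate coefficient A[i, j] += t
        rows.append(i); cols.append(j); vals.append(t)

    hmean = lambda a, c: 2.0 * a * c / (a + c)   # harmonic inter-cell transmissibility
    for iy in range(ny):
        for ix in range(nx):
            m = idx[iy, ix]
            if ix + 1 < nx:                 # internal face in x
                t, n = hmean(k[iy, ix], k[iy, ix + 1]), idx[iy, ix + 1]
                add(m, m, t); add(m, n, -t); add(n, n, t); add(n, m, -t)
            if iy + 1 < ny:                 # internal face in y
                t, n = hmean(k[iy, ix], k[iy + 1, ix]), idx[iy + 1, ix]
                add(m, m, t); add(m, n, -t); add(n, n, t); add(n, m, -t)
            # Dirichlet faces (half-cell transmissibility); top/bottom are no-flow
            if ix == 0:
                add(m, m, 2.0 * k[iy, ix]); b[m] += 2.0 * k[iy, ix] * p1
            if ix == nx - 1:
                add(m, m, 2.0 * k[iy, ix]); b[m] += 2.0 * k[iy, ix] * p2

    A = sp.csr_matrix((vals, (rows, cols)), shape=(ny * nx, ny * nx))
    p = spla.spsolve(A, b).reshape(ny, nx)
    q_in = np.sum(2.0 * k[:, 0] * (p1 - p[:, 0]))   # total flux entering at x = 0
    return q_in * nx / (ny * (p1 - p2))             # Darcy: K = q L / (A dp)

# Example: a lognormal cell field standing in for the high-resolution model of one block v(u).
rng = np.random.default_rng(1)
k_cells = rng.lognormal(mean=0.0, sigma=1.0, size=(20, 40))
print("arithmetic mean:", k_cells.mean())
print("geometric mean :", np.exp(np.log(k_cells).mean()))
print("effective K_vx :", effective_kx(k_cells))
```

The effective value falls between the harmonic and arithmetic bounds and, for this kind of isotropic lognormal field, lies close to the geometric mean, which is one motivation for the power averages discussed in Section 5.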

It thus appears that, if the averaging process is non-linear, statistics of the spatial distribution of block averages, such as K_{v,x}(u), cannot be inferred directly from smaller-support data. Yet the final reservoir model must be built from such blocks.


[Fig. 2 sketch: a rectangular block v(u) of extent Δx in the x direction, with pressures p1 and p2 imposed on the two faces orthogonal to x and no-flow (hatched) boundaries on the faces parallel to x; the resulting steady flow rate defines K_{v,x}(u).]

Fig. 2. Calculation of directional effective permeability.

It is suggested that the block-support statistics be synthesized from numerical averaging of generic images computer-built at the resolution of the quasi-point support of well log or core plug data. Undoubtedly, there exist heterogeneities at scales smaller than the core plug; however, and for simplicity, by quasi-point we refer to the smallest support of information widely available.

3. Generic distribution of geoaverages

(Rather than the term 'geopseudo', which relates to multiphase relative permeability, we will use the term 'geoaverage' for 'geologically averaged single-phase block permeability'.)

As the reservoir model is to be defined on the support v of flow simulation blocks, one would wish to have subsurface data, say K_v(u), defined on that same support v. As this is never the case, the next best would be a generic spatial distribution of K_v(u) values in a depositional environment similar to that of the formation being modelled in the actual reservoir. Statistics such as the histogram and variogram of the generic K_v(u) values would be extracted, then used (after proper calibration, which includes correction to account for the reduction of pore size and permeability owing to packing at depth) for the task of generating reservoir-specific block average values.

Direct measurements of block average values on outcrop formations are not practical, at least with block v the size of typical flow simulation blocks or modelling cells (100 m × 100 m × 1 m). Hence there is no alternative but to obtain the generic block values from numerical averaging using flow simulation. However, as these values need only be generic (although specific to a particular depositional type or facies association), the elementary quasi-point support values k(u') can be simulated using data from outcrops. Dense outcrop data can be afforded, as opposed to similarly dense subsurface data.

Let us consider then an outcrop A of a given formation (say, a fluvial channel) deemed representative of the same formation in the subsurface reservoir. Dense minipermeameter sampling of outcrop A provides a network of quasi-point data values {k(u'_α), u'_α ∈ A}, in addition to a detailed geometrical description of bedding and other depositional heterogeneities within A. The extent of the sampling area A should span the dimensions of an assemblage of several building blocks v, say five times the size of v in each principal direction, to allow inference not only of the block average values K_v(u) but also of their spatial correlation.


Indeed, two contiguous block averages K_{v,x}(u) and K_{v,x}(u') in the same formation are probably not independent of one another, and that dependence is likely to be direction (x)-specific. If the building block size is 100 m × 100 m × 1 m, the previous factor of five would call for sampling and modelling of an outcrop volume of 500 m × 500 m × 5 m, by no means an easy or inexpensive task.

No matter how dense the sample data set {k(u'_α), u'_α ∈ A}, it will always be far from exhaustive. Some interpolation will be needed to complete it into a gridded data set {k(u'), u' ∈ A}, which is still not exhaustive, but is deemed dense enough to represent the exhaustive distribution of k(u') within A. That gridded data set is then input into a flow simulator that will generate the block average values {K_v(u), u ∈ A}. If the outcrop volume is 500 m × 500 m × 5 m, a discretization at decimetre cell size would call for an interpolated grid of [5 × 10^3]^2 × 50 = 125 × 10^7 nodes. Such a formidable grid is still beyond the capacity of present-day hardware and software for flow simulation.

There are two ways to cut down such a number (see the node counts sketched after this list):

(1) Accept a coarser discretization, say at the metre scale, resulting in a grid with (only!) 125 × 10^4 nodes, although then centimetre- and decimetre-scale geological heterogeneities would not be accounted for.

(2) Limit the outcrop volume to the size of one single building block v, say 100 m × 100 m × 1 m. With the decimetre discretization the resulting grid would still include 10^7 nodes. With a metre-scale discretization the interpolated grid would include a reasonable 10^4 nodes. However, then the spatial correlation of contiguous block average values could not be inferred. Also, as one could not rely on a single generic block value, that exercise would have to be repeated, say ten times on ten independent outcrop volumes. Last, the whole task would have to be repeated for each different facies association or depositional type constituting the large-scale flow architecture of the reservoir. An average of 3-5 such facies associations per reservoir is to be expected (Begg et al., 1994). These numbers indicate that detail-thirsty, outcrop-prone, field geologists may not have evaluated fully the extent (and cost) of outcrop sampling necessary to provide statistically significant generic information about the process of block averaging.

3.1. Remark

As minipermeameter measurements are somewhat expensive and, to some, not representative of subsurface permeability values, the description of the outcrop A may be limited to the geometry of its component facies. That geometric description can be fully deterministic (difficult in 3D) or partially stochastic: the outcrop deterministic description is completed using Boolean, object-oriented, imaging algorithms (Hirst et al., 1993). Once the 3D geometric image(s) of the facies distributions within A is obtained, permeability values specific to each facies can be drawn from the corresponding subsurface sample distribution, for example, that of core plug measurements.


In other words, the high-resolution modelling of a permeability distribution over volume A draws from the geometric data observed in the surface outcrop, and from the permeability distributions (facies-specific) obtained from subsurface data. Such an approach allows saving on minipermeameter measurements but fails to reproduce the spatial correlation of permeability values within any specific facies. Subsurface data are rarely sufficient to evaluate the necessary covariance or variogram.

4. Geoaverage statistics and simulation

In the following, it is assumed that the generation of outcrop-based geoaverages, say {K_{v,x}(u), u ∈ A} in the direction x of maximum continuity, has allowed inference of block-support statistics such as:

(1) the cumulative distribution function (cdf), or cumulative histogram:

F_{v,x}(k) = \mathrm{Prob}\{K_{v,x}(u) \le k\}        (5)

(2) the covariance

C_{v,x}(h) = \mathrm{Cov}[K_{v,x}(u), K_{v,x}(u + h)], \quad \forall h        (6)

(3) the covariance of any transform of K_{v,x}(u), such as ln K_{v,x}(u) or the uniform transform Y_{v,x}(u):

C_{Y,x}(h) = \mathrm{Cov}[Y_{v,x}(u), Y_{v,x}(u + h)], \quad \forall h        (7)

where Y_{v,x}(u) = F_{v,x}[K_{v,x}(u)] ∈ [0, 1] is the uniform transform of the block effective value K_{v,x}(u). The usefulness of this uniform transform will be appreciated below.

All these statistics are specific to each of the depositional types or facies associations retained for modelling the large-scale architecture (geometry) of the actual reservoir. Pixel-based algorithms allow the generation of spatial distributions of block averages consistent with the statistics (5), (6) or (5), (7). It should be recalled that a pixel or voxel need not be a point support; it can relate to the block support of geoaverages.
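Before any such simulation, statistics (5)-(7) must first be estimated from the generic geoaverages. The following sketch is illustrative only: the gridded geoaverage values are placeholder lognormal draws standing in for flow-simulated {K_{v,x}(u), u ∈ A}, and the covariance estimator is the simplest experimental one.

```python
import numpy as np
from scipy.stats import rankdata

# Placeholder generic geoaverages on a 2D grid of blocks.
rng = np.random.default_rng(2)
K = rng.lognormal(mean=0.0, sigma=0.7, size=(25, 25))

# (5) empirical cdf F_vx(k) = Prob{K_vx <= k}
k_sorted = np.sort(K.ravel())
F = lambda k: np.searchsorted(k_sorted, k, side="right") / k_sorted.size
print("F(median) =", F(np.median(K)))        # close to 0.5

# uniform (rank) transform Y_vx(u) = F_vx[K_vx(u)] in [0, 1]
Y = rankdata(K.ravel()).reshape(K.shape) / K.size

def cov_x(field, h):
    """(6)/(7): experimental covariance for a lag of h blocks along x."""
    a, b = field[:, :-h].ravel(), field[:, h:].ravel()
    return np.mean((a - field.mean()) * (b - field.mean()))

for h in (1, 2, 5):
    print(h, cov_x(K, h), cov_x(Y, h))       # C_vx(h) and C_Yx(h)
```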

At the block scale the criterion of visual realism no longer applies, if only because geopseudos cannot be seen from outcrop. Visual realism would call for mapping the full tensor of effective permeability (2) or (3), instead of any single component, to appreciate the geometry of flow patterns. However, and because the simulation of block averages is reservoir specific, it should be made conditional to the actual subsurface data.

5. Conditioning the simulation of block averages

A pixel-based non-conditional simulation of block averages reproducing the statistics (5) and (6) is trivial. The difficulty resides again in the support problem. These statistics relate to the geoaverage support v, which is vastly larger than the quasi-point support of the subsurface data.


Any particular block volume v(u) intersected by one or more wells would include many quasi-point data {k(u'_α), u'_α ∈ v(u)}. The question is how to condition the block simulated value K_{v,x}(u) to such an internal data set.

Because of the large difference in support size, one can argue that the local information supplied by the internal data set is only one of ranking for the block effective values. A block v(u_1) with internal data on average higher than the internal data of block v(u_2) should have a higher pseudo, i.e. K_{v,x}(u_1) > K_{v,x}(u_2). It is suggested that some power average k_w(u) of the internal data be retained to rank the block averages, provided it is computed on sufficient data. The w-power average is defined as

k_w(u) = \left[ \frac{1}{n(u)} \sum_{\alpha=1}^{n(u)} [k(u'_\alpha)]^{w} \right]^{1/w}        (8)

where n(u) is the number of quasi-point data values k(u'_α) available within block v(u), and w is an averaging parameter to be calibrated, e.g. from prior small-scale flow simulations (Deutsch, 1989). The w value may be direction dependent in anisotropic media, with typically w = +1 (arithmetic average) in a direction parallel to lamination (at the block scale) and w negative in a direction orthogonal to it. In isotropic media the geometric average (w = 0) should be considered. If n(u) is below a certain limit, or if these data originate from fewer than a specified number of wells, these data may be ignored, which amounts to ignoring the rank conditioning provided by these too few data.
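A minimal sketch of the w-power average of relation (8), with the w → 0 limit taken as the geometric average (the data values and w values are arbitrary):

```python
import numpy as np

def power_average(k, w):
    """w-power average of the quasi-point data internal to a block, relation (8):
    w = 1 arithmetic, w -> 0 geometric, w = -1 harmonic."""
    k = np.asarray(k, dtype=float)
    if abs(w) < 1e-12:
        return np.exp(np.mean(np.log(k)))     # geometric average (limit case)
    return np.mean(k ** w) ** (1.0 / w)

data = [120.0, 45.0, 3.0, 800.0, 60.0]        # arbitrary internal core data (mD)
for w in (1.0, 0.5, 0.0, -1.0):
    print(w, power_average(data, w))
```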

The idea is to condition the simulation of the block pseudos {K_{v,x}(u), u ∈ depositional type ⊂ A} to the spatial ranks of the data averages k_w(u). The consequence of such rank conditioning is that the spatial trends of the simulated block pseudos will mimic those of the point-support data, without having to identify block simulated values to point data values. An algorithm for such rank-conditioned simulation is proposed next.

5.1. An algorithm for conditioning to ranks

Just as in Gaussian simulations, where normal score transforms are simulated to capitalize on the properties of the Gaussian model, the idea here is to simulate rank transforms of the block average values to capitalize on the possibility of conditioning on rank data; then the simulated rank values are back-transformed using the original variable cdf. The simulation paradigm proposed is in all points similar to that of sequential Gaussian simulation as implemented in program sgsim of GSLIB (Deutsch and Journel, 1992, p. 164).

The simulation of block pseudos conditioned to rank data proceeds as follows:

(1) Generate from generic geoaverages the block effective values' statistics (5)-(7). It should be recalled that these statistics are specific to a particular facies association, a particular block size and geometry, and a particular direction (x).

(2) Average the data internal to each block, for example, perform the power average (8). Not all blocks of the flow simulation grid will be sampled. Let N' be the number of blocks sampled with such an internal data average: k_w(u_j), j = 1, ..., N'.


Rank these data averages from the smallest (Rank 1) to the largest (Rank N'). In the improbable case of ties, break them using, e.g., the arithmetic average of the same internal data. Let r(u_j), j = 1, ..., N', be these rank values. The corresponding uniform transform of the data average k_w(u_j) is y(u_j) = (1/N') r(u_j) ∈ [0, 1] (for a brief reminder of the theory of rank and uniform transforms, see the Appendix).

(3) Perform a stochastic simulation of the uniform transforms Y_{v,x}(u_l), l = 1, ..., N, of the N block pseudo values, conditional to the previous N' uniform data y(u_j), j = 1, ..., N' ≤ N. The marginal distribution is uniform on [0, 1] and the covariance is C_{Y,x}(h) as defined by relation (7) and as inferred in Step (1). Such conditional simulation can be performed using a sequential simulation algorithm (Deutsch and Journel, 1992). Let {y^{(s)}_{v,x}(u_l), l = 1, ..., N} be the sth realization of that conditional simulation; it honors the generic covariance model C_{Y,x}(h) and the uniform (rank) data in that, for blocks that have internal data,

y^{(s)}_{v,x}(u_j) = y(u_j) = \frac{1}{N'} r(u_j), \quad \forall j = 1, \ldots, N'

(4) Finally, perform the back-transform identified by the generic block average cdf F_{v,x}(k), as defined in relation (5) and inferred in Step (1). More precisely, the sth realization of the block average values is {K^{(s)}_{v,x}(u_l), l = 1, ..., N} with

K^{(s)}_{v,x}(u_l) = F^{-1}_{v,x}[y^{(s)}_{v,x}(u_l)], \quad l = 1, \ldots, N        (9)

The cdf of the N simulated values K^{(s)}_{v,x}(u_l), l = 1, ..., N, thus identifies the generic block pseudo cdf F_{v,x}(k), and hence all its moments and quantiles, including mean, variance, median, minimum and maximum. The N simulated block average values K^{(s)}_{v,x}(u_l) also honor the data ranks r(u_j) in that

\mathrm{rank}\left[K^{(s)}_{v,x}(u_j);\ j = 1, \ldots, N'\right] = r(u_j), \quad \forall j = 1, \ldots, N'        (10)

By honoring the ranks of the internal data set (of size N'), the trends of the simulated block averages reproduce those of the subsurface data; a numerical sketch of Steps (2)-(4) follows.
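The bookkeeping of Steps (2)-(4) can be sketched as follows. The conditional simulation of the uniform field in Step (3) is only mocked here (a real implementation would use an sgsim-like sequential simulation honoring C_{Y,x}(h)); the generic geoaverage sample, data averages and sampled-block indices are all placeholders.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(3)

# Step (1): generic block-average cdf, summarised here by a sorted sample of
# outcrop-derived geoaverages (placeholder values).
K_generic = np.sort(rng.lognormal(0.0, 0.8, size=5000))

# Step (2): power/data averages k_w(u_j) for the N' sampled blocks (placeholder
# values), their ranks r(u_j) and uniform transforms y(u_j) = r(u_j) / N'.
k_w = np.array([12.0, 3.5, 40.0, 7.2, 95.0])
r = rankdata(k_w, method="ordinal")          # ranks 1..N'
y_data = r / r.size

# Step (3): conditional simulation of the uniform transforms at all N blocks,
# mocked as uniform noise with the data values plugged in at the sampled blocks
# (block indices are hypothetical).
N = 50
y_sim = rng.uniform(size=N)
sampled_blocks = np.array([4, 11, 23, 30, 42])
y_sim[sampled_blocks] = y_data

# Step (4): back-transform through the generic cdf, relation (9), via the
# empirical quantile function.
K_sim = np.quantile(K_generic, y_sim)

# The simulated values at the sampled blocks honor the data ranks, relation (10).
print(np.argsort(np.argsort(K_sim[sampled_blocks])) + 1, r)
```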

5.2. Implementation aspects

(1) The inference of geoaverage statistics (histograms and variograms) and their conditional simulations are specific to each of the depositional types or facies associations used to model the large-scale architecture of the actual reservoir. Thus the number of such facies associations should be small and limited to facies associations that can be clearly identified from subsurface data. The simulation of block average values for each facies association is done independently from that for another facies association; it is then placed into the previously determined or simulated assemblage of facies associations, e.g. using the 'cookie-cutter' technique (Begg et al., 1992). The 'cookie-cutter' approach may generate sharp discontinuities of block average values at the boundaries between facies associations. Such artefact discontinuities would be somewhat smoothed out by conditioning to rank data across facies associations. It should be noted that the block size, i.e. the support of the block averages being simulated, may vary from one facies association to another.


Some post-processing may be needed to match the faces (transmissivities) of blocks of different sizes across two different facies associations.

(2) The previous discussion was limited to the simulation of one single component, K_{v,x}(u), of the block effective permeability tensor (2) or (3). In practice, a joint conditional simulation of all components would be needed. The algorithm of co-located cosimulation (Deutsch and Journel, 1992, p. 122; Xu et al., 1992; Almeida, 1993) might be considered.

(3) Although the generic (non-conditional) simulation of a full block pseudo tensor of the type (2) using outcrop data and appropriate flow simulation codes is possible, the joint conditional simulation of all nine components of the full tensor would be extremely difficult, for several reasons: for instance, the lack of conditioning data on each of the nine components, even if limited to spatial ranks, and the dependence of the tensor (2), and hence of its statistics, on the boundary conditions considered for its simulation. This is also true for a diagonal tensor of type (3), but to a lesser extent as long as the block faces are made parallel and orthogonal to the local flow direction. This latter hypothesis calls for determining the local flow directions before simulation of the block averages: the 'catch-22' problem of reservoir modelling. Flexible gridding does provide flexibility in such a task, by allowing the block geometry and size to adapt to the local heterogeneity and (hypothesized) flow direction (Ballin et al., 1992).

(4) All previous discussions and both tensors (2) and (3) address only steady-state, absolute (single-phase) permeability.

The problem of evaluating and upscaling relative permeabilities is a difficult one, which is critical to flow modelling, but outside the scope of this paper. A possible approach may consist of extending the concept of geoaverages to phase-specific and saturation-specific effective block permeabilities; the resulting upscaled relative permeability curves would then be used in the course of the flow simulation.

6. Preliminary conclusions

Although introduced in 1990 and applied successfully in at least one reservoir (Begg et al., 1992), the concept of direct simulation of block averages is still too recent to justify a conclusion for this paper. As usual, repeated applications with specific implementations will indicate whether the concept has taken root.

The two major points addressed in this paper are the following:

(1) The ratio between the support volumes of building blocks of reservoir models and poroperm data (core plugs and well logs) is huge, from 10^9 to 10^12. Thus traditional stochastic simulation of poroperm values conditional to well data actually provides only the central quasi-point value of the model building block or geostat cell. Assimilation of that central value to the block or cell average value amounts to ignoring all within-block heterogeneities. That missing-scale problem is pervasive through most of the literature on upscaling and permeability averaging.

(2) It is suggested that the macroscopic impact of these within-block heterogeneities be evaluated through the statistics (histograms, variograms) of block averages.


Such statistics would be modelled using high-resolution geometric and continuous data, which are only available from outcrops. This modelling is specific to each depositional type or facies association retained but, otherwise, need not be made conditional to location-specific subsurface data. Once the statistics of block averages have been inferred, they can be used to simulate directly the block poroperm average values in the reservoir model. This simulation now must be made conditional to the subsurface data, a conditioning that is not value-to-value, because of the difference in support volumes, but a conditioning to spatial ranks: where well data indicate a higher permeability, the corresponding simulated block average values should also be higher.

The paper warns against simulating in the subsurface reservoir model heterogeneity details whose locations cannot be conditioned because of data sparsity. It is proposed that only the macroscopic impact of such small-scale heterogeneities on fluid flow be simulated. However, first, that macroscopic impact must be evaluated on high-resolution models, an exercise best done on the 'surface', where abundant outcrop data allow proper conditioning.

References

Almeida, A.S., 1993. Joint simulation of multiple variables with a Markov-type coregionalization model. Ph.D. Thesis, Stanford University.

Aziz, K. and Settari, A., 1979. Petroleum Reservoir Simulation. Elsevier Applied Sciences, Barking, UK, 476 pp.

Ballin, P., Journel, A.G. and Aziz, K., 1992. Prediction of uncertainty in reservoir performance forecasting. J. Can. Pet. Technol., 31(4).

Begg, S., Gustason, E. and Deacon, M., 1992. Characterization of a fluvial-dominated delta: Zone 1 of the Prudhoe Bay Field. SPE Pap., 24698. Society of Petroleum Engineers, Richardson, TX.

Begg, S., Kay, A., Gustason, E. and Angert, P., 1994. Characterization of a complex fluvial-deltaic reservoir for simulation. SPE Pap., 28398. Society of Petroleum Engineers, Richardson, TX.

Corbett, P., Ringrose, P., Jensen, J. and Sorbie, K., 1992. Laminated clastic reservoirs: the interplay of capillary pressure and sedimentary architecture. SPE Pap., 24699. Society of Petroleum Engineers, Richardson, TX.

Deutsch, C.V., 1989. Calculating effective absolute permeability in sandstone/shale sequences. In: SPEFE, Sept. 1989, Society of Petroleum Engineers, Richardson, TX, pp. 343-348.

Deutsch, C.V. and Journel, A.G., 1992. GSLIB: Geostatistical Software Library and User's Guide. Oxford University Press, New York, 340 pp.

Gomez-Hernandez, J., 1991. A stochastic approach to the simulation of block conductivity fields conditioned upon data measured at a smaller scale. Ph.D. Thesis, Stanford University.

Gomez-Hernandez, J. and Journel, A.G., 1990. Stochastic characterization of grid-block permeabilities from point values to block tensors. In: D. Guerillot and O. Guillon (Editors), Proc. 2nd ECMOR. Technip, Paris, pp. 83-90. Reprinted in SPE-FE, June 1994, pp. 93-99.

Hirst, J., Blackstock, C. and Tyson, S., 1993. Stochastic modelling of fluvial sandstone bodies. In: S. Flint and I. Bryant (Editors), The Geological Modelling of Hydrocarbon Reservoirs and Outcrop Analogues. Special Publ. IAS. Blackwell, Oxford, pp. 237-251.

Journel, A.G., 1984. The place of non-parametric geostatistics. In: G. Verly et al. (Editors), Geostatistics for Natural Resources Characterization. D. Reidel, Dordrecht, pp. 307-335.

Journel, A.G. and Huijbregts, C.J., 1978. Mining Geostatistics. Academic Press, New York, 600 pp.

Tran, T., 1996. Direct simulation of block effective properties: inference and conditioning. J. Hydrol., 183: 37-56.

Xu, W., Tran, T., Srivastava, R.M. and Journel, A.G., 1992. Integrating seismic data in reservoir modelling: the collocated cokriging alternative. SPE Pap., 24742.


Appendix: Rank and uniform transform (see also Journel (1984, p. 329))

Let us consider a set of n values ranked by increasing order:

z_1 < z_2 < \ldots < z_n

Any tie should be broken. In a spatial context this is easily done, e.g. using the neighbourhood average of each sample value (Deutsch and Journel, 1992, p. 209).

The cumulative frequency (cdf) corresponding to the kth smallest value z_k is F(z_k) = k/n ∈ [0, 1]. The rank of that value is r_k = nF(z_k) = k, an integer value in [1, n].

Generalizing the previous definitions to a continuous random variable Z with strictly increasing cdf F(z), the uniform transform of Z is defined as

Y = F(Z) \in [0, 1]        (11)

Y is uniformly distributed in [0, 1]; indeed,

F_Y(y) = \mathrm{Prob}\{Y \le y\} = \mathrm{Prob}\{F(Z) \le y\}
       = \mathrm{Prob}\{F^{-1}(F(Z)) \le F^{-1}(y)\},  as the quantile function F^{-1}(\cdot) is monotonic increasing
       = \mathrm{Prob}\{Z \le F^{-1}(y)\} = F(F^{-1}(y)) = y \in [0, 1],  QED.

Rather than estimating or simulating a rank value, which is an integer depending on the sample or population size, one would estimate or simulate the uniform transform y(u) of the original unknown variable Z(u). To do so, one would need the covariance of Y(u) as defined in relation (7).

Let, for example, y^{(s)}(u) be a simulated value for Y(u). A back-transform using the quantile function F^{-1}(·) restitutes a simulated value z^{(s)}(u) for the original variable:

z^{(s)}(u) = F^{-1}(y^{(s)}(u))        (12)

Because in practice all simulations (estimations) are performed on a grid with a finite number of nodes N, simulating the uniform transform y(u) of node u amounts to simulating the 'rank' r(u) = N y(u) = N F(z(u)) of the value z(u) at that node. That simulated rank is then back-transformed into a simulated value for Z(u) using the Z-cdf, hence ensuring reproduction of the latter. It should be noted that it is the covariance of Y(u), i.e. the standardized rank covariance, which is reproduced, not the Z covariance. Similarly, in a Gaussian approach (e.g. program sgsim of GSLIB), it is the normal score covariance which is reproduced, not the original variable covariance.

Conditioning to sample uniform transform values y(u_α), α = 1, ..., n, amounts to conditioning to sample ranks r(u_α) = n y(u_α), α = 1, ..., n. If a particular node u of the set of N nodes to be simulated identifies a datum location u_α, then

y^{(s)}(u) = y(u_\alpha)


i.e.

r^{(s)}(u) = N y^{(s)}(u) = N y(u_\alpha) = \frac{N}{n} r(u_\alpha)

If z(u_α) is the tenth smallest of a sample of size n = 100, then r(u_α) = 10. The rank of the simulated value at node u = u_α of a grid containing N = 10^6 nodes is

r^{(s)}(u) = \frac{10^6}{10^2} \times 10 = 10^5

as it should be. If enough decimals are kept for the simulated values y^{(s)}(u_l), l = 1, ..., N, very few values will be tied. Again, in a spatial context, ties can easily be broken.
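A small numerical check of the rank bookkeeping above, and of the uniformity of Y = F(Z) (the lognormal sample is an arbitrary stand-in):

```python
import numpy as np

n, N = 100, 10**6
r_alpha = 10                      # datum ranked 10th (smallest to largest) among n
y = r_alpha / n                   # uniform transform y(u_alpha) = r(u_alpha) / n
print(y, N * y)                   # 0.1 and the grid-scale rank 1e5, as in the text

# Y = F(Z) is uniform on [0, 1]: apply the empirical cdf of a large lognormal
# sample to itself and check the deciles of the transformed values.
rng = np.random.default_rng(4)
z = rng.lognormal(size=100_000)
y_all = (np.argsort(np.argsort(z)) + 1) / z.size
print(np.quantile(y_all, [0.1, 0.5, 0.9]))   # close to 0.1, 0.5, 0.9
```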

An important remark

In many applications, one could argue that it is not so much the data values themselves that are important: they may be erroneous owing to sampling errors or short-scale fluctuations (a striking example would be core plug permeabilities). It is the trends revealed by these data that are important: some areas are shown by the data to be 'richer' (of higher z value) than others. The simulation or estimation should reproduce those trends, i.e. those spatial ranks, rather than the last decimal of each individual sample value. This is particularly true when simulating or estimating a variable which is defined on a support different from that of the sample data, as is the case when simulating block effective (average) properties conditional on quasi-point support data.