Transcript
Seismic processing - velocities

The Constitution of the Rock

The first factor determining seismic velocity is the chemical constitution of the rock. It is easy to accept that a uniform rock consisting entirely of calcium carbonate would have a different velocity from that of a uniform rock consisting entirely of silicon dioxide.

In fact, however, sedimentary rocks are seldom uniform. The closest approximation might be the evaporites, particularly salt, whose velocity is usually within the fairly small range of 4200-4500 m/s (about 14,000-15,000 ft/s).

Among the rocks composed basically of grains, uniform constitution can exist only where the voids between the grains have become totally filled by a cement of the same material as the grains. When this happens, limestone rocks are observed to have velocities on the order of 6300-7000 m/s (about 21,000-23,000 ft/s), and sandstones on the order of 5500-5800 m/s (about 18,000-19,000 ft/s). Unfortunately, this unusual situation is the only one in which velocity can be used as a definite indicator of the chemical constitution of the rock.

More common is the situation where the cement (while still totally filling the pores) is of a different chemical constitution; we might have, for example, a sandstone totally cemented by calcite. Then, velocity must have some sort of average value, between that of the sand grains and that of the calcite cement. It is observed empirically that the appropriate average is the time-average velocity ( Figure 1 and Figure 2 ).
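
The equation itself appears only in the figures, which are not reproduced in this transcript; the relation they depict is the standard time-average (Wyllie) form, written here for a unit volume whose pore space, a fraction c of the whole, is filled by cement:

```latex
\frac{1}{V} \;=\; \frac{1-c}{V_{\mathrm{grain}}} \;+\; \frac{c}{V_{\mathrm{cement}}}
```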

Figure 1

Figure 2

The physical meaning of this equation is easy to see for the unit cubes of rock depicted. The time for a seismic wave to pass through the block of Figure 1 is 1/V, where V is the time-average velocity. In Figure 2 , this time is seen to be the same as the time to pass first through all the grain material and then to pass through all the cement material. The concept of time-average velocity therefore implies that the propagation can be considered in the two components separately. If the pore volume filled by cement is small, the velocity is close to that of the grains; if the pore volume filled by cement is relatively large (say, 30%), the velocity deviates significantly.

Thus, velocity is often a weak indicator of the chemical constitution of the rock; a limestone with low-velocity cement may be indistinguishable from a sandstone with high-velocity cement.

Porosity

Most of the sedimentary rocks with which we are concerned have three components ( Figure 1 ).


Figure 1

The first component is the grains, which together (and in contact) form the framework or skeleton of the rock. The second component is the cement; in general, this fills only part of the space between the grains, tending to be concentrated at and near the grain contacts. The remaining portion of the volume (the porosity) is occupied by fluid. We have particular interest in the case in which the fluid is oil or gas; in general, however, it is water.

Fortunately, the time-average equation of Figure 2 and Figure 3 can be extended to this situation, particularly for sandstones.


Figure 2

Figure 3

In an example like Figure 1, where the proportion of total volume occupied by cement is small, we often ignore the difference between the cement and the grains, and use the approximation of Figure 4.

Figure 4

Then the appropriate time-average equation is as given in the figure.

Because the velocity of water (and oil) is so much lower than that of the rock grains, the presence of water-filled porosity can make a large difference in the velocity. Indeed, in many situations the effect of porosity is dominant in determining velocity. Thus, a change from 0% porosity to 30% porosity in a sandstone can depress the velocity from 5700 m/s to 3100 m/s (from 18,700 ft/s to 10,000 ft/s).
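
That figure is consistent with the time-average equation, taking the fluid to be water at an assumed velocity of about 1500 m/s:

```latex
\frac{1}{V} \;=\; \frac{0.70}{5700} + \frac{0.30}{1500}
\;\approx\; 3.23\times 10^{-4}~\mathrm{s/m}
\quad\Longrightarrow\quad
V \approx 3100~\mathrm{m/s}.
```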

The scale of such variations reduces still further the utility of velocity as an indicator of the chemical constitution of a rock; a high-porosity limestone can have a velocity lower than that of a sandstone, and both can be less than that of a salt.

The dependency of velocity on fluid-filled porosity is based on the fact that seismic wave propagation occurs by local compression of the rock, against the stiffness of the rock. If the rock is totally solid, this stiffness is that of the grain material, against molecular forces. But if the rock has pores, and the material in the pores is less stiff than the grains, the grains can deform sideways into the pores; the rock as a whole has lower stiffness.


The ability of the grains to deform into the pores depends not only on the stiffness of the pore fluid resisting that deformation, but also on the shape of the pores: a spherical pore deforms less readily than a long thin pore. This proves to be an important distinction; the applicability of the time-average equation is limited to pores of intergranular type (such as those of Figure 4 ).

This is probably the reason that the time-average equation is widely applicable in sandstones, where the porosity is usually of intergranular type. In oolitic limestones, the porosity is again of intergranular type, and the time-average equation is used. In many cases, however, important reservoirs in limestones are formed by chemical change or solution or both; the resulting pore shapes may mean velocities lower than those given by the time-average equation. Even more marked deviations may arise when the "pores" are actually open cracks caused by fracturing.

Fracturing

When brittle rocks such as limestones are folded, tiny fractures tend to form over the highs (or other sharp changes of dip). In the presence of extensional faulting, major systems of subparallel fracture planes may form. Both of these situations lead to a marked depression of velocity. Because the voids have so little mechanical strength (as opposed to intergranular pores), the depression of velocity can be greater than that indicated by the time-average equation.

In principle, therefore, velocity provides a sensitive tool for the detection of fracture porosity (although it cannot distinguish between a valuable system of large interconnected fractures and a useless system of micro fractures without permeability).

Depth

There are several reasons that velocity tends to increase with depth.

1. The first effect of overburden pressure is to cause some "settling" of the grains; this involves physical movement of individual grains to achieve a more efficient packing. The number of grain contacts supporting the overburden increases, so the porosity decreases. This is an irreversible effect that does not yet involve elastic deformation of the grains, and normally occurs at shallow depths.

2. The second effect of overburden pressure is elastic deformation; just as in a seismic wave, the compression causes the grain to deform into the pores. As a layer is buried to a certain depth, the rock progressively loses porosity and increases in velocity. This effect is reversible if the layer is subsequently uplifted.

3. At great depths the pressures at the grain contacts may be sufficient to crack the grain locally, and to cause a further loss of porosity and increase of velocity. This effect is not reversible.

4. If a rock that has already been fractured is buried deeper, the fractures tend to close very quickly. The effect is a pronounced increase of velocity with depth (although never to unfractured values, because the walls of the fractures never fit together exactly when the fracture closes). Any or all of these first four effects are encompassed within the term compaction.

5. After the initial compaction at shallow depth, the greatest loss of porosity (and hence increase of velocity) is likely to be by cementation. Initially, cementation just at the grain contacts may have little effect, but ultimately it may destroy the porosity altogether. The amount and type of cementation depend on the chemistry of the rock, the chemistry of the water, the flow of the water (for example, up a fracture plane), and many other factors. In general, these do not depend directly on depth as such, but the chances of increased cementation usually increase with geologic age, which in turn usually increases with depth.

6. The timing of the cementation can be critical. A rock cemented before deep burial may lose much of its porosity at that time, but little more as it is buried.

Overpressure

As a rock is progressively buried, the compaction usually squeezes out the water from the contracting pores, and the water rises through the layers above to the surface.

Sometimes, however, some impermeable barrier above the compacting layer shuts off the path for the water to rise to the surface; the water is locked in the pores. Its stiffness then resists the deformation of the grains into the pores, thus tending to maintain the porosity of the rock as it is buried. Such a rock is said to be overpressured; the water is holding the pores open. The usual increase of velocity with depth can be reduced dramatically, both by the maintenance of porosity and by the reduction of cementation by circulating waters.

Gas

We have seen that, relative to a solid rock, the presence of significant water-filled porosity produces a dramatic reduction of velocity. Basically this is because the stiffness of the water is less than that of the rock grains. If the water is now replaced by gas, there is virtually no stiffness to resist the deformation of the grains into the pores, and so the velocity decreases still further.

Previously we considered a solid sandstone of velocity 5700 m/s (18,700 ft/s), and computed, using the time-average equation, that its velocity with 30% water-filled porosity would be 3100 m/s (10,000 ft/s). If the water in the pores is replaced by gas, the time-average equation is no longer valid, but the velocity is typically observed to fall to as low as 2300 m/s (7500 ft/s).

This phenomenon is little affected by the proportion of gas to water in the pores; the stiffness disappears with the first bubble of free gas in the pores, and gas saturations of a few percent have as much effect as saturations of 100% (GP506 Hydrocarbon Indicators).

Hydrocarbons

A quite different consequence of having hydrocarbons (either oil or gas) in the pores is that their entrapment effectively prevents the transport of water through those pores, and so the precipitation of cement out of such water. Thus it is not unusual to find lower velocities in the hydrocarbon-saturated portion of a reservoir than in the water-saturated portion. Basically, the porosity has been maintained where the hydrocarbons are, and lost elsewhere.

Reversals

We must consider what happens to the velocity when some of these happenings are reversed.

1. We have already noted that the elastic effect of deeper burial is reversible, but that settling effects and grain-contact crushing are not. In general, therefore, uniform uplift produces some reduction of velocity, but not to the value otherwise expected for that depth.


2. We have also noted that local uplift of brittle rocks is likely to depress the velocity by fracturing. Subsequent deep burial will not restore the value expected for that depth (unless the fractures become cemented).

3. The depression of velocity to be expected in fault zones can be entirely removed by cementation from mineral-rich waters rising up the fault plane.

4. Sometimes a local rock (particularly a sandstone) can become totally cemented early in its history. In subsequent burial the strength of the cement protects the sand grains from settling and from crushing. Then, when the rock is at great depth, the cement may be dissolved away by circulating waters, while the overburden remains supported by the country rock. Such circumstances can lead to unexpectedly low velocities.

5. Similar effects can occur where local overpressure has protected a rock body from compaction during burial, and the overpressure has subsequently been relieved (by slow leakage, or by a fault).

6. The presence of hydrocarbons in a trap can lead to a contrast of porosity (and therefore of velocity) at the gas-water or oil-water contact. If the trap is subsequently breached, and the hydrocarbons lost, a "fossil" water contact may remain.

Ambiguities

It is unlikely that we can go directly from a velocity measurement to the rock type, because variations caused by rock condition within one type are greater than variations from type to type.

Only in the most general sense — that is, other things being equal — can we say that a carbonate has a higher velocity than a clastic. We may set minimum likely velocities for a porous carbonate reservoir (3000 m/s, or 10,000 ft/s?) and maximum values for a tight sandstone (5800 m/s, or 19,000 ft/s), but velocity alone does not allow us to say that one particular layer is a porous limestone rather than a less-porous sandstone or a shale. We can say that a thick predominantly carbonate section usually has a higher velocity than a thick predominantly clastic section. Further, the narrow range of velocities for salt is a definite help.

Other than that, we must combine velocity information with other information — in particular the evidence of depositional environment obtained from seismic stratigraphy — to define rock type.

Lateral Change

In real rocks, over what distance can the velocity change by so many percent? We base such a judgment on the factors cited above, having regard to what we see on the seismic section, or what we know from previous work. For example:

• Near-shore carbonate deposition is likely to change (going seaward) to lime mud and finally to marine shale, within one layer. Such a transition is likely to involve a significant change of velocity in the layer, over a distance of several to many kilometers.

• A similar transition may occur as a marginal-marine sandstone passes seaward into siltstone and marine shale. Here, however, the velocity variation is likely to be less.


• A layer of constant rock type subjected to varying subsidence can be expected to increase its velocity where it is most deeply buried, with the greatest lateral change occurring over the region of greatest dip.

• However, any change of dip great enough to produce fracturing (often evident as a local loss of "grain" in the section) must be expected to produce a local decrease of velocity in carbonates, but not in young or plastic shales.

• Fault zones must be expected to yield locally lowered velocities. The lateral limits of such zones are sometimes evident by loss of "grain," or by a change in the amplitude of reflections close to the fault.

• If lowered velocities are not observed, the answer may be that the fault has become cemented; this can be important in deciding whether a fault is likely to seal.

• Abrupt lateral changes of velocity can occur in any situation where the rock type or condition changes. Examples are from flanking marine shales into a carbonate reef, or from a widespread limestone into the fluvial deposits of a channel cut into it.

• Zones of good porosity (and hence locally depressed velocity) can be caused by weathering or leaching at an exposed surface now evident on the section as a local unconformity.

• Sometimes good porosity is created or maintained by factors dependent on structural position at the time of deposition. The techniques of interval mapping sometimes allow us to recognize the presence and extent of such factors.

Summary

Velocity depends on rock type, rock condition, and saturant. Any dependence on geologic age comes through the rock condition (particularly porosity, cementation, fracturing, and the release of overpressure) rather than through age itself.

In evaporites the dependence on rock type is dominant. In other rocks the dependence on rock condition dominates the dependence on rock type, although there is always a tendency for carbonate velocities to be higher than clastic velocities. In all rocks the replacement of water or oil by gas reduces the velocity; the reduction is large if the porosity is large.

1. In shales, which constitute about 75% of all sedimentary rocks, there is an initial dependence of velocity on the chemical constitution of the clastic grains and on the proportion of lime that was present in the original mud. Then there is a major dependence on porosity that expresses itself as a major dependence on depth. As the material is compacted by burial, the porosity decreases rapidly from very high values at the sea floor to a few percent. The velocity therefore increases sharply in the shallow section, and progressively less sharply with depth.

It is usually said that the relation between porosity and velocity does not follow the time-average equation; one of the difficulties is in deciding on a grain velocity, when most shales contain grains from a wide variety of provenances, and many also contain some lime. Another complication is that the permeability of many shales effectively falls to zero while there is still a few percent of porosity; subsequent deeper burial means that the shale becomes slightly overpressured — maintaining velocities somewhat less than would be given by the time-average equation.


Overall, however, the pattern is clear: fairly low velocities, with a pronounced dependence on depth.

2. In sandstones originating as wind-blown dunes, the grains are efficiently packed during deposition, and the primary determinant of velocity is porosity. There is less loss of porosity by compaction than in shales, and so less dependence of velocity on depth; there may be, of course, loss of porosity by cementation.

The velocity generally follows the time-average equation. Fluvial and marginal-marine sandstones, in contrast, may have irregular grain shapes, and so have less predictable behavior. Typically the velocity increases rapidly with depth until there is sufficient cementation to annul the effect of the variable grain contacts; then follows the behavior of the dune sands. Thus, shallow unconsolidated sands may have velocities lower than those indicated by the time-average equation. Overpressure can occur only if the encasing shales are overpressured.

The overall pattern: medium velocities, with a pronounced dependence on porosity.

3. In carbonates, velocity behavior is more difficult to predict. This is because of their chemical instability, which in turn affects the porosity. Thick chalk units are observed to compact like shales, and so to have a marked dependence on depth. Where there is intergranular porosity, the time-average equation applies approximately. But chemical and solution porosity have unpredictable effects.

The overall pattern: fairly high velocities, with some dependence on depth and pronounced dependence on fractures.

 

Borehole Velocity Measurements

Here we regard the acoustic log as giving the local or instantaneous velocity, and equate this to the speed of the wavefront as it crosses a particular point in the subsurface ( Figure 1 ). The term instantaneous is not quite precise, as the separation between source and receiver in a downhole tool is finite. Compared to the scale of variation of velocity, however, the tool may well be regarded as a point.


Figure 1

Because we use vertically oriented downhole tools to measure instantaneous velocity, it is proper to regard it as a vertical velocity function. Accordingly, we designate this velocity Vz, meaning that it is a function of depth.

The instantaneous velocity is determined by dividing the source-to-receiver span by the travel time. In that sense, what we are actually measuring is the velocity of that 2-ft interval, the interval velocity of that span. Similarly, we can determine the interval velocity Vi between two positions of the tool ( Figure 1 ). The interval is the difference in depth between the two positions, or Z2 - Z1. The travel time is the difference in integrated time between the two positions; we may write this as t2 - t1. Thus, we have one of the common definitions of interval velocity: Vi = (Z2 - Z1)/(t2 - t1) .

When the interval is extended to the surface (Z1 = 0), the interval velocity becomes the average velocity, which we designate Va, and which is the ratio of total depth to total time (Figure 1). The formal definition of Va also emerges from a direct consideration of instantaneous velocity Vz and the depth interval dz over which it is appropriate: Va = ∫dz / ∫(dz/Vz).

Instantaneous-, interval-, and average-velocity measurements thus may be derived from the same tool, the differences among them being a matter of scale. Thus, we can say that the interval velocity is the average velocity of the interval. Instantaneous velocity is the interval velocity over a very small interval; it is also the average velocity between the two parts of the downhole tool. And average velocity is the interval velocity for an interval that begins at the surface.
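
As a small illustration of these definitions, the following Python sketch computes interval and average velocities from depth-time pairs; the numbers are invented for the example, not taken from the figures:

```python
import numpy as np

# Hypothetical tool depths (m) and integrated one-way times (s),
# as might be read from a check-shot or integrated sonic survey.
z = np.array([0.0, 500.0, 1500.0, 3000.0])
t = np.array([0.0, 0.25, 0.65, 1.15])

# Interval velocity between successive positions: Vi = (Z2 - Z1)/(t2 - t1)
v_int = np.diff(z) / np.diff(t)          # -> [2000., 2500., 3000.] m/s

# Average velocity from the surface: Va = total depth / total time
v_avg = z[1:] / t[1:]                    # -> [2000., 2308., 2609.] m/s

print(v_int, v_avg)
```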


In general, each of the above velocity functions implies a vertical path, although Figure 2 and Figure 3 show why this is not always correct.

Figure 2

Figure 3

In Figure 2, the borehole itself is not vertical, and the resulting value for average velocity is derived from a measurement of distance along the borehole. In Figure 3, the borehole is vertical, but now the layering is at an angle; interval-velocity calculations thus do not account for the correct bed thickness. In both of these cases, some correction must be made to the calculated velocities.

Seismic Travel Times

As a basis for discussing the inference of velocity distributions from surface-based measurements, let us examine the surface measurements that would result from known velocity distributions.

Single-Layer Models

Figure 1 illustrates the classic single-layer velocity model.


Figure 1

The ground and the reflector are flat and everywhere parallel, and the bed can be characterized as having an average velocity Va, an interval velocity Vi, and an instantaneous velocity Vz that are all equal. Further, these velocities are independent of direction; the layer is homogeneous and isotropic.

Because the velocity is uniform, the thickness of the bed can be described in units of distance or of time; one implies the other. Finally, we consider only paths having a common midpoint.

Figure 1 also shows the geometry pertinent to one raypath. The total travel time TX can be specified in terms of the vertical (zero-offset) two-way time T0, the offset X between source and receiver, and the average velocity Va. The relation, given in the figure, is hyperbolic in offset and time ( Figure 2 ).


Figure 2

We could just as easily characterize the curve of Figure 2 in terms of the difference between TX and T0. This difference is the normal moveout Δt, which we express in Figure 2 in terms of offset, velocity, and vertical time.
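
Neither relation is written out in this transcript; the standard single-layer forms depicted in the figures are:

```latex
T_X \;=\; \sqrt{T_0^{\,2} + \frac{X^2}{V_a^{2}}},
\qquad
\Delta t \;=\; T_X - T_0 \;=\; \sqrt{T_0^{\,2} + \frac{X^2}{V_a^{2}}} \;-\; T_0 .
```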

What happens if the reflector is no longer horizontal? (In this instance, the term normal moveout no longer applies, the qualifier normal being reserved for the case of no dip. In practice, of course, most geophysicists use normal moveout to describe any travel-time differences caused by offset.)

Figure 3

In Figure 3, the zero-offset time, range of offsets, and average velocity are the same as in Figure 1, but now the reflector has a dip θ. The two-way, normal-incidence time T0 now does not imply bed thickness, since it no longer represents a vertical path, and the expression for travel time now becomes

TX = √[ T0^2 + (X^2 cos^2 θ)/Va^2 ].

In the presence of dip, therefore, the travel-time curve is still hyperbolic ( Figure 4 ), but slightly flatter.


Figure 4

It is as though the average velocity were actually equal to Va/cos θ. Naturally, the moveout, which we may still call Δt, is equal to TX - T0; it is given in Figure 4.

Multilayer Models

Consider the two-layer model of Figure 5 .


Figure 5

Each layer has a constant interval velocity; further, the velocities and layer thicknesses are such that the depth to the base of the second layer and the zero-offset reflection time from it are the same as their single-layer counterparts in Figure 1 . In other words, the average velocity measured by any straight-line path is the same as in the previous example.

The difference is that the zero-offset path is the only straight-line path. As Figure 5 shows, any offset between source and receiver results in a refracted travel path, in accordance with Fermat's least-time principle. This intuitively attractive rule simply requires that seismic energy travel along a path that takes the least time. It also results in Snell's law, which relates the propagation angle through the respective layers to the ratio of their velocities.

Figure 5 also shows the equation for the travel time TX for the path shown; there, θ1 is the angle of incidence at the layer boundary and θ2 is the angle of emergence; z1 and z2 are the respective layer thicknesses; and v1 and v2 are the respective layer velocities. Furthermore, Fermat's principle constrains θ1 and θ2 so that X = 2(z1 tan θ1 + z2 tan θ2).


Thus, we may calculate the variation of travel time with offset for this model. Figure 6 shows this relation, with the curve for the single-layer model, in dashes, for comparison.

Figure 6

The two-layer model produces a flatter curve from the same zero-offset time.

Figure 7 shows a three-layer model.


Figure 7

Again, the total depth and zero-offset time are the same, and so, therefore, is the average velocity along the vertical path. The travel-time curve is shown in Figure 8 , with the single-layer curve again for comparison. The curve, still apparently hyperbolic, is flatter still.


Figure 8

So we have three models, each one representing a total thickness of 3000 m and an average velocity over that distance of 2500 m/s. For the vertical path, the models are indistinguishable from surface measurements of the reflector at 3000 m. They differ in their normal-moveout patterns, however; we could use the single-layer travel-time equation to derive only the first of these patterns. (And we can use the qualifier normal because we have postulated horizontal layering.)

Obviously, this is because the basic travel-time equation is a Pythagorean relation, and as such requires a straight-line path from source to reflector, and then from reflector to receiver. Multilayer models produce segmented paths, owing to least-time considerations. But the latter two curves are so nearly hyperbolic in form that there may be single-layer models that we can use to approximate them. (This is a common approach in modeling: rather than seek an approximate solution to an exact model, we seek an approximate model for which we can find an exact solution.)

The Layered Sequence

Extending the equations for time TX and offset X to the case of N parallel, horizontal layers, each of constant velocity vk and thickness zk, we have

TX = 2 Σk zk / (vk √(1 - p^2 vk^2)).                (1)

(For a derivation of this equation, see the section titled "Travel Time Equation for a Layered Sequence" under the heading "References and Additional Information").


Further, the least-time constraint on the propagation angles leads to

X = 2p Σk zk vk / √(1 - p^2 vk^2).                  (2)

In these two equations, p = sin θ1/v1, and is called the ray parameter. In the special case where all the layer velocities are equal, Equations (1) and (2) can be combined to give the familiar relation of Figure 1.

Another special case occurs for vertical propagation through the top layer. Snell's law prohibits refraction in this case, making the entire path vertical. Thus, because the ray parameter p also equals zero, the offset X vanishes, and Equation (1) becomes appropriate to the determination of average velocity.

Obviously, the ray parameter is important because it defines a particular ray emerging from the source, at a particular angle. Whatever the refraction experienced by the ray, the ray parameter continues to define that ray. When the ray emerges at the surface, the ray parameter leads to a unique offset-time relation.
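
The following Python sketch evaluates Equations (1) and (2) for a few ray parameters through an assumed three-layer model (the document's own model values are not tabulated in this transcript, so the numbers here are illustrative); p = 0 reproduces the vertical two-way time:

```python
import numpy as np

def layered_tx_x(p, v, z):
    """Equations (1) and (2): two-way time TX and offset X for a ray of
    ray parameter p through horizontal layers of velocity v, thickness z."""
    v, z = np.asarray(v), np.asarray(z)
    cos_t = np.sqrt(1.0 - (p * v) ** 2)      # cos(theta_k), from Snell's law
    tx = 2.0 * np.sum(z / (v * cos_t))       # Equation (1)
    x = 2.0 * p * np.sum(z * v / cos_t)      # Equation (2)
    return tx, x

# Hypothetical three-layer model: 3000 m thick, Va = 2500 m/s vertically
v = [2000.0, 2500.0, 3333.3]                 # interval velocities, m/s
z = [1000.0, 1000.0, 1000.0]                 # thicknesses, m

for p in (0.0, 1.0e-4, 2.0e-4, 2.5e-4):      # p = sin(theta1)/v1, in s/m
    tx, x = layered_tx_x(p, v, z)
    print(f"p = {p:.1e} s/m   X = {x:7.1f} m   TX = {tx:.4f} s")
```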

The basic travel-time equation is the elementary form of a general expression for TX^2. This expression is an infinite series in even powers of the offset X:

TX^2 = Σn Cn X^(2n).

We shall not attempt to derive the coefficients Cn here; suitable references are listed in the "Recommended Reading" section under the heading "References and Additional Information". We do observe, however, that only the first two terms of this series are significant as long as the offset X is no greater than the depth to the horizon of interest. Appropriate muting guarantees this condition.

In the general expression, the first coefficient, C0, is simply T0^2. The coefficient of X^2 turns out to be the inverse of the square of a weighted average of the layer velocities: C1 = 1/Vr^2. We call Vr the root-mean-square (rms) velocity, and define it thus:

Vr = √( Σk vk^2 tk / T0 ),                             (3)

where vk and tk are the interval velocity and the normal incidence two-way travel time in the kth layer, and

T0 is the zero-offset reflection time (that is, T0 = Σk tk). The general expression for TX^2 thus becomes

TX^2 = T0^2 + X^2/Vr^2.                                   (4)

In certain circumstances — suitably short offsets, approximately parallel and horizontal layering — Equation (4) can be regarded as equivalent to the equation of Figure 1. This means that a layered sequence of vertically varying velocity can be approximated by a single layer of velocity equal to the rms velocity Vr. Within that layer, the travel path is a straight line (of constant ray parameter, just as with the layered case), and so it is root-mean-square velocity that defines normal moveout.

Thus, root-mean-square velocity provides a link between physical measurements of velocity and the nonphysical device of the stacking velocity.
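
A short sketch comparing the average and rms velocities of the same assumed three-layer model used above; the rms velocity comes out a little higher than the average, which is the general rule:

```python
import numpy as np

v = np.array([2000.0, 2500.0, 3333.3])            # interval velocities, m/s
z = np.array([1000.0, 1000.0, 1000.0])            # thicknesses, m
t = 2.0 * z / v                                   # two-way interval times tk

t0 = t.sum()                                      # zero-offset time T0
v_avg = 2.0 * z.sum() / t0                        # total depth / one-way time
v_rms = np.sqrt(np.sum(v**2 * t) / t0)            # Equation (3)

print(f"Va = {v_avg:.0f} m/s, Vr = {v_rms:.0f} m/s")   # Va = 2500, Vr ~ 2555
```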

Stacking Velocity


The formal equation for normal moveout ( Figure 1 ) allows us to calculate the normal moveout based on known values of zero-offset reflection time, source-to-receiver offset, and the appropriate velocity to the reflector.

Figure 1

For a layered sequence, the appropriate velocity is not the average velocity, but rather the root-mean-square velocity.

Similarly, Figure 2 allows us to calculate the moveout in the presence of dip.


Figure 2

For a layered sequence, we may postulate a constant dip for the entire sequence, and again use the rms velocity. If this is not appropriate, the alternative is the rather complex one of ray-tracing, which is a matter for separate discussion.

For the present, let us say that we know the moveout: we measure it from the common-midpoint gathers. Then we can use it to determine the corresponding "velocity." Accordingly, we invert the equations of Figure 1 and Figure 2 to solve for that velocity:

V = X / √( Δt (Δt + 2T0) ),                                     (5a)

or

V = X cos θ / √( Δt (Δt + 2T0) ).                         (5b)

The velocities derived from Equations 5a and 5b may vary, depending on our choices of offset. We minimize this, of course, with our mutes — in effect, "getting rid" of the higher-order terms. There is one velocity, however, that best fits the observed moveout pattern. Subtraction of the Δt values appropriate to this velocity flattens the moveout pattern on the gather, and thus yields the optimum stacked trace. In the parlance, this value of V is the stacking velocity. The entire suite of stacking velocities for a gather is the stacking function — the stacking velocity as a function of time.

(Obviously, this is a poor choice of nomenclature, since we have said that the stacking velocity is not a velocity at all. Rather, it is a variable that produces the observed travel-time pattern. The term is so ingrained within the geophysical consciousness, however, that to suggest an alternative would probably cause more confusion than it would alleviate. Therefore, the prudent geophysicist uses the term with the understanding that he or she does not really mean it. For an interesting discussion of seismic velocities and their use in the industry, we refer the reader to Al-Chalabi (1990, 1994).)

The first task of velocity analysis is an attempt to determine the proper stacking function for a given common-midpoint gather. This function permits us to flatten the observed move-out patterns, so that the arrival time of the reflection on all the traces is made to equal the zero-offset time T0. The addition of the corrected traces then yields the optimum stacked trace.

In this discussion, we designate the stacking velocity as Vs. We shall keep to this convention even in the case of dipping layers, which is contrary to the manner of some common texts. We do this because we want the term stacking velocity to mean the same thing in all cases: the variable that best corrects for the observed moveout. The variable that would correct for Δt in the absence of dip is irrelevant.

(In many references, stacking velocity is also designated as Vnmo, for normal-moveout velocity. We may also see it designated as Vrms, for root-mean-square velocity. The usage in either case is not strictly correct; for example, normal moveout, as originally defined, requires the model of Figure 3, not that of Figure 4.


Figure 3

So the thoughtful geophysicist avoids these terms, but knows what others mean when they use them.)


Figure 4

As another approach, we look at Figure 5 .


Figure 5

Again, we take θ to be the dip along the line, but this time it is the apparent dip. The true dip, the dip of the reflector, is φ, and the angle between the line and the direction of true dip is α. If the constant velocity above the reflector is Va, then the required stacking velocity is given by Va/√(1 - sin^2 φ cos^2 α).

This relationship is derived in the section titled "Comments on the Problem of Dip", which appears under the heading "References and Additional Information".

Intersecting lines can have — indeed, in the presence of dip, generally have — different stacking velocities at their point of intersection, even if the average velocity to the common point of reflection is required to be the same.
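
As an illustration (with assumed numbers), take Va = 2500 m/s and a true dip φ = 30°. A line shot along dip (α = 0°) and an intersecting line shot along strike (α = 90°) then require different stacking velocities at their tie point, even though the medium velocity is the same:

```latex
V_s=\frac{V_a}{\sqrt{1-\sin^2\varphi\,\cos^2\alpha}}
\quad\Rightarrow\quad
V_s(\alpha=0^\circ)=\frac{2500}{\cos 30^\circ}\approx 2887~\mathrm{m/s},
\qquad
V_s(\alpha=90^\circ)=2500~\mathrm{m/s}.
```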

Determination of Stacking Velocity

In practice, velocity analysis from reflection data does not solve Equations 5a and 5b. Rather, the process solves the moveout equation for a series of trial velocities, and then "corrects" the observed moveout for each of these velocities. The common-midpoint gathers are then stacked, and the stacks from all the velocities compared.

For the noise-free event of Figure 6 , therefore, we start with a velocity V1 and check its effectiveness in one of two ways.


Figure 6

We can assess the match between the proposed moveout pattern and the real moveout pattern by, for example, adding the amplitudes of the traces along the proposed curve ( Figure 7 ).


Figure 7

Alternatively, we can subtract the moveout appropriate to V1 from the data, and add all the amplitudes at T0 (Figure 8). We then go on to the next velocity, and follow the same procedure for that and successive velocities, until we get to the last of our choices.


Figure 8

Somewhere in that range — unless our estimates are very far wrong — there is one velocity whose calculated moveout pattern fits the data reasonably well (and certainly better than most of the others). This is the stacking velocity appropriate to T0. The next step is to choose a new T0 and a new velocity range, and start the procedure again. In essence, this is the scheme followed by all velocity-analysis programs. Whether we assess the degree of fit visually or statistically, the final selection of stacking velocity is based on the quality of the stacked trace.
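
In outline, that scheme might be sketched in Python as follows (schematic only: the gather, offsets, and sampling interval are assumed inputs, nearest-sample lookup stands in for proper interpolation, and real programs add windowing and normalization):

```python
import numpy as np

def best_stacking_velocity(gather, offsets, dt, t0, velocities):
    """For one zero-offset time t0, scan trial velocities and return the
    one whose hyperbola gathers the largest summed amplitude.
    gather: (n_samples, n_traces) array; offsets in m; dt, t0 in s."""
    n_samples, _ = gather.shape
    best_v, best_amp = None, -1.0
    for v in velocities:
        amp = 0.0
        for j, x in enumerate(offsets):
            tx = np.sqrt(t0**2 + (x / v) ** 2)   # trial moveout hyperbola
            i = int(round(tx / dt))              # nearest sample
            if i < n_samples:
                amp += gather[i, j]              # sum along the hyperbola
        if abs(amp) > best_amp:                  # keep the best-fitting trial
            best_v, best_amp = v, abs(amp)
    return best_v
```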

 

A Comparison of Velocity Functions

We know that velocities can be instantaneous, interval, average, root-mean-square, normal-moveout, or stacking. In one case — and one case only — all of the above describe the same velocity. That case is the single, homogeneous, isotropic, uniform layer between a smooth flat surface and a smooth flat reflector parallel to the surface. In other words, the basic single-layer model of Figure 1 is the only one for which we can use the unqualified term "velocity" without fear of error.


Figure 1

For more complicated models, however — to say nothing of the real world — the distinctions among the above modifiers are critical.

The first three types of velocity function — instantaneous, interval, and average — can be measured in the earth, using borehole techniques. When the system of layers and interval velocities is thus established, we can calculate root-mean-square velocity. To infer any of these velocities from surface-based measurements, however, we must start with one of the other types.

Let us illustrate this by considering how we might get from stacking velocity to average velocity, to a degree of accuracy suitable for many purposes. The first requirement is that the layering must be parallel (although, as we shall see, the beds may be dipping).

First, we remember that the stacking velocity Vs is, under the conditions of short offsets and approximately parallel and horizontal layering, a good approximation to the root-mean-square velocity Vr. Next, we remember Equation 3, which relates Vr and the interval velocities vk:

Vr = √( Σk vk^2 tk / T0 ).

Thus, if Vr,m and Vr,n are the rms velocities at the top and bottom of a layer, the interval velocity Vk,n of the layer is given by the Dix equation:

Vk,n = √[ (Vr,n^2 Tn - Vr,m^2 Tm) / (Tn - Tm) ],                 (6)

where Tm and Tn are the normal incidence two-way reflection times at the top and bottom of the layer.


If the entire sequence is dipping at an angle θ, and if the seismic line is in the direction of reflector dip, then the interval velocity is figured by multiplying the above equation by cos θ. The dip requirement above ensures that the zero-offset path to the top of the layer lies in the same vertical plane as the one to the bottom of the layer.

By inspection of the Dix equation, it is clear that the obtainable accuracy of the interval velocity depends on only three factors: the accuracy of the respective zero-offset times; the accuracy of the inferred stacking velocities; and the degree to which the real earth can be approximated by a parallel-layered sequence. In the dipping counterpart, a fourth factor is the accuracy of the inferred dip.

Having thus obtained the interval velocities, we simply calculate the interval thicknesses (because we know their times), add them, and then divide by the total one-way time to get an estimate of the average velocity.
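
A sketch of this chain in Python, treating the picked stacking velocities as rms velocities; the picks below are invented for illustration, with Equation (6) applied between successive boundaries:

```python
import numpy as np

def dix_interval_velocities(t0, v_rms):
    """Equation (6): interval velocities from rms velocities v_rms and
    zero-offset two-way times t0 picked at the layer boundaries."""
    t0, v_rms = np.asarray(t0), np.asarray(v_rms)
    num = v_rms[1:] ** 2 * t0[1:] - v_rms[:-1] ** 2 * t0[:-1]
    return np.sqrt(num / (t0[1:] - t0[:-1]))

# Hypothetical picks: boundary times (s, two-way) and rms velocities (m/s)
t0 = np.array([0.0, 1.0, 1.8, 2.4])
vr = np.array([2000.0, 2000.0, 2236.0, 2555.0])

vi = dix_interval_velocities(t0, vr)     # interval velocity of each layer
dz = vi * np.diff(t0) / 2.0              # thicknesses from two-way times
v_avg = dz.sum() / (t0[-1] / 2.0)        # average velocity to the base
print(vi, v_avg)                         # ~[2000, 2500, 3334] m/s, ~2500 m/s
```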

In practice, even if we were to accept that the conditions were appropriate for Dix-derivation of the interval velocities — a restrictive conclusion in itself — other difficulties remain. Chief among them is the interval chosen: for the derived velocity to approximate the actual interval velocity of a layer, the interval must contain no major changes of velocity; in practice, it must also be bounded by unambiguous reflections.

Obviously, this is unrealistic for the entire record length of the typical seismogram. More often, we are content to consider only a portion of the section. A more serious problem is the near-surface, often regarded as the worst of the processing geophysicist's problems. The near-surface has a major effect on the determination of stacking velocities (and, therefore, the Dix-derived interval velocities).

Multiples

Because velocity analysis offers a range of velocities at every value of zero-offset time, it offers a best fit not only to primary reflections but (at appropriate velocities) to multiples also.

Figure 1, Figure 2, Figure 3, and Figure 4 illustrate the problem of multiples.

Figure 1

Figure 2

Figure 3

Here, the shallow reflector is a strong one, and the deep reflector relatively weak. We also remember that it is our object to suppress the multiples, in stacking, by the use of a velocity appropriate to the primaries.


Figure 4

The simple multiple in the shallow layer has a zero-offset time about equal to that of the primary from the deep layer. If the multiple stacks with higher amplitude, we are led (wrongly) to pick its lower velocity. The stacking function shows the appearance of a severe velocity inversion at this zero-offset time.

Fortunately, many simple multiples are fairly easy to recognize because of their low velocity. The velocity is low because the travel path lies entirely within the shallow part of the section. The same is true for multiples from mildly dipping interfaces.

Such surface multiples can become a major problem if there really is a velocity inversion at depth, so that the multiple appears to have a higher velocity than the deep primary. Then our normal tool for recognizing a surface multiple by its low velocity may be lost.

A further difficulty arises with multiples that are not simple, the so-called pegleg multiples. These may appear to have velocities close to those of primaries arriving at the same time, and so receive more enhancement in the stack than we would wish.

Where the velocity distinction between primaries and multiples is weak, we are led to check the other distinguishing characteristics of multiples — their repetition times and their apparent increase of dip. This is an example of the fact that velocity analysis involves some degree of interpretation. Statistical and visual best-fits are helpful, and we rely on them. But we must not do so blindly; there must be an understanding of velocity behavior (both real and apparent) underlying all our decisions.

Velocity Panels


The simplest way to estimate the proper stacking velocity merely asks the computer to nmo-correct the data; we make the decisions based on the appearance of the output.

Constant-Velocity Stacks

In Figure 1 , a small portion of the line (usually at least 20 stacked traces) has been stacked with several different velocities, spanning the range that we expect in the area.

Figure 1

At each depth, we see that there is one velocity (perhaps two) from among these that results in the best stack. The water-bottom reflection, for instance, is clearest when the velocity is 1524 m/s; stacked at higher velocities, it practically disappears. The event at 2.0 s, on the other hand, is best stacked by a velocity of 1980 m/s. (We also see, on the 1707-m/s display (or panel), a beautiful example of a primary event — at 1.05 s — and its multiples at 2.10 s and 3.15 s.)

In this manner, we emerge with a stacking function for this portion of the line. Obviously, we do not pick time-velocity pairs for every sample on the trace; rather, we specify them on the strongest events, and have the computer make what we hope is a proper interpolation between successive pairs. That interpolation is probably linear in velocity.

How do we choose the velocities from this sort of display? Sometimes the choice is easy, with only one velocity panel providing a good stack. At other times, however, the correct velocity appears to lie between two panels. In that case, our mind's eye interpolates the proper velocity. In Figure 2 and Figure 3, for example, we see the amplitude of the stack increasing and then decreasing as we go from left to right.

Figure 2


Figure 3

Our eyes tell us that the event is best stacked by a velocity between the two middle panels, that is, by a velocity of about 2515 m/s. If our goal is a good stack, this visual interpolation is entirely proper.

The stacking function inferred from Figure 1 is strictly applicable only to that small portion of the line. A kilometer or two away — closer if there is some kind of geologic complexity — we must get another set of constant-velocity stacks, and estimate a function there. Interpolation between the two functions, again, is usually linear in velocity, to achieve a more realistic degree of smoothness in the velocity variation along the line.

Constant-Velocity Gathers

An alternative to the constant-velocity stack is the constant-velocity gather ( Figure 4 and Figure 5 ).


Figure 4

The concept is the same, but this time we examine the corrected but unstacked traces.


Figure 5

Because this method forgoes the signal-to-noise benefit of the stack, it is applicable only to data of very good quality. Further, it does not provide the same obviousness in the identification of multiples. It does give a clear indication of any situation in which the actual moveout is not hyperbolic. It is also useful in allowing an intelligent selection of the mute.

Practical Considerations

In either constant-velocity display — stack or gather — we must choose a range of input velocities, which must encompass all possible values. We must also choose the increment of velocity; in this we are balancing cost against the uncertainties of interpolation. A velocity range of 1500-4500 m/s (5000-15,000 ft/s) would be typical. Also typical would be the use of 20 or 25 trial velocities; however, these are not spaced uniformly in velocity, but rather in nmo. This means that the velocities "bunch up" at the low end, and spread further apart as they get higher.
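
One way such a suite might be generated is to take equal steps of far-offset moveout and convert each to a velocity through the inverted relation of Equation 5a; the times and offset below are assumed values:

```python
import numpy as np

t0, x_far = 2.0, 2000.0                  # zero-offset time (s), far offset (m)

dt_nmo = np.arange(0.010, 0.500, 0.010)  # equal 10-ms steps of far-offset nmo

# Equation 5a: V = X / sqrt(dt * (dt + 2 * T0))
v_trial = x_far / np.sqrt(dt_nmo * (dt_nmo + 2.0 * t0))

# The resulting trial velocities are closely spaced at the low end and
# spread apart at the high end, as described above.
print(np.round(v_trial[::12]))           # e.g. [9988., 2730., 1940., 1573., 1349.]
```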

The great merit of the constant-velocity stack is that it allows us to make the critical choice of stacking velocity directly on a piece of seismic section, where the dips, the reflection interferences and the geological complications are all evident to us. This is entirely in accord with our principle of staying close to the data as we make the decisions.

Apart from considerations of cost, and for purposes of stacking alone, we would probably prefer to make our choice of stacking velocity in constant-velocity stacks of each line in its entirety. This would give us great confidence in both the lateral and vertical variations of stacking velocity, as well as in their reasonableness. Because of the cost, however, this comprehensive approach is reserved for highly detailed surveys.


Elsewhere, we reduce the cost by one or both of two methods. In the first, we use a range of fixed velocity distributions ( Figure 6 ), rather than constant velocities. This eliminates the waste of stacking shallow reflections at high velocities and deep reflections at low velocities, but reduces the obviousness of surface multiples.

Figure 6

The second method reduces the width of each constant-velocity display, perhaps to even less than the 20 stacked traces we considered earlier. As we decrease the width of each panel, the advantage of seeing the geology is decreased, and there comes a point where an alternative technique becomes better value for the money. This is the velocity-analysis display, or velocity spectrum.

Coherence Measures and the Velocity-Analysis Display

The judgments that we make in evaluating constant-velocity displays are essentially visual measures of the coherence of the individual traces forming the stack. The problem in relying strictly on our eyes is that small differences in the stack — which may result from significant differences in the stacking function — are hard for the eye to distinguish. This is particularly so because we cannot focus on two stacks at once; rather, we look at the two alternately, and unless the differences between the two are substantial, we have trouble discerning them.


What we need, then, is a numerical measure of the coherence of the individual traces forming the stacked trace.

Amplitude of the Stacked Trace

One measure is simply that of the maximum amplitude of the stacked trace. The principle is shown in Figure 1, Figure 2, Figure 3, and Figure 4.

Figure 1

Figure 2

Figure 3

Figure 4

Here, we have a 12-trace gather with a particular zero-offset time T0 and a hyperbolic moveout pattern (Figure 1: an idealized moveout pattern, and the summation after no correction; the shading represents the time window). If we add the traces with no nmo correction at all — essentially corresponding to a velocity with a far-offset nmo of zero — we get a low-amplitude stack. As a measure of the stack amplitude, we may add the samples of the stacked trace within some time window.

Next, we make a correction corresponding to a far-offset nmo of, say, +10 ms; this hyperbola is shown in Figure 2. Adding the traces after this correction, and then adding the samples within the time window, we get another value for the stack amplitude. Proceeding in this fashion (Figure 3), we find one velocity whose moveout pattern best matches that of the data; nmo correction according to this pattern yields the highest stack amplitude. Normalizing all the stack amplitudes to the highest one, we emerge with the display of Figure 4.

In this example, the maximum stack amplitude occurs along the hyperbola corresponding to a far-trace nmo of +30 ms. Knowing moveout, zero-offset time, and offset, therefore, we can easily solve for velocity. This velocity is the stacking velocity for that zero-offset time.

In practice, this process works from one common midpoint to the next, and then takes a weighted mean of the amplitudes for several consecutive midpoints. In Figure 5 (a normalized display of stack amplitude as a function of time and velocity), for example, 24-fold data were corrected and summed over four consecutive midpoints; this analysis was therefore computed over 96 traces.

Figure 5

The bar graph to the right of the figure shows the maximum amplitude for each part of the trace (before it was normalized).

Semblance

A more common coherence measure is concerned not with amplitude but with power. If we consider a sample at time i on trace j of a common-midpoint gather, then we may designate its amplitude as aij; its power is then aij^2. We may then define the semblance coefficient St as the ratio of the power of the stacked trace to the sum of the powers of the individual traces within a time window:

St = Σi (Σj aij)^2 / ( N Σi Σj aij^2 ),                 (7)

where N is the number of traces in the gather and the sum over i runs over the time window. Semblance is essentially a development of the normalized cross-correlation coefficient. It measures the common signal content over the traces of a gather according to a specified lag pattern. That lag pattern is, of course, the hyperbolic pattern corresponding to a trial normal-moveout velocity. The general procedure is as follows:

1. First, we choose a reference time T0.


2. We then choose a value for stacking velocity Vs, which is constrained to lie within limits appropriate to the time T0 and the study area. The variables T0 and Vs define a unique hyperbolic pattern, which in turn defines the lag pattern for the subsequent calculation of semblance.

3. Each trace is displaced by an amount that corresponds to its offset.

4. We then compute the semblance of the gather, and display it as a function of time and velocity.

5. Steps 2-4 are then repeated for the expected range of velocities; a computational sketch of the whole procedure follows this list.
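
A compact Python sketch of steps 1 through 5 for a single (T0, Vs) pair (schematic: the gather and its geometry are assumed inputs, and nearest-sample lookup replaces proper interpolation):

```python
import numpy as np

def semblance(gather, offsets, dt, t0, v, half_window=5):
    """Equation (7): power of the stacked trace over the summed powers of
    the individual traces, within a window of samples centred on t0."""
    n_samples, n_traces = gather.shape
    num = den = 0.0
    for k in range(-half_window, half_window + 1):
        row = []
        for j, x in enumerate(offsets):
            tx = np.sqrt(t0**2 + (x / v) ** 2)   # hyperbolic lag pattern
            i = int(round(tx / dt)) + k
            if 0 <= i < n_samples:
                row.append(gather[i, j])
        num += np.sum(row) ** 2                  # power of the stack
        den += np.sum(np.square(row))            # sum of trace powers
    return num / (n_traces * den) if den > 0 else 0.0
```

Repeating this over a fan of trial velocities at each T0 builds up the velocity-analysis display described next.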

Figure 6 shows semblance as a function of stacking velocity for a specific value of zero-offset time.

Figure 6

This calculation is repeated at small and regular intervals of T0. The final output (Figure 7, Figure 8, and Figure 9) is sometimes called the velocity spectrum; in this course we shall call it a velocity-analysis display.

Figure 7

Figure 8

Figure 9

Velocity-Analysis Displays

Figure 7 shows the velocity-analysis display in its original form; Figure 8 shows the same display in another common form, in which semblance is contoured; to the right of this display is a curve of power as a function of time. Figure 9 shows yet another display, in which the contours are replaced by numbers that correspond to the semblance values (note that in modern seismic work, this numerical display would typically be shown in color).

The advantage of the latter two displays is twofold. First, they are cleaner, as there is no tangling of the semblance curves. Second (at least for Figure 8 ), the meaning of the analysis is immediately apparent in that a large semblance value is clearly marked by the steep contours. The abscissa and ordinate values at the center of the contour pattern are the proper stacking velocity and zero-offset time values for that pick. On the display of Figure 7 , on the other hand, we can infer the proper time and velocity only after drawing a baseline through the semblance curve's zero value, and projecting the curve's maximum onto that baseline.

We see in Figure 8 two distinctive semblance "signatures." Above about 2.5 s, for example, the choice of the stacking velocity seems to be unambiguous. The contours are generally steep, and semblance is low at the velocities off the main trend. This tells us that the signal-to-noise ratio in the shallow section is fairly high.

Below 2.5 s, we see the stretched contours that are generally characteristic of deeper data. We know that, as reflection time increases, a given change in stacking velocity produces a smaller change in nmo. The result is that a certain degree of improvement in the stack can be effected by a wider choice of velocities. The contour stretch, therefore, represents a deterioration, with time, of the certainty of the velocity measurement.

In our interpretation of velocity-analysis displays, we often find it helpful to have a brute stack or near-trace section at hand. Figure 10 is a section across a growth fault.

Figure 10

Trace A is fairly representative of the geology and velocity distribution away from the fault; we see its clear velocity trend in Figure 11 .


Figure 11

Trace B is a different matter; that there is a fault is obvious, but the velocity behavior below the fault ( Figure 12 ) is not.


Figure 12

Further, there appears to be a spurious, high-velocity "event" at 1.30 s. The section enables us to recognize that the event is probably a fault-plane reflection. Picking these high velocities, we emerge with Figure 13; the fault plane is clearer, and the extent of the fault interpretable.


Figure 13

The choice between constant-velocity stacks and velocity-analysis displays depends on the geological problem, the concern with cost, and the preference of the processor. The constant-velocity stack, as we remarked earlier, gives a great deal of confidence in the stacked section, and keeps us close to the data. It loses its advantage, however, if only a few traces are used — which is the sole way of reducing its cost to that of a velocity analysis. The velocity analysis avoids the need to interpolate between panels; it gives a single velocity, at each zero-offset time, and thus may be used directly in the calculation of interval velocities.

Selection of Variables

Preprocessing

Generally, the data that go into a velocity analysis are prepared according to the flow of Figure 1 .


Figure 1

The initial processes consist essentially of signal conditioning, data reduction, and edit-and-gather.

Muting

Conventional velocity analysis requires that all wide-angle arrivals be deleted ( Figure 1 ).

Page 58: Seismic processing - velocities

Figure 1

The reason, of course, is that the stack incorporates only those arrivals that contain within them subsurface information; these are primarily narrow-angle arrivals. A moveout analysis designed for stack optimization therefore has no need of wide-angle arrivals.

(Refractions and supercritical reflections, two types of wide-angle arrival, are useful in certain cases. The measurement of the linear moveout of refractions can yield a velocity estimate for one layer. And the slant stack, in which time-offset data are mapped in a time-ray parameter domain, is useful for fairly accurate velocity analysis where we have offsets greater than reflector depths.)

For marine work, the muting must be such that it excludes the sea-floor refraction for those offsets equal to or greater than the water depth. For land work, the mutes should exclude refractions and, insofar as possible, the direct waves. As always, a transitional mute is desirable, so that spurious frequencies are not introduced to the data.
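As a rough illustration of such a transitional mute, the sketch below applies a linear taper below a hypothetical linear mute line t = x/v_mute; the mute velocity and taper length here are invented placeholders, not recommendations:

import numpy as np

def transitional_mute(gather, offsets, dt, v_mute=1800.0, taper_s=0.040):
    # gather: (ntraces, nsamples); offsets in m; dt in s
    # zero everything above the line t = x / v_mute, with a linear ramp
    # of length taper_s below it, so that no sharp edge introduces
    # spurious frequencies into the data
    t = np.arange(gather.shape[1]) * dt
    out = gather.copy()
    for i, x in enumerate(offsets):
        ramp = np.clip((t - x / v_mute) / taper_s, 0.0, 1.0)
        out[i] *= ramp
    return out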

Many times, direct arrivals (which are very wide angle arrivals) occur not in the vicinity of the first breaks but within the main body of the record. Surface waves, water waves, ice breaks, and air waves all travel at velocities much lower than those of the reflections. Where these are pervasive — which is to say where field arrays have been ineffective — we may choose to anticipate the beneficial effects of the stack-array. The hope is that the direct arrivals do not hamper the velocity analysis in the meantime. Alternatively, we may prefer to apply a surgical mute to the worst of the noise, or a frequency-wavenumber (f-k) filter to all of it.

Another reason for a good mute is the problem of offsets. We know that in the presence of dip and lateral velocity variations, the inclusion of the farther offsets may harm both the velocity analysis and the subsequent stack. On the other hand, we need the offset range to resolve the interfering residual normal moveouts, which in turn allows us to separate primary from multiple. Using mutes, we can eliminate the far offsets for the shallow data, where the problems are most likely to occur, and restore the far offsets for the deep data, where their benefits are most immediately obtainable.

Page 59: Seismic processing - velocities

Offset Corrections

We are also concerned with offsets that may not be what we think they are. There is a dual hazard in marine data. First, note that marine streamers are equipped with stretch sections. We note further that if the ship maintains a constant ground speed while sailing into a current, the stretch may amount to 50 m or more. Finally, we observe that we can use the waterbreak times to determine the actual in-line distance from the source to the near hydrophone.
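The waterbreak check itself is simple arithmetic; in this sketch the waterbreak time and nominal offset are invented for illustration only:

v_water = 1500.0        # m/s, nominal water velocity
t_waterbreak = 0.180    # s, direct arrival at the near hydrophone (invented pick)
nominal_offset = 250.0  # m, near offset recorded in the trace headers (invented)

actual_offset = v_water * t_waterbreak      # 270 m of actual in-line offset
stretch = actual_offset - nominal_offset    # 20 m of streamer stretch

print(f"streamer stretch: {stretch:.0f} m")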

The larger problem is that of streamer feathering. In the presence of dip — especially cross-dip in the direction opposite that of the feathering — feathering has the effect of moving the apex of the normal-moveout hyperbola away from the zero-offset position ( Figure 1 , Travel time as a function of offset, for a strike line and various feathering angles).

Figure 1

The computed stacking velocities ( Figure 2 , The derived stacking velocities as a function of the feathering angle, for three line directions; in these figures, the feathering angle is positive in the direction of dip, and is not measured with respect to the ship's heading) may degrade the stack, and are certainly inappropriate for other purposes.

Page 60: Seismic processing - velocities

Figure 2

The problem of offset is easily solved, largely because the waterbreak analysis is so straightforward that there is really no reason not to perform it, whatever the magnitude of the errors.

The problem of the feathering requires a little more thought. We can minimize the errors during the shooting by steering the ship to keep the near-group midpoint — if not the average midpoint — on the line. This requires deft steering on the part of the captain. Alternatively, we may ask the crew to attach a location sensor to the streamer at the nominal position of the near-group midpoint.

What we do in the processing depends, as usual, on the requirements of the exploration problem. For normal-moveout correction only, it is not likely that we shall need any offset corrections. For more detailed work, we are prepared to restrict the velocity analysis to a portion of the line and for a portion of the time.

Our first step may be to modify the geometry in the trace headers so that "zero offset" is at the average midpoint of each gather. It may be possible to do this from a visual inspection of the gathers. Alternatively, if we are familiar enough with the geology of the prospect, we may already know (to a reasonable degree of accuracy) the reflector dip, the dip along the line, and the average velocity to the reflector; we can estimate the average midpoint from these factors and the feathering.

The necessary correction is simply a matter of subtracting the displacement of the hyperbola apex from the source-to-receiver offsets (refer to the section titled "Comments on the Problem of Dip", found under the heading "References and Additional Information"). When the required information is not at hand, we have two other options. One is to modify the velocity-analysis programs to seek the apex of the travel-time curve, and then to find the best-fit hyperbola appropriate to it.

Page 61: Seismic processing - velocities
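A brute-force sketch of that first option might scan trial apex positions along with the usual velocity and zero-offset-time scans; the triple loop is illustrative only, not an efficient implementation:

import numpy as np

def fit_shifted_hyperbola(x, t, x0_trials, v_trials, t0_trials):
    # x, t: picked travel times of one event across the gather
    # scan trial apex shifts x0 and fit t^2 = t0^2 + (x - x0)^2 / v^2,
    # keeping the combination with the smallest squared misfit
    best, best_err = None, np.inf
    for x0 in x0_trials:
        for v in v_trials:
            for t0 in t0_trials:
                pred = np.sqrt(t0**2 + ((x - x0) / v)**2)
                err = np.sum((t - pred)**2)
                if err < best_err:
                    best, best_err = (x0, v, t0), err
    return best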

An easier approach — since it bypasses the programmers — is simply to process the line as though it were part of a 3-D data set. First, the traces are arranged into bins, much like the stripes we use in processing crooked-line data. Then, the traces are scanned for a determination of both the azimuth and the magnitude of the dip. This information is used to minimize the scatter of the midpoints within each bin, after which the velocity analysis proceeds as usual.

The 3-D processing method is equally suitable for land data shot with irregular geometry, although the displacement correction described above is generally not.

Static Corrections

For land data, the problem of statics is a tricky one. Certainly, the time delays caused by topographic variations and low-velocity zones should be removed prior to the velocity analysis. We remember, however, that moveout is a function of the full travel path of the seismic energy, which begins at the source and ends at the receiver.

In some cases, therefore, correction to datum removes too much of the travel path; this distorts the true meaning of the observed travel-time patterns. (In all cases, removal of any field statics represents some degree of compromise.)

Topographic variations are the simplest of the field statics to remove. Where bedrock comes to the surface or is covered by a negligible layer of soil or fill, removal of 40-50 m of elevation (which represents about 30-40 ms of two-way travel time) has no profound effect on the velocity analysis. Where a low-velocity layer exists, we are more restricted, since 40 ms (two-way) in a layer with a velocity of 1000 m/s represents only 20 m.

More commonly, the combination of topography and low-velocity material requires a total datum correction of much more than 40 ms. Then, we make the correction in two steps. Before velocity analysis, we remove surface irregularities by correcting to a floating datum. This is a smoothly varying, or even flat, preliminary datum on which we hang the velocity analysis (as well as subsequent processing). After moveout correction and stack, we correct from the floating datum to the final flat datum.
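A minimal sketch of the two-step correction, assuming simple vertical raypaths; the smoothing length and the weathering and subweathering velocities are assumptions for illustration:

import numpy as np

def two_step_statics(elev, v_weathering, v_subweathering, final_datum):
    # elev: surface elevation profile (m); returns two-way shifts in ms
    # the floating datum is taken here as a smoothed version of the surface
    floating = np.convolve(elev, np.ones(11) / 11.0, mode="same")
    # step 1, before velocity analysis: remove surface irregularities
    step1 = 2000.0 * (elev - floating) / v_weathering
    # step 2, after moveout and stack: floating datum to final flat datum
    step2 = 2000.0 * (floating - final_datum) / v_subweathering
    return step1, step2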

The more serious problem is that of velocity anomalies in the near-surface, the consequences of which are clear from Figure 1 ,

Page 62: Seismic processing - velocities

Figure 1

Figure 2 , Figure 3 , and Figure 4 .

Page 63: Seismic processing - velocities

Figure 2

Page 64: Seismic processing - velocities

Figure 3

Page 65: Seismic processing - velocities

Figure 4

In Figure 1 we postulate a local thickening of the low-velocity near-surface layer; the near-offset travel paths go through the anomaly, but the far-offset paths do not. The travel-time curve for this model is not a simple hyperbola ( Figure 2 ), and the best-fit curve corresponds to a much higher velocity than is appropriate.

For a midpoint away from the anomaly ( Figure 3 ), the near-offset paths do not go through the anomaly, but the far-offset paths do. In this case, the inferred velocity is much lower than it should be ( Figure 4 ).

The lateral velocity variation arising from this sort of anomaly is shown in Figure 5 .

Figure 5

The effect extends a distance equal to the far offset plus the width of the anomaly; it also becomes more pronounced at depth. This makes sense, since a given time shift has more impact on a measurement of small moveout than on one of large moveout. Finally, we see in Figure 6 (The depth of the reflector is 4500 m, and the average velocity below the anomaly is 3150 m/s) that the velocity error is related to the ratio between the width of the anomaly and the spread length; it is greatest when the ratio is about 0.6.

Page 66: Seismic processing - velocities

Figure 6

First appearances to the contrary, we cannot solve these problems with a simple correction to a floating datum. In Figure 7 we see the travel-time curves and best-fit hyperbolas after three choices of datum.

Page 67: Seismic processing - velocities

Figure 7

Curve A represents a datum 150 m below the surface, and is a simple removal of the low-velocity layer, but not the anomaly. The best-fit hyperbola is the same hyperbola as the one in Figure 2 ; at a smaller zero-offset time, it corresponds to an even higher velocity.

Curve B represents a removal of 300 m of the near-surface. This includes all of the anomaly, but it also includes some of the higher-velocity underlying material. The travel-time curve is now hyperbolic, and the velocity is much closer to the actual velocity. Indeed, with an error of only 7%, this velocity may be an acceptable stacking velocity in some cases. But this is the best we can do with simple static shifts. A deeper datum (Curve C) increases the velocity error.

Naturally, near-surface velocity anomalies also occur at sea. Marine lines often exhibit erosional channels; depending on the nature of the fill, the underlying reflections may be pulled up ( Figure 8 ) or pulled down ( Figure 9 ) by the anomaly.

Page 68: Seismic processing - velocities

Figure 9

Page 69: Seismic processing - velocities

Figure 8

The proper correction for situations of this sort is necessarily dynamic, and requires a suitable model of the near-surface. Such a model may be inferred from an analysis of several offset panels, the theory being that the panels differ in the presence of anomalies.

One of the more important dynamic correction methods is that of Wave Equation Datuming (Berryhill, 1986). This technique is described under the heading "Layer Replacement Techniques", which is part of the Static Corrections topic within the Seismic Processing Series.

Some additional thoughts on layer replacement come from Berryhill (1986) in his discussion of wave-equation datuming techniques; although his is not a ray-tracing method, these considerations still apply.

1. It is not always enough simply to digitize the sea-floor reflection. Sometimes, that reflection comes from low-velocity sediments deposited by recent slumping; a canyon bottom that is not V-shaped may indicate such a reflection. In that case, we digitize not the sea floor but the buried velocity-contrast surface.

2. The choice of a replacement velocity is a matter of interpretation, the judgment being that some particular reflector should appear unperturbed as it passes under the canyon. Naturally, this means that replacement velocity can be so determined only for strike or near-strike lines.

3. Velocities determined from refraction analysis are generally useful only as lower limits of replacement velocity.

4. The reflections that benefit most in the velocity analysis are those from deeper horizons.

5. For marine data, we have to remember to account for the changed value of water velocity.

Sometimes the purpose of a velocity analysis is the calculation of residual normal moveout (rnmo) on a section to which nmo corrections have already been applied. We do this if we feel that the previous corrections are in error. In such a case, we must know what datum was used for that previous velocity analysis. It is that datum, after all, which defines the initial zero-offset time, which in turn defines the initial moveout corrections applied. The rnmo is then added to (or subtracted from) the initial nmo to define the final nmo corrections. The zero-offset time then helps to define the stacking function.

Zero-Time Corrections

The most important of the pre-velocity-analysis steps has the effect of a combined static-dynamic correction. Only by establishing the position of zero time can we expect to have confidence in the timing of our velocity picks. In general, the zero-time correction stands as good practice anyway; in particular, we would not want to infer an interval velocity without it.

From the point of view of the instrumentation, the time origin of the seismic record is either the command-to-shoot or the start of the multiplexing. If we were to record the actual source pulse — filtered the same as the reflection pulse — it would appear as at the top of Figure 1 .

Page 70: Seismic processing - velocities

Figure 1

Instrumental zero time, therefore, is at the onset of the pulse; the correct zero time, however, consistent with our picking scheme, is at the maximum of the envelope.

All effects considered, the time from the command-to-shoot to the maximum of the envelope of the source pulse is 20-60 ms. This is enough to cause substantial errors in the determination of shallow velocities. The method of zero-time correction proceeds as follows.

• Obviously, signature deconvolution is required. Where we have recorded the source signature, this is easy. Where we have not, we may have to use a statistical pulse-compression technique, estimating the pulse shape from, for instance, the water-bottom reflection.

• If the source pulse has not been recorded, and the reflection pulses are inscrutable, we have to choose other means. At sea, our correction may be that which brings the interval velocity of the water layer to a value that is both reasonable (bearing in mind the temperature and the salinity) and independent of the water depth. Alternatively, we may estimate the correction as that time which forces the simple water-bottom multiple (at zero offset) to occur at exactly twice the time of the primary (a small worked example follows this list).

• For marine data plagued with multiples, a good dereverberation is required.

• On land, the Vibroseis sweep and the air-gun pops are our source signatures. Vibroseis correlation has the added benefit of dephasing the instruments automatically, provided that the sweep used in correlation has passed through the same instrumental filters as the data.

Page 71: Seismic processing - velocities

• It follows that the minimum zero-time correction for land data is effected by instrument dephasing.
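Here is the small worked example promised above, for the water-bottom-multiple option. If the record's time origin is late by some amount c, both picks carry that error, and forcing the multiple to twice the primary time solves for c; the two pick times here are invented:

t_primary = 0.412    # s, recorded water-bottom primary (invented pick)
t_multiple = 0.796   # s, recorded simple water-bottom multiple (invented pick)

# both recorded times are too large by the zero-time error c;
# requiring (t_multiple - c) = 2 * (t_primary - c) gives:
c = 2.0 * t_primary - t_multiple
print(f"zero-time correction: {1000 * c:.0f} ms")   # 28 ms

The 28 ms result sits comfortably within the 20-60 ms range quoted earlier.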

Consistent with the above, we must remember that the first goal of pre-velocity analysis processing is a good signal-to-noise ratio, even if we achieve it at the expense of good resolution shallow. Velocity analysis is not a mainstream operation, in that it does not alter the data. It is like a filter test, or a decon test, the results of which are brought to bear on the data. Therefore, we should give the velocity analysis every advantage we can, but we should not spend too much time deconvolving the data. The pulse-shaping steps above should be sufficient.

If we are confident that we know where zero time is before we go into the velocity analysis, we shall emerge with both a good stack and a good chance of deriving accurate interval velocities.

Locations

The problem of where to locate a velocity analysis gives us our first chance to decide the appropriate level of detail. If our concern is to provide very fine detail over part of a line, or an entire short line, for the ultimate purpose of investigating a particular feature, then a constant-velocity display is desirable.

We are likely to give more thought to analysis locations on a reconnaissance or semidetail line, since all considerations require a few well-placed analyses. Unless the geology (as seen on a brute stack or near-trace section) is uniform, we generally cannot consider a uniform spacing between analysis locations. The following decisions are based on the model of Figure 1 .

Figure 1

Page 72: Seismic processing - velocities

• Our first thought must be the signal-to-noise ratio. We avoid zones of poor signal-to-noise — unless that is all we have — because whatever is responsible for it is likely to harm the velocity analysis as well.

• With that said, the first rule in making a processing test is to make it in a place that is representative of the exploration problem. In Figure 1 , therefore, the first locations that come to mind are the crests and troughs of the folded area at the west end of the line. Here, the layering is at least locally horizontal and uniform, so the stacking velocities more closely approximate root-mean-square velocities. We bear in mind, however, that whereas both layers are horizontal at location 1, this is not the case at locations 4 and 5. The analysis is likely to yield spurious velocities for the first layer at location 4, and certainly for the second layer at location 5.

• Similarly, locations 6 and 7 are appropriate for the first and second layers, respectively.

• Because of the steep dip on the flank of the fold, we suspect that the proper stacking velocity is higher at locations 2 and 3 than a strict interpolation between locations 1 and 4 (or between 1 and 5) would suggest. For this reason, these are also appropriate sites for a velocity analysis — as long as we do not try to attach any geologic significance to the derived velocities.

• Where there are long, discontinuous reflectors at depth, as at locations 3 and 9, we take advantage of the local improvement in the signal-to-noise ratio and position velocity analyses there. Even if the velocities are spurious at the upper levels, these locations are useful for the deep data, and we are grateful for them.

• We do not position a velocity analysis over faults or obvious near-surface anomalies (except when we are actually trying to stack a fault-plane reflection). We get as close as the raypaths allow, however, bearing in mind our choice of mutes. Therefore, before using location 6 for a velocity determination of the second layer, we check the position of the fault zone relative to the spread and the mutes. The same considerations apply at locations 7 and 8; we are especially careful not to include raypaths recorded at geophone groups above the anomaly.

• We avoid making velocity judgments at levels of obvious interference. Therefore, locations 8 and 10 are useful in clarifying the nature of the unconformity at the east end of the line; location 9, however, is included solely for the deep data.

• Our final consideration regarding the model of Figure 1 may be the most critical, and that is an understanding of the exploration problem. It is true that velocity analysis is a processing task, and all processors must learn how to make the above judgments. But the final arbiter is the interpreter, and he has every right to question velocity-analysis locations he believes to be poor choices. After all, the processor has little more than a brute stack on which to base his decisions. The interpreter, on the other hand, may have a firmer grasp on the regional and local geology, and it is his responsibility, not his option, to bring this information to bear on the problem. The processor, in turn, must accept these judgments unless he has sound physical reasons to believe they are in error.

Consistent with all of the above, we must recognize that the question of where to put a velocity analysis is actually a matter of structural sampling. We understand this in the context of the fold at the left of Figure 1 ; a straight interpolation between the crest and the trough would be quite wrong. Furthermore, we recognize that an interpolation from the inflection point to the crest or the trough would also introduce some errors.

Page 73: Seismic processing - velocities

The matter of structural sampling reminds us that the most severe velocity variations arise from anomalies in the near surface. Further, we note that the common-midpoint triangle turns any abrupt velocity variation into a smooth, serpentine one. In effect, it acts as a high-cut filter on the horizontal velocity changes in the earth. Therefore, variations such as that of Figure 2 impose on us a maximum velocity-sampling distance of half the spread length. Where structural and stratigraphic features are small, we may require a velocity analysis every quarter-spread length.

Figure 2

Windows

The analysis window, or time gate, is the time range over which semblance is computed. As is the case with many processing variables, the length of the window depends greatly on the frequencies present in the signal.

We find that a sensible window length is equal to 1.5 times the period of the average reflection pulse. More than this, and we run the risk of a multiple falling within the same time window as a primary reflection. Less, and the signal-to-noise ratio may be degraded. Where the interval velocity of a thin bed is a goal, the window should be narrow enough to include either the top or the bottom of the layer, but not both.

Consistent with the above points, we must recognize that in an area of excellent signal-to-noise ratios and an important thin layer, the analysis window may well be less than 20 ms. In view of the objective, this is acceptable. We may minimize the time and cost of such detailed work by running it over a short length of line or a limited time range.

Page 74: Seismic processing - velocities

In general, therefore, the analysis window should be 20-80 ms, with 48 ms being a good choice for a general-purpose velocity analysis. The window increment, which governs the amount of overlap between successive windows, is optimally half the length of the window itself.
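A minimal sketch of semblance computed in such sliding windows over an nmo-corrected gather; the 32 ms dominant period is an assumption, chosen so that 1.5 times the period gives the 48 ms window quoted above:

import numpy as np

def semblance_windows(gather, dt, dominant_period=0.032):
    # gather: nmo-corrected, shape (ntraces, nsamples); dt in s
    # window = 1.5 x dominant period (48 ms here); increment = half window
    win = max(2, int(round(1.5 * dominant_period / dt)))
    step = max(1, win // 2)
    n_tr, n_samp = gather.shape
    times, semb = [], []
    for i0 in range(0, n_samp - win, step):
        w = gather[:, i0:i0 + win]
        num = np.sum(np.sum(w, axis=0) ** 2)     # energy of the stacked trace
        den = n_tr * np.sum(w ** 2) + 1e-12      # sum of the trace energies
        times.append((i0 + win / 2.0) * dt)
        semb.append(num / den)
    return np.array(times), np.array(semb)

Semblance here is the classic ratio of stacked-trace energy to the summed energies of the input traces, so it ranges from near zero (incoherent) to one (perfectly coherent).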

Input Traces

One way to improve the signal-to-noise ratio of an analysis gather is to add traces. It is common practice, therefore, to add several gathers — all the trace 1s, all the trace 2s, all the trace 3s, etc. — representing adjacent midpoints, to form one analysis gather.

If the geology is smooth and regular and there is no dip, we may try for the best possible improvement in signal-to-noise and add up to 11 gathers. If we wish to add more, we must balance the incremental gains in the signal-to-noise ratio against the probability of lateral variations in velocity (which variations would be averaged by the summing).

Where there is dip, we add fewer gathers — perhaps seven — but now we apply an f-k filter to the ensemble of trace 1s (and then to the ensemble of trace 2s, then trace 3s, etc.) before adding the gathers. We set the f-k filter to pass a range of dips, perhaps ± 2 ms/midpoint. Figure 1

Figure 1

and Figure 2 show the improvements in the velocity analysis obtainable with this sort of dip scan.

Page 75: Seismic processing - velocities

Figure 2
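A simplified sketch of such a dip-pass f-k filter applied to an ensemble of common-channel traces follows; the pass fan here is untapered, which a production implementation would smooth to avoid ringing:

import numpy as np

def fk_dip_pass(ensemble, dt, dip_max=0.002):
    # ensemble: (ntraces, nsamples), traces ordered by midpoint; dt in s
    # pass only events whose dip is within +/- dip_max seconds per trace
    # (0.002 s corresponds to the 2 ms/midpoint of the text)
    spec = np.fft.fft2(ensemble)
    k = np.fft.fftfreq(ensemble.shape[0])           # cycles per trace
    f = np.fft.fftfreq(ensemble.shape[1], d=dt)     # Hz
    K, F = np.meshgrid(k, f, indexing="ij")
    fan = np.abs(K) <= np.abs(F) * dip_max          # fan around zero dip
    return np.real(np.fft.ifft2(spec * fan))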

Input Velocities

The choice of input velocities is naturally the most important variable in a velocity analysis. We should bring to bear on this choice all that we know about the prospect area, whether from nearby wells or previous lines. All our early efforts come to no good if we do not anticipate the effects of a high-speed layer of chalk, and thus neglect to tell the program to scan for higher stacking velocities. Such an omission is especially noticeable shallow, where an anomalous layer has greater effect. Where there is still some doubt, we set the velocity range wide for at least the first few — and representative — lines of the prospect, and thenceforth let experience guide us.

Depending on the velocity-analysis program, there are several ways to supply the trial velocities. They all depend, however, on a first guess of the appropriate stacking velocities; this guess constitutes the central stacking function. From there, we have a choice. The other input functions may differ from the central function in incremental values of velocity or of normal moveout (as specified at some offset).

As we might expect, the increments are important. In Figure 1 we see the moveout pattern corresponding to a reflector 1800 m deep underlying a layer with a velocity of 1800 m/s. The zero-offset time is 2.0 s, and the travel time at the far offset of 1500 m is 2.167 s.

Page 76: Seismic processing - velocities

Figure 1

We consider a reflection frequency component of 50 Hz, having a period of 20 ms.

Now we consider nmo corrections appropriate to velocity choices of 1745 m/s ( Figure 2 ) and 1856 m/s ( Figure 2 ).

Page 77: Seismic processing - velocities

Figure 2

These velocity choices cause the far-offset traces, after correction, to be one-half period out of phase relative to the correct choice of velocity. Summing all the traces along the zero-offset time, we find that, compared to the case of perfect correction, the amplitude of the stacked trace is reduced in both cases by about 11 dB.

(In this example we have chosen to disregard the effects of normal-moveout stretch. At this velocity and depth, the effect — less than an 8% reduction in frequency — and its contribution to the decrease in stack amplitude are minor.)

Obviously, this is more than adequate. Therefore, our criterion is as follows: If V0 is the velocity according to the central function, then the next higher velocity must produce a relative undercorrection equal to a half-period at three-quarters of the far offset. The next lower velocity must produce a relative overcorrection of a half-period at that offset. In our example, if V0 is 1800 m/s, the adjacent velocities are 1714 m/s and 1909 m/s.
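The criterion is easy to program. This sketch inverts the hyperbolic travel-time expression for velocity at three-quarters of the far offset; with the example values it reproduces velocities close to the 1714 m/s and 1909 m/s quoted above (the small difference is rounding):

import numpy as np

def adjacent_velocities(v0, t0, x_far, period):
    # half-period criterion: the next trial velocities above and below v0
    # should mis-correct the moveout by half a period at 0.75 x far offset
    x = 0.75 * x_far
    t_true = np.sqrt(t0**2 + (x / v0)**2)
    half = period / 2.0
    def v_of(t_target):
        # invert t_target = sqrt(t0^2 + (x/v)^2) for v
        return x / np.sqrt(t_target**2 - t0**2)
    return v_of(t_true + half), v_of(t_true - half)   # (lower, higher)

print(adjacent_velocities(1800.0, 2.0, 1500.0, 0.020))  # about (1710, 1905) m/s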

For deeper data, the offsets are generally greater and the periods usually longer. Thus, we can use larger velocity increments. For shallower data, on the other hand, we must anticipate the nmo stretch; any calculation of this sort should therefore account for the expected mute. If the mute reduces the range of offsets by half, to 750 m in our example, then the half-period criterion applies at about 560 m. (As we see in Figure 3 , however, a severe mute and unchanging frequency requirements can mean that the velocity increments get larger shallow.)

Page 78: Seismic processing - velocities

Figure 3

The velocity range must encompass all the velocities we expect in the data area, including the near-surface. At sea, of course, the water velocity is the low end of the range.

To determine the high end of the velocity range, we remember that the proper place to pick a semblance contour is at its peak. To locate that peak unambiguously requires closed contours ( Figure 4 , What is the pick at 2.7s?), so the velocity range must ensure that the contours do close.

Page 79: Seismic processing - velocities

Figure 4

This cannot be predicted in advance; rather, it takes some experience, and at least one previous velocity analysis.

Continuous Velocity Analysis

We have said that a detail survey generally requires a velocity analysis at every midpoint; this is known as a continuous velocity analysis. Naturally, such an effort is bound to be long and expensive, particularly if the feature we wish to delineate is large or has many lines across it. We can mitigate this by making preliminary picks ourselves, thereby narrowing the computer's choice of options. One sensible step is to ask the interpreter to point out the important horizons, and to limit the velocity analysis to that time range.

By the time we require this much detail, we also have an idea of the regional dip and the curvature of the reflection. We are therefore justified, economically and in principle, in applying an f-k filter (on a record basis) to reject interfering noise trains of spurious dip.

Picking and Verifying the Velocities

To begin, we may set forth some general thoughts about picking velocities:

• The correct place to pick the semblance contour is at its peak, provided we have made the proper static and zero-time corrections.

• Stacking velocities in zones of steep dip — on the flanks of a fold, for example — have little geologic significance.

Page 80: Seismic processing - velocities

• If the inferred stacking velocity decreases, then the Dix-derived interval velocities may not be realistic (a short numerical check follows this list).

• Low stacking velocities deep in the section are probably appropriate to multiples.

• High stacking velocities shallow in the section, if they are not due to dip, may correspond to reflections from a fault plane.
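The numerical check mentioned above: the Dix equation gives the interval velocity between two successive picks, and a decrease in stacking velocity can drive the radicand negative. A minimal sketch:

import numpy as np

def dix_interval_velocities(v_stack, t0):
    # Dix equation between successive picks:
    # v_int^2 = (v2^2 t2 - v1^2 t1) / (t2 - t1)
    v, t = np.asarray(v_stack, float), np.asarray(t0, float)
    radicand = (v[1:]**2 * t[1:] - v[:-1]**2 * t[:-1]) / (t[1:] - t[:-1])
    # a negative radicand is the "not realistic" case flagged above
    return np.where(radicand > 0, np.sqrt(np.abs(radicand)), np.nan)

# a decreasing stacking velocity can make the radicand negative:
print(dix_interval_velocities([2000.0, 1900.0], [1.0, 1.1]))   # [nan]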

This all seems tidy enough. Why do we not just write these judgments into the velocity-analysis programs, and let the computer do all the picking? Fortunately (at least for those of us who enjoy exercises in interpretation), there is more to velocity picking than these mechanical aspects. Above all, the picking of velocity-analysis displays is as much a matter of geologic interpretation as it is of geophysical interpretation.

With that in mind, we need to develop a picking scheme, one that allows us to develop in our mind's eye a picture of the geology as we proceed.

1. Refer to the seismic section. Velocity-analysis displays must never be picked in isolation from the section, even if only a near-trace section.

As we proceed, we recognize very quickly that the times we are accustomed to picking on the section may not coincide with the times of the semblance peaks. Figure 1 shows why this is so.

Figure 1

Page 81: Seismic processing - velocities

It reminds us that the semblance peak corresponds to the maximum of the pulse envelope. In picking a modern section, however, the interpreter picks the peak or trough that, over the prospect generally, lies closest to the envelope maximum, and then stays with this particular peak or trough. Local changes of layer thickness, and the resulting changes in interference, inevitably cause the followed peak or trough to deviate locally from the envelope maximum, and so from the semblance peak.

2. Make the easy picks first. Interpreters follow this rule, and it is equally relevant to velocity analysis, both across the line and over the prospect. Figure 2 (Three seismic lines superposed on a block diagram of the subsurface geology. We expect the fewest velocity complications on the strike line) represents the regional geology of a prospect; the lines over it are either strike lines, dip lines, or oblique lines. In turn, the strike lines are either over the crest of the structure (A in Figure 3 ), off its flanks (B), or off the structure itself (C). By now we know that the fewest velocity complications are likely to be on lines A and C. Given a choice, we pick the velocity analyses on these lines first.

Figure 2

Page 82: Seismic processing - velocities

Figure 3

As we pick the analyses and refer to the sections, certain signatures start to become familiar ( Figure 4 ).

Page 83: Seismic processing - velocities

Figure 4

First, of course, is the strong bull's-eye associated with a good strong pick. Less common are the stretched contours of Figure 4 (part b); characteristic of interference, this is not a good place to pick. Sometimes, we also see the pattern of Figure 4 (part c). The strong event, recognizable by its low velocity, is a multiple; the higher-velocity "event" is an alias of the multiple — it arises from the situation of Figure 5 (The gather has been corrected according to some primary velocity. The multiple at time Tm is undercorrected, but adding the traces along Tm yields a local semblance maximum) — and is therefore not a legitimate pick.

Page 84: Seismic processing - velocities

Figure 5

3. Pick in groups of three. It is easier to interpret a velocity analysis in the context of the two on either side of it (provided, of course, that they are fairly close together and that there is no intervening structural change, such as a fault or a salt dome). In Figure 6 , Figure 7 , and Figure 8 , for example, the left and right panels imply the same stacking function. The middle panel, however, is markedly different, the second pick implying an anomalously high interval velocity.

Page 85: Seismic processing - velocities

Figure 6

Page 86: Seismic processing - velocities

Figure 7

Figure 8

Our first course is to check the near-trace or brute-stack section. We need a picture of the geologic continuity across these three analysis locations.

So we ask, Does this velocity increase make geologic sense? Does the character or the amplitude of the horizon change at this location? If so, then the pick may be correct. Does the shallow section show evidence of a near-surface anomaly? If so, we may need to do another statics pass, this time incorporating a dynamic model for the near-surface. Alternatively, we may find that this analysis location is itself inappropriate.

Once we are satisfied that the pick is plausible and not a processing artifact, we check the next layer. How does the interval velocity between the second and third picks change from left to right? If it does not, then it is likely that the high-velocity pick is both real and meaningful.

Page 87: Seismic processing - velocities

This leads to another important message: all picks must be made with due regard for the underlying layers, because every layer affects the stacking velocity of the reflectors beneath it. If we can find a good reason for doing so (and parts (b) and (c) in Figure 4 are two such reasons), then we may — and sometimes we must — disregard the pick. But we never move a pick, and we never pick open contours.

Working in threes is especially important when — as with a gas-saturated sand at moderate depths — a low-velocity anomaly is present. In Figure 9 , Figure 10 , and Figure 11 we have the entirely plausible case in which the interval velocity is low enough to cause an inversion in the stacking velocity.

Figure 10

Page 88: Seismic processing - velocities

Figure 9

Page 89: Seismic processing - velocities

Figure 11

With just a brute stack and a velocity analysis to go on, however, we cannot know that the inversion is not due to water-layer reverberation or some other multiple.

To improve our odds of making the right choice, we make sure that the dereverberation operator we applied has worked well. The key is to look for a train of water multiples curving downward and to the left from the accepted pick (in the manner shown in Figure 10 ). Of course, we also check the interval velocity represented by the inversion. If all is well, we move to the interval velocity of the underlying layer. If it is the same on all three panels, then we are fairly certain that the pick is proper.

Naturally, the method is weakened if the underlying layer is thick. Because of wavefront healing over the dimensions of the layer, the stacking velocity at the base of it may remain substantially the same over the three panels. The result is that the inferred interval velocity for that layer does change, rendering meaningless any conclusions we might draw from it.

Interval velocity is weakened as a diagnostic if the body having the anomalous interval velocity is of small lateral extent. Examples are small reefs, or erosional channels approximately aligned with the seismic line. In such cases, the wavefront bends around the body, and the stacking velocity for a deep reflector may not be influenced by the velocity of the reef or channel.

4. Pick high? Some processors follow a general rule of picking velocity analyses on the high side of the indicated maxima. While this introduces a small loss of stacked-primary amplitude (a loss that can be estimated from the contours themselves), the reasoning is that it introduces a greater loss on the multiples.

Page 90: Seismic processing - velocities

Like most generalizations, this rule should not be applied without regard to the specific circumstances. If it is known (or obvious from the analysis) that the area is plagued with multiples, then clearly it makes good sense. But if the multiples are negligible, there is no point in taking the loss of the primaries. The practice also imposes a slight error on the interval velocities (which may or may not be significant).

A case in point might be a section in which interesting structures are separated by a broad, flat low. In general, the multiples are worse in such a flat-lying area. Then we may choose to pick high over this flat area, but to honor the maxima over the structures.

From all of this, we see that the operation of picking velocity analyses includes many judgments. First, we are making a judgment as to how much of the observed velocity variation is introduced by the geology, and how much by the seismic dependence on dip, statics, and complications of the raypath. Second, we are making a judgment whether to honor the variations from these latter sources; if they are very local, interpolation between widely spaced analyses may cause a broader degradation of the stack than ignoring them. Third, we are making judgments about the reasonableness of the geologic component of the variation.

These are not easy judgments. They are difficult enough on a final stacked section; they can be extremely difficult through the dark glass of a near-trace section.

The judgment that an observed velocity variation is not geologically reasonable identifies the problem as being seismic. To some extent, we can then search for the cause, and either discard picks or make intelligent repicks. Sometimes, however, we are left shrugging our shoulders, and muttering that it must be the statics. And often, when the observed velocity variation is just plausible geologically, we have no way of telling whether it is indeed geological . . . or seismic.

In traditional seismic processing, therefore, the operation of picking velocities is one requiring considerable geological knowledge and considerable seismic knowledge. It is itself an interpretation constraining the choice of interpretations later available to the interpreter. This has been a major worry in exploration, contributing significantly to the exploration risk. Today, the concern is reduced by the provision of interactive processing for the processor, and a measure of interactive reprocessing for the interpreter.

Let us look for a moment at the geologic reasonableness of a velocity function.

• First, we know that the general fact of compaction must make the function concave to the bottom left, with the major change concentrated in the shallow section ( Figure 12 ).

Page 91: Seismic processing - velocities

Figure 12

• The velocity function as a whole is dominated by the thick layers.

• Therefore, the general form of a geologically plausible velocity distribution cannot change abruptly, except perhaps at a major fault.

• A thick carbonate section has a higher velocity than a thick clastic section.

• The only evaporite likely to occur in thick layers is salt, which has a fairly high (and predictable) velocity.

• Therefore, a velocity distribution that shows a sustained inversion is plausible only if a thick clastic section is overlain by a thick carbonate section (or possibly by basalt or salt).

• A velocity distribution that shows a local inversion is plausible only if the section shows a layer whose depositional setting or rock condition could be compatible with a low velocity, and if the computed interval velocity is also reasonable. If reflection polarity is clear, the inversion also demands a negative polarity.

• A velocity distribution that is of the usual form down to some level, and that shows little increase thereafter, suggests overpressure below that level. Within limits, the degree of overpressure may be computable from the observed interval velocity.

• Where a major geologic feature is recognizable on the seismic section, the time-average equation gives some guidance as to the range of plausible velocities. Thus, a porous reef that has become totally cemented is likely to approach the grain velocity of limestone (about 6700 m/s or 22,000 ft/s); the maximum plausible water-filled porosity, on the other hand, would suggest a velocity of 3030 m/s (or 9930 ft/s). In a gas-prone province, a high porosity could produce a significant further depression. (A numerical check of these figures follows this list.)

Page 92: Seismic processing - velocities

• At shallow to medium depths, the interval velocity of a shale or limestone layer is expected to increase downdip; in a sandstone the effect may be negligible.

• Rocks that are brittle enough to fault cleanly are likely to be of fairly high velocity. Such rocks (and only such rocks) may show large depressions of velocity in structural situations likely to produce extensional fracturing.

• In general, a major vertical change of interval velocity is not admissible unless it generates a strong reflection. The exception is salt, because of its low density. Salt is usually recognizable on the section by its reflection-free interior, or by pillowing.

• A section full of strong reflections is unlikely to have a velocity distribution that is high shallow in the section.
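The numerical check promised above uses the time-average equation. The fluid and matrix velocities are nominal values for water and limestone grains, and the 35% porosity is back-computed here to match the quoted 3030 m/s:

def time_average_velocity(phi, v_fluid=1500.0, v_matrix=6700.0):
    # time-average equation: 1/V = phi/v_fluid + (1 - phi)/v_matrix
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

print(time_average_velocity(0.00))   # fully cemented reef: 6700 m/s
print(time_average_velocity(0.35))   # about 3030 m/s, water-filled porosity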

Now let us look at the seismic reasonableness of a velocity function.

• Any abrupt change of the velocity function as a whole suggests a seismic cause.

• The observation of significant dip on the section should produce a calculable increase of velocity on the velocity analyses (a sketch of the standard relation follows this list). The picks should honor the high values for stacking purposes, but should be ignored for interval-velocity purposes.

• A Dix-derived interval velocity is meaningless unless the top and base of the layer are approximately parallel. There are things that can be done about this, but the solution is too complex for incorporation in routine velocity analysis.

• Particular velocity analyses showing uniformly high velocities or uniformly low velocities suggest statics problems, particularly if a high-velocity analysis is a spread length away from a low-velocity analysis. These variations are difficult to eliminate; for stacking purposes, they should be picked at face value.

• In effect, all lateral changes of velocity in the earth are filtered by the geometry of the common-midpoint gather. The lateral extent of this filter is the spread length (modified by the mute) in the serious case of the weathered layer, but proportionally less for deeper changes.

• Strange effects are inevitable near faults, particularly where an inclined fault plane happens to coincide with one side of the common-midpoint triangle. Again, the rule must be to honor the picks for stacking purposes, provided that the analyses are sufficiently close to remove interpolation problems; such picks have no value for interval-velocity purposes.
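The calculable increase mentioned in the dip point above comes, for a plane dipping reflector and small spreads, from the standard relation v_stack = v/cos(dip); a minimal sketch, with an assumed 3000 m/s medium velocity:

import numpy as np

def dip_stacking_velocity(v_medium, dip_deg):
    # small-spread stacking velocity over a plane dipping reflector
    return v_medium / np.cos(np.radians(dip_deg))

for dip in (5.0, 15.0, 30.0):
    print(f"{dip:.0f} deg: {dip_stacking_velocity(3000.0, dip):.0f} m/s")
# prints about 3011, 3106, and 3464 m/s respectively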

From all of this, we see again the need for adequate velocity analyses, having regard to the spread length and the factors of geologic change. Provided we have sufficient sampling, the broad rule is to honor the picks for stacking purposes. But for the calculation of meaningful interval velocities, we are better advised to select only those analyses that are seen (from the section) to have no seismic complications, and to ignore all others.

Page 93: Seismic processing - velocities

In routine processing of land data, we often have to process one line at a time. Inevitably, we find later that our velocity picks do not tie at the line intersections. Should they?

Obviously, the geologic component of the velocity variation should tie. But the seismic component often does not, and this can be a powerful tool in distinguishing the two components. The contribution of dip to the mis-tie at line intersections is calculable, and so can be checked for reasonableness and magnitude. The contribution of statics and common-midpoint geometry remains; basically, the amount of the discordance, after allowance for dip, is a measure of this contribution.

The geologic reasonableness of interval velocities is also strengthened, as a criterion, when we consider the whole prospect rather than one line. We can map the interval velocities, and check their variations against depositional and structural variations in three dimensions.

We have now identified many of the thoughts that are in the mind of the processor as he picks velocities. Because of the complex interrelation of all these factors, we acknowledge that many picks will be wrong. We consider now what can be done to catch these wrong picks before we go on to stacking and subsequent processing.

One powerful tool is to play out the velocity-analysis gather with the nmo corrections corresponding to the picks made. These displays are generally available during the picking process. The primary reflections should be perfectly aligned; the multiples should be undercorrected ( Figure 13 ). Usually, such displays are supplemented by displays of the interval velocities implied by the picks, and these can be overlaid on the section to check their reasonableness.

Figure 13

Page 94: Seismic processing - velocities

A second tool, developed from this, is a contoured display of both stacking velocities and interval velocities along the line ( Figure 14 , A contour plot of the stacking velocities; Figure 15 , A contour plot of the interval velocities inferred in Figure 14 ; and Figure 16 , The seismic section, showing that the interval-velocity pattern arises from a reef structure).

Figure 14

Page 95: Seismic processing - velocities

Figure 15

Page 96: Seismic processing - velocities

Figure 16

From these we can assess whether the observed variations are compatible with the geologic agencies and the seismic agencies evident on the section, and edit the velocities on the display.

The most powerful tool of all is to play out, in sectional form, every gather along the line, corrected with the picked velocities and the selected interpolations between them. The results are often startling. Abrupt changes from overcorrection to undercorrection occur within a spread length. Patterns of incorrect statics can be seen rolling through the gathers, and can often be associated with the elevation or sea-floor profile. And the clearest possible visual distinction is obtained between geological and seismic variations. But where the problems are not seen to exist, we have the best possible indication that the final stack is correct, and that its detailed amplitude and character changes can be trusted.

Such a display, if made at normal scale, would be longer than the stacked section by a factor equal to the fold of stack. This is unmanageable. The display is made both more acceptable and more useful by using very narrow traces, so that the result has about the same proportions as the stacked section. Because these displays are so narrow, they are best plotted in variable-density mode. Such displays are called microstacks ( Figure 17 , We have confidence that event A is properly corrected. Event B, however, shows an overcorrection at SP212, grading to an undercorrection at SP230. Obviously, interpolation between the velocity analyses at SPs 206, 226, and 246 was not correct for this event.), or sometimes repeated-incidence (RI) displays. The technique, while perhaps expensive for routine processing, is of great value for all detail work.

Page 97: Seismic processing - velocities

Figure 17

An alternative presentation requires a color plotter, but is more efficient for lines with high folds of stack. The method consists first of separating the seismic data into three offset ranges, and then forming subgathers for each range at each midpoint. All the subgathers are nmo-corrected, with one stacking function for the three subgathers at each midpoint, and each offset range is then stacked. The three resulting stacks — each in its own primary color — are finally superposed to form an image that resembles a conventional stacked section, but with colors delineating the offset ranges.
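A minimal sketch of this composition, assuming the gathers are already nmo-corrected with one stacking function per midpoint and that three equal offset ranges are acceptable:

import numpy as np

def colorstack(gathers, offsets, n_ranges=3):
    # gathers: (nmidpoints, ntraces, nsamples), already nmo-corrected
    # offsets: (ntraces,); split into near, mid, and far ranges
    edges = np.linspace(offsets.min(), offsets.max(), n_ranges + 1)
    stacks = []
    for r in range(n_ranges):
        sel = (offsets >= edges[r]) & (offsets <= edges[r + 1])
        stacks.append(gathers[:, sel, :].mean(axis=1))   # one partial stack
    rgb = np.stack(stacks, axis=-1)          # near, mid, far -> R, G, B
    return rgb / (np.abs(rgb).max() + 1e-12) # normalize for display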

The "Colorstack," like the microstack, is best implemented on an interactive work-station; the zoom control allows us to see at a glance the details of overcorrection or undercorrection, and the errors in static correction.

The work-station also removes the need for writing down and entering the velocity picks by hand; the picks can be made with the cursor on the screen. Each time the cursor location is input, the time-velocity coordinates are entered into a file for future reference.

The Problem of Statics

Refined velocity picks can aid automatic-statics programs in reaching convergent solutions. It is also true, however, that a good statics solution makes the velocity analysis better. This implies that the best way to make static and dynamic corrections is in an iterative loop.

Page 98: Seismic processing - velocities

Which processing step comes first? How many iterations around the loop? When do we decide that enough is enough? First, we assume that the field statics — and at sea, the water-depth corrections — are not an issue. That is, we hope we are left with only the anomalous arrival-time variations imposed by the near-surface.

The aim of automatic-statics corrections is to line up reflections from one trace to the next. Each trace can be regarded as having four spatial coordinates ( Figure 1 ).

Figure 1

Further, the arrival times along each of these four planes are affected by some combination of dip, velocity, source statics, and receiver statics ( Figure 1 ). After the removal of field statics, the largest source of time delay (from any trace to the one adjacent) is that caused by moveout.

Therefore, for statics programs to have a reasonable chance of working, some degree of dynamic correction has to be applied first.

The first step in the loop is therefore an estimate of the velocities, and, for present purposes, we do the best we can, as limited by the statics. In so doing, we reduce the delays imposed by velocity to the scale of residual normal-moveout (rnmo) corrections. Then, the programs are written such that rnmo is one of the unknowns; they thus provide us with a measure of how good or how poor our initial nmo-velocity estimates were.

With the automatic-statics corrections done, we can repeat the stage of velocity analysis. Figure 2 and Figure 3 compare velocity analyses computed before and after statics.

Page 99: Seismic processing - velocities

Figure 2

There is considerably less scatter of the picks; in particular, the picks at 2.1-2.5 seconds are seen to be significantly different.

Page 100: Seismic processing - velocities

Figure 3

More often than not, we find ourselves making at least one-and-a-half passes through this loop ( Figure 4 ).

Page 101: Seismic processing - velocities

Figure 4

Where conditions warrant, however, we are prepared to make two full passes — even three — before we are confident that we have done all that we can. The cost rises, so a balance must be struck — in consultation with the interpreter — between final quality and cost. An alternative approach to this process, which involves simultaneous estimation of statics and velocity, is described by Larner and Tjan (1995).

Harmonizing Stacking and Well-Survey Velocities

Stacking velocities in themselves have no geologic significance; their purpose is to correct an artifact of the seismic method. Still, it is useful to examine the geometric and nongeometric reasons for the discordance between stacking velocities and well-survey (or check-shot) velocities.

Geometric Causes of Discordance

In the following material we assume that the velocity survey is done with a source close to the wellhead, so that we need not concern ourselves here with oblique incidence.

• In light of the above assumption, the path of a velocity survey is essentially vertical, whereas the stacking velocities represent an average from a large triangle. In itself, this does not mean that the velocities should be different. We realize, however, that wells are drilled on anomalies; we are therefore not surprised that there are significant velocity variations in the vicinity of the borehole.

Page 102: Seismic processing - velocities

• Even a velocity analysis centered on the well can be expected to yield different results from a well survey. We remember that the velocity appropriate to correct the moveout is not an average velocity. We can derive average velocity from stacking velocity, of course, but it takes several steps to do so, and must take into account heterogeneity, static corrections, velocities inferred for overlying layers, dip, and refraction.

Having done all that, we may still emerge with an average velocity appropriate at a point some distance from the actual midpoint, especially if there is significant dip. In that case, we may need to use a gather centered away from the well to get an average velocity appropriate at the well. (And this is likely to change from one reflection to the next.)

• If we have drilled a well, it is because we think there are hydrocarbons at that location. Any free gas that may be present causes lateral velocity variations, which in turn cause errors in stacking velocities.

• In the real world, the travel-time curve is not an exact hyperbola. Earlier, however, we made the assumption that higher-order terms do not matter for usual offsets. This assumption carries over to our programming, and so the velocity analysis seeks a hyperbola that best fits the observations. We can check the seriousness of the problem by making analyses using different offset ranges of the gather. If the outer offsets yield much higher velocities than the inner offsets, then the assumptions have hurt us.

• Geometric causes of velocity discordance are not entirely the fault of the common-midpoint method. Velocity surveys are also subject to velocity lensing, blind spots, and dip and refraction effects.

• Finally, the velocities inferred from a common-midpoint analysis are actually horizontal velocities, whereas the check-shot measurements are of vertical velocities.  

Nongeometric Causes of Discordance

Mostly, these arise from differences in recording and processing schemes.

• First, of course, is the fact that the velocity-survey path is one-way, whereas the cmp path is two-way. Thus, the former is less affected by absorption, short-path multiples, and scattering. The check-shot pulse has therefore suffered less loss of the high frequencies, and its velocity must be slightly higher for this reason.

• It is probable that the velocity survey and the seismic line were shot by different crews (and certain that they were shot at different times). Thus, we may see the effects of different sources, receivers, amplifiers, filters, and sampling intervals.

• Measurements of stacking velocity require us to pick at the maximum of the pulse envelope. Check-shot times are usually picked at what is assumed to be the "onset" of the pulse, or at the first trough. This means that the "time" of an event on a seismic section may not be the same as the "time" of the same reflector on check-shot data.

PRESTACK PARTIAL MIGRATION (DMO)

Page 103: Seismic processing - velocities

In the presence of dip, the required stacking velocity is higher than it would otherwise be. This allows proper correction of the flatter moveout hyperbola, so that summing the traces along the zero-offset time yields the optimum stack amplitude.

The larger problem with dip is that the traces of the gather, which share a common midpoint, do not share a common depth point. The stack therefore smears the reflection data over this range of depth points ( Figure 1 ).

Figure 1

A further complication occurs when there is a change of dip along the reflector, as in Figure 2 .

Page 104: Seismic processing - velocities

Figure 2

Here, the zero-offset trace at midpoint M records two normal-incidence paths; the velocity analysis at M thus yields two equally valid stacking velocities, one for the flat event, and one for the dipping event. If we choose the velocity appropriate to one, we degrade the other.

We see the problem more acutely in Figure 3 , which represents a time section appropriate to the model of Figure 2 .


Figure 3

The dashed "events" are reflections recorded at zero offset; the solid lines are reflections recorded at the far offset. In the vicinity of the change of dip, we see that a given midpoint corresponds to two depth points. Clearly, we need to establish a one-to-one correspondence between midpoint and depth point. Just as clearly, we accomplish this by migrating the data.

Prestack migration is, of course, well established as the optimum technique for converting nonzero-offset data directly to the migrated image. Unfortunately, it is a computationally intensive — and expensive — process that requires detailed velocity information. Worse, it does not provide an unmigrated time section as an intermediate product. The computationally efficient alternative that we describe here is called prestack partial migration; it is also known as the dip-moveout (dmo) correction or offset continuation, depending on the details of the process. In this course we shall use the abbreviation dmo, by way of analogy with the well-established nmo.

The goal of the dmo correction is to get all the depth points in their proper lateral positions. We can then perform velocity analyses on real common-depth-point gathers, as suggested by the flow of Figure 4 .


Figure 4

After stack, a conventional time migration should yield a section equivalent to that obtained through prestack full migration.

Figure 3 suggests that we can perform the migration on suites of common-offset traces; common-offset sections reveal the geology without major complications from the normal moveout. To form these sections (if the field geometry is regular we may think of them as common-trace sections) we simply gather all the trace 1s to form one section, all the trace 2s to form another, and so on. Thus, a field program using 96-channel recording would yield 96 common-offset sections.
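A minimal sketch of this re-sorting, using a toy bookkeeping of traces keyed by shot and channel; all names and numbers here are invented for illustration.

```python
import numpy as np
from collections import defaultdict

# Toy bookkeeping: traces keyed by (shot, channel).  With regular field
# geometry, channel number stands in for offset, so gathering all the
# trace 1s, all the trace 2s, and so on builds the common-offset sections.
rng = np.random.default_rng(0)
n_shots, n_channels, n_samples = 8, 96, 500
traces = {(shot, ch): rng.standard_normal(n_samples)
          for shot in range(n_shots) for ch in range(n_channels)}

sections = defaultdict(list)
for (shot, ch), data in sorted(traces.items()):
    sections[ch].append(data)                # one section per channel

common_offset = {ch: np.column_stack(t) for ch, t in sections.items()}
print(len(common_offset), "sections;", common_offset[0].shape, "(samples, traces) each")
```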

We now face two problems. One has to do with the migration of long-offset common-offset sections; in general, this proves thorny if we have not yet applied any moveout corrections. The second problem is the detailed velocity information required by migration. If we must determine that information in detail before the dmo step, we have saved nothing; we may as well do a prestack full migration.

In addressing the first problem, we find that the lateral shift of depth points effected by the dmo process is fairly insensitive to errors in the normal-moveout correction. It matters little to the dmo process how accurate our nmo corrections are, or indeed whether we make them at all. We are therefore free to help the long-offset sections by applying an approximate nmo correction first.
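Since the dmo step tolerates an approximate correction, a crude constant-velocity nmo is often all we need at this stage. A minimal sketch; the velocity, geometry, and the single-velocity assumption are all illustrative.

```python
import numpy as np

def apply_nmo(gather, offsets, v, dt):
    """Approximate constant-velocity NMO correction.

    For each output time t0, read the input trace at
    t = sqrt(t0**2 + (x/v)**2) by linear interpolation.
    gather: (n_samples, n_traces); offsets in metres; v in m/s; dt in s.
    """
    n_samples, _ = gather.shape
    t0 = np.arange(n_samples) * dt
    corrected = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        t_src = np.sqrt(t0**2 + (x / v) ** 2)    # pre-correction event times
        corrected[:, j] = np.interp(t_src, t0, gather[:, j], left=0.0, right=0.0)
    return corrected

# Toy gather: a spike on a 1 s hyperbola at three offsets.
dt, v = 0.004, 2000.0
offsets = np.array([100.0, 800.0, 1600.0])
gather = np.zeros((500, 3))
for j, x in enumerate(offsets):
    gather[int(np.sqrt(1.0 + (x / v) ** 2) / dt), j] = 1.0
flat = apply_nmo(gather, offsets, v, dt)
print(np.argmax(flat, axis=0) * dt)              # ~[1.0, 1.0, 1.0]: event flattened
```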

As for the migration velocity, we recognize that any reasonable migration has to help, so, again, the choice of velocity function is not very critical. It is probable that migration with an approximate function does not achieve the one-to-one correspondence between midpoint and depth point that we would wish. Still, we are better off than when we started.


Finally, we are guided by the understanding that we do not wish to spend a lot of money on this step. Ninety-six common-offset sections is a lot of migrating. So, we are content to use the cheapest migration program that does the job. Generally, this is a frequency-domain operation performed after a Fourier transformation of the data.
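By way of illustrating the economy of frequency-domain migration, here is a bare-bones constant-velocity Stolt-type sketch. This is not the Fourier dmo of Hale (1984); the interpolation and scaling are deliberately crude, and everything about it is a simplification.

```python
import numpy as np

def stolt_migrate(section, dt, dx, v):
    """Bare-bones constant-velocity Stolt (f-k) migration of a zero-offset section.

    section: (n_t, n_x) array.  Returns a migrated section of the same shape.
    No padding, no taper, linear interpolation only: a sketch, not production code.
    """
    P = np.fft.fftshift(np.fft.fft2(section))                   # (omega, kx), centred
    omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(section.shape[0], dt))
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(section.shape[1], dx))
    c = v / 2.0                                                 # exploding-reflector velocity
    Q = np.zeros_like(P)
    for j, k in enumerate(kx):
        # Each migrated frequency w_tau draws on input frequency sqrt(w_tau^2 + (c k)^2).
        w_in = np.sign(omega) * np.sqrt(omega**2 + (c * k) ** 2)
        re = np.interp(w_in, omega, P[:, j].real, left=0.0, right=0.0)
        im = np.interp(w_in, omega, P[:, j].imag, left=0.0, right=0.0)
        scale = np.divide(omega, w_in, out=np.zeros_like(omega), where=w_in != 0.0)
        Q[:, j] = scale * (re + 1j * im)                        # Stolt stretch factor
    return np.fft.ifft2(np.fft.ifftshift(Q)).real
```

A real implementation would add padding, anti-alias protection, and a better interpolator, but even this sketch shows why a Fourier-domain pass over 96 sections is affordable: each section costs little more than a pair of 2-D FFTs.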

After the migration, we re-sort the traces by common midpoint, which is now equivalent, or very nearly so, to re-sorting by common depth point. A detailed velocity analysis at this point is relatively free of the sort of ambiguity described by Figure 1 , Figure 2 and Figure 3 .

What is the effect of the complete dmo process? For one thing, the subsequent velocity analysis is noticeably improved, the dip dependence of the stacking velocities having been removed. In Figure 5 , for instance, we wonder about the strong semblance peak at 2.8 s.

Figure 5

Part (b) of Figure 5 , after dmo correction, guides us to use a lower velocity for that event, one that is more in line with the general velocity trend. In other words, the process allows us to use one velocity to stack events with conflicting dips.

Figure 6 shows us the test line from which Figure 5 is taken,


Figure 6

and Figure 7


Figure 7

and Figure 8 show the detail in the vicinity of the conflicting dips.


Figure 8


Figure 9

Figure 9 and Figure 10 are their migrated counterparts.

Figure 10

Clearly, prestack partial migration followed by a detailed velocity analysis and a good poststack migration preserves the information we need in order to make a good interpretation. And a good interpretation, after all, is the goal of all seismic processing.

REFERENCES

Al-Chalabi, M. 1994. Seismic velocity: A critique. First Break 12(12):589-596.

Al-Chalabi, M. 1974. An analysis of stacking, rms, average, and interval velocities over a horizontally layered ground. Geophysical Prospecting 22(3):458-475. (Reprinted in Byun, Bok S., ed., 1990, Velocity analysis on multichannel seismic data, Soc. Expl. Geophys., 28-45.)


Anstey, N. A. 1977. Seismic interpretation — The physical aspects. Boston: IHRDC.

Berg, L. E. 1984. Prestack partial migration. Paper presented at the 54th International SEG Meeting, Atlanta, Ga.

Berryhill, J. R. 1986. Submarine canyons: Velocity replacement by wave-equation datuming before stack. Geophysics 51(8):1572-1579.

Cordier, J. P. 1985. Velocities in reflection seismology. Amsterdam: D. Reidel.

Domenico, S. N. 1974. Effect of water saturation on seismic reflectivity of sand reservoirs encased in shale. Geophysics 39(6):759-769.

Gardner, G. H. F., L. W. Gardner, and A. R. Gregory. 1974. Formation velocity and density — The diagnostic basis for stratigraphic traps. Geophysics 39(6):770-780.

Hale, D. 1984. Dip-moveout by Fourier transform. Geophysics 49(6):741-757.

Jankowsky, W. 1970. Empirical investigation of some factors affecting elastic-wave velocities in carbonate rocks. Geophysical Prospecting 18(1):103-118.

Kalra, A. K. 1986. Velocity analysis for feathered marine data. Geophysics 51(1):190-191.

Larner, K., and T. Tjan. 1995. Simultaneous statics and velocity estimation for data from structurally complex areas. 65th Annual International SEG Meeting, Expanded Abstracts, 1401-1404.

Levin, F. K. 1971. Apparent velocity from dipping-interface reflections. Geophysics 36(3):510-516.

_________ 1983. The effects of streamer feathering on stacking. Geophysics 48(9):1165-1171.

Levin, F. K., and P. M. Shah. 1977. Pegleg multiples and dipping reflectors. Geophysics 42(5):957-981.

May, B. T., and J. D. Covey. 1981. An inverse method for computing geologic structures from seismic reflections — Zero-offset case. Geophysics 46(3):268-287.

Onstott, G. E., M. M. Backus, C. R. Wilson, and J. D. Phillips. 1984. Color display of offset-dependent reflectivity in seismic data. Paper presented at the 54th International SEG Meeting, Atlanta, Ga.

Schultz, P. S. 1984. Seismic velocity estimation. Proceedings of the IEEE 72(10):1330-1339.

Taner, M. T., and F. Koehler. 1969. Velocity spectra — Digital computer derivation and applications of velocity functions. Geophysics 34(6):859-881.

Taner, M. T., F. Koehler, and K. A. Al-Hilali. 1974. Estimation and correction of near-surface time anomalies. Geophysics 39(4):441-463.

Toksöz, M. N., C. H. Cheng, and A. Timur. 1976. Velocities of seismic waves in porous rocks. Geophysics 41(4):621-645.


Recommended Reading

Alkhalifah, T., and I. Tsvankin. 1995. Velocity analysis for transversely isotropic media. Geophysics 60(5):1550-1566.

Sayers, C. M. 1995. Anisotropic velocity analysis. Geophysical Prospecting 43(4):541-568.

Berryhill, J. R. 1986. Submarine canyons: Velocity replacement by wave-equation datuming before stack. Geophysics 51(8):1572-1579.

Bolondi, G., and F. Rocca. 1985. Normal-moveout correction, offset continuation, and prestack partial migration compared as prestack processes. In Developments in geophysical exploration methods — 6, ed. A. A. Fitch. London: Elsevier Applied Science Publishers, Ltd.

Byun, B. S., ed. 1990. Velocity analysis on multichannel seismic data. Soc. Expl. Geophys., 518 pp.

de Bazelaire, E. 1986. Normal moveout revisited — Inhomogeneous media and curved interfaces. Paper presented at the 56th International SEG Meeting, Houston.

Deregowski, S. M. 1990. Common-offset migrations and velocity analysis. First Break 8(6):224-234.

Deregowski, S. M. 1986. What is DMO? First Break 4(7):7-24.

Diet, J. P., F. Audebert, I. Huard, P. Lanfranchi, and X. Zhang. 1993. Velocity analysis with prestack time migration using the S-G method: A unified approach. 63rd Annual International SEG Meeting, Expanded Abstracts, 957-960.

Doicin, D., C. Johnson, N. Hargreaves, and C. Perkins. 1995. Machine-guided velocity interpretation. 65th Annual International SEG Meeting, Expanded Abstracts, 1413-1416.

Dunkin, J. W., and F. K. Levin. 1973. Effect of normal moveout on a seismic pulse. Geophysics 38(4):635-642.

Hale, D. 1982. Migration of nonzero offset sections. In Report No. 30, Stanford Exploration Project.

Hottman, C. E., and R. K. Johnson. 1965. Estimation of formation pressures from log-derived shale properties. J. Pet. Tech. 17:717-722.

Hubral, P., and Th. Krey. 1980. Interval velocities from seismic reflection time measurements. Tulsa: SEG.

Jenyon, M. K., and A. A. Fitch. 1985. Seismic reflection interpretation. Berlin: Geopublication Associates.

Kleyn, A. H. 1983. Seismic reflection interpretation. London: Applied Science Publishers, Ltd.

Magara, K. 1980. Comparison of porosity-depth relationships of shale and sandstone. J. Pet. Geol. 3(2):175-185.

Marsden, D. 1993. The velocity domain. The Leading Edge 12(7):747-749.


May, T. W., and S. H. Bickel. 1985. The effects of compaction on moveout. Oil and Gas Journal 83(40):144-155.

Noponen, I., and J. Keeney. 1986. Attenuation of waterborne coherent noise by application of hyperbolic velocity filtering during the tau-p transform. Geophysics 51(1):20-33.

Robinson, E. A. 1983. Seismic velocity analysis and the convolutional model. Boston: IHRDC.

Sengbush, R. L. 1983. Seismic exploration methods. Boston: IHRDC.

Shah, P. M., and F. K. Levin. 1973. Gross properties of time-distance curves. Geophysics 38(4):643-656.

Shah, P. N. 1973. Short note on ray tracing in three dimensions. Geophysics 38(3):600-604.

Sheriff, R. E., and L. P. Geldart. 1983. Exploration seismology, Volume 2: Data processing and interpretation. Cambridge University Press.

Yilmaz, O., and J. F. Claerbout. 1980. Prestack partial migration. Geophysics 45(12):1753-1779.

Yilmaz, O. 1987. Seismic data processing. Soc. Expl. Geophys.

__________ 1985. How compaction affects dipping beds. Oil and Gas Journal 83(41):142-146.

The Travel-Time Equation for a Layered Sequence

For the two-layer model of Figure 1 , the total travel time for a given raypath is


Figure 1

$$T_X = 2\left(\frac{z_1}{v_1\cos\theta_1} + \frac{z_2}{v_2\cos\theta_2}\right). \tag{A-1}$$

The offset for which this relation holds is a consequence of Fermat's principle, so that

$$X = 2\,(z_1\tan\theta_1 + z_2\tan\theta_2). \tag{A-2}$$

For the case of N parallel, horizontal layers, each of constant velocity v_k and thickness z_k, Equations A-1 and A-2 become

$$T_X = 2\sum_{k=1}^{N}\frac{z_k}{v_k\cos\theta_k}; \tag{A-3}$$

$$X = 2\sum_{k=1}^{N} z_k\tan\theta_k, \tag{A-4}$$

where θ_k is the propagation angle, measured from the vertical, through each layer.

We now make two important observations. One is the trigonometric identity sin²θ + cos²θ = 1. The other, from Snell's law, allows us to write sin θ_k in terms of sin θ_1 and the layer velocities v_1 and v_k: sin θ_k = sin θ_1 (v_k/v_1).

Therefore, both cos θ_k and tan θ_k can be written in terms of these variables. We have

$$T_X = 2\sum_{k=1}^{N}\frac{z_k}{v_k\cos\theta_k} = 2\sum_{k=1}^{N}\frac{z_k}{v_k\sqrt{1-\sin^2\theta_k}} = 2\sum_{k=1}^{N}\frac{z_k}{v_k\sqrt{1-p^2v_k^2}} \tag{A-5}$$

and

$$X = 2\sum_{k=1}^{N}\frac{z_k\sin\theta_k}{\cos\theta_k} = 2\sum_{k=1}^{N}\frac{z_k\sin\theta_1(v_k/v_1)}{\sqrt{1-\sin^2\theta_k}} = 2\sum_{k=1}^{N}\frac{z_k\,p\,v_k}{\sqrt{1-p^2v_k^2}}. \tag{A-6}$$

In the above, we have let the constant sin θ_1/v_1 equal p, which is the ray parameter.
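Equations A-5 and A-6 parameterize the travel-time curve by the ray parameter p: choosing p fixes the angle in every layer, and hence both the offset and the time. A minimal sketch that evaluates them numerically; the model values are purely illustrative.

```python
import numpy as np

def offset_and_time(p, z, v):
    """Evaluate equations A-5 and A-6 for ray parameter p (s/m).

    z, v: layer thicknesses (m) and velocities (m/s), one entry per layer.
    Returns (X, T): source-to-receiver offset and total travel time.
    """
    z, v = np.asarray(z, float), np.asarray(v, float)
    sin_th = p * v                           # Snell's law: sin(theta_k) = p v_k
    if np.any(sin_th >= 1.0):
        raise ValueError("ray is post-critical in at least one layer")
    cos_th = np.sqrt(1.0 - sin_th**2)
    T = 2.0 * np.sum(z / (v * cos_th))       # equation A-5
    X = 2.0 * np.sum(z * sin_th / cos_th)    # equation A-6
    return X, T

# Two-layer model (thicknesses and velocities invented for illustration):
z, v = [1000.0, 1500.0], [2000.0, 3000.0]
for p in (1.0e-5, 1.0e-4, 2.0e-4):           # p = sin(theta_1)/v_1
    X, T = offset_and_time(p, z, v)
    print(f"p = {p:.0e} s/m: X = {X:7.1f} m, T = {T:.4f} s")
```

Sweeping p from zero toward its critical value traces out the full (X, T_X) curve; nothing in this construction forces that curve to be a hyperbola, which is precisely why the velocity analysis discussed in the text rests on a small-spread approximation.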

Comments on the Problem of Dip


1. The derivation that follows is essentially the geometric approach that Levin eschewed in his 1971 paper. There is also a slight difference in notation from that paper.

Figure 1 shows the dipping reflector and the planes containing the dip and oblique lines.

Figure 1

The first plane is vertical, by definition; the second is not. Here the dip of the reflector is θ; the apparent dip as seen on the oblique line is δ (we wish to derive its cosine); and the surface angle between the two lines is α. (Levin's 1971 paper uses a different notation for these angles.)

At the point of intersection, a zero-offset measurement determines that the depth to the reflector is Z. (We note in passing that the structural geologist defines the apparent dip on the oblique line in a vertical plane, and the depth Z perpendicular to the surface.) Finally, the distance along the reflector from the surface to the common reflection point, in the plane of the oblique line, is m.

To get δ in terms of θ and α, we need to determine sides A and B. The triangle containing A, Z, and θ is shown in Figure 2 ; A is equal to Z/sin θ.


Figure 2

The triangle containing A, B, and α is shown in Figure 3 ; if A is Z/sin θ, then B is equal to Z/(sin θ cos α).


Figure 3

Figure 4 shows the triangle containing B, Z, and δ.


Figure 4

Because B is Z/(sin θ cos α), then sin δ = Z/(Z/(sin θ cos α)), or simply sin θ cos α. Thus,

$$\cos\delta = \sqrt{1 - \sin^2\theta\,\cos^2\alpha}.$$
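A quick numerical check of this relation (the angles are illustrative):

```python
import numpy as np

theta = np.radians(20.0)                     # true dip (illustrative)
for alpha_deg in (0.0, 30.0, 60.0, 90.0):    # line angle from the dip direction
    alpha = np.radians(alpha_deg)
    delta = np.arcsin(np.sin(theta) * np.cos(alpha))  # sin(delta) = sin(theta) cos(alpha)
    print(f"alpha = {alpha_deg:4.0f} deg: apparent dip = {np.degrees(delta):5.2f} deg")
# On the dip line (alpha = 0) the full dip appears; on the strike line
# (alpha = 90) the apparent dip vanishes.
```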

2. We see in Figure 5 that in the presence of dip, the effect of streamer feathering is to move the apex of the normal-moveout hyperbola away from the zero-offset position.

Figure 5

The computed stacking velocities, which assume an apex at zero offset, therefore do not fit the observed data.

Levin (1983) shows that the displacement of the hyperbola's apex is given by an expression in θ, α, and the feathering angle, the last being positive in the direction of dip, and negative if the dip and feathering directions are on opposite sides of the line.


Obviously, one way to correct feathered marine data prior to velocity analysis is to subtract this quantity from the nominal offset of each trace. For most purposes, the angles need be known only to within a few degrees, and an average feathering angle is sufficient for most, if not all, of the line. The computed displacement can then be subtracted (or added) in the geometry headers.

