seismic processing - velocities
Post on 18-Jul-2016
Description: How to do velocity picking in seismic processing.
The Constitution of the Rock
The first factor determining seismic velocity is the chemical constitution of the rock. It is easy to accept that a uniform rock consisting entirely of calcium carbonate would have a different velocity from that of a uniform rock consisting entirely of silicon dioxide.
In fact, however, sedimentary rocks are seldom uniform. The closest approximation might be the evaporites, particularly salt, whose velocity is usually within the fairly small range of 4200-4500 m/s (about 14,000-15,000 ft/s).
Among the rocks composed basically of grains, uniform constitution can exist only where the voids between the grains have become totally filled by a cement of the same material as the grains. When this happens, limestone rocks are observed to have velocities on the order of 6300-7000 m/s (about 21,000-23,000 ft/s), and sandstones on the order of 5500-5800 m/s (about 18,000-19,000 ft/s). Unfortunately, this unusual situation is the only one in which velocity can be used as a definite indicator of the chemical constitution of the rock.
More common is the situation where the cement (while still totally filling the pores) is of a different chemical constitution; we might have, for example, a sandstone totally cemented by calcite. Then, velocity must have some sort of average value, between that of the sand grains and that of the calcite cement. It is observed empirically that the appropriate average is the time-average velocity ( Figure 1 and Figure 2 ).
The physical meaning of this equation is easy to see for the unit cubes of rock depicted. The time for a seismic wave to pass through the block of Figure 1 is 1/V, where V is the time-average velocity. In Figure 2 , this time is seen to be the same as the time to pass first through all the grain material and then to pass through all the cement material. The concept of time-average velocity therefore implies that the propagation can be considered in the two components separately. If the pore volume filled by cement is small, the velocity is close to that of the grains; if the pore volume filled by cement is relatively large (say, 30%), the velocity deviates significantly.
Thus, velocity is often a weak indicator of the chemical constitution of the rock; a limestone with low-velocity cement may be indistinguishable from a sandstone with high-velocity cement.
Most of the sedimentary rocks with which we are concerned have three components ( Figure 1 ).
The first component is the grains, which together (and in contact) form the framework or skeleton of the rock. The second component is the cement; in general, this fills only part of the space between the grains, tending to be concentrated at and near the grain contacts. The remaining portion of the volume (the porosity) is occupied by fluid. We have particular interest in the case in which the fluid is oil or gas; in general, however, it is water.
Fortunately, the time-average equation of Figure 2 and Figure 3 can be extended to this situation, particularly for sandstones.
In an example like Figure 1 ,
where the proportion of total volume occupied by cement is small, we often ignore the difference between the cement and the grains, and use the approximation of Figure 4 .
Then the appropriate time-average equation is as given in the figure.
Because the velocity of water (and oil) is so much lower than that of the rock grains, the presence of water-filled porosity can make a large difference in the velocity. Indeed, in many situations the effect of porosity is dominant in determining velocity. Thus, a change from 0% porosity to 30% porosity in a sandstone can depress the velocity from 5700 m/s to 3100 m/s (from 18,700 ft/s to 10,000 ft/s).
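These numbers can be checked directly against the time-average equation. The sketch below is minimal; the 1500 m/s water velocity is an assumed round figure, not taken from the text.

```python
def time_average_velocity(phi, v_fluid, v_matrix):
    """Wyllie time-average: 1/V = phi/Vfluid + (1 - phi)/Vmatrix."""
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

# Solid sandstone (0% porosity) keeps the matrix velocity of 5700 m/s;
# 30% water-filled porosity (water ~1500 m/s, an assumed value) depresses it.
print(time_average_velocity(0.30, 1500.0, 5700.0))  # ~3098 m/s, i.e. about 3100 m/s
```

The result reproduces the drop from 5700 m/s to roughly 3100 m/s quoted above.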
The scale of such variations reduces still further the utility of velocity as an indicator of the chemical constitution of a rock; a high-porosity limestone can have a velocity lower than that of a sandstone, and both can be less than that of a salt.
The dependency of velocity on fluid-filled porosity is based on the fact that seismic wave propagation occurs by local compression of the rock, against the stiffness of the rock. If the rock is totally solid, this stiffness is that of the grain material, against molecular forces. But if the rock has pores, and the material in the pores is less stiff than the grains, the grains can deform sideways into the pores; the rock as a whole has lower stiffness.
The ability of the grains to deform into the pores depends not only on the stiffness of the pore fluid resisting that deformation, but also on the shape of the pores: a spherical pore deforms less readily than a long thin pore. This proves to be an important distinction; the applicability of the time-average equation is limited to pores of intergranular type (such as those of Figure 4 ).
This is probably the reason that the time-average equation is widely applicable in sandstones, where the porosity is usually of intergranular type. In oolitic limestones, the porosity is again of intergranular type, and the time-average equation is used. In many cases, however, important reservoirs in limestones are formed by chemical change or solution or both; the resulting pore shapes may mean velocities lower than those given by the time-average equation. Even more marked deviations may arise when the "pores" are actually open cracks caused by fracturing.
When brittle rocks such as limestones are folded, tiny fractures tend to form over the highs (or other sharp changes of dip). In the presence of extensional faulting, major systems of subparallel fracture planes may form. Both of these situations lead to a marked depression of velocity. Because the voids have so little mechanical strength (as opposed to intergranular pores), the depression of velocity can be greater than that indicated by the time-average equation.
In principle, therefore, velocity provides a sensitive tool for the detection of fracture porosity (although it cannot distinguish between a valuable system of large interconnected fractures and a useless system of microfractures without permeability).
There are several reasons that velocity tends to increase with depth.
1. The first effect of overburden pressure is to cause some "settling" of the grains; this involves physical movement of individual grains to achieve a more efficient packing. The number of grain contacts supporting the overburden increases, so the porosity decreases. This is an irreversible effect that does not yet involve elastic deformation of the grains, and normally occurs at shallow depths.
2. The second effect of overburden pressure is elastic deformation; just as in a seismic wave, the compression causes the grain to deform into the pores. As a layer is buried to a certain depth, the rock progressively loses porosity and increases in velocity. This effect is reversible if the layer is subsequently uplifted.
3. At great depths the pressures at the grain contacts may be sufficient to crack the grain locally, and to cause a further loss of porosity and increase of velocity. This effect is not reversible.
4. If a rock that has already been fractured is buried deeper, the fractures tend to close very quickly. The effect is a pronounced increase of velocity with depth (although never to unfractured values, because the walls of the fractures never fit together exactly when the fracture closes).

Any or all of these first four effects are encompassed within the term compaction.
5. After the initial compaction at shallow depth, the greatest loss of porosity (and hence increase of velocity) is likely to be by cementation. Initially, cementation, just at the grain contacts, may have little effect, but ultimately it may destroy the porosity altogether. The amount and type of cementation depends on the chemistry of the rock, the chemistry of the water, the flow of the water (for example, up a fracture plane) , and many other factors. In general, these do not depend directly on depth as such, but the chances of increased cementation usually increase with geologic age, which in turn usually increases with depth.
6. The timing of the cementation can be critical. A rock cemented before deep burial may lose much of its porosity at that time, but little more as it is buried.

Overpressure
As a rock is progressively buried, the compaction usually squeezes out the water from the contracting pores, and the water rises through the layers above to the surface.
Sometimes, however, some impermeable barrier above the compacting layer shuts off the path for the water to rise to the surface; the water is locked in the pores. Its stiffness then resists the deformation of the grains into the pores, thus tending to maintain the porosity of the rock as buried. Such a rock is said to be overpressured; the water is holding the pores open. The usual increase of velocity with depth can be reduced dramatically, both by the maintenance of porosity and by the reduction of cementation by circulating waters.
We have seen that, relative to a solid rock, the presence of significant water-filled porosity produces a dramatic reduction of velocity. Basically this is because the stiffness of the water is less than that of the rock grains. If the water is now replaced by gas, there is virtually no stiffness to resist the deformation of the grains into the pores, and so the velocity decreases still further.
Previously we considered a solid sandstone of velocity 5700 m/s (18,700 ft/s), and computed, using the time-average equation, that its velocity with 30% water-filled porosity would be 3100 m/s (10,000 ft/s). If the water in the pores is replaced by gas, the time-average equation is no longer valid, but the velocity is typically observed to fall to as low as 2300 m/s (7500 ft/s).
This phenomenon is little affected by the proportion of gas to water in the pores; the stiffness disappears with the first bubble of free gas in the pores, and gas saturations of a few percent have as much effect as saturations of 100% (GP506 Hydrocarbon Indicators).
A quite different consequence of having hydrocarbons (either oil or gas) in the pores is that their entrapment effectively prevents the transport of water through those pores, and so the precipitation of cement out of such water. Thus it is not unusual to find lower velocities in the hydrocarbon-saturated portion of a reservoir than in the water-saturated portion. Basically, the porosity has been maintained where the hydrocarbons are, and lost elsewhere.
We must consider what happens to the velocity when some of these happenings are reversed.
1. We have already noted that the elastic effect of deeper burial is reversible, but that settling effects and grain-contact crushing are not. In general, therefore, uniform uplift produces some reduction of velocity, but not to the value otherwise expected for that depth.
2. We have also noted that local uplift of brittle rocks is likely to depress the velocity by fracturing. Subsequent deep burial will not restore the value expected for that depth (unless the fractures become cemented).
3. The depression of velocity to be expected in fault zones can be totally restored by cementation from mineral-rich waters rising up the fault plane.
4. Sometimes a local rock (particularly a sandstone) can become totally cemented early in its history. In subsequent burial the strength of the cement protects the sand grains from settling and from crushing. Then, when the rock is at great depth, the cement may be dissolved away by circulating waters, while the overburden remains supported by the country rock. Such circumstances can lead to unexpectedly low velocities.
5. Similar effects can occur where local overpressure has protected a rock body from compaction during burial, and the overpressure has subsequently been relieved (by slow leakage, or by a fault).
6. The presence of hydrocarbons in a trap can lead to a contrast of porosity (and therefore of velocity) at the gas-water or oil-water contact. If the trap is subsequently breached, and the hydrocarbons lost, a "fossil" water contact may remain.

Ambiguities
It is unlikely that we can go directly from a velocity measurement to the rock type, because variations caused by rock condition within one type are greater than variations from type to type.
Only in the most general sense (that is, other things being equal) can we say that a carbonate has a higher velocity than a clastic. We may set minimum likely velocities for a porous carbonate reservoir (3000 m/s, or 10,000 ft/s?) and maximum values for a tight sandstone (5800 m/s, or 19,000 ft/s), but velocity alone does not allow us to say that one particular layer is a porous limestone rather than a less-porous sandstone or a shale. We can say that a thick, predominantly carbonate section usually has a higher velocity than a thick, predominantly clastic section. Further, the narrow range of velocities for salt is a definite help.
Other than that, we must combine velocity information with other information (in particular, the evidence of depositional environment obtained from seismic stratigraphy) to define rock type.
In real rocks, over what distance can the velocity change by so many percent? We base such a judgment on the factors cited above, having regard to what we see on the seismic section, or what we know from previous work. For example:
Near-shore carbonate deposition is likely to change (going seaward) to lime mud and finally to marine shale, within one layer. Such a transition is likely to involve a significant change of velocity in the layer, over a distance of several to many kilometers.
A similar transition may occur as a marginal-marine sandstone passes seaward into siltstone and marine shale. Here, however, the velocity variation is likely to be less.
A layer of constant rock type subjected to varying subsidence can be expected to increase its velocity where it is most deeply buried, with the greatest lateral change occurring over the region of greatest dip.
However, any change of dip great enough to produce fracturing (often evident as a local loss of "grain" in the section) must be expected to produce a local decrease of velocity in carbonates, but not in young or plastic shales.
Fault zones must be expected to yield locally lowered velocities. The lateral limits of such zones are sometimes evident by loss of "grain," or by a change in the amplitude of reflections close to the fault.
If lowered velocities are not observed, the answer may be that the fault has become cemented; this can be important in deciding whether a fault is likely to seal.
Abrupt lateral changes of velocity can occur in any situation where the rock type or condition changes. Examples are from flanking marine shales into a carbonate reef, or from a widespread limestone into the fluvial deposits of a channel cut into it.
Zones of good porosity (and hence locally depressed velocity) can be caused by weathering or leaching at an exposed surface now evident on the section as a local unconformity.
Sometimes good porosity is created or maintained by factors dependent on structural position at the time of deposition. The techniques of interval mapping sometimes allow us to recognize the presence and extent of such factors.

Summary
Velocity depends on rock type, rock condition, and saturant. Any dependence on geologic age comes through the rock condition (particularly porosity, cementation, fracturing, and the release of overpressure) rather than through age itself.
In evaporites the dependence on rock type is dominant. In other rocks the dependence on rock condition dominates the dependence on rock type, although there is always a tendency for carbonate velocities to be higher than clastic velocities. In all rocks the replacement of water or oil by gas reduces the velocity; the reduction is large if the porosity is large.
1. In shales, which constitute about 75% of all sedimentary rocks, there is an initial dependence of velocity on the chemical constitution of the clastic grains and on the proportion of lime that was present in the original mud. Then there is a major dependence on porosity that expresses itself as a major dependence on depth. As the material is compacted by burial, the porosity decreases rapidly from very high values at the sea floor to a few percent. The velocity therefore increases sharply in the shallow section, and progressively less sharply with depth.
It is usually said that the relation between porosity and velocity does not follow the time-average equation; one of the difficulties is in deciding on a grain velocity when most shales contain grains from a wide variety of provenances, and many also contain some lime. Another complication is that the permeability of many shales effectively falls to zero while there is still a few percent of porosity; subsequent deeper burial means that the shale becomes slightly overpressured, maintaining velocities somewhat less than would be given by the time-average equation.
Overall, however, the pattern is clear: fairly low velocities, with a pronounced dependence on depth.
2. In sandstones originating as wind-blown dunes, the grains are efficiently packed during deposition, and the primary determinant of velocity is porosity. There is less loss of porosity by compaction than in shales, and so less dependence of velocity on depth; there may be, of course, loss of porosity by cementation.
The velocity generally follows the time-average equation. Fluvial and marginal-marine sandstones, in contrast, may have irregular grain shapes, and so have less predictable behavior. Typically the velocity increases rapidly with depth until there is sufficient cementation to annul the effect of the variable grain contacts; then follows the behavior of the dune sands. Thus, shallow unconsolidated sands may have velocities lower than those indicated by the time-average equation. Overpressure can occur only if the encasing shales are overpressured.
The overall pattern: medium velocities, with a pronounced dependence on porosity.
3. In carbonates, velocity behavior is more difficult to predict. This is because of their chemical instability, which in turn affects the porosity. Thick chalk units are observed to compact like shales, and so to have a marked dependence on depth. Where there is intergranular porosity, the time-average equation applies approximately. But chemical and solution porosity have unpredictable effects.
The overall pattern: fairly high velocities, with some dependence on depth and pronounced dependence on fractures.
Borehole Velocity Measurements
Here we regard the acoustic log as giving the local or instantaneous velocity, and equate this to the speed of the wavefront as it crosses a particular point in the subsurface ( Figure 1 ). The term instantaneous is not quite precise, as the separation between source and receiver in a downhole tool is finite. Compared to the scale of variation of velocity, however, the tool may well be regarded as a point.
Because we use vertically oriented downhole tools to measure instantaneous velocity, it is proper to regard it as a vertical velocity function. Accordingly, we designate this velocity Vz, meaning that it is a function of depth.
The instantaneous velocity is determined by dividing the source-to-receiver span by the travel time. In that sense, what we are actually measuring is the velocity of that 2-ft interval, the interval velocity of that span. Similarly, we can determine the interval velocity Vi between two positions of the tool ( Figure 1 ). The interval is the difference in depth between the two positions, or Z2 - Z1. The travel time is the difference in integrated time between the two positions; we may write this as t2 - t1. Thus, we have one of the common definitions of interval velocity: Vi = (Z2 - Z1)/(t2 - t1) .
When the interval is extended to the surface (Z1 = 0), the interval velocity becomes the average velocity, which we designate Va, and which is the ratio of total depth to total time ( Figure 1 ). The formal definition of Va also emerges from a direct consideration of instantaneous velocity Vz and the depth interval dz over which it is appropriate: Va = ∫dz / ∫(dz/Vz).
Instantaneous-, interval-, and average-velocity measurements thus may be derived from the same tool, the differences among them being a matter of scale. Thus, we can say that the interval velocity is the average velocity of the interval. Instantaneous velocity is the interval velocity over a very small interval; it is also the average velocity between the two parts of the downhole tool. And average velocity is the interval velocity for an interval that begins at the surface.
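The interval and average definitions differ only in the limits over which depth and time are taken. A minimal sketch, with all depths (metres) and one-way times (seconds) purely illustrative:

```python
def interval_velocity(z1, z2, t1, t2):
    """Vi = (Z2 - Z1)/(t2 - t1): interval velocity between two tool positions."""
    return (z2 - z1) / (t2 - t1)

def average_velocity(z, t):
    """Va = Z/T: the interval velocity of an interval that begins at the surface."""
    return z / t

# Hypothetical log readings: 1000 m at 0.40 s, 1100 m at 0.44 s (one-way).
print(interval_velocity(1000.0, 1100.0, 0.40, 0.44))  # ~2500 m/s for that interval
print(average_velocity(1100.0, 0.44))                 # 2500 m/s from the surface
```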
In general, each of the above velocity functions implies a vertical path, although Figure 2 and Figure 3 show why this is not always correct.
In Figure 2 ,
the borehole itself is not vertical, and the resulting value for average velocity is derived from a measurement of distance along the borehole. In Figure 3 , the borehole is vertical, but now the layering is at an angle; interval-velocity calculations thus do not account for the correct bed thickness. In both of these cases, some correction must be made to the calculated velocities.
Seismic Travel Times
As a basis for discussing the inference of velocity distributions from surface-based measurements, let us examine the surface measurements that would result from known velocity distributions.
Figure 1 illustrates the classic single-layer velocity model.
The ground and the reflector are flat and everywhere parallel, and the bed can be characterized as having an average velocity Va, an interval velocity Vi, and an instantaneous velocity Vz that are all equal. Further, these velocities are independent of direction; the layer is homogeneous and isotropic.
Because the velocity is uniform, the thickness of the bed can be described in units of distance or of time; one implies the other. Finally, we consider only paths having a common midpoint.
Figure 1 also shows the geometry pertinent to one raypath. The total travel time TX can be specified in terms of the vertical (zero-offset) two-way time T0, the offset X between source and receiver, and the average velocity Va. The relation, given in the figure, is hyperbolic in offset and time ( Figure 2 ).
We could just as easily characterize the curve of Figure 2 in terms of the difference between TX and T0. This difference is the normal moveout Δt, which we express in Figure 2 in terms of offset, velocity, and vertical time.
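The hyperbolic relation and its moveout can be sketched numerically (all values illustrative, and chosen to match the single-layer model discussed later: a 3000 m reflector at 2500 m/s):

```python
import math

def travel_time(t0, x, va):
    """TX = (T0^2 + X^2/Va^2)^(1/2): hyperbolic in offset and time."""
    return math.sqrt(t0**2 + (x / va)**2)

def moveout(t0, x, va):
    """Normal moveout: the delay of the offset arrival relative to zero offset."""
    return travel_time(t0, x, va) - t0

# Reflector at 3000 m, velocity 2500 m/s: T0 = 2 * 3000/2500 = 2.4 s two-way.
print(travel_time(2.4, 3000.0, 2500.0))  # ~2.683 s
print(moveout(2.4, 3000.0, 2500.0))      # ~0.283 s
```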
What happens if the reflector is no longer horizontal? (In this instance, the term normal moveout no longer applies, the qualifier normal being reserved for the case of no dip. In practice, of course, most geophysicists use normal moveout to describe any travel-time differences caused by offset.)
In Figure 3 ,
the zero-offset time, range of offsets, and average velocity are the same as in Figure 1 , but now the reflector has a dip θ. The two-way, normal-incidence time T0 now does not imply bed thickness, since it no longer represents a vertical path, and the expression for travel time now becomes
TX = [T0² + (X² cos²θ)/Va²]^(1/2).
In the presence of dip, therefore, the travel-time curve is still hyperbolic ( Figure 4 ), but slightly flatter.
It is as though the average velocity were actually equal to Va/cos θ. Naturally, the moveout, which we may still call Δt, is equal to TX - T0; it is given in Figure 4 .
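The flattening effect of dip can be seen numerically; a sketch, with the dip angle and other values illustrative:

```python
import math

def travel_time_dip(t0, x, va, dip_deg):
    """TX = [T0^2 + X^2 cos^2(dip)/Va^2]^(1/2): as if the velocity were Va/cos(dip)."""
    c = math.cos(math.radians(dip_deg))
    return math.sqrt(t0**2 + (x * c / va)**2)

# Same T0, offset, and velocity as the flat case, plus 20 degrees of dip.
flat = travel_time_dip(2.4, 3000.0, 2500.0, 0.0)
dipping = travel_time_dip(2.4, 3000.0, 2500.0, 20.0)
print(flat, dipping)  # the dipping case arrives earlier: a slightly flatter hyperbola
```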
Consider the two-layer model of Figure 5 .
Each layer has a constant interval velocity; further, the velocities and layer thicknesses are such that the depth to the base of the second layer and the zero-offset reflection time from it are the same as their single-layer counterparts in Figure 1 . In other words, the average velocity measured by any straight-line path is the same as in the previous example.
The difference is that the zero-offset path is the only straight-line path. As Figure 5 shows, any offset between source and receiver results in a refracted travel path, in accordance with Fermat's least-time principle. This intuitively attractive rule simply requires that seismic energy travel along a path that takes the least time. It also results in Snell's law, which relates the propagation angle through the respective layers to the ratio of their velocities.
Figure 5 also shows the equation for the travel time TX for the path shown; there, θ1 is the angle of incidence at the layer boundary and θ2 is the angle of emergence; z1 and z2 are the respective layer thicknesses; and v1 and v2 are the respective layer velocities. Furthermore, Fermat's principle constrains θ1 and θ2 so that X = 2(z1 tan θ1 + z2 tan θ2).
Thus, we may calculate the variation of travel time with offset for this model. Figure 6 shows this relation, with the curve for the single-layer model, in dashes, for comparison.
The two-layer model produces a flatter curve from the same zero-offset time.
Figure 7 shows a three-layer model.
Again, the total depth and zero-offset time are the same, and so, therefore, is the average velocity along the vertical path. The travel-time curve is shown in Figure 8 , with the single-layer curve again for comparison. The curve, still apparently hyperbolic, is flatter still.
So we have three models, each one representing a total thickness of 3000 m and an average velocity over that distance of 2500 m/s. For the vertical path, the models are indistinguishable from surface measurements of the reflector at 3000 m. They differ in their normal-moveout patterns, however; we could use the single-layer travel-time equation to derive only the first of these patterns. (And we can use the qualifier normal because we have postulated horizontal layering.)
Obviously, this is because the basic travel-time equation is a Pythagorean relation, and as such requires a straight-line path from source to reflector, and then from reflector to receiver. Multilayer models produce segmented paths, owing to least-time considerations. But the latter two curves are so nearly hyperbolic in form that there may be single-layer models that we can use to approximate them. (This is a common approach in modeling: rather than seek an approximate solution to an exact model, we seek an approximate model for which we can find an exact solution.)
The Layered sequence
Extending the equations for time TX and offset X to the case of N parallel, horizontal layers, each of constant velocity vk and thickness zk, we have
TX = 2 Σ [(zk/vk)/√(1 − p²vk²)]. (1)
(For a derivation of this equation, see the section titled "Travel Time Equation for a Layered Sequence" under the heading "References and Additional Information").
Further, the least-time constraint on the propagation angles leads to
X = 2p Σ [(zk vk)/√(1 − p²vk²)]. (2)
In these two equations, p = sin θ1/v1, and is called the ray parameter. In the special case where all the layer velocities are equal, Equations (1) and (2) can be combined to give the familiar relation of Figure 1 .
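Equations (1) and (2) give travel time and offset parametrically in the ray parameter p; a sketch, with the layer thicknesses and velocities illustrative:

```python
import math

def layered_offset_time(p, thicknesses, velocities):
    """Evaluate Equations (1) and (2) for ray parameter p:
    TX = 2 * sum(zk/vk / sqrt(1 - p^2 vk^2));  X = 2p * sum(zk vk / sqrt(1 - p^2 vk^2))."""
    tx = 2.0 * sum(z / (v * math.sqrt(1.0 - (p * v)**2))
                   for z, v in zip(thicknesses, velocities))
    x = 2.0 * p * sum(z * v / math.sqrt(1.0 - (p * v)**2)
                      for z, v in zip(thicknesses, velocities))
    return x, tx

# Two layers: 1000 m at 2000 m/s over 2000 m at 2800 m/s (illustrative values).
# p = 0 is the vertical ray: zero offset, TX = the two-way vertical time.
print(layered_offset_time(0.0, [1000.0, 2000.0], [2000.0, 2800.0]))  # (0.0, ~2.43)
```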
Another special case occurs for vertical propagation through the top layer. Snell's law prohibits refraction in this case, making the entire path vertical. Thus, because the ray parameter p also equals zero, the offset X vanishes, and Equation (1) becomes appropriate to the determination of average velocity.
Obviously, the ray parameter is important because it defines a particular ray emerging from the source, at a particular angle. Whatever the refraction experienced by the ray, the ray parameter continues to define that ray. When the ray emerges at the surface, the ray parameter leads to a unique offset-time relation.
The basic travel-time equation is the elementary form of a general expression for TX². This expression is an infinite series in even powers of the offset X: TX² = Σ Cn X^(2n) (n = 0, 1, 2, ...).
We shall not attempt to derive the coefficients Cn here; suitable references are listed in the "Recommended Reading" section under the heading "References and Additional Information". We do observe, however, that only the first two terms of this series are significant as long as the offset X is no greater than the depth to the horizon of interest. Appropriate muting guarantees this condition.
In the general expression, the first coefficient, C0, is simply T0². The coefficient of X² turns out to be the inverse of the square of a weighted average of the layer velocities: C1 = 1/Vr². We call Vr the root-mean-square (rms) velocity, and define it thus:

Vr = [Σ (vk² tk) / Σ tk]^(1/2), (3)

where vk and tk are the interval velocity and the normal-incidence two-way travel time in the kth layer, and T0 is the zero-offset reflection time (that is, T0 = Σ tk). The general expression for TX² thus becomes
TX² = T0² + X²/Vr². (4)
In certain circumstances (suitably short offsets, approximately parallel and horizontal layering), Equation (4) can be regarded as equivalent to the equation of Figure 1 . This means that a layered sequence of vertically varying velocity can be approximated by a single layer of velocity equal to the rms velocity Vr. Within that layer, the travel path is a straight line (of constant ray parameter, just as with the layered case), and so it is root-mean-square velocity that defines normal moveout.
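Equation (3) is straightforward to evaluate. The sketch below also computes the time-weighted average velocity of the same layers, to show that Vr always slightly exceeds it when the interval velocities differ (all layer values illustrative):

```python
import math

def rms_velocity(velocities, times):
    """Vr = [sum(vk^2 tk) / sum(tk)]^(1/2), tk being the two-way interval times."""
    return math.sqrt(sum(v**2 * t for v, t in zip(velocities, times)) / sum(times))

def avg_velocity(velocities, times):
    """Time-weighted average of the interval velocities (total depth over total time)."""
    return sum(v * t for v, t in zip(velocities, times)) / sum(times)

v, t = [2000.0, 3000.0], [1.0, 1.0]
print(rms_velocity(v, t))  # ~2550 m/s
print(avg_velocity(v, t))  # 2500 m/s: Vr exceeds Va
```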
Thus, root-mean-square velocity provides a link between physical measurements of velocity and the nonphysical device of the stacking velocity.
The formal equation for normal moveout ( Figure 1 ) allows us to calculate the normal moveout based on known values of zero-offset reflection time, source-to-receiver offset, and the appropriate velocity to the reflector.
For a layered sequence, the appropriate velocity is not the average velocity, but rather the root-mean-square velocity.
Similarly, Figure 2 allows us to calculate the moveout in the presence of dip.
For a layered sequence, we may postulate a constant dip for the entire sequence, and again use the rms velocity. If this is not appropriate, the alternative is the rather complex one of ray-tracing, which is a matter for separate discussion.
For the present, let us say that we know the moveout: we measure it from the common-midpoint gathers. Then we can use it to determine the corresponding "velocity." Accordingly, we invert the equations of Figure 1 and Figure 2 to solve for that velocity:
V = X/[Δt(2T0 + Δt)]^(1/2), (5a)

V = cos θ · X/[Δt(2T0 + Δt)]^(1/2). (5b)
The velocities derived from Equations 5a and 5b may vary, depending on our choices of offset. We minimize this, of course, with our mutes, in effect "getting rid" of the higher-order terms. There is one velocity, however, that best fits the observed moveout pattern. Subtraction of the Δt values appropriate to this velocity flattens the moveout pattern on the gather, and thus yields the optimum stacked trace. In the parlance, this value of V is the stacking velocity. The entire suite of stacking velocities for a gather is the stacking function: the stacking velocity as a function of time.
(Obviously, this is a poor choice of nomenclature, since we have said that the stacking velocity is not a velocity at all. Rather, it is a variable that produces the observed travel-time pattern. The term is so ingrained within the geophysical consciousness, however, that to suggest an alternative would probably cause more confusion than it would alleviate. Therefore, the prudent geophysicist uses the term with the understanding that he or she does not really mean it. For an interesting discussion of seismic velocities and their use in the industry, we refer the reader to Al-Chalabi (1990, 1994).)
The first task of velocity analysis is an attempt to determine the proper stacking function for a given common-midpoint gather. This function permits us to flatten the observed moveout patterns, so that the arrival time of the reflection on all the traces is made to equal the zero-offset time T0. The addition of the corrected traces then yields the optimum stacked trace.
In this discussion, we designate the stacking velocity as Vs. We shall keep to this convention even in the case of dipping layers, which is contrary to the manner of some common texts. We do this because we want the term stacking velocity to mean the same thing in all cases: the variable that best corrects for the observed moveout. The variable that would correct for the moveout in the absence of dip is irrelevant.
(In many references, stacking velocity is also designated as Vnmo, for normal-moveout velocity. We may also see it designated as Vrms, for root-mean-square velocity. The usage in either case is not strictly correct; for example, normal move-out, as originally defined, requires the model of Figure 3 , not that of Figure 4 .
So the thoughtful geophysicist avoids these terms, but knows what others mean when they use them.)
As another approach, we look at Figure 5 .
Again, we take α to be the dip along the line, but this time it is the apparent dip. The true dip, the dip of the reflector, is δ, and the angle between the line and the direction of true dip is φ. If the constant velocity above the reflector is Va, then the required stacking velocity is given by Va/cos α.
This relationship is derived in the section titled "Comments on the Problem of Dip", which appears under the heading "References and Additional Information".
Intersecting lines can have (indeed, in the presence of dip, generally do have) different stacking velocities at their point of intersection, even if the average velocity to the common point of reflection is required to be the same.
Determination of Stacking Velocity
In practice, velocity analysis from reflection data does not solve Equations 5a and 5b. Rather, the process solves the moveout equation for a series of trial velocities, and then "corrects" the observed moveout for each of these velocities. The common-midpoint gathers are then stacked, and the stacks from all the velocities compared.
For the noise-free event of Figure 6 , therefore, we start with a velocity V1 and check its effectiveness in one of two ways.
We can assess the match between the proposed moveout pattern and the real moveout pattern by, for example, adding the amplitudes of the traces along the proposed curve ( Figure 7 ).
Alternatively, we can subtract the moveout appropriate to V1 from the data, and add all the amplitudes at T0 ( Figure 8 ). We then go on to the next velocity, and follow the same procedure for that and successive velocities, until we get to the last of our choices.
Somewhere in that range (unless our estimates are very far wrong) there is one velocity whose calculated moveout pattern fits the data reasonably well, and certainly better than most of the others. This is the stacking velocity appropriate to T0. The next step is to choose a new T0 and a new velocity range, and start the procedure again. In essence, this is the scheme followed by all velocity-analysis programs. Whether we assess the degree of fit visually or statistically, the final selection of stacking velocity is based on the quality of the stacked trace.
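The scheme just described can be sketched as follows. This is a minimal, noise-free illustration (function names and the synthetic gather are illustrative; real programs use windowed data and more robust fit measures than raw stack amplitude):

```python
import numpy as np

def nmo_correct(gather, offsets, dt, v):
    """Flatten hyperbolic moveout for trial velocity v.
    gather: (ntraces, nsamples); offsets in m; dt in s."""
    n_tr, n_s = gather.shape
    t0 = np.arange(n_s) * dt
    out = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        # each output sample at t0 reads the input at sqrt(t0^2 + (x/v)^2)
        src = np.round(np.sqrt(t0**2 + (x / v)**2) / dt).astype(int)
        ok = src < n_s
        out[j, ok] = gather[j, src[ok]]
    return out

def best_stacking_velocity(gather, offsets, dt, trial_vs):
    """Return the trial velocity giving the largest stacked amplitude."""
    amps = [np.abs(nmo_correct(gather, offsets, dt, v).sum(axis=0)).max()
            for v in trial_vs]
    return trial_vs[int(np.argmax(amps))]

# synthetic gather: one spike event at t0 = 0.5 s, true velocity 2000 m/s
dt, n_s = 0.004, 251
offsets = np.arange(0.0, 1200.0, 200.0)
gather = np.zeros((len(offsets), n_s))
for j, x in enumerate(offsets):
    gather[j, int(round(np.sqrt(0.5**2 + (x / 2000.0)**2) / dt))] = 1.0

v_best = best_stacking_velocity(gather, offsets, dt,
                                [1600.0, 1800.0, 2000.0, 2200.0])
```

Only the correct trial velocity aligns the spikes on all traces, so it yields the largest stacked amplitude.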
A Comparison of Velocity Functions
We know that velocities can be instantaneous, interval, average, root-mean-square, normal-moveout, or stacking. In one case, and one case only, all of the above describe the same velocity. That case is the single, homogeneous, isotropic, uniform layer between a smooth flat surface and a smooth flat reflector parallel to the surface. In other words, the basic single-layer model of Figure 1 is the only one for which we can use the unqualified term "velocity" without fear of error.
For more complicated models, however (to say nothing of the real world), the distinctions among the above modifiers are critical.
The first three types of velocity function (instantaneous, interval, and average) can be measured in the earth, using borehole techniques. When the system of layers and interval velocities is thus established, we can calculate root-mean-square velocity. To infer any of these velocities from surface-based measurements, however, we must start with one of the other types.
Let us illustrate this by considering how we might get from stacking velocity to average velocity, to a degree of accuracy suitable for many purposes. The first requirement is that the layering must be parallel (although, as we shall see, the beds may be dipping).
First, we remember that the stacking velocity Vs is, under the conditions of short offsets and approximately parallel and horizontal layering, a good approximation to the root-mean-square velocity Vr. Next, we remember Equation 3, which relates Vr and the interval velocities vk.
Thus, if Vr,m and Vr,n are the rms velocities at the top and bottom of a layer, the interval velocity Vk,n of the layer is given by the Dix equation:
Vk,n = √[ (Vr,n² Tn − Vr,m² Tm) / (Tn − Tm) ] ,  (6)
where Tm and Tn are the normal incidence two-way reflection times at the top and bottom of the layer.
If the entire sequence is dipping at an angle α, and if the seismic line is in the direction of reflector dip, then the interval velocity is found by multiplying the result of the above equation by cos α. The dip requirement ensures that the zero-offset path to the top of the layer lies in the same vertical plane as the one to the bottom of the layer.
By inspection of the Dix equation, it is clear that the obtainable accuracy of the interval velocity depends on only three factors: the accuracy of the respective zero-offset times; the accuracy of the inferred stacking velocities; and the degree to which the real earth can be approximated by a parallel-layered sequence. In the dipping counterpart, a fourth factor is the accuracy of the inferred dip.
Having thus obtained the interval velocities, we simply calculate the interval thicknesses (because we know their times) , add them, and then divide by the total one-way time to get an estimate of the average velocity.
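This chain, from rms velocities through the Dix equation to an average velocity, can be sketched as follows (a minimal sketch assuming parallel, horizontal layering and two-way zero-offset times; function names are illustrative):

```python
import numpy as np

def dix_interval_velocities(v_rms, t0):
    """Dix equation: interval velocities between successive reflectors,
    given rms velocities v_rms and two-way zero-offset times t0."""
    v_rms, t0 = np.asarray(v_rms), np.asarray(t0)
    v2t = v_rms**2 * t0
    return np.sqrt(np.diff(v2t) / np.diff(t0))

def average_velocity(v_int, dt_two_way):
    """Average velocity: total thickness divided by total one-way time."""
    one_way = np.asarray(dt_two_way) / 2.0
    return (np.asarray(v_int) * one_way).sum() / one_way.sum()

# two layers: 2000 m/s for 1.0 s (two-way), then 3000 m/s for another 1.0 s;
# the interval velocity of the first layer is simply v_rms at its base
v_rms = np.array([2000.0,
                  np.sqrt((2000.0**2 * 1.0 + 3000.0**2 * 1.0) / 2.0)])
t0 = np.array([1.0, 2.0])
v_int2 = dix_interval_velocities(v_rms, t0)[0]     # second-layer velocity
v_avg = average_velocity([2000.0, v_int2], [1.0, 1.0])
```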
In practice, even if we were to accept that the conditions were appropriate for Dix-derivation of the interval velocities (a restrictive conclusion in itself), other difficulties remain. Chief among them is the interval chosen: for the derived velocity to approximate the actual interval velocity of a layer, the interval must contain no major changes of velocity; in practice, it must also be bounded by unambiguous reflections.
Obviously, this is unrealistic for the entire record length of the typical seismogram. More often, we are content to consider only a portion of the section. A more serious problem is the near-surface, often regarded as the worst of the processing geophysicist's problems. The near-surface has a major effect on the determination of stacking velocities (and, therefore, the Dix-derived interval velocities).
Because velocity analysis offers a range of velocities at every value of zero-offset time, it offers a best fit not only to primary reflections but (at appropriate velocities) to multiples also.
Figure 1 , Figure 2 , Figure 3 , and Figure 4 illustrate the problem of multiples.
Here, the shallow reflector is a strong one, and the deep reflector relatively weak. We also remember that it is our object to suppress the multiples, in stacking, by the use of a velocity appropriate to the primaries.
The simple multiple in the shallow layer has a zero-offset time about equal to that of the primary from the deep layer. If the multiple stacks with higher amplitude, we are led (wrongly) to pick its lower velocity. The stacking function shows the appearance of a severe velocity inversion at this zero-offset time.
Fortunately, many simple multiples are fairly easy to recognize because of their low velocity. The velocity is low because the travel path lies entirely within the shallow part of the section. The same is true for multiples from mildly dipping interfaces.
Such surface multiples can become a major problem if there really is a velocity inversion at depth, so that the multiple appears to have a higher velocity than the deep primary. Then our normal tool for recognizing a surface multiple by its low velocity may be lost.
A further difficulty arises with multiples that are not simple, the so-called pegleg multiples. These may appear to have velocities close to those of primaries arriving at the same time, and so receive more enhancement in the stack than we would wish.
Where the velocity distinction between primaries and multiples is weak, we are led to check the other distinguishing characteristics of multiples: their repetition times and their apparent increase of dip. This is an example of the fact that velocity analysis involves some degree of interpretation. Statistical and visual best-fits are helpful, and we rely on them. But we must not do so blindly; there must be an understanding of velocity behavior (both real and apparent) underlying all our decisions.
The simplest way to estimate the proper stacking velocity is simply to ask the computer to nmo-correct the data at a series of trial velocities; we then make the decisions based on the appearance of the output.
In Figure 1 , a small portion of the line (usually at least 20 stacked traces) has been stacked with several different velocities, spanning the range that we expect in the area.
At each depth, we see that there is one velocity (perhaps two) from among these that results in the best stack. The water-bottom reflection, for instance, is clearest when the velocity is 1524 m/s; stacked at higher velocities, it practically disappears. The event at 2.0 s, on the other hand, is best stacked by a velocity of 1980 m/s. (We also see, on the 1707-m/s display (or panel), a beautiful example of a primary event at 1.05 s and its multiples at 2.10 s and 3.15 s.)
In this manner, we emerge with a stacking function for this portion of the line. Obviously, we do not pick time-velocity pairs for every sample on the trace; rather, we specify them on the strongest events, and have the computer make what we hope is a proper interpolation between successive pairs. That interpolation is probably linear in velocity.
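A minimal sketch of that interpolation (the time-velocity pairs below are hypothetical, chosen only to illustrate linear-in-velocity interpolation between picks):

```python
import numpy as np

# picked time-velocity pairs (hypothetical values) at the strongest events
t_picks = np.array([0.0, 1.05, 2.0, 3.0])             # zero-offset times, s
v_picks = np.array([1500.0, 1707.0, 1980.0, 2400.0])  # stacking velocities, m/s

# linear-in-velocity interpolation onto every sample of the trace
dt = 0.004
t_axis = np.arange(0.0, 3.0 + dt, dt)
v_of_t = np.interp(t_axis, t_picks, v_picks)
```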
How do we choose the velocities from this sort of display? Sometimes the choice is easy, with only one velocity panel providing a good stack. Other times, however, the correct velocity appears to lie between two panels. In that case, our mind's eye interpolates the proper velocity. In Figure 2 and Figure 3 , for example, we see the amplitude of the stack increasing and then decreasing as we go from left to right. Our eyes tell us that the event is best stacked by a velocity between the two middle panels, that is, by a velocity of about 2515 m/s. If our goal is a good stack, this visual interpolation is entirely proper.
The stacking function inferred from Figure 1 is strictly applicable only to that small portion of the line. A kilometer or two away (closer, if there is some kind of geologic complexity) we must get another set of constant-velocity stacks, and estimate a function there. Interpolation between the two functions, again, is usually linear in velocity, to achieve a more realistic degree of smoothness in the velocity variation along the line.
An alternative to the constant-velocity stack is the constant-velocity gather ( Figure 4 and Figure 5 ).
The concept is the same, but this time we examine the corrected but unstacked traces.
Because this method forgoes the signal-to-noise benefit of the stack, it is applicable only to data of very good quality. Further, it does not provide the same obviousness in the identification of multiples. It does give a clear indication of any situation in which the actual moveout is not hyperbolic. It is also useful in allowing an intelligent selection of the mute.
In either constant-velocity display (stack or gather) we must choose a range of input velocities, which must encompass all possible values. We must also choose the increment of velocity; in this we are balancing cost against the uncertainties of interpolation. A velocity range of 1500-4500 m/s (5000-15,000 ft/s) would be typical. Also typical would be the use of 20 or 25 trial velocities; however, these are not spaced uniformly in velocity, but rather in nmo. This means that the velocities "bunch up" at the low end, and spread further apart as they get higher.
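The nmo-uniform spacing can be sketched as follows (a sketch only; function and variable names are illustrative, and a real program would compute this per reference time):

```python
import numpy as np

def trial_velocities(v_min, v_max, n, t0, x_far):
    """n trial velocities spaced uniformly in far-offset nmo rather than
    uniformly in velocity, so they bunch up at the low end."""
    def nmo(v):
        return np.sqrt(t0**2 + (x_far / v)**2) - t0
    dts = np.linspace(nmo(v_min), nmo(v_max), n)  # nmo falls as velocity rises
    return x_far / np.sqrt((t0 + dts)**2 - t0**2)

vs = trial_velocities(1500.0, 4500.0, 20, 1.0, 3000.0)
```

The spacing between successive trial velocities grows toward the high end, exactly the "bunching up" described above.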
The great merit of the constant-velocity stack is that it allows us to make the critical choice of stacking velocity directly on a piece of seismic section, where the dips, the reflection interferences and the geological complications are all evident to us. This is entirely in accord with our principle of staying close to the data as we make the decisions.
Apart from considerations of cost, and for purposes of stacking alone, we would probably prefer to make our choice of stacking velocity in constant-velocity stacks of each line in its entirety. This would give us great confidence in both the lateral and vertical variations of stacking velocity, as well as in their reasonableness. Because of the cost, however, this comprehensive approach is reserved for highly detailed surveys.
Elsewhere, we reduce the cost by one or both of two methods. In the first, we use a range of fixed velocity distributions ( Figure 6 ), rather than constant velocities. This eliminates the waste of stacking shallow reflections at high velocities and deep reflections at low velocities, but reduces the obviousness of surface multiples.
The second method reduces the width of each constant-velocity display, perhaps to even less than the 20 stacked traces we considered earlier. As we decrease the width of each panel, the advantage of seeing the geology is decreased, and there comes a point where an alternative technique becomes better value for the money. This is the velocity-analysis display, or velocity spectrum.
Coherence Measures and the Velocity-Analysis Display
The judgments that we make in evaluating constant-velocity displays are essentially visual measures of the coherence of the individual traces forming the stack. The problem in relying strictly on our eyes is that small differences in the stack (which may result from significant differences in the stacking function) are hard for the eye to distinguish. This is particularly so because we cannot focus on two stacks at once; rather, we look at the two alternately, and unless the differences between the two are substantial, we have trouble discerning them.
What we need, then, is a numerical measure of the coherence of the individual traces forming the stacked trace.
Amplitude of the Stacked Trace
One measure is simply that of the maximum amplitude of the stacked trace. The principle is shown in Figure 1 , Figure 2 , Figure 3 , and Figure 4 .
Here, we have a 12-trace gather with a particular zero-offset time T0 and a hyperbolic moveout pattern ( Figure 1 , an idealized moveout pattern, and the summation after no correction; the shading represents the time window). If we add the traces with no nmo correction at all (essentially corresponding to a velocity with a far-offset nmo of zero) we get a low-amplitude stack. As a measure of the stack amplitude, we may add the samples of the stacked trace within some time window.
Next, we make a correction corresponding to a far-offset of, say, +10 ms; this hyperbola is shown in Figure 2 . Adding the traces after this correction, and then adding the samples within the time window, we get another value for the stack amplitude. Proceeding in this fashion ( Figure 3 ), we find one velocity whose moveout pattern best matches that of the data; nmo correction according to this pattern yields the highest stack amplitude. Normalizing all the stack amplitudes to the highest one, we emerge with the display of Figure 4 .
In this example, the maximum stack amplitude occurs along the hyperbola corresponding to a far-trace nmo of +30 ms. Knowing moveout, zero-offset time, and offset, therefore, we can easily solve for velocity. This velocity is the stacking velocity for that zero-offset time.
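The conversion from the best-fitting far-trace nmo back to velocity is direct, a rearrangement of the hyperbolic moveout relation (the function name is illustrative):

```python
import numpy as np

def velocity_from_far_nmo(nmo_far, t0, x_far):
    """Stacking velocity implied by a far-trace moveout of nmo_far seconds
    at zero-offset time t0 and far offset x_far."""
    return x_far / np.sqrt((t0 + nmo_far)**2 - t0**2)

# round trip: at t0 = 1.0 s and 2000 m far offset, a 2500 m/s event...
nmo = np.sqrt(1.0**2 + (2000.0 / 2500.0)**2) - 1.0
v = velocity_from_far_nmo(nmo, 1.0, 2000.0)   # ...recovers 2500 m/s
```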
In practice, this process works from one common midpoint to the next, and then takes a weighted mean of the amplitudes for several consecutive midpoints. In Figure 5 (A normalized display of stack amplitude as a function of time and velocity), for example, 24-fold data were corrected and summed over four consecutive midpoints; this analysis was therefore computed over 96 traces.
The bar graph to the right of the figure shows the maximum amplitude for each part of the trace (before it was normalized).
A more common coherence measure is concerned not with amplitude but with power. If we consider a sample at time i on trace j of a common-midpoint gather, then we may designate its amplitude as aij; its power is then aij². We may then define the semblance coefficient St as the ratio of the power of the stacked trace to the sum of the powers of the individual traces within a time window:

St = Σi (Σj aij)² / (N Σi Σj aij²) ,  (7)

where N is the number of traces in the gather and the summation over i spans the time window. Semblance is essentially a development of the normalized cross-correlation coefficient. It measures the common signal content over the traces of a gather according to a specified lag pattern. That lag pattern is, of course, the hyperbolic pattern corresponding to a trial normal-moveout velocity. The general procedure is as follows:
1. First, we choose a reference time T0.
2. We then choose a value for stacking velocity Vs, which is constrained to lie within limits appropriate to the time T0 and the study area. The variables T0 and Vs define a unique hyperbolic pattern, which in turn defines the lag pattern for the subsequent calculation of semblance.
3. Each trace is displaced by an amount that corresponds to its offset.
4. We then compute the semblance of the gather, and display it as a function of time and velocity.
5. Steps 2-4 are then repeated for the expected range of velocities.

Figure 6 shows semblance as a function of stacking velocity for a specific value of zero-offset time.
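The steps above can be sketched as follows. This is a minimal illustration of the Neidell-Taner form of semblance (the flattening-then-windowing formulation and all names are illustrative; production code vectorizes this and interpolates rather than rounding to the nearest sample):

```python
import numpy as np

def semblance(gather, offsets, dt, t0, v, half_window=5):
    """Semblance of a CMP gather for the hyperbola defined by (t0, v):
    flatten along the hyperbola, then sum over a short time window."""
    n_tr, n_s = gather.shape
    base = [int(round(np.sqrt(t0**2 + (x / v)**2) / dt)) for x in offsets]
    num = den = 0.0
    for w in range(-half_window, half_window + 1):
        s = p = 0.0
        for j in range(n_tr):
            i = base[j] + w
            if 0 <= i < n_s:
                a = gather[j, i]
                s += a
                p += a * a
        num += s * s      # power of the stacked trace
        den += p          # sum of the powers of the individual traces
    return num / (n_tr * den) if den > 0.0 else 0.0

# synthetic gather: unit spikes along a 2000 m/s hyperbola at t0 = 0.5 s
dt, n_s = 0.004, 251
offsets = np.arange(0.0, 1200.0, 200.0)
gather = np.zeros((len(offsets), n_s))
for j, x in enumerate(offsets):
    gather[j, int(round(np.sqrt(0.5**2 + (x / 2000.0)**2) / dt))] = 1.0

s_right = semblance(gather, offsets, dt, 0.5, 2000.0)   # coherent: near 1
s_wrong = semblance(gather, offsets, dt, 0.5, 1600.0)   # incoherent: much less
```

The 1/N normalization makes the maximum possible semblance equal to 1, reached only when the traces are identical along the trial hyperbola.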
This calculation is repeated at small and regular intervals of T0. The final output ( Figure 7 , Figure 8 , and Figure 9 ) is sometimes called the velocity spectrum; in this course we shall call it a velocity-analysis display.
Figure 7 shows the velocity-analysis display in its original form; Figure 8 shows the same display in another common form, in which semblance is contoured; to the right of this display is a curve of power as a function of time. Figure 9 shows yet another display, in which the contours are replaced by numbers that correspond to the semblance values (note that in modern seismic work, this numerical display would typically be shown in color).
The advantage of the latter two displays is twofold. First, they are cleaner, as there is no tangling of the semblance curves. Second (at least for Figure 8 ), the meaning of the analysis is immediately apparent in that a large semblance value is clearly marked by the steep contours. The abscissa and ordinate values at the center of the contour pattern are the proper stacking velocity and zero-offset time values for that pick. On the display of Figure 7 , on the other hand, we can infer the proper time and velocity only after drawing a baseline through the semblance curve's zero value, and projecting the curve's maximum onto that baseline.
We see in Figure 8 two distinctive semblance "signatures." Above about 2.5 s, for example, the choice of the stacking velocity seems to be unambiguous. The contours are generally steep, and semblance is low at the velocities off the main trend. This tells us that the signal-to-noise ratio in the shallow section is fairly high.
Below 2.5 s, we see the stretched contours that are generally characteristic of deeper data. We know that, as reflection time increases, a given change in stacking velocity produces a smaller change in nmo. The result is that a certain degree of improvement in the stack can be effected by a wider choice of velocities. The contour stretch, therefore, represents a deterioration, with time, of the certainty of the velocity measurement.
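The point can be checked with a little arithmetic (the values below are purely illustrative):

```python
import numpy as np

def far_nmo(v, t0, x):
    """Normal moveout at offset x for stacking velocity v and time t0."""
    return np.sqrt(t0**2 + (x / v)**2) - t0

# the same 100 m/s change in velocity moves the far-offset nmo far less
# at depth than it does in the shallow section
x = 3000.0
shallow_change = far_nmo(2000.0, 1.0, x) - far_nmo(2100.0, 1.0, x)
deep_change = far_nmo(3500.0, 3.0, x) - far_nmo(3600.0, 3.0, x)
```

Here the shallow nmo change is roughly ten times the deep one, which is why the semblance contours stretch along the velocity axis at depth.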
In our interpretation of velocity-analysis displays, we often find it helpful to have a brute stack or near-trace section at hand. Figure 10 is a section across a growth fault.
Trace A is fairly representative of the geology and velocity distribution away from the fault; we see its clear velocity trend in Figure 11 .
Trace B is a different matter; that there is a fault is obvious, but the velocity behavior below the fault ( Figure 12 ) is not.
Further, there appears to be a spurious, high-velocity "event" at 1.30 s. The section enables us to recognize that the event is probably a fault-plane reflection. Picking these high velocities, we emerge with Figure 13 ; the fault plane is clearer, and the extent of the fault interpretable.
The choice between constant-velocity stacks and velocity-analysis displays depends on the geological problem, the concern with cost, and the preference of the processor. The constant-velocity stack, as we remarked earlier, gives a great deal of confidence in the stacked section, and keeps us close to the data. It loses its advantage, however, if only a few traces are used (which is the sole way of reducing its cost to that of a velocity analysis). The velocity analysis avoids the need to interpolate between panels; it gives a single velocity, at each zero-offset time, and thus may be used directly in the calculation of interval velocities.
Selection of Variables
Generally, the data that go into a velocity analysis are prepared according to the flow of Figure 1 .
The initial processes substantially consist of signal conditioning, data reduction, and edit-and-gather.
Conventional velocity analysis requires that all wide-angle arrivals be deleted ( Figure 1 ).
The reason, of course, is that the stack incorporates only those arrivals that contain within them subsurface information; these are primarily narrow-angle arrivals. A moveout analysis designed for stack optimization therefore has no need of wide-angle arrivals.
(Refractions and supercritical reflections, two types of wide-angle arrival, are useful in certain cases. The measurement of the linear moveout of refractions can yield a velocity estimate for one layer. And the slant stack, in which time-offset data are mapped in a time-ray parameter domain, is useful for fairly accurate velocity analysis where we have offsets greater than reflector depths).
For marine work, the muting must be such that it excludes the sea-floor refraction for those offsets equal to or greater than the water depth. For land work, the mutes should exclude refractions and, insofar as possible, the direct waves. As always, a transitional mute is desirable, so that spurious frequencies are not introduced to the data.
Many times, direct arrivals (which are very wide-angle arrivals) occur not in the vicinity of the first breaks but within the main body of the record. Surface waves, water waves, ice breaks, and air waves all travel at velocities much lower than those of the reflections. Where these are pervasive (which is to say, where field arrays have been ineffective), we may choose to anticipate the beneficial effects of the stack-array. The hope is that the direct arrivals do not hamper the velocity analysis in the meantime. Alternatively, we may prefer to apply a surgical mute to the worst of the noise, or a frequency-wavenumber (f-k) filter to all of it.
Another reason for a good mute is the problem of offsets. We know that in the presence of dip and lateral velocity variations, the inclusion of the farther offsets may harm both the velocity analysis and the subsequent stack. On the other hand, we need the offset range to resolve the interfering residual normal moveouts, which in turn allows us to separate primary from multiple. Using mutes, we can eliminate the far offsets for the shallow data, where the problems are most likely to occur, and restore the far offsets for the deep data, where their benefits are most immediately obtainable.
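An offset-dependent mute with a transitional (tapered) edge can be sketched as follows (function names are illustrative; t_mute is a user-supplied function of offset, e.g. a line that excludes the sea-floor refraction):

```python
import numpy as np

def apply_mute(gather, offsets, dt, t_mute, taper_s=0.04):
    """Zero each trace above an offset-dependent mute time t_mute(x),
    with a linear taper to avoid introducing spurious frequencies."""
    out = gather.copy()
    n_taper = int(round(taper_s / dt))
    for j, x in enumerate(offsets):
        i0 = max(int(round(t_mute(x) / dt)), 0)
        out[j, :i0] = 0.0
        for k in range(n_taper):               # transitional (ramped) zone
            if i0 + k < out.shape[1]:
                out[j, i0 + k] *= k / n_taper
    return out

# e.g. mute everything shallower than a line sloping at 2000 m/s in offset
gather = np.ones((2, 300))
muted = apply_mute(gather, [0.0, 1000.0], 0.004, lambda x: x / 2000.0)
```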
We are also concerned with offsets that may not be what we think they are; there is a dual hazard in marine data. First, marine streamers are equipped with stretch sections, and if the ship maintains a constant ground speed while sailing into a current, the stretch may amount to 50 m or more. Fortunately, we can use the waterbreak times to determine the actual in-line distance from the source to the near hydrophone.
The larger problem is that of streamer feathering. In the presence of dip (especially cross-dip in the direction opposite that of the feathering), feathering has the effect of moving the apex of the normal-moveout hyperbola away from the zero-offset position ( Figure 1 , travel time as a function of offset, for a strike line and various feathering angles). The computed stacking velocities ( Figure 2 , the derived stacking velocities as a function of the feathering angle, for three line directions; in these figures, the feathering angle is positive in the direction of dip, and is not measured with respect to the ship's heading) may degrade the stack, and are certainly inappropriate for other purposes.
The problem of offset is easily solved, largely because the waterbreak analysis is so straightforward that there is really no reason not to perform it, whatever the magnitude of the errors.
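The arithmetic of that waterbreak check is simple (a sketch; the nominal water velocity of 1500 m/s is an assumption, since the actual value varies with temperature and salinity):

```python
def offset_from_waterbreak(t_waterbreak, v_water=1500.0):
    """In-line source-to-near-hydrophone distance from the waterbreak
    (direct water-wave) arrival time."""
    return v_water * t_waterbreak

# a waterbreak at 0.20 s implies a near offset of about 300 m,
# regardless of the nominal (unstretched) streamer geometry
near_offset = offset_from_waterbreak(0.20)
```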
The problem of the feathering requires a little more thought. We can minimize the errors during the shooting by steering the ship to keep the near-group midpoint (if not the average midpoint) on the line. This requires deft steering on the part of the captain. Alternatively, we may ask the crew to attach a location sensor to the streamer at the nominal position of the near-group midpoint.
What we do in the processing depends, as usual, on the requirements of the exploration problem. For normal-moveout correction only, it is not likely that we shall need any offset corrections. For more detailed work, we are prepared to restrict the velocity analysis to a portion of the line and for a portion of the time.
Our first step may be to modify the geometry in the trace headers so that "zero offset" is at the average midpoint of each gather. It may be possible to do this from a visual inspection of the gathers. Alternatively, if we are familiar enough with the geology of the prospect, we may already know (to a reasonable degree of accuracy) the reflector dip, the dip along the line, and the average velocity to the reflector; we can estimate the average midpoint from these factors and the feathering.
The necessary correction is simply a matter of subtracting the displacement of the hyperbola apex from the source-to-receiver offsets (refer to the section titled "Comments on the Problem of Dip", found under the heading "References and Additional Information"). When the required information is not at hand, we have two other options. One is to modify the velocity-analysis programs to seek the apex of the travel-time curve, and then to find the best-fit hyperbola appropriate to it.
An easier approach since it bypasses the programmers is simply to process the line as though it were part of a 3-D data set. First, the traces are arranged into bins, much like the stripes we use in processing crooked-line data. Then, the traces are scanned for a determination of both the azimuth and the magnitude of the dip. This information is used to minimize the scatter of the midpoints within each bin, after which the velocity analysis proceeds as usual.
The 3-D processing method is equally suitable to land data shot with irregular geometry, although the displacement correction described above is generally not.
For land data, the problem of statics is a tricky one. Certainly, the time delays caused by topographic variations and low-velocity zones should be removed prior to the velocity analysis. We remember, however, that moveout is a function of the full travel path of the seismic energy, which begins at the source and ends at the receiver.
In some cases, therefore, correction to datum removes too much of the travel path; this distorts the true meaning of the observed travel-time patterns. (In all cases, removal of any field statics represents some degree of compromise.)
Topographic variations are the simplest of the field statics to remove. Where bedrock comes to the surface or is covered by a negligible layer of soil or fill, removal of 40-50 m of elevation (which represents about 30-40 ms of two-way travel time) has no profound effect on the velocity analysis. Where a low-velocity layer exists, we are more restricted, since 40 ms (two-way) in a layer with a velocity of 1000 m/s represents only 20 m.
More commonly, the combination of topography and low-velocity material requires a total datum correction of much more than 40 ms. Then, we make the correction in two steps. Before velocity analysis, we remove surface irregularities by correcting to a floating datum. This is a smoothly varying, or even flat, preliminary datum on which we hang the velocity analysis (as well as subsequent processing) . After moveout correction and stack, we correct from the floating datum to the final flat datum.
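The two-step correction can be sketched as follows (a sketch under the assumption of a single correction velocity for the removed near-surface material; function names are illustrative):

```python
import numpy as np

def datum_statics(elev_src, elev_rcv, datum, v_correction):
    """Two-way static shift (s) that moves source and receiver down to the
    datum, assuming one correction velocity for the removed material."""
    return ((elev_src - datum) + (elev_rcv - datum)) / v_correction

def floating_datum(elevations, n_smooth=11):
    """A smoothly varying preliminary datum: a running mean of elevation."""
    kernel = np.ones(n_smooth) / n_smooth
    return np.convolve(elevations, kernel, mode="same")

# 45 m of elevation at both source and receiver, with a 3000 m/s
# correction velocity, is about 30 ms of two-way time
shift = datum_statics(45.0, 45.0, 0.0, 3000.0)
```

The same `datum_statics` call, applied first from the surface to the floating datum and later from the floating datum to the final flat datum, implements the two steps described above.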
The more serious problem is that of velocity anomalies in the near-surface, the consequences of which are clear from Figure 1 , Figure 2 , Figure 3 , and Figure 4 .
In Figure 1 we postulate a local thickening of the low-velocity near-surface layer; the near-offset travel paths go through the anomaly, but the far-offset paths do not. The travel-time curve for this model is not a simple hyperbola ( Figure 2 ), and the best-fit curve corresponds to a much higher velocity than is appropriate.
For a midpoint away from the anomaly ( Figure 3 ), the near-offset paths do not go through the anomaly, but the far-offset paths do. In this case, the inferred velocity is much lower than it should be ( Figure 4 ).
The lateral velocity variation arising from this sort of anomaly is shown in Figure 5 .
The effect extends a distance equal to the far offset plus the width of the anomaly; it also becomes more pronounced at depth. This makes sense, since a given time shift has more impact on a measurement of small moveout than on one of large moveout. Finally, we see in Figure 6 (in which the depth of the reflector is 4500 m, and the average velocity below the anomaly is 3150 m/s) that the velocity error is related to the ratio between the width of the anomaly and the spread length; it is greatest when the ratio is about 0.6.
First appearances to the contrary, we cannot solve these problems with a simple correction to a floating datum. In Figure 7 we see the travel-time curves and best-fit hyperbolas after three choices of datum.
Curve A represents a datum 150 m below the surface, and is a simple removal of the low-velocity layer, but not the anomaly. The best-fit hyperbola is the same hyperbola as the one in Figure 2 ; at a smaller zero-offset time, it corresponds to an even higher velocity.
Curve B represents a removal of 300 m of the near-surface. This includes all of the anomaly, but it also includes some of the higher-velocity underlying material. The travel-time curve is now hyperbolic, and the velocity is much closer to the actual velocity. Indeed, with an error of only 7%, this velocity may be an acceptable stacking velocity in some cases. But this is the best we can do with simple static shifts. A deeper datum (Curve C) increases the velocity error.
Naturally, near-surface velocity anomalies also occur at sea. Marine lines often exhibit erosional channels; depending on the nature of the fill, the underlying reflections may be pulled up ( Figure 8 ) or pulled down ( Figure 9 ) by the anomaly.
The proper correction for situations of this sort is necessarily dynamic, and requires a suitable model of the near-surface. Such a model may be inferred from an analysis of several offset panels, the theory being that the panels differ in the presence of anomalies.
One of the more important dynamic correction methods is that of Wave Equation Datuming (Berryhill, 1986). This technique is described under the heading "Layer Replacement Techniques", which is part of the Static Corrections topic within the Seismic Processing Series.
Some additional thoughts on layer replacement come from Berryhill (1986) in his discussion of wave-equation datuming techniques; although his is not a ray-tracing method, these considerations still apply.
1. It is not always enough simply to digitize the sea-floor reflection. Sometimes, that reflection comes from low-velocity sediments deposited by recent slumping; a canyon bottom that is not V-shaped may indicate such a reflection. In that case, we digitize not the sea floor but the buried velocity-contrast surface.
2. The choice of a replacement velocity is a matter of interpretation, the judgment being that some particular reflector should appear unperturbed as it passes under the canyon. Naturally, this means that replacement velocity can be so determined only for strike or near-strike lines.
3. Velocities determined from refraction analysis are generally useful only as lower limits of replacement velocity.
4. The reflections that benefit most in the velocity analysis are those from deeper horizons.
5. For marine data, we have to remember to account for the changed value of water velocity.

Sometimes the purpose of a velocity analysis is the calculation of residual normal moveout (rnmo) on a section to which nmo corrections have already been applied. We do this if we feel that the previous corrections are in error. In such a case, we must know what datum was used for that previous velocity analysis. It is that datum, after all, which defines the initial zero-offset time, which in turn defines the initial moveout corrections applied. The rnmo is then added to (or subtracted from) the initial nmo to define the final nmo corrections. The zero-offset time then helps to define the stacking function.
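The combination of residual and initial moveout can be sketched as follows. The zero-offset time, offset, initial velocity, and residual are hypothetical numbers chosen only for illustration.

```python
import math

def nmo(t0, x, v):
    """Normal moveout at offset x for zero-offset time t0 and velocity v."""
    return math.sqrt(t0 ** 2 + (x / v) ** 2) - t0

def revised_velocity(t0, x, v_initial, rnmo):
    """Add the residual moveout picked at offset x to the initial nmo
    (rnmo > 0 means the event was undercorrected), and return the
    velocity implied by the total moveout."""
    total = nmo(t0, x, v_initial) + rnmo
    return x / math.sqrt((t0 + total) ** 2 - t0 ** 2)

# Hypothetical case: the previous analysis used 2000 m/s, and a residual
# of +12 ms remains at 1500 m offset for a 2.0 s event.
v_final = revised_velocity(2.0, 1500.0, 2000.0, 0.012)
print(v_final)   # lower than 2000 m/s, since extra moveout remains
```

Note that the calculation only makes sense if the residual is referred to the same datum (and hence the same zero-offset time) as the original analysis, which is the point made above.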
The most important of the pre-velocity-analysis steps has the effect of a combined static-dynamic correction. Only by establishing the position of zero time can we expect to have confidence in the timing of our velocity picks. In general, the zero-time correction stands as good practice anyway; in particular, we would not want to infer an interval velocity without it.
From the point of view of the instrumentation, the time origin of the seismic record is either the command-to-shoot or the start of the multiplexing. If we were to record the actual source pulse, filtered the same as the reflection pulse, it would appear as at the top of Figure 1 .
Instrumental zero time, therefore, is at the onset of the pulse; the correct zero time, however, consistent with our picking scheme, is at the maximum of the envelope.
All effects considered, the time from the command-to-shoot to the maximum of the envelope of the source pulse is 20-60 ms. This is enough to cause substantial errors in the determination of shallow velocities. The method of zero-time correction proceeds as follows.
Obviously, signature deconvolution is required. Where we have recorded the source signature, this is easy. Where we have not, we may have to use a statistical pulse-compression technique, estimating the pulse shape from, for instance, the water-bottom reflection.
If the source pulse has not been recorded, and the reflection pulses are inscrutable, we have to choose other means. At sea, our correction may be that which brings the interval velocity of the water layer to a value that is both reasonable (bearing in mind the temperature and the salinity) and independent of the water depth. Alternatively, we may estimate the correction as that time which forces the simple water-bottom multiple (at zero offset) to occur at exactly twice the time of the primary.
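The multiple-based estimate can be sketched in a few lines; the picked times below are hypothetical. If all recorded times are late by an error e, the water-bottom primary appears at tp + e and its simple multiple at 2tp + e, so forcing the corrected multiple to fall at exactly twice the corrected primary gives e = 2Tp - Tm.

```python
def zero_time_correction(t_primary, t_multiple):
    """Zero-time error e inferred from the recorded water-bottom primary
    (Tp = tp + e) and its simple multiple (Tm = 2*tp + e) at zero offset.
    Requiring (Tm - e) == 2*(Tp - e) gives e = 2*Tp - Tm."""
    return 2.0 * t_primary - t_multiple

# Hypothetical picks: primary at 1.520 s, simple multiple at 3.000 s.
e = zero_time_correction(1.520, 3.000)
print(e)   # 0.040 s: subtract 40 ms from all recorded times
```

The 40 ms result falls within the 20-60 ms range quoted above for the delay between the command-to-shoot and the envelope maximum.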
For marine data plagued with multiples, a good dereverberation is required.
On land, the Vibroseis sweep and the air-gun pops are our source signatures. Vibroseis correlation has the added benefit of dephasing the instruments automatically, provided that the sweep used in correlation has passed through the same instrumental filters as the data.
It follows that the minimum zero-time correction for land data is effected by instrument dephasing.

Consistent with the above, we must remember that the first goal of pre-velocity-analysis processing is a good signal-to-noise ratio, even if we achieve it at the expense of good resolution shallow. Velocity analysis is not a mainstream operation, in that it does not alter the data. It is like a filter test, or a decon test, the results of which are brought to bear on the data. Therefore, we should give the velocity analysis every advantage we can, but we should not spend too much time deconvolving the data. The pulse-shaping steps above should be sufficient.
If we are confident that we know where zero time is before we go into the velocity analysis, we shall emerge with both a good stack and a good chance of deriving accurate interval velocities.
The problem of where to locate a velocity analysis gives us our first chance to decide the appropriate level of detail. If our concern is to provide very fine detail over part of a line, or an entire short line, for the ultimate purpose of investigating a particular feature, then a constant-velocity display is desirable.
We are likely to give more thought to analysis locations on a reconnaissance or semidetail line, since all considerations require a few well-placed analyses. Unless the geology (as seen on a brute stack or near-trace section) is uniform, we generally cannot consider a uniform spacing between analysis locations. The following decisions are based on the model of Figure 1 .
Our first thought must be the signal-to-noise ratio. We avoid zones of poor signal-to-noise ratio (unless that is all we have), because whatever is responsible for it is likely to harm the velocity analysis as well.
With that said, the first rule in making a processing test is to make it in a place that is representative of the exploration problem. In Figure 1 , therefore, the first locations that come to mind are the crests and troughs of the folded area at the west end of the line. Here, the layering is at least locally horizontal and uniform, so the stacking velocities more closely approximate root-mean-square velocities. We bear in mind, however, that whereas both layers are horizontal at location 1, this is not the case at locations 4 and 5. The analysis is likely to yield spurious velocities for the first layer at location 4, and certainly for the second layer at location 5.
Similarly, locations 6 and 7 are appropriate for the first and second layers, respectively.
Because of the steep dip on the flank of the fold, we suspect that the proper stacking velocity is higher at locations 2 and 3 than a strict interpolation between locations 1 and 4 (or between 1 and 5) would suggest. For this reason, these are also appropriate sites for a velocity analysis as long as we do not try to attach any geologic significance to the derived velocities.
Where there are long, discontinuous reflectors at depth, as at locations 3 and 9, we take advantage of the local improvement in the signal-to-noise ratio and position velocity analyses there. Even if the velocities are spurious at the upper levels, these locations are useful for the deep data, and we are grateful for them.
We do not position a velocity analysis over faults or obvious near-surface anomalies (except when we are actually trying to stack a fault-plane reflection). We get as close as the raypaths allow, however, bearing in mind our choice of mutes. Therefore, before using location 6 for a velocity determination of the second layer, we check the position of the fault zone relative to the spread and the mutes. The same considerations apply at locations 7 and 8; we are especially careful not to include raypaths recorded at geophone groups above the anomaly.
We avoid making velocity judgments at levels of obvious interference. Therefore, locations 8 and 10 are useful in clarifying the nature of the unconformity at the east end of the line; location 9, however, is included solely for the deep data.
Our final consideration regarding the model of Figure 1 may be the most critical, and that is an understanding of the exploration problem. It is true that velocity analysis is a processing task, and all processors must learn how to make the above judgments. But the final arbiter is the interpreter, and he has every right to question velocity-analysis locations he believes to be poor choices. After all, the processor has little more than a brute stack on which to base his decisions. The interpreter, on the other hand, may have a firmer grasp on the regional and local geology, and it is his responsibility, not his option, to bring this information to bear on the problem. The processor, in turn, must accept these judgments unless he has sound physical reasons to believe they are in error.

Consistent with all of the above, we must recognize that the question of where to put a velocity analysis is actually a matter of structural sampling. We understand this in the context of the fold at the left of Figure 1 ; a straight interpolation between the crest and the trough would be quite wrong. Furthermore, we recognize that an interpolation from the inflection point to the crest or the trough would also introduce some errors.
The matter of structural sampling reminds us that the most severe velocity variations arise from anomalies in the near surface. Further, we note that the common-midpoint triangle turns any abrupt velocity variation into a smooth, serpentine one. In effect, it acts as a high-cut filter on the horizontal velocity changes in the earth. Therefore, variations such as that of Figure 2 impose on us a maximum velocity-sampling distance of half the spread length. Where structural and stratigraphic features are small, we may require a velocity analysis every quarter-spread length.
The analysis window, or time gate, is the time range over which semblance is computed. As is the case with many processing variables, the length of the window depends greatly on the frequencies present in the signal.
We find that a sensible window length is equal to 1.5 times the period of the average reflection pulse. More than this, and we run the risk of a multiple falling within the same time window as a primary reflection. Less, and the signal-to-noise ratio may be degraded. Where the interval velocity of a thin bed is a goal, the window should be narrow enough to include either the top or the bottom of the layer, but not both.
Consistent with the above points, we must recognize that in an area of excellent signal-to-noise ratios and an important thin layer, the analysis window may well be less than 20 ms. In view of the objective, this is acceptable. We may minimize the time and cost of such detailed work by running it over a short length of line or a limited time range.
In general, therefore, the analysis window should be 20-80 ms, with 48 ms being a good choice for a general-purpose velocity analysis. The window increment, which governs the amount of overlap between successive windows, is optimally half the length of the window itself.
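These rules of thumb can be collected into a small helper. The 31.25 Hz pulse frequency in the example is an assumed value, chosen because it reproduces the 48 ms general-purpose window quoted above.

```python
def analysis_window(dominant_freq_hz, floor_ms=20.0, ceiling_ms=80.0):
    """Window length = 1.5 times the period of the average reflection
    pulse, clamped to the general 20-80 ms range; the window increment
    is half the window length."""
    window = 1.5 * 1000.0 / dominant_freq_hz       # 1.5 periods, in ms
    window = max(floor_ms, min(ceiling_ms, window))
    return window, window / 2.0

# An average reflection pulse of ~31 Hz gives the 48 ms window quoted
# in the text, with a 24 ms increment.
w, inc = analysis_window(31.25)
print(w, inc)   # 48.0 24.0
```

For the thin-bed case described above, the floor would be relaxed below 20 ms, at the cost of signal-to-noise ratio in the semblance.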
One way to improve the signal-to-noise ratio of an analysis gather is to add traces. It is common practice, therefore, to add several gathers (all the trace 1s, all the trace 2s, all the trace 3s, etc.) representing adjacent midpoints, to form one analysis gather.
If the geology is smooth and regular and there is no dip, we may try for the best possible improvement in signal-to-noise and add up to 11 gathers. If we wish to add more, we must balance the incremental gains in the signal-to-noise ratio against the probability of lateral variations in velocity (which variations would be averaged by the summing).
Where there is dip, we add fewer gathers (perhaps seven), but now we apply an f-k filter to the ensemble of trace 1s (and then to the ensemble of trace 2s, then trace 3s, etc.) before adding the gathers. We set the f-k filter to pass a range of dips, perhaps 2 ms/midpoint. Figure 1 and Figure 2 show the improvements in the velocity analysis obtainable with this sort of dip scan.
The choice of input velocities is naturally the most important variable in a velocity analysis. We should bring to bear on this choice all that we know about the prospect area, whether from nearby wells or previous lines. All our early efforts come to no good if we do not anticipate the effects of a high-speed layer of chalk, and thus neglect to tell the program to scan for higher stacking velocities. Such an omission is especially noticeable shallow, where an anomalous layer has greater effect. Where there is still some doubt, we set the velocity range wide for at least the first few and representative lines of the prospect, and thenceforth let experience guide us.
Depending on the velocity-analysis program, there are several ways to supply the trial velocities. They all depend, however, on a first guess of the appropriate stacking velocities; this guess constitutes the central stacking function. From there, we have a choice. The other input functions may differ from the central function in incremental values of velocity or of normal moveout (as specified at some offset).
As we might expect, the increments are important. In Figure 1 we see the moveout pattern corresponding to a reflector 1800 m deep underlying a layer with a velocity of 1800 m/s. The zero-offset time is 2.0 s, and the travel time at the far offset of 1500 m is 2.167 s.
We consider a reflection frequency component of 50 Hz, having a period of 20 ms.
Now we consider nmo corrections appropriate to velocity choices of 1745 m/s ( Figure 2 ) and 1856 m/s ( Figure 2 ).
These velocity choices cause the far-offset traces, after correction, to be one-half period out of phase relative to the correct choice of velocity. Summing all the traces along the zero-offset time, we find that, compared to the case of perfect correction, the amplitude of the stacked trace is reduced by about 11 dB in both cases.
(In this example we have chosen to disregard the effects of normal-moveout stretch. At this velocity and depth, the effect is less than an 8% reduction in frequency, and its contribution to the decrease in stack amplitude is minor.)
Obviously, this is more than adequate. Therefore, our criterion is as follows: If V0 is the velocity according to the central function, then the next higher velocity must produce a relative undercorrection equal to a half-period at three-quarters of the far offset. The next lower velocity must produce a relative overcorrection of a half-period at that offset. In our example, if V0 is 1800 m/s, the adjacent velocities are 1714 m/s and 1909 m/s.
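The criterion can be verified numerically. The sketch below reproduces the adjacent velocities for the example; it returns values within a few m/s of the quoted 1714 and 1909 m/s, the small differences coming from rounding in the text.

```python
import math

def moveout(t0, x, v):
    """Normal moveout at offset x for zero-offset time t0 and velocity v."""
    return math.sqrt(t0 ** 2 + (x / v) ** 2) - t0

def adjacent_velocities(v0, t0, far_offset, period):
    """Next-higher and next-lower trial velocities: each must shift the
    event by half a period, relative to v0, at 3/4 of the far offset.
    A smaller applied moveout (undercorrection) means a higher velocity."""
    x = 0.75 * far_offset
    dt0 = moveout(t0, x, v0)

    def v_for(dt):
        return x / math.sqrt((t0 + dt) ** 2 - t0 ** 2)

    return v_for(dt0 - 0.5 * period), v_for(dt0 + 0.5 * period)

# The example from the text: V0 = 1800 m/s, t0 = 2.0 s, far offset
# 1500 m, 50 Hz component (20 ms period).
v_hi, v_lo = adjacent_velocities(1800.0, 2.0, 1500.0, 0.020)
print(v_hi, v_lo)   # roughly 1905 and 1710 m/s
```

The same function, called with the muted offset range, gives the larger shallow increments discussed next.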
For deeper data, the offsets are generally greater and the periods usually longer. Thus, we can use larger velocity increments. For shallower data, on the other hand, we must anticipate the nmo stretch; any calculation of this sort should therefore account for the expected mute. If the mute reduces the range of offsets by half, to 750 m in our example, then the half-period criterion applies at about 560 m. (As we see in Figure 3 , however, a severe mute and unchanging frequency requirements can mean that the velocity increments get larger shallow.)
The velocity range must encompass all the velocities we expect in the data area, including the near-surface. At sea, of course, the water velocity is the low end of the range.
To determine the high end of the velocity range, we remember that the proper place to pick a semblance contour is at its peak. To locate that peak unambiguously requires closed contours ( Figure 4 , What is the pick at 2.7s?), so the velocity range must ensure that the contours do close.
This cannot be predicted in advance; rather, it takes some experience, and at least one previous velocity analysis.
Continuous Velocity Analysis
We have said that a detail survey generally requires a velocity analysis at every midpoint; this is known as a continuous velocity analysis. Naturally, such an effort is bound to be long and expensive, particularly if the feature we wish to delineate is large or has many lines across it. We can mitigate this by making preliminary picks ourselves, thereby narrowing the computer's choice of options. One sensible step is to ask the interpreter to point out the important horizons, and to limit the velocity analysis to that time range.
By the time we require this much detail, we also have an idea of the regional dip and the curvature of the reflection. We are therefore justified, economically and in principle, in applying an f-k filter (on a record basis) to reject interfering noise trains of spurious dip.
Picking and Verifying the Velocities
To begin, we may set forth some general thoughts about picking velocities:
The correct place to pick the semblance contour is at its peak, provided we have made the proper static and zero-time corrections.
Stacking velocities in zones of steep dip (on the flanks of a fold, for example) have little geologic significance.
If the inferred stacking velocity decreases, then the Dix-derived interval velocities may not be realistic.
Low stacking velocities deep in the section are probably appropriate to multiples.
High stacking velocities shallow in the section, if they are not due to dip, may correspond to reflections from a fault plane.

This all seems tidy enough. Why do we not just write these judgments into the velocity-analysis programs, and let the computer do all the picking? Fortunately (at least for those of us who enjoy exercises in interpretation), there is more to velocity picking than these mechanical aspects. Above all, the picking of velocity-analysis displays is as much a matter of geologic interpretation as it is of geophysical interpretation.
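The caution about decreasing stacking velocities follows from Dix's equation, which can be sketched as follows; the picked velocities and times are hypothetical.

```python
import math

def dix_interval_velocity(v1, t1, v2, t2):
    """Dix's equation: the interval velocity between two reflectors,
    from their rms (stacking) velocities and zero-offset times."""
    num = v2 ** 2 * t2 - v1 ** 2 * t1
    if num <= 0:
        # A sufficiently large decrease in stacking velocity gives no
        # real interval velocity -- a sign the picks may be spurious.
        raise ValueError("no real interval velocity; suspect the picks")
    return math.sqrt(num / (t2 - t1))

# Hypothetical picks: 2000 m/s at 1.0 s and 2200 m/s at 1.5 s.
v_int = dix_interval_velocity(2000.0, 1.0, 2200.0, 1.5)
print(v_int)   # ~2553 m/s
```

Because the deeper stacking velocity enters squared and weighted by time, a small error in either pick can produce a wildly unrealistic interval velocity, which is why the picks must be tested against the geology before the Dix values are believed.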
With that in mind, we need to develop a picking scheme, one that allows us to develop in our mind's eye a picture of the geology as we proceed.
1. Refer to the seismic section. Velocity-analysis displays must never be picked in isolation from the section, even if only a near-trace section.
As we proceed, we recognize very quickly that the times we are accustomed to picking on the section may not coincide with the times of the semblance peaks. Figure 1 shows why this is so.
It reminds us that the semblance peak corresponds to the maximum of the pulse envelope. In picking a modern section, however, the modern interpreter picks the peak or trough that is closest to the envelope maximum generally over the prospect, and then stays with this particular peak or trough. Local changes of layer thickness, and the resulting changes in interference, inevitably cause the followed peak or trough to deviate locally from the envelope maximum, and so from the semblance peak.
2. Make the easy picks first. Interpreters follow this rule, and it is equally relevant to velocity analysis, both across the line and over the prospect. Figure 2 (Three seismic lines superposed on a block diagram of the subsurface geology. We expect the fewest velocity complications on the strike line) represents the regional geology of a prospect; the lines over it are either strike lines, dip lines, or oblique lines. In turn, the strike lines are either over the crest of the structure (A in Figure 3 ), off its flanks (B), or off the structure itself (C). By now we know that the fewest velocity complications are likely to be on lines A and C. Given a choice, we pick the velocity analyses on these lines first.
As we pick the analyses and refer to the sections, certain signatures start to become familiar ( Figure 4 ).
First, of course, is the strong bull's-eye associated with a good strong pick. Less common are the stretched contours of Figure 4 (part b); characteristic of interference, this is not a good place to pick. Sometimes, we also see the pattern of Figure 4 (part c). The strong event, recognizable by its low velocity, is a multiple; the higher-velocity "event" is an alias of the multiple. It arises from the situation of Figure 5 (The gather has been corrected according to some primary velocity. The multiple at time Tm is undercorrected, but adding the traces along Tm yields a local semblance maximum) and is therefore not a legitimate pick.
3. Pick in groups of three. It is easier to interpret a velocity analysis in the context of the two on either side of it (provided, of course, that they are fairly close together and that there is no intervening structural change, such as a fault or a salt dome). In Figure 6 , Figure 7 , and Figure 8 , for example, the left and right panels imply the same stacking function. The middle panel, however, is markedly different, the second pick implying an anomalously high interval velocity.
Our first course is to check the near-trace or brute-stack section. We need a picture of the geologic continuity across these three analysis locations.
So we ask, Does this velocity increase make geologic sense? Does the character or the amplitude of the horizon change at this location? If so, then the pick may be correct. Does the shallow section show evidence of a near-surface anomaly? If so, we may need to do another statics pass, this time incorporating a dynamic model for the near-surface. Alternatively, we may find that this analysis location is itself inappropriate.
Once we are satisfied that the pick is plausible and not a processing artifact, we check the next layer. How does the interval velocity between the second and third picks change from left to right? If it does not, then it is likely that the high-velocity pick is both real and meaningful.
This leads to another important message: all picks must be made with due regard for the underlying layers, because every layer affects the stacking velocity of the reflectors beneath it. If we can find a good reason for doing so (and parts (b) and (c) in Figure 4 are two such reasons), then we may (and sometimes we must) disregard the pick. But we never move a pick, and we never pick open contours.
Working in threes is es