
Upload: ariyanto-wibowo

Post on 19-Jan-2016


Basic Seismic Processing

Initial Corrections

Initial Processing Procedures 1

Spherical Spreading 4

Datum Corrections 8

Amplitude Adjustments 12

Trace Balancing 14

Display 18

Scale 20

Trace Display Modes 23

Simple Processing Flow 25

Signal-to-Noise Ratio Improvement 26

Array Simulation 27

Frequency Filtering 29

Dereverberation 35

Wavelet Processing 37

The Processing Flow 39

CMP Data Applications

The Utility of CMP Data 40

NMO Determination 43

NMO Correction and Stacking 47

The Processing Flow 50

Residual Statics, Migration & Inversion

Residual Static Corrections 51

Migration 54

Inversion 58

Initial Corrections

=== Initial Processing Procedures ===


The seismic reflection method employs the principle of common-mid-point (CMP) recording, with some 120 to 1024 geophone groups recording each shot. Forty-eight groups is a small survey today; more typical are 120, 240, or even 1024 or more. To get the earliest look at the data, however, let us consider only the near-offset traces, those representing the travel paths of most nearly normal incidence. Culling out the near traces is easily effected in the processing center: it is simply a matter of selecting from each shot record ( Figure 1 ) the channel with the smallest shot-to-group offset.

Figure 1

Using a database, it is straightforward to extract these traces directly from the data volume (which is most likely on disk). The resulting section ( Figure 2 )


Figure 2

simulates the simple geometry of one-shot-one-group ( Figure   3 )

Figure 3

Having created our near-trace section, we may choose to eliminate, or mute, the first breaks. These are the earliest arrivals at the geophone group, the seismic energy having followed some near-surface path. As such, they contain no reflected energy. (They do, however, contain information about the near-surface layers; in the case of refraction statics, the first arrivals are the data.) To prevent the unnatural appearance caused by an instantaneous mute, we


prefer to ramp, or taper, the mute over about 100 ms (the first-break mute and its taper: Figure 4 , The first break; Figure 5 , Application of the mute and taper; and Figure 6 , The muted and tapered trace).

Figure 4

Figure 5


Figure 6
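The mute-and-taper operation just described is easy to sketch. The following is a minimal illustration; the pick time, taper length, and sample interval are hypothetical, since in practice the mute times come from the first-break picks themselves:

```python
# First-break mute with a linear taper (hypothetical pick time and
# sample interval; real mute times come from the first breaks).

def mute_with_taper(trace, dt, mute_time, taper_len=0.100):
    """Zero the trace before mute_time, then ramp the weight linearly
    from 0 to 1 over taper_len seconds so the mute is not instantaneous."""
    out = []
    for i, amp in enumerate(trace):
        t = i * dt
        if t < mute_time:
            w = 0.0                              # fully muted
        elif t < mute_time + taper_len:
            w = (t - mute_time) / taper_len      # linear ramp
        else:
            w = 1.0                              # untouched
        out.append(amp * w)
    return out

# Usage: 4 ms sampling, mute everything before 0.2 s, 100 ms taper.
muted = mute_with_taper([1.0] * 100, dt=0.004, mute_time=0.2)
```

The 100 ms taper is the value suggested in the text; any smooth ramp (cosine, for example) would serve the same purpose.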

We are ready now to make the preliminary corrections that allow us to evaluate the data.

=== Spherical Spreading ===

Seismic amplitude decays with time ( Figure 1 ).

Figure 1

The most readily determined cause of this decay is the phenomenon of geometrical spreading, whereby the propagation of energy occurs with an ever-expanding, curved wavefront. In the simplest case, that of a constant velocity medium, the wave-front is a sphere. The seismic energy per unit surface area of this sphere is inversely proportional to the square of the distance from the shot. Energy is also proportional to the square of


amplitude. It follows that seismic amplitude is inversely proportional to distance and, in a constant velocity medium, to time. Most of the decay due to spherical spreading occurs early; at later times, the slope of the decay becomes successively smaller. Finally, the amplitude of the reflection drops below the level of the ambient noise. We wish to compensate for the known effect of spherical spreading; we want to compress the amplitudes of the earlier arrivals and amplify the later ones. To do this, we first establish an appropriate reference amplitude A0 and determine its time t0. Because of the inverse relationship between amplitude and time, the product A0t0 equals the product Antn for all times. So, bringing an amplitude An up to the reference amplitude A0 is simply a matter of multiplying it by the factor tn/t0. For example, if t0 were to equal one second, the spreading correction would be effected by multiplying each amplitude by its time. We must be careful to stop our compensation at the time when reflection amplitude drops below ambient noise amplitude, lest we increase the noise. If this time is tN, we simply leave the multiplication factor at tN/t0 for all times greater than tN. Corrected shot records ( Figure 2 )

Figure 2

and a corrected section ( Figure 3 )


Figure 3

demonstrate much more evenness than before ( Figure 4 , Two raw shot records, with the near traces arrowed, and Figure 5 , The near-trace section).

Figure 4

A certain amount of amplitude decay remains ( Figure 6 ),


Figure 6

but it is less pronounced.

Figure 5

In addition to correcting for spherical spreading, we may also apply an exponential gain to compensate for absorption losses.
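As a check on the arithmetic above, here is a minimal sketch of the t/t0 gain with the multiplier held constant beyond the noise time tN; the sample interval and times are hypothetical:

```python
def spreading_gain(trace, dt, t0=1.0, t_noise=4.0):
    """Spherical-spreading compensation: multiply each sample by t/t0,
    holding the multiplier at t_noise/t0 beyond t_noise so that
    ambient noise is not amplified."""
    return [amp * (min(i * dt, t_noise) / t0) for i, amp in enumerate(trace)]
```

With t0 = 1 s, the multiplier at 2 s is 2.0; from tN = 4 s onward it stays at 4.0, exactly as the text prescribes.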

=== Datum Corrections ===

Ideally, our near-trace section should represent accurately the configuration of the subsurface. Due to topographic and near-surface irregularities, this is not immediately the case. A line shot across a valley ( Figure 1 )


Figure 1

can make a flat reflector appear as an anticline ( Figure 2 , The effect of elevation only).

Figure 2

Matters are further confused when the propagation path goes through the low-velocity weathered layer. The thickness and velocity of this layer can change from shot-point to shot-point (and, in the rainy season, from day to day). The resulting section demonstrates a lack of event continuity as well as false structure ( Figure 3 , The combined effect of elevation and weathering).


Figure 3

The key to resolving this problem is to select an arbitrary (but reasoned) datum plane, such as sea level, and subtract that part of the travel time due to propagation above it. In effect, this amounts to a "removal" of all material above the datum, and simulates the case of shots and receivers on the plane. The time shifts that effect this removal are called datum corrections, because they set zero time at the datum plane. Alternatively, they are sometimes called field static corrections (field, because they are calculated directly from field data — elevations, shot depths, weathering depths, etc. — and static, because they are applied over the entire length of the trace). And sometimes they are simply called field statics. The simplest of the datum corrections is the elevation correction ( Figure 4 ).

Figure 4

This correction is appropriate when the bedrock outcrops at the surface, or is covered by a negligible layer of soil or fill. We divide the surface elevation (above datum) by the bedrock velocity for each shot-point (the source static) and its corresponding geophone group (the receiver static). The sum of these quantities is the total static, and is subtracted from the total travel time to yield the travel time below the datum. The situation is slightly different with a buried source ( Figure 5 ).


Figure 5

The receiver static is the same, of course, but a source static calculated as above removes too much time; we have to put some of it back. The amount we restore to the travel time is the source depth (below the surface) divided by the bedrock velocity. We are now in a position to make the proper corrections despite variations in the elevations, shot depths, or both. Let us now introduce real-world complications ( Figure 6 ).

Figure 6

In addition to changing elevations and shot depths, we now have a near-surface layer of unconsolidated sediment above the bedrock. This material, sometimes called the low-velocity layer, sometimes the weathered layer, and sometimes just the weathering, is characterized by variability in thickness and velocity. Further complications may be introduced by the presence of a water table, which is itself subject to variations in depth. Whatever the case, the effect of this low-velocity layer is to slow down the seismic wave, so that a simple elevation correction is inadequate. In effect, we have to correct the correction. This compensation is called the weathering correction. To determine time corrections (which is what statics are), we need both layer thickness and velocity. The thickness of the low-velocity layer sometimes becomes apparent as each shot-hole is drilled. The weathering velocity, however, does not, unless we conduct


some kind of velocity survey. The variability of the material may require that such a survey be done at each shot and receiver location, a procedure that is seldom economically viable. Fortunately, we can get a direct reading of travel time in the near-surface by using an uphole geophone, placed a few meters from the shot-point, which records the arrival of a direct wave from the shot to the surface. Figure 7 illustrates a common situation: a shot, buried some 10 m below the weathering and repeated at every receiver location.

Figure 7

In this example, the wave generated by a shot at ground position 1 bounces off a deep horizontal reflector beneath ground position 2, and is recorded by a receiver at ground position 3. Subsequent shots go through similar travel paths. The total datum correction for the first trace, then, consists of the source static at ground position 1 and the receiver static at ground position 3 ( Figure 8 ).


Figure 8

Calculation of the source static follows the example of Figure 5 . The receiver static, however, can be broken down into two parts. The first part is clearly the same as the source static for the shot at ground position 3. The second part is the travel time recorded by the uphole geophone; this is the uphole time for the shot at ground position 3. Some comments are now in order. First, the method of Figure 8 applies only to subsurface sources. For surface sources such as Vibroseis, the best we can do is an elevation correction plus whatever weathering information we have. Quite often, a large Vibroseis survey will have strategically located weathering surveys conducted over it. Or, if it is a mature prospect, or ties with dynamite lines, the velocity information of other vintages can be brought to bear. In some areas, the weathering corrections can be derived from first breaks across the spread; this approach is detailed in GP404 Static Corrections. Whatever the case, the datum corrections will probably need refining in a later step. We also find many lines where shots are drilled at alternate group locations, or even every third. What we do here is simply interpolate uphole times when we do not have a shot at a receiver location. It may not be correct, but it is easy, and in most cases will not be far wrong.
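The static calculations above reduce to a few lines. The sketch below assumes a buried source, with distances in meters and times in seconds; the function names and the example numbers are ours, not the text's:

```python
def source_static(elevation, shot_depth, datum, bedrock_velocity):
    """Time spent above datum by energy from a buried source: the source
    sits shot_depth below the surface, so only the column from the shot
    down to the datum is removed."""
    return (elevation - shot_depth - datum) / bedrock_velocity

def receiver_static(elevation, shot_depth, uphole_time, datum, bedrock_velocity):
    """Receiver static at a ground position where a shot was also drilled:
    the source static there, plus the uphole time."""
    return source_static(elevation, shot_depth, datum, bedrock_velocity) + uphole_time

# Hypothetical example: surface at 150 m, shot 30 m deep, datum at 100 m,
# bedrock at 2000 m/s. Source static = (150 - 30 - 100) / 2000 = 0.010 s;
# with a 25 ms uphole time, the receiver static there is 0.035 s.
```

The total static subtracted from a trace's travel time is then the source static at the shot position plus the receiver static at the group position, as in Figure 8.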

=== Amplitude Adjustments ===

Besides spherical spreading, there are other reasons for the observable decay in seismic amplitudes. One cause is the fact that velocity is not constant, but ordinarily increases with depth. Because of Snell's law, this increase means that the growth of the expanding wavefront is also not constant, but accelerating. For this and other reasons the observed decay of reflection amplitude normally exceeds that imposed strictly by spherical spreading. Amplitude can also vary from trace to trace. These inconsistencies arise not only from genuine lateral inhomogeneities, but also from conditions in the field. Charge size and depth can vary along a line (and not always because we want them to; cultural impediments often impose restrictions on charge size in particular). When low-power surface sources such as Vibroseis are used, a unit may fail, reducing the size of the source array. In marine work, air guns also occasionally fail, reducing the source volume somewhat. The result is a nonuniform suite of bangs. And on the receiving end, surface conditions can affect geophone plants.


The combined effect is a section where the traces are uneven, across and down. Deep, weak reflections may be hard to see. A known reflector may appear to come and go across the section. The solution involves normalizing the traces and then balancing them.

Trace Equalization

Trace equalization is an amplitude adjustment applied to the entire trace. It is directly applicable to the case of a weak shot or a poor geophone plant. We start with two traces that have been corrected only for spherical spreading ( Figure 1 ). Clearly, one trace has higher amplitudes than the other, so our task is to bring them both to the same level.

Figure 1

First, we specify a time window for each trace. Here, in the context of a near-trace section, the windows are likely to be the same, say, 0.0 to 4.0 seconds. Then, we add the (absolute) amplitude values of all the samples in the window for each trace. Division by the number of samples within the window yields the mean amplitude of the trace. (As we apply this process for all the traces of the section, we note the variability of the mean amplitudes.) The next step is to determine a scaler, or multiplier, which brings the mean amplitudes up or down to a predetermined value. If, for instance, this desired value is 1000, and the calculated mean amplitudes are 1700 and 500, our scalers are 0.6 and 2.0. Each scaler is applied to the whole trace for which it is calculated ( Figure 1 ). Equalization enhances the appearance of continuity, and provides partial compensation for the quirks in the field work that might otherwise degrade data quality.
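Trace equalization as just described (mean absolute amplitude over a window, one scaler per trace) might be sketched as follows; the target value is the user's choice:

```python
def equalize(traces, target=1000.0):
    """Scale each whole trace so that its mean absolute amplitude
    equals the target value."""
    out = []
    for tr in traces:
        mean = sum(abs(a) for a in tr) / len(tr)   # mean absolute amplitude
        out.append([a * (target / mean) for a in tr])
    return out

# The text's example: mean amplitudes of 1700 and 500 against a desired
# value of 1000 give scalers of about 0.6 and exactly 2.0.
```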

=== Trace Balancing ===

Trace balancing is the adjustment of amplitudes within a trace, as opposed to among traces. Its effect is, again, the suppression of stronger arrivals, coupled with the enhancement of weaker ones, and its goal is the improvement of event continuity and visual standout. Two trace balancing processes are automatic gain control (agc) and time-variant scaling. As with trace equalization, trace balancing requires the calculation of the mean amplitude in a given time window. In this step, however, there are numerous successive windows within each trace ( Figure 2 ), and the scalers apply only within those windows.


Figure 2

So, if our first calculated mean amplitude of Figure 2 is 5000, and the last is 500, and if we want them both scaled to 1000, our initial approach might be to multiply the amplitudes in the first window by 0.2, and those in the last by 2.0. This process, however, would introduce discontinuous steps of amplitude at the junction of two windows. Two solutions to this are in common use. One solution, known as time-variant scaling ( Figure 3 ), applies the computed scaler at the center of each window, and interpolates between these scalers at the intermediate points.

Figure 3
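Time-variant scaling under these definitions (fixed windows, one scaler per window applied at its center, linear interpolation between centers) can be sketched as below; the window length and target value are hypothetical choices:

```python
def time_variant_scaling(trace, window_len, target=1000.0):
    """One scaler per fixed window, applied at the window center,
    with linear interpolation between adjacent centers."""
    centers, scalers = [], []
    for start in range(0, len(trace), window_len):
        win = trace[start:start + window_len]
        mean = sum(abs(a) for a in win) / len(win)
        centers.append(start + len(win) // 2)
        scalers.append(target / mean if mean > 0 else 1.0)
    out = []
    for i, amp in enumerate(trace):
        if i <= centers[0]:
            s = scalers[0]           # hold the first scaler before the first center
        elif i >= centers[-1]:
            s = scalers[-1]          # hold the last scaler after the last center
        else:
            k = max(j for j in range(len(centers) - 1) if centers[j] <= i)
            frac = (i - centers[k]) / (centers[k + 1] - centers[k])
            s = scalers[k] + frac * (scalers[k + 1] - scalers[k])
        out.append(amp * s)
    return out
```

With the text's numbers (means of 5000 and 500, target 1000), the window centers receive scalers of 0.2 and 2.0, and samples between them are scaled by intermediate values.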


Another approach, automatic gain control (agc), uses a sliding time window ( Figure 4 ), such that each window begins and ends one sample later than the one before.

Figure 4

Again, the scaling is applied to the amplitude of the sample at the center of the window. In this manner, we effect a smooth scaling that reacts to major amplitude variations while maintaining sensitivity to local fluctuations. We also ensure that a peculiarly large amplitude does not have undue influence throughout the entire trace.

Some Further Considerations

We now have to think about some problems we may encounter. The problem of ambient noise first arose in our discussion of the spherical spreading correction. We encounter it again when we have to determine the length of our trace normalization window. Particularly for weaker shots, ambient noise can dominate signal at later reflection times. In settling on a normalization window, then, we may choose to use the weakest shot in our data set. We need also to determine a reasonable window length to be used in trace balancing. Ambient noise is not relevant here; rather, the prime determinant is the falsification of relative reflection strengths. Consider the trace of Figure 5 .


Figure 5

It has a high amplitude reflection at 1.8 s, but is otherwise reasonably well-behaved. The center of the first time window we consider is at 1.6 s, at which time there is a reflection whose amplitude is, say, 625. Because the anomalous reflection is included in this window, the mean amplitude of the window is higher than it would otherwise be, perhaps 714. If the desired average is 1000, the required scaler for the window is 1.4, and the reflection at 1.6 s is brought up to 875. Further down the trace, the sample at the center of a later window has an amplitude of 400. Within this window, there are no abnormal reflections, and the mean amplitude is also 400. So, the scaler of 2.5 does indeed bring the reflection up to an amplitude of 1000. We see the problem: the large amplitude at 1.8 s causes a falsification of relative reflection strengths by suppressing those amplitudes within a half window length of it. We reduce this effect in several ways, acting separately or in concert. First, we may weight the samples within each window ( Figure 6 ).


Figure 6

This reduces the contribution to the mean absolute amplitude of every sample not at the center of the window. Alternatively, we may reduce the window length ( Figure 7 ), thereby reducing the number of samples affected by the anomalous amplitude.

Figure 7

It varies with the data area, but a sensible trace balancing window (provided a spherical spreading correction is applied first) is from 500 to 1000 ms. Our guide must be the data: if the amplitudes are fairly uniform, there is less need to balance, and we can get away with using longer windows. A third method is to make the scaler some nonlinear function of the mean absolute amplitude. We might, for instance, scale to an amplitude of 500 if the mean absolute amplitude in a window is below 500; to 650 for mean amplitudes between 500 and 800;


to 1000 for mean amplitudes between 800 and 1200; to 1350 for mean amplitudes between 1200 and 1500; and to 1500 for mean amplitudes above 1500. A fourth method is to ignore, in the calculation of mean amplitude, any values which exceed the previous mean by some arbitrary ratio (perhaps 3:1). Thus, the scalers are derived from what we might call the "background" level of reflection activity; the background levels are balanced, but individual strong reflections maintain their relative strength.
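The fourth method (balance the background level while letting strong reflections keep their relative strength) can be sketched with a sliding agc window and an outlier-excluded mean. The rejection ratio and target below are the arbitrary choices the text mentions:

```python
def agc(trace, half_window, target=1000.0, reject_ratio=3.0):
    """Sliding-window agc in which samples exceeding reject_ratio times
    the running background mean are excluded from the mean, so isolated
    strong reflections do not suppress their neighbors."""
    out = []
    background = None
    for i in range(len(trace)):
        lo = max(0, i - half_window)
        hi = min(len(trace), i + half_window + 1)
        win = [abs(a) for a in trace[lo:hi]]
        if background is not None:
            # ignore anomalously large values in the mean calculation
            kept = [a for a in win if a <= reject_ratio * background] or win
        else:
            kept = win
        mean = sum(kept) / len(kept)
        background = mean
        out.append(trace[i] * (target / mean if mean > 0 else 1.0))
    return out
```

Because the spike is excluded from the background mean, a strong reflection is scaled by the same factor as its surroundings, and its relative standout survives the balancing.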

=== Display ===

Once we have made adjustments to the data amplitude (allowing us to see the data throughout the record length) and have compensated for the near-surface (eliminating false structure and improving reflection continuity), we can view the section. This brings us to the matter of display, an aspect of processing that is sometimes (and unfortunately) given inadequate thought. From the point of view of the processor, this is because display variables are given by the interpreter. The interpreter, likewise, is forced to conform to a large suite of existing data and other company standards. We will not try, therefore, to outline one "correct" type of display. There are, however, some aspects of the display which all companies and contractors treat in a uniform manner. We examine these first, and then discuss matters which are truly variable.

Standard Plot Considerations

Numbering the shot-points of a seismic section is often regarded as a strictly mechanical operation; after all, the shots themselves have already been numbered in the field such that there is some uniformity over the prospect area. In transferring these numbers to the section, however, the processor should remember that there is usually an offset between the shot and the near group. The field setup of Figure 1 (The weathering correction; the field method), illustrated here by Figure 2 , is a case in point.

Figure 1

Certainly, Trace 1 comes from Shot 1, so the immediate temptation is to label it 1 on the section.


Figure 2

By the same token, however, Trace 1 comes also from Receiver 3, so our previous logic makes labeling it 3 on the section just as valid. But the mid-point for the first shot (and the reflection point in the simple case of horizontal reflectors) falls directly beneath ground position 2. We see, then, that Trace 1 is correctly numbered 2 ( Figure 3 ).

Figure 3

The direction of plotting conforms to the way maps are drawn: east is to the right. The lone exception is a line that trends due north, in which case north is to the right. Any plot which is given to an interpreter — even a preliminary print — should have a side label containing the field variables and the processing history. Inclusion of the latter


becomes more important as we increase our processing sophistication. In the steps we have taken so far, we also need to include the processing variables we have used: the datum and the elevation velocity; the cutoff time of the spreading correction; the trace normalization window; and the length of the trace balancing window. Providing the interpreter with all these details allows him to evaluate the data in light of how the line was shot and processed, and also to compare the section against older vintages. An interpreter can also use this information to decide what he does not want to do again!

=== Scales ===

When we choose plotting scales for a seismic section, we need first to decide the purpose of the section. Is this a line from which we want to extract fine stratigraphic detail? Or shall we be content to map regional trends and large structures? If we plan to use the section for detailed work, we find that a vertical (time) scale of ten centimeters to one second of (two-way) reflection time is usually adequate. This approximates to four inches per second, and is usually enough to accommodate the frequencies normally associated with this level of detail. When the effort is more on a regional level, that is, for reconnaissance lines, we may choose to halve the time scale, so that now we have five centimeters (about two inches) per second. (When high frequencies, more than 50 or 60 Hz, and the subtlest traps are the objectives, some interpreters use a time scale of 20 cm/s.) The choice of horizontal scale depends on the vertical scale. For a "full-scale section," one with a time scale of 10 cm/s, we find it convenient to make the horizontal scale equal to the scale of the map on which the interpreter is working. For typical prospects, this is usually 1:25,000, which means that four centimeters on the section (or map) represents one kilometer on the ground. In the U.S., the comparable scale is 1:24,000; one inch on the map represents 2000 ft on the ground. This relationship is useful should the interpreter wish to construct a fence diagram ( Figure 4 ), a network of sections, aligned as on a map, to illustrate variations in three dimensions.

Figure 4

For the "half-scale section," where the time scale is 5 cm/s, we keep proportions the same as on the full-scale section by using a horizontal scale of 1:50,000 (two centimeters equals one kilometer). In both cases, the effect, for typical velocities, is a vertical exaggeration (or horizontal compression) of about 2:1 (more commonly nowadays, the 3-D volume would be viewed and interpreted at a workstation).
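The 2:1 figure quoted above is easy to verify. The sketch below assumes a representative average velocity of 2500 m/s (an assumption on our part; the text says only "typical velocities"):

```python
# Vertical exaggeration of a "full-scale" section: 10 cm of paper per
# second of two-way time, horizontal scale 1:25,000. Assume an average
# velocity of 2500 m/s (hypothetical "typical" value).

v = 2500.0                  # m/s
depth_per_second = v / 2    # 1 s of two-way time corresponds to 1250 m of depth
vertical_scale = (10.0 / 100.0) / depth_per_second   # 0.10 m of paper per 1250 m
horizontal_scale = 1.0 / 25000.0                     # 4 cm per kilometer
exaggeration = vertical_scale / horizontal_scale     # about 2.0
```

A faster velocity would give a larger exaggeration, which is why the 2:1 figure is quoted only as approximate.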


A hardship may arise when we try to tie our section to those of older vintages which used different scales. Moreover, the older lines may have been shot with different field variables, which may mean that a constant horizontal scale results in different trace widths. In either case, comparisons are difficult. In critically important cases, the solution is to reprocess the pertinent older data — particularly as analysis techniques continue to improve; we then have an opportunity to plot these sections to conform to our new standards. Irrespective of the choice of scales, these variables must be annotated on the section, preferably across the top of the section ( Figure 5 ),

Figure 5

and certainly on the side label ( Figure 6 ).


Figure 6

=== Trace Display Modes ===

Now to the matter of the traces themselves. There are five conventional display modes that we can use:

• wiggles only;
• wiggles with variable area;
• variable area only; and
• variable density, with and without wiggles.

The first, wiggles only ( Figure 7 ), is the sort of display that comes out of the field camera monitor.

Figure 7

Subtle features are hard to see, trace overlap can cause confusion, and zero crossings


are not readily apparent. Generally, a display of this sort is appropriate only for a preliminary, comparative quality check of the data, of the sort the field observer performs. A sophistication of the wiggles-only plot comes with blacking in one side ( Figure 7 ); this is the variable-area-wiggles (v-a-w) plot. The full waveform is plotted, but this mode gives emphasis only to the blacked-in peaks. Still, the full-waveform plotting is important when we undertake wavelet processing, and also when we want to pick and time an event properly. The variable area wiggles plot has proven to be the most popular display mode in current practice. A plot without wiggles but with variable area (v-a) only may be arranged to give equal emphasis to positive and negative excursions of the trace ( Figure 8 ).

Figure 8

In other words, we see the trace swing to the left as clearly as we see it swing to the right. Such a plot also prints better than a v-a-w plot. The problem with this mode is eyestrain, particularly for a person with astigmatism. This is no small consideration for someone who must pore over these sections many hours a day. With the rectified variable-area plot ( Figure 9 ), peaks (black) and troughs (gray) are plotted on the same side of the trace. For clarity of faults, these are very effective displays.


Figure 9

The choice of display mode is a matter of personal taste, but is also affected by company dictum. Currently, the most popular medium of data exchange is the variable area wiggles plot. With so much data being sold, traded, and processed for partnerships, chances are we shall all work in this mode at one time or another.

=== A Simple Processing Flow ===

Figure 1 shows a basic, simple, uncluttered processing flow.

Figure 1

Its simplicity is based on the goal of just wanting to see the data. We pick and mute our near traces, then apply a correction for an effect (spherical spreading) that we can understand and specify; this is deterministic processing. Then we remove the effects of the near-surface by making datum corrections. Another amplitude adjustment is made for attenuating effects that we cannot quantify explicitly; this is statistical processing,


whereby we compensate for the average effects of unknown agents. Finally, we display, and quickly see that there is much more involved in seismic processing.

Exercise 1: The reference amplitude A0 equals 100 at a time t0 of 0.2 s. Ambient noise has an amplitude of 4. What is the multiplier at 5.5 s?
a) 4.0
b) 5.0
c) 25.0
d) 27.5
e) Can't tell

Exercise 2: In Figure 1 , Shot 1 is at an elevation of 175 m and has a depth of 35 m. The uphole time is 35 ms. Shot 2 is at an elevation of 190 m, and has a depth of 40 m. Datum is 100 m above sea level. Elevation velocity is 2000 m/s. If the weathering velocity can be assumed to be constant, what is the total static for a trace from Shot 1 recorded by a group at Shot 2?
a) 75 ms
b) 80 ms
c) 82 ms
d) 85 ms
e) Can't tell

=== Improving Signal-to-Noise ===

Improving the quality of the initial seismic data presentation involves two tasks: enhancing the strength and clarity of the signal; and suppressing the noise that contaminates the trace. Signal is simply all the useful information on a trace; noise is everything else. The goal is an increase in the signal-to-noise ratio. Noise falls into two broad categories: much coherent noise is caused by a disturbance traveling from group to group at or near the surface; incoherent noise is random in space or time. The distinction is often a matter of group interval; noise that is coherent on one line may appear to be incoherent on a line with a larger group interval. We tailor our definitions to whatever data we are presently considering; we may, however, make some broad generalizations. Most coherent noise is source-generated and cannot always be prevented (although we can avoid recording a lot of it). Incoherent noise, on the other hand, can stem from a variety of unknowns. For the former problem, source and receiver arrays provide some help; for the latter, instrumental filters (which can also reduce some coherent noise) can be useful. In either case, there are steps that a processor can take. The first step is to recognize noise. Figure 1 , for instance, illustrates a very pervasive coherent noise: ground roll.


Figure 1

Most noticeable on shot records of surface sources, these surface waves are identifiable by their low velocity and comparatively low frequencies. Incoherent, or random, noise is recognized by, more than anything, the paucity of coherent data ( Figure 1 ), although it is true that a lack of reflections may simply indicate a lack of reflectors. Still, when there definitely are reflections, as at 1.42 s in the middle of Figure 1 , and when those reflections are not as clear as we would like, it is because signal-to-noise is low. More often than not, it is not apparent what causes incoherent noise, so our only recourse (once the field work is done) is to try to suppress it in the processing.

=== Array Simulation ===

Within the context of a near-trace section, we may choose to define signal as that which is common from trace to trace. Intertrace randomness, therefore, defines noise. Consequently, we can add adjacent traces representing adjacent reflection points to gain a measure of signal enhancement. In effect, this sort of summing ( Figure 1 , Three input traces to one output trace yields a 3:1 mix) is analogous to the source and receiver arrays used in the field.


Figure 1

When we perform the array simulation depicted in Figure 1 , we usually obtain a signal-to-noise benefit ( Figure 2 , A conventional 3:1 mix).

Figure 2

We pay a price for this benefit, however, if the geology changes rapidly from trace to trace. The process of adding part of the first input trace into the second output trace (and, in this illustration, into the third) smears the data; we have sacrificed lateral resolution.


One situation in which the geology changes rapidly from trace to trace occurs in areas of steep dip ( Figure 3 ).

Figure 3

Here, a legitimate and valuable signal is not common (in the sense of being aligned) from trace to trace. The output from an array simulation, therefore, can actually reduce the amplitude of the reflection. Obviously, we need to strike some compromise between our desire for enhanced signal-to-noise and the need to maintain the geologic validity of the data. The prime determinant is the section itself. If the target horizon has steep dip, then trace summing should be avoided. If, on the other hand, there is little dip at the target depth, and if the noise is a serious problem, then this summing, or mixing, may be warranted.
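The trace mixing described above can be sketched in a few lines of Python. This is an illustrative implementation only; the function name, the normalization, and the edge handling are my assumptions, not a standard routine.

```python
import numpy as np

def mix_traces(section, weights=(1.0, 1.0, 1.0)):
    """Simulate an array by mixing each trace with its neighbours.

    section : 2-D array of shape (n_traces, n_samples).
    weights : relative weights of the traces entering each mix
              (three equal weights gives the 3:1 mix of the text).
    Edge traces are mixed with whatever neighbours exist.
    """
    n_traces, _ = section.shape
    half = len(weights) // 2
    out = np.zeros_like(section, dtype=float)
    for i in range(n_traces):
        total = 0.0
        for k, w in enumerate(weights):
            j = i + k - half               # neighbour index
            if 0 <= j < n_traces:
                out[i] += w * section[j]
                total += w
        out[i] /= total                    # normalise so amplitudes stay comparable
    return out
```

Signal that is common from trace to trace passes through unchanged, while intertrace randomness is attenuated; and, exactly as the text warns, a dipping event (not aligned from trace to trace) would be attenuated too.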

Frequency Filtering

Filtering is a selective deletion of information passing through a system and, unless otherwise specified, indicates discrimination on the basis of frequency. In processing, this is a useful approach, since signal and coherent noise (and some random noise) often have different, albeit possibly overlapping, spectra. (Ambient noise pervades the entire signal spectrum, and therefore poses a different problem.) We therefore define signal in the present context as that which falls within a desired frequency band, and noise as anything outside that range.

To be sure, there are analog filters in the recording instruments. For a variety of reasons, however, these are deliberately wide-band, so we need to do our own digital filtering in the processing center. This is most easily performed in the frequency domain, which we reach by passing a trace through a digital "prism." In much the same way that an optical prism breaks up a ray of light into its color (or frequency) components, a digital prism reveals the amplitude spectrum, or frequency composition, of the trace (Figure 1).


Figure 1

Specification of a filter is in terms of its frequency response ( Figure 2 ).

Figure 2

Filtering is then simply a matter of multiplying, frequency by frequency, the amplitude spectrum of the input signal and the frequency response of the filter. After the filtering ( Figure 3 ), we return to the trace as a time series by passing it, backwards, through the digital prism.
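The prism-multiply-prism sequence just described maps directly onto the fast Fourier transform. The sketch below is a minimal illustration, assuming a trapezoid response like the 6-12-80-120 Hz example discussed later; the function names are hypothetical.

```python
import numpy as np

def bandpass(trace, dt, response):
    """Filter a trace by multiplying its spectrum by a frequency response.

    trace    : 1-D array of samples
    dt       : sample interval in seconds
    response : function mapping frequency (Hz) -> gain (0..1)
    The forward FFT plays the role of the digital prism; the inverse
    FFT is the backwards pass through the prism.
    """
    spectrum = np.fft.rfft(trace)                 # through the prism
    freqs = np.fft.rfftfreq(len(trace), d=dt)     # frequency of each component
    spectrum *= response(freqs)                   # frequency-by-frequency multiply
    return np.fft.irfft(spectrum, n=len(trace))   # back through the prism

def trapezoid(f, f1=6.0, f2=12.0, f3=80.0, f4=120.0):
    """A simple corner/null trapezoid response, e.g. 6-12-80-120 Hz."""
    return np.interp(f, [f1, f2, f3, f4], [0.0, 1.0, 1.0, 0.0],
                     left=0.0, right=0.0)
```

Frequencies between the flat corners pass unchanged; frequencies outside the nulls are rejected.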


Figure 3

The manner of specifying a digital filter varies with its type. For many commonly used types, the filter may be specified by a low-cut corner frequency, a high-cut corner frequency, and the low-cut and high-cut slopes. In Figure 4 , for example, the low-cut corner frequency is 12 Hz and the high-cut is 128 Hz.

Figure 4

The corresponding slopes are 20 and 50 dB/octave. Many processors write this response as 12/20-128/50; others write it as 12(20)-128(50). For other types of digital filters, the specification is in terms of the corner frequencies and the null frequencies. Thus, the response of the filter of Figure 5 might be written as 6-12-80-120 Hz.
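The dB/octave arithmetic is worth making concrete. As a sketch (the function name is mine), the nominal straight-line attenuation at a frequency some distance beyond a corner is the number of octaves from the corner times the slope, converted from dB to an amplitude ratio:

```python
import math

def attenuation_factor(freq, corner, slope_db_per_octave):
    """Nominal amplitude attenuation factor at `freq` for a filter edge
    at `corner` Hz falling off at `slope_db_per_octave` (straight-line
    approximation to the response)."""
    octaves = abs(math.log2(freq / corner))   # distance from the corner in octaves
    db = octaves * slope_db_per_octave        # total attenuation in dB
    return 10 ** (db / 20.0)                  # dB -> amplitude ratio

# For the 12/20-128/50 filter of Figure 4:
#   one octave below the 12 Hz corner (6 Hz):   20 dB, a factor of 10
#   one octave above the 128 Hz corner (256 Hz): 50 dB, a factor of about 316
```

Note the factor-of-2 (octave) frequency scale and the factor-of-20-per-decade dB convention for amplitude.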


Figure 5

What types of noise does this frequency selectivity reduce? One is ground roll, the surface waves that contaminate a section and provide no direct subsurface information. A near-trace section is particularly vulnerable to ground roll. Because it does not use a spread of geophones, the near-trace section does not afford recognition of the ground roll by its distinctive low velocity ( Figure 6 and Figure 7 ) .

Figure 6

If the surface wave velocity is constant, the ground roll arrives on each trace of the near-trace section at the same time, and so may appear as a flat, low-frequency reflection.


Figure 7

If the velocity varies, this "reflection" appears to show structure. f-k filtering is the most effective means of removing ground roll. Because ground roll has many frequencies lower than those of the signal, frequency filtering is also a useful device. The spectra do overlap, however, so we must take care not to filter out the surface waves indiscriminately, lest we eliminate some good signal also.

Pervading the whole spectrum is wind noise, either directly on the geophones, or indirectly through blowing dust or rustling vegetation. Digital filtering can help to attenuate those components of the wind noise that are outside the signal spectrum. Finally, there are a host of other noise-producing sources, whether natural, such as animals and raindrops, or artificial, such as machinery and power lines. Power-line noise, at 50 or 60 Hz, is induced into the cables or geophones, or injected into the cables at points of leakage to the ground. One measure for counteracting power-line noise is the use, in the field, of notch filters having a very narrow rejection region at the power-line frequency. Power-line noise is therefore rarely a problem for processors, although digital notch filters are generally available should the need arise. The other incidences of noise

are random, and are usually susceptible to the √N-type attenuation effected by summing N traces.

Noise in marine work is generated by wave action, marine organisms, streamer friction and turbulence, and streamer jerk. Much of this can be dealt with only in processing. Given the continuous nature of seismic operations at sea, any wait for conditions to improve cannot be less than the time needed for the boat to turn a full circle; this is a serious loss of production. So, contracts generally specify that recording be suspended when noise reaches a certain threshold, or when a certain number of traces are noisy, or when air-gun capacity falls below a certain volume. Shots from other boats (Figure 8),


Figure 8

sideswipe from offshore structures ( Figure 9 ),

Figure 9

and irregularities in the sea floor ( Figure 10 ) are additional causes of noise.


Figure 10

We do not hope to eliminate noise; it must always remain the ultimate limit on penetration. We want, rather, to reduce its harmful effects in processing, having done all we reasonably can in the field to minimize recording it.
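The attenuation of random noise by summing, mentioned above, improves the signal-to-noise ratio by roughly the square root of the number of traces summed. A minimal numpy sketch makes the point (the numbers here are illustrative, not from any survey):

```python
import numpy as np

def stack(traces):
    """Average N traces; coherent signal is preserved while the rms of
    incoherent (random) noise falls by roughly sqrt(N)."""
    return np.mean(traces, axis=0)

rng = np.random.default_rng(42)
n, nsamp = 16, 2000
noise = rng.standard_normal((n, nsamp))   # incoherent noise, rms about 1
stacked = stack(noise)
# the rms of the 16-trace stack should be near 1/sqrt(16) = 0.25
```

Coherent noise, by contrast, adds up just as signal does, which is why it needs the separate treatments described in this section.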

Dereverberation

There is a category of noise that is immune to both trace mixing and frequency discrimination. It is common from trace to trace, so summing enhances it. Its frequency range is the same as that of the signal, so filtering does not help. It is the water-layer "ringing" that arises because of the large acoustic contrasts at both the air-water interface and the sea floor. The effect (Figure 1) is that reflections from the ocean bottom arrive repeatedly.

Figure 1


This phenomenon actually has two parts. The first part ( Figure 2 ) is that portion of the downgoing energy that never manages to penetrate the sea floor.

Figure 2

The second occurs after the recording of a primary reflection; this energy bounces off the air-water boundary and heads downward again, to begin anew the interlayer bouncing (Figure 2). The primary sea-floor reflection, and all other primary reflections, are therefore followed by a train of reverberations; this train is aggravated if the sea floor is hard.

Dereverberation is the process of counteracting the water-layer ringing. Although the details are complicated, the process is conceptually simple. In effect, we determine the water depth (and hence the reverberation times) and estimate the reverberation decay; this allows us to synthesize the train of reverberations for each reflection, and simply subtract it from the section. The result (Figure 3) is a cleaner, more interpretable, and better-resolved section.

Figure 3
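The synthesize-and-subtract idea can be sketched for the simplest case. Assume (a simplification of the full physics) that each primary is followed by multiples of amplitude (-r)^n at delays of n times the two-way water time, where r is the sea-floor reflection coefficient; the inverse is then a two-term operator. The function names are hypothetical.

```python
import numpy as np

def add_reverb(trace, r, lag, nterms=20):
    """Model water-layer ringing: follow each event with multiples of
    amplitude (-r)**n at delays of n*lag samples (r = sea-floor
    reflection coefficient, lag = two-way water time in samples)."""
    out = np.zeros(len(trace))
    for n in range(nterms):
        d = n * lag
        if d >= len(trace):
            break
        out[d:] += ((-r) ** n) * trace[:len(trace) - d]
    return out

def dereverb(trace, r, lag):
    """Counteract the ringing by applying the two-term inverse
    operator (1, r) at the water-layer lag."""
    out = trace.astype(float).copy()
    out[lag:] += r * trace[:-lag]
    return out
```

Applied to a reverberated spike, `dereverb` cancels the whole multiple train: each delayed copy is exactly offset by r times the copy one lag earlier.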


Wavelet Processing

The ideal seismic source pulse is often considered to be an infinitely strong impulse, of infinitesimal duration, that contains all frequencies (Figure 1).

Figure 1

The real world, however, subjects seismic waves to attenuation, scattering, multiple generation, and near-surface pulse-shaping effects. Even if we could generate the ideal pulse, therefore, we would have no hope of recovering all the frequencies in it. The earth will not let us. What the earth does let us record is the reflection waveform, a wavelet, such as that of Figure 1. The difficulties posed by this waveform are apparent. It is sufficiently long that interference between closely spaced reflectors is inevitable. And even with single, clear reflections, it is not obvious from the waveform exactly when (or where) the reflections occur.

The goal of wavelet processing is twofold. First, we want to reshape the wavelet so that it is symmetrical; the waveform then has a clear maximum which corresponds to the actual reflection time. Next, we want to compress the wavelet so that it is as short as possible; this permits more discrimination between closely spaced events. We want the reflection waveform to appear as in Figure 1. Basically, wavelet processing relies on computational techniques which allow a known or observed waveform to be changed into a desired waveform.


We recall that the reflection waveform represents the combined effect of the source pulse, the propagation in the earth, and the field equipment. To the degree that we know or can observe these several contributions, we can compensate their effects.

Our first task is to remove the effects of pulse-shaping agents that are part of the field equipment. The distorting effects of the geophones and the recording instruments can be specified exactly. The usual course is to compensate the known phase-frequency response of the instruments (and ideally the geophones) in a process called dephasing. Sometimes we also seek to compensate some of the amplitude-frequency response of the recording instruments and the geophones (particularly the low cut of the ground-roll filter). In any case, the compensation takes us part-way toward a reflection wavelet representing the source pulse filtered and shaped by only the earth.

The second task is to remove the effect of the source pulse itself. At sea it is practical to do this deterministically: to measure the output of the source guns directly, and to construct a good approximation to the composite outgoing pulse. Alternatively, in deep water, it is sometimes possible to isolate the direct sea-floor reflection recorded by the near group, and to use this as a similar measure of the outgoing pulse. In either case, knowledge of the source signature, of which Figure 2 is representative, allows us to compensate, to a degree, the imperfections and variations of the source.

Figure 2

The third task is to compensate the effect of the propagation in the earth. For this we usually have no means of making a direct measurement. We must resort to a statistical approach — one which estimates the average form of all the reflections on the trace. To the extent that we are able to do this, we may then operate on the trace to change this average form to a more desirable form. Traditionally, this process is called deconvolution. The total effect of wavelet processing is to replace a long, distorted wavelet by one that is as close to a spike as the earth and technology allow. The resulting section ( Figure 3 ) is better resolved.


Figure 3

The events are easier to time. Additional geologic features become evident. The section is more interpretable. And it more closely approximates a record of the interfaces in the earth.
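A common statistical route to the deconvolution described above is the least-squares (spiking) filter designed from the trace autocorrelation. The sketch below is one textbook-style formulation, not a production algorithm; the function name and parameter defaults are my assumptions.

```python
import numpy as np

def spiking_decon(trace, nfilt=40, prewhite=0.01):
    """Statistical (spiking) deconvolution sketch: estimate the average
    wavelet shape from the trace autocorrelation, then design a
    least-squares inverse filter that compresses it toward a spike."""
    full = np.correlate(trace, trace, mode="full")
    ac = full[len(trace) - 1:len(trace) - 1 + nfilt].copy()  # one-sided autocorrelation
    ac[0] *= 1.0 + prewhite                  # prewhitening for numerical stability
    lags = np.abs(np.arange(nfilt)[:, None] - np.arange(nfilt)[None, :])
    R = ac[lags]                             # Toeplitz autocorrelation matrix
    rhs = np.zeros(nfilt)
    rhs[0] = 1.0                             # desired output: a spike
    f = np.linalg.solve(R, rhs)              # normal equations R f = rhs
    return np.convolve(trace, f)[:len(trace)]
```

Because the method works from the trace statistics alone, it embodies exactly the compromise the text describes: it estimates the average wavelet rather than measuring it.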

The Processing Flow

Figure 1 illustrates the processing flow diagram that is applicable to improving the signal-to-noise ratio.

Figure 1


The first pass, just to see the data, is summarized by the shaded steps. We quickly see, however, that this is usually inadequate. The intermediate processes (unshaded) represent our attempts to improve the signal-to-noise ratio and to refine the reflection waveform. The seismic section that we now have is better, but we have much left to do.

Quite often, the processor can, with no sacrifice of accuracy, combine several steps in one run. This is true because not all the steps require visual inspection of the data for proper selection of the variables. It is wise, however, to let each pause in the processing be accompanied by a quality-control (QC) display. This permits us to see the effects of our processing and tells us that we are on the right track.

Exp_01: The processor has designed a filter with a low-cut frequency of 12 Hz, a high-cut of 55 Hz, and low-cut and high-cut slopes of 18 dB/oct and 72 dB/oct. The predominant noise sources have frequencies of 8.5 Hz and 78 Hz. The nominal attenuation at those frequencies is by a factor of approximately
a) 9 and 36
b) 8 and 4000
c) 3 and 63
d) 1.4 and 1.4
e) Something else

Exp_02: The water depth along a line starts at 165 m and slopes down to 240 m. The velocity of sound in sea water is 1500 m/s. At what range of reflection times does the first ocean-bottom multiple appear?
a) 0.110 s to 0.160 s
b) 0.220 s to 0.320 s
c) 0.330 s to 0.480 s
d) 0.440 s to 0.640 s
e) Something else

CMP Data Applications

The Utility of CMP Data

In discussing the improvement of the signal-to-noise ratio of a section, we maintained as our data set the near-trace section that we used just to get a first look at the data. We remember, however, that we can construct essentially zero-offset sections from common-mid-point data. Furthermore, the multiplicity of data acquired by the cmp

technique allows us to get √N-type signal enhancement as one of our benefits. The second stage of sophistication in processing, after improving the signal-to-noise ratio, is an analysis of how cmp data can be used to best advantage.

The technique of common-mid-point recording is summarized briefly here. In Figure 1, a shot is recorded into the many geophone groups of a spread.


Figure 1

Each recorded trace then represents a source-receiver pair, and the trace is in turn represented by a mid-point. The mid-point, we remember, is a surface position halfway between the shot and the center of the geophone group. One such mid-point and the source-receiver pair it represents are shown in Figure 2.

Figure 2.

By the time we take the next shot, both shot and spread have moved, or rolled along. We find that the mid-point of Figure 2 now lies halfway between another source and receiver ( Figure 3 ).


Figure 3

And as the line progresses, other source-receiver pairs are similarly disposed about this mid-point. This one surface position is now the common mid-point for many source-receiver pairs ( Figure 4 ), each of which is, of course, represented by a recorded trace.

Figure 4

While all the traces from Figure 4 have the same mid-point, they can be characterized by different offsets. This fact permits a new definition of signal — that which is common to recordings made at different offsets. Noise is, as usual, everything else. Before we can take advantage of our cmp multiplicity, we have to arrange all the traces by their mid-point coordinates. This cmp sort is done early in the processing; the suite of traces thus brought together is a cmp gather ( Figure 5 ).


Figure 5

We mute the first breaks as we did on the near-trace section, and are now ready to see the strength of the cmp technique.
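The cmp sort itself is simple bookkeeping: group traces by mid-point coordinate, then order each gather by offset. A minimal sketch (the dictionary field names are hypothetical, not a real trace-header standard):

```python
def cmp_sort(traces):
    """Gather traces by common mid-point. Each trace is a dict carrying
    its source and receiver x-coordinates."""
    gathers = {}
    for tr in traces:
        mid = (tr["source_x"] + tr["receiver_x"]) / 2.0   # mid-point position
        gathers.setdefault(mid, []).append(tr)
    for gather in gathers.values():                       # order each gather by offset
        gather.sort(key=lambda tr: abs(tr["receiver_x"] - tr["source_x"]))
    return gathers
```

In practice the sort is done on trace headers, and mid-points are binned rather than matched exactly, but the principle is the same.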

NMO Determination

The nature of a common-mid-point gather is such that reflections must be common from trace to trace. After all, except in the case of strong dips, they come from substantially the same area of the reflector. In Figure 1, we see the traces of five successive cmp gathers; each gather contains at least 15 versions of the reflection information derived from different source-receiver pairs of a split spread.


Figure 1

We can see reflections common from trace to trace of each gather, but they are not yet aligned in time; we have yet to account for the effect of the different offsets. As source-to-receiver offset increases, so does the length of the travel path ( Figure 2 ) and, therefore, the travel time.

Figure 2

We remember that this increase in reflection time (Figure 3) is known as normal moveout (nmo). Before we can sum the traces of a gather, we need to make all the reflections align horizontally; we need to remove the moveout. But first, we have to determine just what the nmo is.

Figure 3

At a particular offset, nmo is defined by the zero-offset reflection time and the appropriate velocity. To see why this is so, we modify Figure 2 slightly by moving the source from its actual position to that of its image in the reflector (Figure 4).

Figure 4

In the simple case of a reflector with no dip, this manipulation yields a right triangle, the sides of which correspond to the source-receiver offset (x), the zero-offset reflection time (t0), and the source-receiver reflection time (tx). If we assume a constant velocity V, then we can easily get all three sides into units of time (since x is in units of length), and then apply the Pythagorean theorem governing right triangles. In this manner, we calculate the source-receiver reflection time, from which we subtract the zero-offset reflection time to get the normal moveout. The usual form of the derived equation is

    tx = sqrt(t0^2 + x^2/V^2),    nmo = tx - t0.

This equation is fundamental, and we must either memorize it or be ready to derive it instantly. For horizontal reflectors, then, the determination of normal moveout (at a certain reflection time and offset) is tantamount to the determination of a velocity. The operation of determining the normal moveout for purposes of nmo correction is therefore called velocity analysis. The name is sometimes deceptive; what we are really determining is a normal moveout. The term velocity analysis is widely used, however, and we shall use it here.

The procedure is one of selecting a trial velocity, determining the nmo pattern that results, removing this amount of nmo from the gather, and measuring, in some way, the resulting alignment. We do this for a number of trial velocities and select the one that gives the best alignment. This is the one that provides the best fit to the observed moveout pattern at that zero-offset time.

Although it would be ideal to make nmo determinations for every cmp, this is not ordinarily done. Velocity analysis is an iterative procedure that must manipulate many reflections on many traces; it takes a long time and is expensive. So we space the analysis locations out, performing one at appropriate places along the line, perhaps every kilometer or so. For the intermediate cmp locations, we interpolate our results. Although there are hazards in doing this, it is usually acceptable.
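The moveout equation and the trial-velocity procedure can be sketched in a few lines of Python. This is a toy version (hypothetical function names, constant velocity, a least-squares misfit instead of a real coherence measure such as semblance):

```python
import math

def nmo(t0, x, v):
    """Normal moveout (s) at offset x (m) for zero-offset time t0 (s)
    and velocity v (m/s): sqrt(t0**2 + (x/v)**2) - t0."""
    return math.sqrt(t0 ** 2 + (x / v) ** 2) - t0

def best_velocity(t0, offsets, picked_times, trial_velocities):
    """Crude velocity analysis: choose the trial velocity whose
    predicted moveout best fits the picked reflection times."""
    def misfit(v):
        return sum((t0 + nmo(t0, x, v) - t) ** 2
                   for x, t in zip(offsets, picked_times))
    return min(trial_velocities, key=misfit)
```

A real velocity analysis scans many trial velocities at many zero-offset times and measures the alignment of the corrected gather itself, but the logic is the same.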

NMO Correction and Stacking

Once we determine the normal moveout, we effect the correction by removing that amount of time from the reflection time. The nmo correction is thus a time shift, and always a negative one. Furthermore, since the moveout and, therefore, its removal are time-varying, we call these time shifts dynamic corrections. After correction, events line up across the gather (Figure 1). By removing the effect of offset, we simulate having all the sources and receivers at the mid-point.


Figure 1

Because nmo corrections are dynamic, it is possible that the amount of time-shift varies within one reflection wavelet. At a given offset, nmo decreases with time; the early part of a waveform is therefore subject to a greater time-shift than the later part. The effect ( Figure 2 ) is called nmo stretch, and has the appearance of a change in frequency. It is worst at early times and long offsets, where the rate-of-change of nmo is largest.

Figure 2
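A common working measure of nmo stretch (used, for example, by Yilmaz) is the fractional change in apparent frequency, approximately the nmo divided by the zero-offset time. A sketch, with hypothetical function names and an assumed 30% limit:

```python
import math

def nmo_stretch(t0, x, v):
    """Fractional frequency distortion caused by the NMO correction,
    approximately delta_t_nmo / t0."""
    tx = math.sqrt(t0 ** 2 + (x / v) ** 2)
    return (tx - t0) / t0

def stretch_mute(t0, x, v, limit=0.3):
    """True where the sample should be muted, i.e. where the stretch
    exceeds the given limit (e.g. 30%)."""
    return nmo_stretch(t0, x, v) > limit
```

The measure behaves as the text describes: it is largest at early times and long offsets, so the mute removes the shallow, far-offset corner of the corrected gather.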

Clearly, nmo stretch is a problem when it comes time to sum the traces of the gather. The solution is to apply a mute to the corrected gather (Figure 2), so that reflections with an unacceptable degree of stretch are not included in the summation. As with all mutes, this is applied with a ramp. Optionally, we might automatically mute any data for which the nmo stretch exceeds a given value (e.g., 30%).

The final gather (Figure 2) is now ready for summation, to yield one stacked trace. The summation procedure is known as cmp stacking or, usually, simply stacking; the entire suite of stacked traces constitutes a stacked section. And the velocity implicit in the nmo correction applied before stacking is called the stacking velocity. We see in Figure 3

Figure 3

the cmp-stacked section corresponding to our near-trace section of Figure 4 and Figure 5 .

Figure 4


Clearly, the multiplicity of data greatly improves the section.

Figure 5

The Processing Flow

The processing flow diagram has grown again (Figure 1), this time to take advantage of all the data we collected.

Figure 1

To that end, we have replaced the near-trace selection with a cmp gather. We do not perform dynamic corrections and stack right away, however; datum corrections and spreading corrections, among others, must be performed on the individual traces of a gather.


The order of steps represented in Figure 1 is not immutable. Indeed, processing must be tailored to the data. Figure 1 is one logical approach, however, particularly with respect to the stacking process. Operations such as deconvolution, dereverberation, and filtering are usually performed before stack, in order to make velocity analysis more accurate. Additional deconvolution and filtering may take place after stack as well. A simple summation of traces is a linear operation; it does not matter (except in cost) whether other operations are performed before or after the summing, although filtering and deconvolution make velocity analysis easier and more accurate. Normal moveout corrections, on the other hand, are not linear; they stretch the data. This stretch we can reduce by severe muting. There is therefore a compromise to be made between the signal-to-noise enhancement provided by muting early and the economies made possible by muting late. Final filters are invariably applied after stack.

Ex01: A shot goes off into a 48-trace spread. The near offset is 150 m and the group interval is 50 m. Assuming a constant velocity of 1828 m/s, determine the normal moveout at the far offset for a reflector 2742 m deep.
a) 274 ms
b) 297 ms
c) 308 ms
d) 313 ms
e) Something else

Ex02: A shot goes off into a 48-trace spread. The velocity is 1828 m/s and the reflector is 2742 m deep. Determine the nmo at the far offset.

a) 122 ms
b) 157 ms
c) 170 ms
d) 183 ms
e) Something else

Note that the reflectors of Exercises Ex01 and Ex02 both have the same zero-offset reflection time. A faster velocity results in less normal moveout, for a given t0.

Ex03: A shot goes off into a 48-trace spread. The reflection from a horizon at 914 m depth has an apparent frequency of 25 Hz. After the nmo correction, what is its apparent frequency at the far offset? (Hint: A 25 Hz wave has a period of 40 ms, and there is no distortion of the wave before nmo correction.)
a) 15 Hz
b) 24 Hz
c) 25 Hz
d) 35 Hz
e) Something else

Residual Static Corrections

Datum corrections should account for elevation changes and for inconsistencies in the near-surface. The former can be specified to good precision; the low-velocity layer, however, is another matter, particularly where we do not have an uphole time at every geophone group location. Furthermore, we imply in Figure 1 that the travel path from datum to receiver is vertical; this is not always true.


Figure 1

Finally, if the crew uses a surface source, chances are that we can make only approximate weathering corrections, based on estimates of weathering velocity and depth. The result of this imprecision is a series of small, unsystematic errors in our original field static corrections. These errors are called residual statics, and removal of them improves data continuity and interpretability ( Figure 2 and Figure 3 ).

Figure 2

Residual statics exist in spite of the fact that we have used all the elevation and weathering information available to us.


Figure 3

Our only recourse in determining them, therefore, is measurement of reflection times. This task is made possible by the existence of common-mid-point data, and by the redundancy which it affords. Within the current context, the utility of cmp data is illustrated in Figure 4 and Figure 5 , for the simplest possible case of 2-fold coverage.

Figure 4

The first cmp gather is shown in Figure 4 (part a),


Figure 5

the second in Figure 4 (part b), and so on. Next, we invoke the assumption of surface consistency: that a unique residual static may be associated with each ground point. For the moment, we also assume that the residual static is the same whether the ground point is occupied by a source or by a receiver. Then, having applied datum statics and nmo corrections to the two reflection paths of Figure 4, we measure the difference in reflection times between them. Ideally, of course, this difference is zero. In practice, its value t' is the algebraic sum of the residual statics r at ground points 2 and 3, minus the algebraic sum of the residual statics r at ground points 1 and 4, plus the difference in the errors n of normal moveout (the residual nmo's). Thus we have one equation:

    t' = (r2 + r3) - (r1 + r4) + (n''' - n')

where n' is the residual nmo at an offset of one group interval and n''' is that at an offset of three group intervals. (This equation is most often written with a structural term as well, to account for structural variations along the horizon; see Yilmaz (1987), p. 196.) We have one equation with six unknowns. Next we do the same for the second cmp gather (Figure 4). We have

    t'' = (r3 + r4) - (r2 + r5) + (n''' - n')

provided we can assume that the errors of nmo do not change rapidly along the line. We now have two equations with seven unknowns. Continuing to the third and fourth cmp gathers (Figure 5), we obtain four equations with nine unknowns. Continuing further does not help; we shall never obtain as many equations as unknowns. If we repeat this exercise with 3-fold coverage, however, we find that after considering the fourth gather we have 12 equations and 12 unknowns; the problem becomes solvable.

As we go to higher fold, we find that we do not have to assume that the source residual and receiver residual at the same ground point are equal, and that slow variation of the residual nmo can be allowed. Indeed, at high folds of coverage, the system is over-determined; we have more equations than unknowns, and we then adopt the set of solutions which is a best fit to the data.

Naturally, the process is not perfect. Noise, coupled with slight irregularities along the travel paths, causes imperfections in the results. The procedure is therefore iterative, our goal being to minimize those imperfections. We seek to split the difference, so to speak, and determine the suite of source and receiver residuals that best fits all the statistics.

Migration


Quite often, our plotted reflection does not accurately represent the location of the reflector. This is because of the principle that the angle of reflection must equal the angle of incidence. In the case of coincident source and receiver (which our methods simulate), this means perpendicular incidence and reflection (Figure 1, the travel path for dipping reflectors). Since all events derived from one cmp are plotted vertically below that cmp, we must move all reflections that are not horizontal. The process is called migration.

Figure 1

Migration is the repositioning or replotting of reflections so that their spatial relationships are correct. In a sense, we move updip that part of the trace with the dipping reflector on it. Effectively, we move the reflection from the trace that recorded it to the trace that would have recorded it if the source had been on the reflector, and the travel path were vertically upward ( Figure 2 ).


Figure 2

The process of migration is accomplished in several ways, one of which is easy to understand. In Figure 3 ,

Figure 3

we see the actual seismic path to a dipping reflector, and in Figure 4, we see the error introduced when we plot the reflection below the observation point.


Figure 4

If all we have is the one observation at mid-point 1, and nothing else to tell us the dip, then all we know is that we have a reflection at a certain time. In that case, it could have come from anywhere along a surface (for now, let us say a circle, assuming zero offset and constant velocity) representing that constant time. So in Figure 5, we actually put that reflector at all its possible positions around the circle.

Figure 5

Then, in Figure 6 , we do the same for the trace from mid-point 2, mid-point 3, and so on.


Figure 6

Our first thought is that this would produce total confusion. But we find that the circles reinforce in just one zone, and that zone is the true position of the reflector. In practice, we obtain a migrated section ( Figure 7 ),

Figure 7

in which we see only the reflection moved to its correct position; the distracting "smiles" of Figure 6, which we would expect to see all over the section, are usually seen only below the last reflection, where the section becomes dominated by noise. The smiles in the noise are a migration artifact, caused by the lack of destructive interference. The relationship between the true dip and the apparent dip is given by sin(true dip) = tan(apparent dip) (Robinson, 1983).

In areas of gentle dip, failure to migrate a line is seldom so egregious an error that whole fields are missed. Migration does, however, enable the proper display of steeply dipping reflections, and is critical for the clarification of faults and unconformities. But we must be on guard: with 2-D migration, the migrated reflection stays in the plane of the section.


If the dip has a component transverse to the line, this is erroneous. Proper migration of such data is afforded by 3-D techniques.
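The circle-superposition idea of Figures 5 and 6 can be sketched directly. The toy below (hypothetical function name; zero offset and constant velocity assumed) smears each picked event over all subsurface points at radius v*t/2 from its midpoint; where the circles reinforce is the true reflector position.

```python
import math
import numpy as np

def circle_migrate(midpoints, times, amps, v, grid_x, grid_z):
    """Toy zero-offset migration by superposition of circles: each event
    (midpoint, time, amplitude) is spread over every subsurface point at
    radius v*t/2 from its midpoint on a regular (x, z) grid."""
    dz = grid_z[1] - grid_z[0]
    image = np.zeros((len(grid_x), len(grid_z)))
    for xm, t, a in zip(midpoints, times, amps):
        r = v * t / 2.0                       # constant-velocity radius
        for i, x in enumerate(grid_x):
            dz2 = r * r - (x - xm) * (x - xm)
            if dz2 < 0.0:
                continue                      # circle does not reach this x
            j = int(round(math.sqrt(dz2) / dz))
            if j < len(grid_z):
                image[i, j] += a              # smear amplitude along the circle
    return image
```

For a point diffractor, every circle passes through the diffractor itself, so the image peaks there; elsewhere the circles fail to reinforce, which is the discrete version of the "total confusion that cancels" argument in the text.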

Inversion

The seismic section is a representation of the interfaces in the earth: one reflection pulse for each reflecting interface. Sometimes the main concern of the interpreter is interfaces (for example, when searching for unconformities), but in the delineation of a reservoir, the interest is often more in layers. For this purpose, we need to convert our record of interfaces into a record of layers. The concept is illustrated in Figure 1 using very elementary spike pulses for the reflections.

Figure 1

In Figure 1, we see the top and base of a reservoir layer. Next, in Figure 1, we see the corresponding reflection trace; it contains a reflection from the top and a reflection from the base, but no explicit representation of the internal properties of the layer. Now we slide down the trace a window whose length exceeds that of the trace, and we add together all the values of the trace within that window. As the window slides onto the upper part of the trace (Figure 1), all values are zero and there is no output (Figure 1). As the window slides to include the first reflection (Figure 1), the output rises abruptly, in an upward step. The output remains constant, on this step, until the window reaches the second reflection (Figure 1). Then, in this illustrative example, the positive values of the first reflection are offset by the negative values of the second reflection, and the output falls abruptly, in a downward step. The output trace (Figure 1) now contains a steady deflection corresponding to the layer. Note that the output of this procedure is a measure of acoustic impedance (density times velocity) as a function of time.

The process of which this is a very elementary illustration is known as inversion. It may be made very sophisticated, and coupled to the migration process; in its widest sense, then, its object is to convert the seismic section to represent the properties of the earth's layers, correctly positioned in space.
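The sliding long window described above is simply a running sum, so the elementary version of inversion is a cumulative sum of the spike trace. A minimal sketch (illustrative only; real inversion must also handle the wavelet, noise, and the missing low frequencies):

```python
import numpy as np

def running_sum_inversion(trace):
    """Elementary inversion: the cumulative sum of a spike reflection
    trace is a stepwise estimate of relative acoustic impedance
    versus time."""
    return np.cumsum(trace)
```

A positive spike at the top of the layer produces the upward step, and an equal negative spike at the base brings the output back down, leaving a steady deflection across the layer.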

 
