
Report

Reflection Seismology Processing (ProMAX)

Benjamin Zuercher, Noel Ammann

June 2, 2013

Contents

1 Introduction
2 General info about the data
3 Overview of the processing flow
4 Pre-stack processing
  4.1 Editing
  4.2 Amplitude scaling
    4.2.1 True amplitude recovery
    4.2.2 Automatic gain control
  4.3 Top mute
  4.4 First break picking
  4.5 Refraction statics
  4.6 Frequency filtering
  4.7 Deconvolution
5 Stack processing
  5.1 CDP sort
  5.2 Velocity analysis
  5.3 NMO correction
  5.4 Stacking
  5.5 Residual statics
  5.6 Iterations
6 Post-stack processing
  6.1 Noise reduction
  6.2 Migration
  6.3 Time to depth conversion
7 Interpretation
References

1 Introduction

In the context of the course Reflection Seismology Processing at ETH Zurich, a seismic dataset was given to be processed with the software ProMAX 2D, Version 5000.0.3.3, from Landmark Graphics Corporation. The computer used runs Red Hat Linux.

The goal of this course was to gain insight into seismic data processing with the given software, to learn the tools behind the processing, and finally to obtain a realistic image of the subsurface of an area in Northern Germany.

To do so, the velocities of the different layers have to be reconstructed as accurately as possible by processing the raw data through several steps. Reflections should be better visible after the processing thanks to an increased signal-to-noise ratio and improved resolution.

    2 General info about the data

All important information about the geometry could be found on the recording sheet and had to be added to the data as a first step. The seismic survey line has a total length of 14200 m and a spread length of 6100 m. The recording uses 120 channels, each with 24 geophones coupled to it. A gap of 200 m between channel 60 and channel 61 had to be added to the data as well (see figure 1).

Figure 1: Channel configuration for the data from Northern Germany. A total spread length of 6100 meters includes 120 channels. A channel is located every 50 meters, except between channel 60 and channel 61, where a gap of 200 meters is inserted. Every channel consists of 24 geophones.

The spacing between the geophones is 2 m and the channel spacing is 50 m. Each group of geophones is connected at its center (see figure 2).


Figure 2: Geophone configuration of one channel. 24 geophones are coupled in the middle, each 2 meters from the next. Rather than a single line of 24 geophones, there were two parallel lines (0 meter horizontal spacing) of 12 geophones each, with a geophone spacing of 4 meters.

The recording has 285 stations (101-385) in total. The recording length is 6 s and the sampling rate is 2 ms. A 50 Hz notch filter was applied.

The positions of the channels for all shots (the whole seismic line) can be seen in figure 3. The geophones stayed in place for the first few and the last few shots. The boundary between yellow and red marks the source position. Because the source moved while the channels stayed in place at the beginning, the source is said to roll into the seismic line; the same happens at the end of the line (roll out).

Figure 3: The seismic line showing the offset. Zero offset is found at the color boundary between yellow and red.

The CDP fold is very small at the beginning of the seismic line, but increases quickly (roll in) and reaches a maximum of 35. It varies little in the middle of the line, rises briefly once more, and then drops off strongly towards the end of the line (roll out).


Figure 4: Fold versus common depth point.

    3 Overview of the processing flow

We can split our processing into four main parts. Once the geometry is correctly set up, corrections can first be applied to all shots. After deconvolution the data are stacked, analyzed and improved, and finally a migration is applied before the section is converted from time to depth. The following list summarizes all the processing flows we used:

Pre-stack processing

- Editing (kill traces)
- Amplitude scaling (correct for attenuation)
- Top mute (remove insignificant waves)
- First break picking (provides the information for a good velocity model)
- Refraction statics (correct for the weathered layer and topography)
- Frequency filtering (remove ambient noise)
- Deconvolution (improve resolution)

Stack processing

- CDP sort (reflections are sorted into CDP gathers)
- Velocity analysis (picking velocities at recognizable layers)
- NMO correction (correct the reflection arrival times)
- Stacking (summing into a single output)
- Residual statics (velocity corrections in the shallower part)
- Iterations (iterating the whole stack-processing flow to improve the stack)

Post-stack processing

- Noise reduction / image enhancement (using a filter to reduce noise)
- Migration (convert the reflections into a more realistic geological image)
- Time-to-depth conversion (convert the time axis to a depth axis)

Interpretation

    4 Pre-stack processing

Before we start processing, an example shot gather is shown in figure 5.

Figure 5: Example shot number 45 before processing. The shot contains a lot of noise and badly coupled traces, hence the resolution is quite poor.


4.1 Editing

Bad traces were killed. They were easy to recognize because of their high noise level before the first breaks. Had we not removed them, the first break picks at these traces would have been random and incorrect. The high noise is probably caused by badly coupled geophones or by ambient noise close to a geophone.

    4.2 Amplitude scaling

    4.2.1 True amplitude recovery

We need to apply an amplitude recovery because of attenuation and wavefront-spreading effects [Yilmaz, 2001]. We use a mathematical function for this true amplitude recovery:

A(t) = A0(t) * t^n,

where A(t) is the output amplitude, A0(t) the initial amplitude, t the traveltime and n an exponent that we vary until we obtain a suitable result. We tested values of n between 1.5 and 2.2 and found 1.6 to be the best value. It was chosen because the reflections become much easier to recognize; if n is too high, the noise in the deeper parts is amplified and the upper reflections become less clear, which we do not want. The maximum application time was set to 2800 ms, because no reflections can be seen below this time.
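A minimal NumPy sketch of this gain (our own illustration, not the ProMAX module; the samples-by-channels array layout and holding the gain constant after the 2800 ms application time are assumptions):

import numpy as np

def true_amplitude_recovery(traces, dt=0.002, n=1.6, t_max=2.8):
    # traces: 2-D array, time samples x channels
    t = np.arange(traces.shape[0]) * dt
    gain = np.minimum(t, t_max) ** n   # hold the gain constant after t_max
    gain[0] = 1.0                      # avoid multiplying the first sample by zero
    return traces * gain[:, None]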

Figure 6: Shot number 45 after applying true amplitude recovery. The exponent used has the value 1.6. Reflections are much more visible after this processing step.


4.2.2 Automatic gain control

Automatic gain control is an operator similar to the one described before, since it also tries to compensate for the attenuation of waves propagating through a medium, but it is applied only within a certain time gate. This gate is defined by an operator length, which now has to be determined. The operator length was tested between 500 ms and 1700 ms, and the optimal value for our data is 1500 ms. A higher value strengthens the reflections and decreases the noise, but too high a value makes the deeper reflections vanish into the noise again.
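A sketch of one common AGC variant (mean-absolute scaling in a centred gate; ProMAX offers several gate and normalization options, so this is illustrative only):

import numpy as np

def agc(trace, dt=0.002, window=1.5, eps=1e-10):
    # Scale each sample by the inverse of the mean absolute amplitude
    # inside a centred sliding gate (here a 1500 ms operator length).
    n = max(1, int(round(window / dt)))
    envelope = np.convolve(np.abs(trace), np.ones(n) / n, mode="same")
    return trace / (envelope + eps)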

Figure 7: Shot number 45 after applying automatic gain control. An operator length of 1500 ms was used; the reflections are now more pronounced than before.

    4.3 Top mute

Basically, we are only interested in the reflected waves, so first-arrival waves with high amplitudes can be removed from the display with a top mute. The information is not deleted; it simply no longer appears on the screen when the top mute is applied [ProMAX, 1999]. An example of a top mute is shown in figure 8.
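Numerically, a top mute amounts to blanking all samples above the picked mute time of each trace; a sketch of our own under the zeroing interpretation (ProMAX itself only hides the data):

import numpy as np

def top_mute(traces, mute_times, dt=0.002):
    # traces: samples x channels; mute_times: picked mute time (s) per channel.
    t = np.arange(traces.shape[0]) * dt
    keep = t[:, None] >= np.asarray(mute_times)[None, :]
    return traces * keep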


Figure 8: Shot number 45 with a top mute. The green line is the boundary above which all data were removed.

    4.4 First break picking

First breaks give us helpful information for building a good velocity model of the subsurface. It is therefore important that these first breaks are picked correctly. The inverses of the different slopes give us the velocities [Yilmaz, 2001].

Figure 9: The inverses of the slopes of the first arrivals define the velocities of the subsurface. v1 is the velocity of the first layer, v2 the velocity of the second layer.
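The slope-to-velocity relation can be made concrete with a small line fit over the picks of one first-break branch; a sketch of our own (function name and inputs are illustrative):

import numpy as np

def branch_velocity(offsets_m, pick_times_s):
    # Fit t = x / v + t0 through the picks of one first-break branch;
    # the inverse of the slope is the apparent velocity, the
    # intercept is the delay time of that refractor.
    slope, intercept = np.polyfit(offsets_m, pick_times_s, 1)
    return 1.0 / slope, intercept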

First, a time gate had to be defined to indicate the zone in which the first breaks lie. A line was drawn along all channels, approximately 50 ms above and 100 ms below the actual first break. Then we picked the first breaks manually for several shot gathers until the neural network was trained well enough to be applied to all shot gathers automatically. Each shot gather then had to be checked and adjusted (figure 10).


Figure 10: The red line shows the first breaks for one shot gather. They were first picked manually for several shot gathers; once the neural network was trained well enough, the picking was done automatically.

    4.5 Refraction statics

Refraction statics are needed because the weathered surface layer may contain velocity variations which, together with the topography, cause false delay times and thus cause trouble in further processing steps. These delays should therefore be corrected so that they are not misinterpreted [Yilmaz, 2001, ProMAX, 1999]. To do all this we had to switch to a server that provides an 8-bit pseudo-color display.

As a first step, it had to be defined which first breaks belong to which layer. To do so, a velocity had to be chosen (see figure 11), although this velocity only serves as a guide for the following steps. Only the end points of the drag lines were used as layer boundaries [ProMAX, 1999]. We then corrected the velocities in the refractor velocity mode to prevent the velocities from being completely wrong at some stations. It was important that the velocity v1 was always higher than the velocity v0; otherwise it had to be corrected (see figure 12).

As a next step, the receiver delay times were displayed for the whole line. There are three different static solution methods, and they behave differently, as can be seen in figure 13. The GRM method produced more errors than the DRM and STD methods. That is why the diminishing residual matrices (DRM) method was chosen as the most useful one.


After the refractor depth model was displayed and checked, the output statics were added to the database.

Figure 11: Defining a velocity as a guide for the further steps and layer boundaries in the refraction statics. The black line was drawn and a resulting velocity of 1657 m/s was found.

Figure 12: The refractor velocity was corrected so that no more crossings between the two velocities v0 and v1 existed.


Figure 13: Three different statics solutions: DRM, GRM and STD.

    4.6 Frequency filtering

We used an Ormsby bandpass filter to remove ambient noise. Our data still contained disturbing noise (e.g. ground roll), so the typical frequency range in which this noise appears is of no use to us and can be removed. Our filter corners were 18-27-80-110 Hz. After applying the filter (figure 14), the shot gathers looked much better than before: high-amplitude noise was removed, so the reflections are now much clearer.
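An Ormsby filter is defined by a trapezoidal amplitude spectrum with linear tapers between the four corner frequencies. A zero-phase sketch of our own with the corners used here:

import numpy as np

def ormsby_bandpass(traces, dt=0.002, corners=(18.0, 27.0, 80.0, 110.0)):
    # Trapezoidal spectrum: full pass between f2 and f3,
    # linear tapers f1-f2 and f3-f4. traces: samples x channels.
    f1, f2, f3, f4 = corners
    n = traces.shape[0]
    freqs = np.fft.rfftfreq(n, dt)
    taper = np.zeros_like(freqs)
    rising = (freqs >= f1) & (freqs < f2)
    flat = (freqs >= f2) & (freqs <= f3)
    falling = (freqs > f3) & (freqs <= f4)
    taper[rising] = (freqs[rising] - f1) / (f2 - f1)
    taper[flat] = 1.0
    taper[falling] = (f4 - freqs[falling]) / (f4 - f3)
    spec = np.fft.rfft(traces, axis=0)
    return np.fft.irfft(spec * taper[:, None], n=n, axis=0)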

Figure 14: Shot number 45 after applying an Ormsby bandpass filter (18-27-80-110 Hz). The high amplitudes in the middle are removed, making the reflections much easier to identify.


4.7 Deconvolution

We applied deconvolution to improve the resolution and to remove multiples. This is achieved by compressing the wavelet and trying to concentrate the whole energy at the beginning of a reflection. We do that by estimating the effects of the earth, putting this information into a linear filter, and then designing and applying inverse filters [Yilmaz, 2001, ProMAX, 1999].

We used three different kinds of deconvolution:

Spiking deconvolution: The wavelet has to be minimum phase (the energy is at the beginning of the wavelet), and a zero-lag spike is the desired output (turning the source into a specific frequency content). The filter used is called Wiener-Levinson. [Yilmaz, 2001, ProMAX, 1999]

Predictive deconvolution: It also assumes that the wavelet is minimum phase. The desired output is a time-advanced version of the input series: if x(t) is the input, x(t + a) is the output, where a is the prediction lag. The filter type is the same as for the spiking deconvolution; in fact, if the prediction lag shrinks to a single sample, predictive deconvolution reduces to spiking deconvolution. [Yilmaz, 2001, ProMAX, 1999]

Time-variant spectral whitening: The TVSW algorithm operates in the frequency domain, where the frequencies are balanced in order to obtain better resolution. As the name says, the whitening can vary in time. In essence, the dataset is transformed into the frequency domain, multiplied by a filter spectrum (bandpass) and transformed back to time; an automatic gain control scalar is applied to all traces, which are then summed. [Yilmaz, 2001, ProMAX, 1999]

As a first step, the autocorrelation was analyzed to define the best values for the decon operator length (how much of the autocorrelation is used) for the spiking and predictive deconvolution, and the prediction lag for the predictive deconvolution. Several parameters were tested; the decon operator length was chosen to be 128 ms, because the higher amplitudes needed to be in the upper part, and this was achieved best at this value. The prediction lag was chosen where the first zero crossing was more or less the same for all channels, which was found at 12 ms (see figure 15). The spectral balancing scalar length for the TVSW method was set to 11 ms and the spectral balancing frequencies are 15-22-125-170 Hz.

Figure 16 shows the frequency spectra before and after the three deconvolutions; it illustrates quite well that deconvolution tries to flatten the frequency spectrum.
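A single-trace sketch of Wiener-Levinson predictive deconvolution with exactly these parameters (128 ms operator, 12 ms lag); the pre-whitening fraction is our own addition for numerical stability, not a value from the report:

import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, dt=0.002, oper_len=0.128, lag=0.012, white=0.001):
    n = int(round(oper_len / dt))       # operator length in samples
    L = int(round(lag / dt))            # prediction lag in samples
    full = np.correlate(trace, trace, mode="full")
    r = full[len(trace) - 1:]           # one-sided autocorrelation
    r0 = r[:n].copy()
    r0[0] *= 1.0 + white                # pre-whitening for stability
    a = solve_toeplitz(r0, r[L:L + n])  # Wiener prediction filter
    pef = np.zeros(L + n)               # prediction-error filter
    pef[0] = 1.0
    pef[L:] = -a
    return np.convolve(trace, pef)[:len(trace)]

With the lag reduced to a single sample, the same routine performs spiking deconvolution.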


Figure 15: Autocorrelation for the predictive deconvolution, using a prediction lag of 12 ms and an operator length of 128 ms.

Figure 16: Frequency spectrum before deconvolution (top left), after spiking deconvolution (top right), after predictive deconvolution (bottom left) and after TVSW (bottom right).

We had to choose one of the methods described above, and therefore a comparison was made. Figure 17 shows the spiking and the predictive deconvolution, figure 18 the TVSW method and the shot before deconvolution. It is clearly seen that the predictive deconvolution generated the strongest reflection amplitudes and the flattest frequency spectrum compared to the other methods. The reflections can also be seen with the spiking deconvolution and the TVSW method, but their ends disappear into the noise, which makes the reflections harder to follow. We therefore decided to use the predictive deconvolution for our further processing.


Figure 17: top: spiking deconvolution (operator length 128 ms), bottom: predictive deconvolution (operator length 128 ms, prediction lag 12 ms).

    Figure 18: top: TVSW method, bottom: before deconvolution


5 Stack processing

Stacking is used to improve the overall quality of the data and is basically nothing more than adding together traces from a processed seismic record to increase the S/N ratio [Yilmaz, 2001].

Here we produced a first stack of the subsurface through CDP (common depth point) sorting, velocity analysis using the volume viewer, and NMO (normal moveout) correction. To improve the quality of the stack, residual statics were applied and the velocity model of the subsurface was updated. We repeated these steps to get two different stacks which we could then compare.

    5.1 CDP sort

Before stacking, the seismic data is organized into CDP gathers, here combined into a 2D supergather that contains many CDPs. The result of a CDP sort is that the gathered reflections carry information about the same common midpoint in the subsurface [ProMAX, 1999].

    5.2 Velocity analysis

Velocity analysis is an interactive tool that uses the CDP sort described above to determine the stacking velocities. The picked velocities should then improve the subsurface model [ProMAX, 1999].

The screen is divided into panels (see figure 19), so the velocity can be picked according to several criteria. We picked clear reflections which correlated with high semblance values (red areas on the left side of the screen) and agreed well with the gather panel and the dynamic stack. For the shallower part of the subsurface we took the velocity we had determined in the refraction statics (1700 m/s).

A velocity model (figure 20) was computed after picking the stacking velocities.
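The semblance that guides the picks can be written as the normalized coherence of the gather amplitudes along trial NMO hyperbolas. A toy version of our own (real implementations also average over a short time window around each t0):

import numpy as np

def semblance_panel(gather, offsets, dt, t0s, vels):
    # gather: samples x traces of one CDP. For each trial (t0, v) the
    # hyperbola t(x) = sqrt(t0**2 + (x/v)**2) is tracked across the
    # gather; coherent reflections give values near 1 (the red maxima).
    ns, ntr = gather.shape
    panel = np.zeros((len(t0s), len(vels)))
    for i, t0 in enumerate(t0s):
        for j, v in enumerate(vels):
            t = np.sqrt(t0**2 + (offsets / v)**2)
            idx = np.round(t / dt).astype(int)
            ok = idx < ns
            if ok.sum() < 2:
                continue
            amps = gather[idx[ok], np.arange(ntr)[ok]]
            panel[i, j] = amps.sum()**2 / (ok.sum() * (amps**2).sum() + 1e-12)
    return panel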


Figure 19: Interactive display of the stacking velocity analysis. From left to right: semblance panel, gather panel, dynamic stack, velocity function stack panels.

Figure 20: Velocity model computed from the stacking velocities.


5.3 NMO correction

When seismic data are recorded, a reflection typically arrives first at the receiver station nearest the source; with increasing source-receiver offset the reflection arrival is delayed, giving the hyperbolic shape seen in a seismogram. The NMO correction removes this offset dependency during processing [Yilmaz, 2001, ProMAX, 1999]. An example of such a correction is shown in figure 21.
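For a reflection with zero-offset two-way time t0 and stacking velocity v, the arrival time at offset x is t(x) = sqrt(t0^2 + x^2/v^2); the correction reads each output sample (t0, x) from the input at t(x), which flattens the hyperbolas. A sketch of our own, using linear interpolation and omitting the stretch mute applied in practice:

import numpy as np

def nmo_correct(gather, offsets, dt, vnmo):
    # gather: samples x traces; vnmo: stacking velocity per output sample.
    ns, ntr = gather.shape
    t0 = np.arange(ns) * dt
    out = np.zeros_like(gather)
    for k in range(ntr):
        t = np.sqrt(t0**2 + (offsets[k] / vnmo)**2)
        out[:, k] = np.interp(t, t0, gather[:, k], left=0.0, right=0.0)
    return out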

Figure 21: Normal moveout correction.

    5.4 Stacking

After CDP sorting, velocity analysis and NMO correction, the seismic data is summed into a single output trace per CDP, giving stack 1 (figure 22). This is the first image of our subsurface, and especially the left part of the image shows quite good reflection horizons. However, the image can still be improved considerably, because some areas are still fuzzy and cannot be clearly identified.
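A minimal sketch of the summation for one CDP (the per-sample fold normalization is our assumed choice, one of several options):

import numpy as np

def stack_cdp(nmo_gather):
    # Sum the NMO-corrected traces of one CDP and divide by the live
    # fold so that low-fold CDPs keep comparable amplitudes.
    live = (nmo_gather != 0).sum(axis=1)
    return nmo_gather.sum(axis=1) / np.maximum(live, 1)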


Figure 22: Stack 1.


5.5 Residual statics

The velocities in the shallow part of the subsurface contain irregularities. As mentioned above, reflections have a hyperbolic shape. Residual statics corrects the time shifts caused by these velocity irregularities, which lead to non-hyperbolic reflection shapes, and brings the traveltimes into better alignment [Yilmaz, 2001, ProMAX, 1999]. Both Maximum Power Autostatics and Correlation Autostatics were tested on the non-stacked input data, and Correlation Autostatics proved to be the more suitable method. This method measures time shifts and tries to partition them into source and receiver statics [ProMAX, 1999].
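The measurement step of such a correlation-based method can be sketched as finding the lag that maximizes the cross-correlation of each trace with a pilot (stacked) trace; the decomposition of these shifts into source and receiver terms (a least-squares step) is omitted here. The function below is our own illustration:

import numpy as np

def residual_shift(trace, pilot, max_shift, dt=0.002):
    # Lag of maximum cross-correlation, limited to +-max_shift seconds;
    # the sign convention follows np.correlate(trace, pilot).
    xc = np.correlate(trace, pilot, mode="full")
    lags = np.arange(-len(pilot) + 1, len(trace))
    ok = np.abs(lags) * dt <= max_shift
    return lags[ok][np.argmax(xc[ok])] * dt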

    5.6 Iterations

All the steps described above were repeated to improve stack 1. Figure 23 shows the velocity model after picking the velocities a second time in the velocity analysis; stack 1 and the positions of the picked velocities can be seen in the background.

From the new velocity model we obtained stack 2. It is shown in figure 24, and an overall improvement can be seen: the structure in the middle now has a clearly visible top, which was not the case before; the reflections are sharper, especially on the right; discontinuities of incoherent horizons were corrected; and the dipping events are more visible.

Figure 23: Velocity model after residual statics 1 and velocity analysis 2. Stack 1 is displayed in the background, and the locations of the picked velocities are represented by the blue circles.


Figure 24: Stack 2.


6 Post-stack processing

    6.1 Noise reduction

The stacked section still contains noise which obscures information, so it has to be reduced as much as possible without losing the seismic signal. For this purpose the data are transformed into a domain in which noise and signal can be separated. Two types of noise reduction (F-X deconvolution and an eigenvector filter) were tested. The F-X deconvolution, with 9 filter samples and a horizontal window length of 90 ms, improved the coherency of the reflections best and was therefore applied to the stack.

    6.2 Migration

One of the last steps in reflection processing is migration. It converts the seismic image into a more realistic image of the geological subsurface and improves spatial resolution: for example, dipping reflectors move to their true subsurface positions and diffractions collapse [Yilmaz, 2001]. There are several types of migration; three of them were used:

Kirchhoff migration: a technique that uses the integral form of the wave equation. Its implementation amounts to stacking the data along curves that trace the arrival time of energy scattered by image points in the earth. It needs a (smoothed) root-mean-square velocity as input in order to solve the integral form of the wave equation (the Kirchhoff equation). In other words, it uses the diffraction summation technique, which sums seismic amplitudes along diffraction hyperbolas and stores the energy at each apex (a toy sketch of this summation follows after this list). [Schlumberger, 2013]

FD migration: downward continuation is a method that estimates the values of the seismic wavefield deeper in the studied subsurface, under the assumption that the field is continuous. FD migration implements this principle of downward continuation by solving the differential wave equation (in contrast to the Kirchhoff migration). There are two variants (fast and steep): fast FD migration needs little computational time but can handle only flat dips, so steep FD migration was used, because it also handles steep dips. [Yilmaz, 2001, ProMAX, 1999]

FK migration (phase shift): FK migration (downward continuation in the f-k domain) is very accurate for constant velocities but fails to image steep dips where large velocity variations occur. Because only a single velocity function is needed as input, this migration method is very fast compared to the others. [Yilmaz, 2001, ProMAX, 1999]
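The diffraction-summation idea behind the Kirchhoff method can be sketched for a zero-offset (stacked) section as follows. This is a toy of our own, without the obliquity and amplitude weights or anti-alias protection of a production implementation, and far too slow for real data:

import numpy as np

def kirchhoff_migrate(section, dx, dt, vrms):
    # section: samples x traces; vrms: RMS velocity per time sample.
    # For each image point (t0, x0), sum amplitudes along the
    # diffraction hyperbola t = sqrt(t0**2 + (2*(x - x0)/v)**2)
    # and place the energy at its apex.
    ns, ntr = section.shape
    image = np.zeros_like(section)
    x = np.arange(ntr) * dx
    for i0 in range(ns):
        t0 = i0 * dt
        v = vrms[i0]
        for j0 in range(ntr):
            t = np.sqrt(t0**2 + (2.0 * (x - x[j0]) / v)**2)
            idx = np.round(t / dt).astype(int)
            ok = idx < ns
            image[i0, j0] = section[idx[ok], np.arange(ntr)[ok]].sum()
    return image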


Figures 25-27 show the three tested migrations. In the end we decided to apply the steep FD migration to the stack, because it produced smoother structures and fewer artefacts than the others.

Figure 25: Stack 2 after the steep FD migration.


Figure 26: Stack 2 after the Kirchhoff migration.

Figure 27: Stack 2 after the FK migration (phase shift).


6.3 Time to depth conversion
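The report shows only the result of this step (figure 28). The conversion itself amounts to integrating velocity along the two-way time axis; a minimal sketch, assuming interval velocities per time sample (for instance obtained by Dix conversion of the picked stacking velocities; the report does not state the exact method used):

import numpy as np

def two_way_time_to_depth(v_interval, dt=0.002):
    # z(t0) = cumulative sum of v_int * dt / 2; the factor 1/2
    # accounts for t0 being two-way traveltime.
    return np.cumsum(v_interval) * dt / 2.0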

    Figure 28: Migrated stack converted into depth


7 Interpretation

The time-to-depth conversion was the last step of our processing, so we can now see the structures of the subsurface of Northern Germany. These structures can be interpreted as geological layers and fractures, and the geological history can of course be inferred.

Figure 29 shows the migrated stack with some interpretation added. The green line marks a layer boundary at 1100 m depth. The layer is broken in the middle (the red line marks a fracture zone); a horst was formed by thrust faulting. The next clear layer boundary is drawn in yellow at 1800 m and shows an anticlinal structure in the middle. The purple lines are dipping layers; beneath them the structures are no longer clear, so they were summarized as one shape (blue).

Salt layers were created (probably by a chemical process in a drying-out aquatic area) and were afterwards covered by clastic sediments. Due to buoyancy caused by the density contrast, the salt layer built a dome in the middle, and this affected the purple layers: they were pulled down at some points. After that there was some erosion, and new sediments were deposited. The salt dome then rose buoyantly once again, causing the anticlinal structure (yellow) and the fracture in the green layer. Finally, the area was filled up with sediments again.


Figure 29: Interpretation of the migrated stack.


References

ProMAX. ProMAX manual. Process help files, 1999.

Schlumberger. Oilfield glossary. http://www.glossary.oilfield.slb.com/, 2013.

Ozdogan Yilmaz. Seismic Data Analysis: Processing, Inversion, and Interpretation of Seismic Data. Number 10 in Investigations in Geophysics. SEG, 2001.
