
ASEG 2007 – Perth, Western Australia

A Fast Approach to Magnetic Equivalent Source Processing Using an Adaptive Quadtree Mesh Discretization

Kristofer Davis ([email protected])
Yaoguo Li ([email protected])
Center for Gravity, Electrical and Magnetic Studies, Dept. of Geophysics, Colorado School of Mines, 1500 Illinois Street, Golden, CO 80401 USA

SUMMARY

The use of equivalent source processing on magnetic datasets is important for the regular gridding and denoising of data before any other processing can occur. The technique is set up as an inverse problem and solved for the susceptibilities that reproduce the observed data. The drawback of the inverse problem is its computational cost and overall speed for large-scale problems. Since aeromagnetic surveying has become common in exploration, acquired datasets are rarely small enough in volume or spatial extent to be handled rapidly on a single workstation. One way to minimize the computational cost is to reduce the number of model parameters. We present an equivalent source processing technique that minimizes the number of cells in the model domain via an adaptive quadtree mesh discretization. The mesh remains coarse where no significant anomalies are present, yet refines on the edges of observed anomalies. The transition from fine to coarse cells is based on the total-gradient of the dataset, placing smaller cells on the edges of anomalies, where the susceptibilities vary most rapidly. We show that the algorithm can perform over four times as fast as traditional equivalent source processing with a regular mesh while preserving the same accuracy. In this paper, we present a synthetic example as proof of concept.

Key words: magnetics, quadtree, equivalent sources.

INTRODUCTION

Equivalent sources are a layer of fictitious sources calculated to reproduce the observed total-field magnetic data. This processing technique allows most magnetic datasets to be regridded onto a regular grid based on physics rather than on minimum curvature. The layer consists of infinitely thin cells, each with a continuous, finite susceptibility, that together reproduce the observed data. The resulting linear problem is solved through inverse theory. The drawback of the inverse problem is its computational cost and overall speed for large-scale problems. One way to minimize this cost is to reduce the number of model parameters (Ascher and Haber, 2001). We introduce an adaptive quadtree mesh design, driven by the total-gradient of the observed magnetic field, that decreases the cost of the inversion. In our synthetic example, traditional equivalent source processing uses 625 cells; we achieve comparable results using only 216 cells in the source layer. In what follows, we present the inversion methodology of the equivalent source problem and the adaptive quadtree mesh design, and show a synthetic example.

EQUIVALENT SOURCE PROCESSING

The equivalent source processing technique uses a thin layer of source cells below the data to recreate the observed data. Rather than pure interpolation, the method uses the physics of a susceptibility distribution. In theory the equivalent-source layer is infinitely thin; in practice, the vertical cell width only needs to be a small fraction of the horizontal cell widths. The susceptibilities are solved for via inversion using Tikhonov regularization. Because the sources accurately represent the behaviour of the magnetic field, they can then be forward modelled at any observation locations. This allows datasets to be gridded evenly, and the reproduced data are denoised of high frequencies in the process. Finally, the model can easily be extended, and multiple datasets can be merged to create a single, larger dataset that is easier to process and interpret. Magnetic data can be calculated from the geometry of the source cells relative to the observation locations and their known susceptibilities. The result is the linear process described by

$$\vec{d} = \mathbf{G}\,\vec{\kappa} \qquad (1)$$

where $\mathbf{G}$ is the sensitivity matrix, $\vec{d}$ is the predicted data, and $\vec{\kappa}$ is the vector of model susceptibilities. We can forward model data by a simple matrix-vector multiplication, or we can use linear inverse theory to solve for the layer of susceptibilities that reproduces the observed data.
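For concreteness, here is a minimal sketch of the forward step in equation (1), assuming a precomputed dense sensitivity matrix; the function and argument names are hypothetical, not the authors' code:

```python
import numpy as np

def forward_model(G, kappa):
    """Equation (1): predict total-field data from layer susceptibilities.

    G     : (n_data, n_cells) sensitivity matrix, assumed precomputed from
            the source-cell geometry and the inducing-field direction.
    kappa : (n_cells,) susceptibilities of the equivalent-source layer.
    """
    return G @ kappa
```

Gridding a dataset then amounts to rebuilding $\mathbf{G}$ at new, evenly spaced observation locations and applying the same $\vec{\kappa}$.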

INVERSION METHODOLOGY

The susceptibility can be solved for by minimizing a global objective function, $\Phi$. The optimal solution is found by minimizing

$$\Phi = \Phi_d + \beta\,\Phi_m \qquad (2)$$

such that $\Phi_d$, the data misfit, is equal to $\Phi_d^*$, the optimal data misfit. The optimal data misfit is the number of data points if the data contain Gaussian errors and thus follow a $\chi^2$ distribution, an assumption that works in most cases.



To find the optimal data misfit, a Tikhonov trade-off parameter, $\beta$, is chosen based on the optimal model weighting. The model objective function, $\Phi_m$, contains the information about the model and the linear equation describing the inverse problem (Li and Oldenburg, 1996). The data misfit is given by

$$\Phi_d = \left\| \mathbf{W}_d \left( \vec{d}^{\,pred} - \vec{d}^{\,obs} \right) \right\|^2 \qquad (3)$$

where $\mathbf{W}_d$ is a weighting matrix containing the inverse of the standard deviation of each datum along its diagonal, normalizing the data vector by the respective errors. The model objective function is given by

$$\Phi_m = \left\| \mathbf{W}_m \,\vec{\kappa} \right\|^2 \qquad (4)$$

for a calculated susceptibility $\vec{\kappa}$ and model weighting matrix $\mathbf{W}_m$. In 3D inversion, the model weighting includes the z-direction and depth weighting; because the equivalent source contains only one layer of cells, these quantities are not factored in here. The minimization of the global objective function can now be written

$$\Phi = \left\| \mathbf{W}_d \left( \vec{d}^{\,pred} - \vec{d}^{\,obs} \right) \right\|^2 + \beta \left\| \mathbf{W}_m \,\vec{\kappa} \right\|^2 \qquad (5)$$

To minimize the global objective function, the linear conjugate gradient (CG) method is used once the proper $\beta$ has been found via Tikhonov regularization, as sketched below.
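The following is a minimal sketch of this minimization for a fixed $\beta$, applying linear CG to the normal equations of equation (5). Dense numpy arrays and all names are assumptions; a practical implementation would apply the system matrix as a sequence of matrix-vector products rather than forming it explicitly:

```python
import numpy as np

def solve_equivalent_sources(G, d_obs, Wd, Wm, beta, n_iter=200, tol=1e-6):
    """Minimize eq. (5) for fixed beta with linear conjugate gradients on
    the normal equations:
        (G^T Wd^T Wd G + beta Wm^T Wm) kappa = G^T Wd^T Wd d_obs
    """
    GtW = G.T @ (Wd.T @ Wd)
    A = GtW @ G + beta * (Wm.T @ Wm)   # symmetric positive definite
    b = GtW @ d_obs

    kappa = np.zeros(G.shape[1])
    r = b - A @ kappa                  # residual
    p = r.copy()                       # search direction
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        kappa += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return kappa
```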

The Tikhonov regularization (or trade-off) parameter (Tikhonov and Arsenin, 1977) is important in the optimization process. It is chosen so that the optimal solution neither over-smooths nor under-smooths the data (i.e., fits the signal rather than the noise). Traditionally, multiple regularization parameters are tried and $\Phi$ is minimized for each. An L-curve is then constructed to find the proper regularization parameter: one that fits the data to within its errors (Hansen, 2000). The trade-off parameter is chosen at the point of highest curvature on the L-curve, using the data misfit and model objective function at that point. To find this point, the first and second derivatives of the L-curve are calculated, and the curvature, $c$, is given by

$$c = \frac{d'\,m'' - d''\,m'}{\left[ (d')^2 + (m')^2 \right]^{3/2}} \qquad (6)$$

such that

$$d = \ln(\Phi_d) \quad \text{and} \quad m = \ln(\Phi_m) \qquad (7\text{a};\ 7\text{b})$$

Smoothing of the data and natural de-noising occur when the trade-off parameter is chosen at the point of highest curvature. A small sketch of this selection follows.
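The sketch below selects $\beta$ at the point of highest curvature from a sweep of inversions, using finite differences to approximate the derivatives in equation (6); `max_curvature_beta` and its arguments are hypothetical names, not the authors' code:

```python
import numpy as np

def max_curvature_beta(betas, phi_d, phi_m):
    """Choose the trade-off parameter at the L-curve's point of highest
    curvature (eq. 6), with d = ln(phi_d) and m = ln(phi_m) as in eq. 7.

    betas, phi_d, phi_m : 1-D arrays gathered from inversions run over a
    sorted sweep of trade-off parameters.
    """
    t = np.log(betas)                  # parameterize the curve by ln(beta)
    d = np.log(phi_d)                  # eq. (7a)
    m = np.log(phi_m)                  # eq. (7b)
    d1, m1 = np.gradient(d, t), np.gradient(m, t)    # first derivatives
    d2, m2 = np.gradient(d1, t), np.gradient(m1, t)  # second derivatives
    c = (d1 * m2 - d2 * m1) / (d1**2 + m1**2) ** 1.5  # eq. (6)
    return betas[np.argmax(np.abs(c))]
```

The absolute value guards against the sign convention of the curvature; the corner of the L-curve is the extremum either way.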

QUADTREE DISCRETIZATION

The quadtree mesh design places larger cells where little or no signal is present and smaller cells where sources are located, to increase resolution. The mesh starts as two cells; if a cell contains a property value higher than a given threshold, it is split. This process continues until all of the cells are within the property threshold, which means that if a source is large enough, cells within its interior may remain larger than those on its edges. We use the total-gradient (Nabighian, 1972) to determine where the edges of anomalies lie and therefore where to discretize the cells to higher resolution. Quadtree discretization has been used in geophysics mostly in remote sensing applications (e.g., Gerstner, 1999), and it is particularly useful in large-scale problems because it minimizes the number of cells in the model mesh. For some problems, such as DC resistivity, the quadtree structure is limited to a maximum of two neighboring cells (Eso and Oldenburg, 2007); we instead allow up to four neighboring cells, as the traditional quadtree structure permits. The traditional quadtree also requires a maximum of 2^n cells in both the easting and northing directions. We follow this requirement for ease of calculating the total-gradient, but discard cells outside the data area before forming the sensitivity matrix. Since even gridding of the total-gradient is required anyway, this padding does not hinder the overall calculation speed. The total-gradient is likewise padded to 2^n and linearly tapered so as not to create artifacts at the edges of the dataset, and is calculated in the Fourier domain for efficiency.

A threshold percentage of the largest anomaly is chosen for the coarse-to-fine cell transition. The mesh starts with two cells in total, and cells are then split iteratively according to the total-gradient, which changes most rapidly at the edges of sources; cells are therefore split wherever these changes exceed the given threshold. The optimum gradient threshold for splitting cells is likely problem dependent and is an area of future research. For the synthetic example, a threshold of 7% of the maximum amplitude of the total-gradient was used. After the mesh discretization, the mesh is reduced back to the data area and the resulting cells on the edge are kept. This creates a jagged mesh boundary that still covers the data area. The sensitivity matrix is calculated from the nodal points of the quadtree mesh, and the processing of the data is carried out just as it would be with a regular mesh. A sketch of this threshold-driven splitting is given below.
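To make the splitting rule concrete, here is a minimal sketch of a threshold-driven quadtree over a 2^n by 2^n grid of total-gradient values. It uses a standard four-way split from a single root cell rather than the two starting cells described above, and all names (`quadtree_cells`, `min_size`) are hypothetical:

```python
import numpy as np

def quadtree_cells(total_grad, threshold_frac=0.07, min_size=1):
    """Adaptively discretize a 2^n x 2^n grid of total-gradient values.

    A cell splits into four children while it is larger than min_size and
    contains a total-gradient value above the threshold, so the mesh
    refines on anomaly edges and stays coarse elsewhere.
    Returns a list of (row, col, size) cells covering the grid.
    """
    n = total_grad.shape[0]
    assert total_grad.shape == (n, n) and n & (n - 1) == 0, "need a 2^n grid"
    thresh = threshold_frac * total_grad.max()  # e.g. 7% of the maximum
    cells = []

    def split(r, c, size):
        if size > min_size and total_grad[r:r + size, c:c + size].max() > thresh:
            half = size // 2
            for dr in (0, half):
                for dc in (0, half):
                    split(r + dr, c + dc, half)
        else:
            cells.append((r, c, size))

    split(0, 0, n)
    return cells
```

Cells falling outside the data area would then be discarded before the sensitivity matrix is formed, as described above.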

SYNTHETIC EXAMPLE

To test the algorithm, a synthetic example was created. The two generated dikes dip horizontally, with one striking 45° and one striking due east. Flight lines are flown at 30 m elevation, 100 m apart, and the total-field magnetic response is calculated. The inducing field is 52,000 nT in strength at an inclination of 65° and a declination of 25°. White Gaussian noise is added to the data, and the total-field data are shown in Figure 1, gridded with minimum curvature for plotting. In all of the figures, the white dots indicate where data were observed or calculated. A traditional equivalent source (ES) technique is applied and the results are shown in Figure 2; the quadtree ES results are shown in Figure 3. Each ES result was forward modelled at an even grid interval of 10 m.


The two equivalent source results are comparable, but the traditional ES is slower by a factor of more than four. Figure 4 compares the two meshes: the quadtree ES uses 216 cells versus 625 for the traditional method. Because the problem is linear, varying the number of cells (i.e., the number of model parameters) should change the computational cost linearly. This is tested by running both the traditional and quadtree equivalent source programs and calculating an L-curve 100 times, ranging from 3 to 1000 values on the curve. The results (Figure 5) show that the computational time indeed decreases linearly with the number of model parameters. Since the quadtree has larger cells, the anomaly wavelengths of these cells are broader, and therefore the quadtree ES needs a smaller model weighting than the traditional technique. For a comparison of the L-curves, see Figure 6.

    Figure 1. Observed data with noise after minimum curvature gridding.

    Figure 2. Calculated data based on a traditional equivalent source algorithm.

    Figure 3. Calculated data based on the quadtree equivalent source algorithm.

Figure 5. The ratio of computation time between the quadtree and normal meshes for the synthetic dataset. On average, the quadtree is 5.2 times faster.

DISCUSSION

The total-gradient is calculated in the Fourier domain by simple multiplication. The total-gradient threshold that defines the quadtree mesh is problem and user dependent; this choice may lead to either slower or faster results but, by the nature of the inversion, the data will be fit as well as the user-defined model parameters allow. The results of the quadtree and normal mesh equivalent sources differ because they are separate inversions, as the two different Tikhonov curves in Figure 6 confirm.
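As an illustration of the Fourier-domain calculation, here is a minimal sketch of the total-gradient (the analytic-signal amplitude of Nabighian, 1972) of a regular grid, assuming the field has already been padded to 2^n and tapered; `total_gradient` is a hypothetical name:

```python
import numpy as np

def total_gradient(field, dx, dy):
    """Total-gradient of a gridded field via Fourier-domain multiplication.

    Horizontal derivatives use i*kx and i*ky; the vertical derivative of a
    potential field uses |k|. field is (ny, nx); dx, dy are grid spacings.
    """
    ny, nx = field.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)  # angular wavenumbers
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, dy)
    KX, KY = np.meshgrid(kx, ky)               # shapes (ny, nx)
    F = np.fft.fft2(field)
    df_dx = np.fft.ifft2(1j * KX * F).real
    df_dy = np.fft.ifft2(1j * KY * F).real
    df_dz = np.fft.ifft2(np.sqrt(KX**2 + KY**2) * F).real
    return np.sqrt(df_dx**2 + df_dy**2 + df_dz**2)
```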

CONCLUSIONS

The equivalent source methodology of the adaptive quadtree method remains the same as the traditional methodology: the problem is linear and is solved by minimizing a global objective function through Tikhonov regularization and the linear conjugate gradient method. By reducing the number of cells, the inversion cuts a significant amount of computational cost compared to the traditional equivalent source technique. The threshold used to split a cell is based on the total-gradient, calculated in the Fourier domain. The synthetic example has shown that the quadtree technique gives results comparable to the traditional equivalent source, but faster.


ACKNOWLEDGMENTS

The authors would like to thank Dave Hale and Robert Eso for helpful discussions on mesh generation. We would also like to thank the members of CGEM for their support, as well as the companies of the Gravity and Magnetics Research Consortium (GMRC), which funded this project.

    REFERENCES

Ascher, U. M., and Haber, E., 2001, Grid refinement and scaling for distributed parameter estimation problems: Inverse Problems, 17, pp. 517-590.

Eso, R., and Oldenburg, D., 2007, Efficient 2.5D resistivity modeling using a quadtree discretization: SAGEEP Proceedings, 20, pp. X-(X+9).

Gerstner, T., 1999, Adaptive hierarchical methods for landscape representation and analysis: Lecture Notes in Earth Sciences, 78, Springer, Berlin, pp. 75-92.

Hansen, P. C., 2000, The L-curve and its use in the numerical treatment of inverse problems: in Johnston, P. (Ed.), Computational Inverse Problems in Electrocardiology, Advances in Computational Bioengineering, 4, WIT Press, Southampton, pp. 119-142.

Li, Y., and Oldenburg, D., 1996, 3-D inversion of magnetic data: Geophysics, 61, pp. 394-408.

Nabighian, M. N., 1972, The analytic signal of two-dimensional magnetic bodies with polygonal cross-section: its properties and use for automated anomaly interpretation: Geophysics, 37, pp. 507-517.

Tikhonov, A. N., and Arsenin, V. Y., 1977, Solution of Ill-posed Problems: Winston, Washington, D.C.

    Figure 4. A comparison of the normal mesh and quadtree mesh for equivalent source processing.

    Figure 6. A comparison of L-curves created by both the quadtree and normal equivalent source methods.
