Near-Surface Geophysics [NS]

 CC: Hall E  Monday  1400h

Back to Basics: Inversion of Electrical Resistivity Imaging Data II Posters

Presiding:  R Knight, Stanford University; C Weiss, Virginia Tech


2D DC resistivity modelling using a quadtree discretization

* Eso, R A, Department of Earth and Ocean Sciences, University of British Columbia, 6339 Stores Road, Vancouver, BC V6T1Z4, Canada
Oldenburg, D, Department of Earth and Ocean Sciences, University of British Columbia, 6339 Stores Road, Vancouver, BC V6T1Z4, Canada

A quadtree discretization is a non-uniform, semi-regular grid in which the size of each cell is a power of 2 times a base cell size, resulting in a grid structure where each edge of a cell is shared by either one or two neighbouring cells. Local refinement and coarsening can then be used to generate highly efficient spatial discretizations. The DC resistivity equations are discretized using either a cell-centered or a node-centered formulation and compared with a standard finite-volume type discretization. Application of the boundary conditions at the termination of the finite grid away from the sources and receivers is a critical factor in accurate DC resistivity modelling. Using the quadtree discretization, the grid boundary can be placed at large distances relative to the sources and receivers with the addition of very few model cells. Source-independent no-flux boundary conditions are applied to the entire boundary of the domain, resulting in a formulation of the forward problem that can be solved efficiently using direct matrix factorization methods. The result is a very efficient, expedient and accurate numerical solution. The conductivity model is discretized separately from the potential-field grid, allowing representations of both the conductivity and the potential that require a minimum number of cells yet are finely discretized where required. The Mojave injection experiment is inverted with a temporally variable conductivity discretization that is spatially adaptive to the injection plume yet can still incorporate information obtained from preceding time-lapse measurements. The inverse problem is solved using the technique of general measures, allowing the selection of robust measures of data misfit and model structure. A priori geological information is incorporated through the use of projected-gradient bound constraints and by tailoring the model-objective function, with modifications to allow for use with adaptive gridding on the quadtree discretization. The Andrews basin bedrock experiment is inverted incorporating a priori geological information obtained through the surface DC measurements, which provide an indication of bedrock conductivity.
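
The local refinement idea can be illustrated with a short sketch. The code below is not the authors' implementation: it builds a simple recursive quadtree in Python that refines cells near assumed electrode positions, and it neither enforces the 2:1 edge-sharing condition described above nor assembles the DC resistivity operator; the domain size, electrode locations, and refinement criterion are all illustrative assumptions.

```python
import numpy as np

class QuadCell:
    """Leaf or parent cell of a simple quadtree over a square domain."""
    def __init__(self, x0, z0, size, level):
        self.x0, self.z0 = x0, z0      # lower-left corner (x, depth)
        self.size = size               # edge length; halves at each refinement level
        self.level = level
        self.children = []             # empty list means this is a leaf

    def refine(self, electrodes, max_level):
        """Recursively subdivide cells whose centres lie close to an electrode."""
        if self.level >= max_level:
            return
        cx, cz = self.x0 + self.size / 2.0, self.z0 + self.size / 2.0
        d = np.min(np.hypot(electrodes[:, 0] - cx, electrodes[:, 1] - cz))
        if d < 2.0 * self.size:        # illustrative refinement criterion
            half = self.size / 2.0
            for dx in (0.0, half):
                for dz in (0.0, half):
                    child = QuadCell(self.x0 + dx, self.z0 + dz, half, self.level + 1)
                    child.refine(electrodes, max_level)
                    self.children.append(child)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

# Example: a 64 m x 64 m domain refined around two surface electrodes.
electrodes = np.array([[28.0, 0.0], [36.0, 0.0]])
root = QuadCell(0.0, 0.0, 64.0, level=0)
root.refine(electrodes, max_level=5)
print(len(root.leaves()), "leaf cells vs", 2 ** 10, "cells for a uniform grid at the finest level")
```

Even this crude criterion produces far fewer leaf cells than a uniform grid at the same finest resolution, which is the efficiency argument made above for padding the grid boundary far from the electrodes at the cost of very few additional cells.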


Resistivity Imaging of Bedrock Constrained by Digital Image Processing Algorithms

* Elwaseif, M, Rutgers-The State University, Dept of Earth and Env Sciences, Newark, NJ 07102, United States
Slater, L, Rutgers-The State University, Dept of Earth and Env Sciences, Newark, NJ 07102, United States

The resistivity method is routinely used to image the shallow subsurface; common applications include mapping of near-surface geology, characterization of contaminated sites, delineation of engineered structures, and locating archeological features. The smoothness constraint is most commonly employed as the default regularization method in commercially available software for resistivity image reconstruction based on least squares minimization. This regularization constraint is conceptually appropriate for a wide range of applications, particularly when the objective is to predict changes in resistivity due to variations in moisture and/or salinity across space and time. However, resistivity imaging is increasingly used to predict targets that are characterized by sharp, rather than gradational, resistivity contrasts (e.g. depth to bedrock, as of interest here). In this case, the smoothness constraint is conceptually inappropriate because our a priori expectation is that such targets represent a sharp change in resistivity across some unknown boundary location. Here, we propose a simple procedure for partly offsetting the pitfalls that result from applying smoothness-based regularization to locate such targets, which combines: (1) an initial inversion using the standard smoothness constraint and a homogeneous starting model (as is most often done in practice); (2) an image processing technique known as the watershed algorithm to subsequently predict the probable depth to bedrock from the smooth image; and (3) a second inversion step incorporating a disconnect in the regularization, defined by the probable depth to bedrock output from the watershed algorithm, to obtain an improved estimate of the variation in resistivity within and outside the bedrock based on the incorporation of a priori information (i.e. the disconnect and the resistivity model obtained from the standard smooth inversion). We test this approach on four synthetic variants of the depth-to-bedrock problem, as well as on a field dataset from the H.J. Andrews Experimental Forest (Oregon). We do not apply our approach to the data from the monitoring of an infiltration test, as sharp resistivity boundaries are not expected there.
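
Step (2) of this procedure can be sketched in a few lines. The example below is illustrative only and is not the authors' implementation: it assumes a gridded resistivity image from the smooth inversion, applies scikit-image's watershed transform to the gradient of the log-resistivity image with top-row and bottom-row markers, and returns a per-column interface depth; the marker placement and the synthetic two-layer test model are assumptions.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def probable_bedrock_depth(resistivity, depths):
    """resistivity: 2D array (n_depths, n_positions) from the smooth inversion.
    depths: 1D array of cell-centre depths.
    Returns the depth of the picked interface at each lateral position."""
    log_rho = np.log10(resistivity)
    edges = sobel(log_rho)                      # gradient magnitude of the image

    # Markers: label the top row as overburden (1) and the bottom row as bedrock (2).
    markers = np.zeros_like(log_rho, dtype=int)
    markers[0, :] = 1
    markers[-1, :] = 2

    labels = watershed(edges, markers)          # flood from the markers along low gradients
    # The first row labelled as bedrock in each column gives the interface depth.
    interface_index = np.argmax(labels == 2, axis=0)
    return depths[interface_index]

# Example with a synthetic two-layer model (10 ohm-m soil over 1000 ohm-m bedrock at 8 m).
depths = np.linspace(0.25, 20.0, 40)
rho = np.where(depths[:, None] < 8.0, 10.0, 1000.0) * np.ones((40, 60))
print(probable_bedrock_depth(rho, depths)[:5])
```

In the workflow described above, the picked interface then defines the disconnect in the regularization for the second inversion, so smoothing is no longer enforced across the inferred bedrock boundary.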


Assessing electrical resistivity model uncertainty using Bayesian inference

* Minsley, B, U.S. Geological Survey, MS 964 - Denver Federal Center, Denver, CO 80225, United States

Model uncertainty in the DC resistivity parameter estimation problem is assessed using Bayesian inference, implemented with a Markov chain Monte Carlo sampling strategy. The primary goal of this approach is to assess posterior model uncertainties without the influence of the linearization or arbitrary regularization often required by least squares techniques. Several 1D 'parent' soundings representative of characteristic regions within a 2D dataset are first extracted and analyzed individually. Posterior distributions of layered earth parameters (i.e., layer thickness and resistivity) are produced for each sounding; these also contain a natural measure of model complexity, because the number of layers is allowed to vary as required by the data. These 'parent' soundings provide an initial estimate of uncertainty within the model, and are subsequently used as prior information for the analysis of the remaining soundings extracted from the 2D profile. The use of characteristic 'parent' datasets provides a constraint between soundings, though less explicitly than with more traditional laterally constrained inversion techniques. Model complexity, biases due to poorly chosen parameterization (e.g., number of layers), and the need for regularization are avoided by allowing the number of layers to vary for each sounding, where models with fewer layers are naturally favored as long as they fit the data within acceptable limits. There are certain disadvantages to this approach, however, particularly in cases where lateral variability invalidates the 1D assumption. The benefits and limitations of this method are illustrated with both synthetic and field 'back to basics' datasets.
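
The sampling strategy can be illustrated with a stripped-down sketch. The code below is not the author's implementation: it fixes the number of layers (whereas the approach above lets the layer count vary), uses a plain random-walk Metropolis sampler, and replaces the true 1D DC resistivity forward solution with a crude toy response so the example stays self-contained; all model values, step sizes, and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_forward(log_rho, log_thk, ab2):
    """Toy 'apparent resistivity' curve: a depth-weighted average of layer
    log-resistivities down to an assumed sampling depth of ab2/2. This is NOT
    a physical 1D DC forward solution; it only exercises the sampler."""
    thk = np.exp(log_thk)
    tops = np.concatenate(([0.0], np.cumsum(thk)))       # layer top depths
    bottoms = np.append(tops[1:], np.inf)                # last layer is a halfspace
    out = np.empty(ab2.size)
    for i, a in enumerate(ab2):
        z = a / 2.0                                      # crude sampling depth
        overlap = np.clip(np.minimum(bottoms, z) - np.minimum(tops, z), 0.0, None)
        out[i] = np.dot(overlap / z, log_rho)
    return out

def metropolis(d_obs, ab2, sigma, n_layers=3, n_steps=5000, step=0.1):
    """Random-walk Metropolis over log-resistivities and log-thicknesses."""
    m = np.concatenate((rng.normal(np.log(100.0), 1.0, n_layers),
                        rng.normal(np.log(5.0), 0.5, n_layers - 1)))

    def log_likelihood(m):
        pred = toy_forward(m[:n_layers], m[n_layers:], ab2)
        return -0.5 * np.sum(((pred - d_obs) / sigma) ** 2)

    ll = log_likelihood(m)
    samples = np.empty((n_steps, m.size))
    for k in range(n_steps):
        proposal = m + rng.normal(0.0, step, m.size)     # symmetric proposal
        ll_prop = log_likelihood(proposal)
        if np.log(rng.random()) < ll_prop - ll:          # Metropolis accept/reject
            m, ll = proposal, ll_prop
        samples[k] = m
    return samples

# Example: sample a synthetic 3-layer sounding (50, 10, 500 ohm-m; 2 m and 8 m thick).
ab2 = np.logspace(0.0, 2.0, 15)
true_rho, true_thk = np.log([50.0, 10.0, 500.0]), np.log([2.0, 8.0])
d_obs = toy_forward(true_rho, true_thk, ab2) + rng.normal(0.0, 0.05, ab2.size)
chain = metropolis(d_obs, ab2, sigma=0.05)
print(np.exp(chain[-1000:, :3].mean(axis=0)))            # posterior-mean layer resistivities
```

A trans-dimensional version (e.g., reversible-jump MCMC) would additionally propose moves that add or remove layers, which is how the approach described above lets the data determine model complexity rather than fixing the parameterization in advance.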