Nonlinear Geophysics [NG]

NG33A

CC:Hall E Wednesday 1400h

Nonlinear Processes in Geophysics Posters

NG33A-01

Multifractal Thermal Structure in the Western Philippine Sea Upper Layer

Upper-layer (above 140 m depth) temperature in the western Philippine Sea near Taiwan was sampled using a coastal monitoring buoy (CMB) with 15 attached thermistors during July 28 - August 7, 2005. Data were collected every 10 minutes at 1, 3, 5, 10, 15, and 20 m using the CMB sensors, and every 15 seconds at 15 different depths between 25 m and 140 m in order to observe turbulent thermal structure. Internal waves and solitons were also identified using empirical orthogonal function analysis. In the absence of internal waves and solitons, the power spectra, structure functions, and singular measures (representing the intermittency) of the temperature field satisfy power laws with multi-scale characteristics at all depths.
In the absence of internal waves and solitons (the turbulence-dominated type), the temperature fluctuation has its maximum at the surface, decreases with depth to mid-depths (60-65 m), and then increases with depth down to 140 m. This depth-dependent (decreasing then increasing) pattern was preserved during the internal-wave propagation of 1000-1500 GMT July 29, 2005. During internal-soliton propagation, however, it was altered to a pattern that increases with depth from the surface to 60 m, decreases with depth from 60 m to 100 m, and increases again with depth from 100 m to 140 m. Temperature fluctuations are enhanced by internal-wave and soliton propagation; of the two, internal solitons bring the larger fluctuations.
Three types of thermal variability are identified: IW-turbulence, IS-turbulence, and turbulence-dominated. The power spectra of temperature at all depths have multi-scale characteristics. For the IW-turbulence and turbulence-dominated types, the spectral exponent lies in the range (1, 2), so the temperature field is nonstationary with stationary increments. For the IS-turbulence type the spectrum is quite different, with a spectral exponent below 1 in the low-wavenumber domain. The structure function satisfies a power law with multifractal characteristics for the IW-turbulence and turbulence-dominated types, but not for the IS-turbulence type. The internal waves increase the power of the structure function, especially for high moments. The internal solitons destroy the multifractal characteristics of the structure function: the power law breaks down at a lag of approximately 8 min, nearly half the period of the IS (frequency of 4 cph).
The internal waves do not change the basic characteristics of the multifractal structure. The internal solitons, however, drastically change the power exponent of the power spectra, especially in the low-wavenumber domain; break down the power law of the structure function; and increase the intermittency parameter. The physical mechanisms causing these different effects are also presented.

http://faculty.nps.sed/pcchu
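
The structure-function analysis described above can be sketched numerically. The following is an illustrative sketch, not the authors' code: it computes q-th order structure functions of a synthetic Brownian-like stand-in for the temperature record (for which the scaling exponent is exactly zeta(q) = q/2) and estimates zeta(q) from a log-log fit; multifractality would appear as a nonlinear dependence of zeta on q.

```python
import numpy as np

def structure_function(x, lags, q):
    """q-th order structure function S_q(tau) = <|x(t + tau) - x(t)|^q>."""
    return np.array([np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags])

rng = np.random.default_rng(0)
# Synthetic stand-in for the temperature record: a Brownian-like signal
# (cumulative sum of white noise), for which zeta(q) = q/2.
temp = np.cumsum(rng.standard_normal(4096))

lags = np.arange(1, 65)
zetas = {}
for q in (1, 2, 3):
    sq = structure_function(temp, lags, q)
    # Scaling exponent zeta(q) from a log-log fit; multifractality would
    # show up as a *nonlinear* dependence of zeta on q.
    zetas[q] = np.polyfit(np.log(lags), np.log(sq), 1)[0]
    print(f"q={q}: zeta(q) ~ {zetas[q]:.2f}")
```

The same fit applied to the power spectrum gives the spectral exponent; a value in (1, 2), as reported above, indicates a nonstationary field with stationary increments.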

NG33A-02

The Space-Time Relations for Radiances and Reflectivities From TRMM and MTSAT Data

A basic property of fluid systems is that structures of a given size have a relatively well-defined lifetime. This is the basic physics behind the "space-time" or "Stommel" diagrams, presented in textbooks as purely schematic but never actually calculated empirically. Although such space-time relations are used all the time in meteorological measurements, they are usually implicit rather than explicit. For example, a common problem in remote measurements is to decide how often data of a given resolution must be taken; the solution is usually ad hoc (e.g. the 3-hour intervals of the 3B42 TRMM precipitation product). In this presentation, we show how to empirically determine the space-time diagrams for both TRMM reflectivities and MTSAT IR radiances, and we discuss how this can be used both to understand the atmosphere and to improve data sampling strategies.
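
One hedged way to make the space-time relation concrete (an illustration, not the authors' procedure): build a synthetic field whose small scales evolve faster, isolate each spatial scale with an FFT band-pass, and time the decay of its temporal autocorrelation. All fields and frequencies below are invented.

```python
import numpy as np

def decorrelation_time(band, max_lag):
    """First lag at which the mean temporal autocorrelation of a
    (time, space) field drops below 1/e."""
    z = band - band.mean(axis=0)
    var = (z * z).mean()
    for lag in range(1, max_lag):
        if (z[lag:] * z[:-lag]).mean() / var < 1 / np.e:
            return lag
    return max_lag

nt, nx = 512, 256
t = np.arange(nt)[:, None]
x = np.arange(nx)[None, :]
# Invented stand-in for a radiance field: waves whose frequency grows with
# wavenumber k, so small structures live shorter (Stommel-diagram behaviour).
field = sum(np.cos(2 * np.pi * (k * x / nx - 0.002 * k * t) + p)
            for k, p in [(4, 0.0), (16, 1.0), (64, 2.0)])

# Isolate each spatial scale with an FFT band-pass, then time its decay.
F = np.fft.rfft(field, axis=1)
lifetimes = {}
for k in (4, 16, 64):
    band_hat = np.zeros_like(F)
    band_hat[:, k - 1:k + 2] = F[:, k - 1:k + 2]
    comp = np.fft.irfft(band_hat, n=nx, axis=1)
    lifetimes[k] = decorrelation_time(comp, nt // 2)
    print(f"wavenumber {k}: lifetime ~ {lifetimes[k]} time steps")
```

Plotting lifetime against spatial scale for many bands would give an empirical space-time diagram of the kind discussed in the abstract.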

NG33A-03

Effect of Spatial Scale on Temporal Scaling Properties of River Discharge

NG33A-04

QDF Surfaces and Their Parameterization for Low Flows

QDF curves are currently used as a concise way of representing the information contained in the joint probability distribution functions of the river-discharge stochastic process. The surfaces represented by the QDF relations can, under conditions described herein, cover the complete domain of discharges, including low flows and high flows. A parameterization of the QDF surfaces for low flows from a Mexican case study, as well as the influence of the watershed area, is presented.

NG33A-05

Modelling of Geological Structures Using Emergence

A complex-systems-based approach is used to model geological structures. Preliminary work is presented to show how mutually interacting agents can be used to probe local regions and obtain emergent behaviour of their geometrical properties. Models are built bottom-up from the smaller components to simulate regions from camp to regional scales. In nature, very complex structures exhibiting discontinuous and heterogeneous features are common. Modelling such regions using conventional methods is cumbersome, and influences between zones in close proximity are generally not considered. Agents are able to detect local and global features in the entire model space, in as much detail as the data set allows. These features are incorporated into the interpolation of a modelled zone if they are coupled to that location. We attempt to see whether opportunities exist for exploiting complex-systems approaches in what is a classical knowledge-driven modelling domain with a high emphasis on expert interpretive methods. Geological maps (2D, 3D or 4D) are fundamentally an emergent result of an iterative mental process which focuses on reconciling disparate data. The end goal of our research is to point a way forward in which complexity can support the simulation of maps and thus support the interpretive workflow.

NG33A-06

How Complex Lava Textures Can Be Described by Simple Scaling Characteristics

Textural subsurface lava morphology is the final result of intense stress applied to lava during its emplacement. Pahoehoe and A'a Hawaiian textures are typical examples demonstrating different conditions of volcanic eruption styles. However, a significant aspect of these different textures is the ubiquity of scaling properties, implying a scaling symmetry over a wide range of scales. While the existence of scaling and multiscaling may reveal the continuity of a dynamic emplacement process over these scales, producing small to large lava fields, it is also important to consider these properties with regard to their differential anisotropy - their anisotropy associated with scale. Indeed, a well-known qualitative difference between the two Hawaiian texture types is the differently oriented structures present in the A'a and the ropy Pahoehoe flows. In such cases, generalized scale invariance (GSI) could provide powerful new insights for describing highly variable anisotropic phenomena. We will discuss this question through lava textures acquired in the visible as well as in the thermal infrared range. In particular, we may combine isotropic and anisotropic results to compare structures revealing stratification, rotation, or both with more isotropically growing texture flows.

NG33A-07 **[WITHDRAWN]**

Nonlinear Interaction of Waves in Geomaterials

Progress during the 1990s-2000s in studying vibroacoustic nonlinearities in geomaterials is largely related to experiments on resonant samples of rock and soils. It is now common knowledge that many such materials are very strongly nonlinear and are characterized by hysteresis in the dependence between the stress and strain tensors, as well as by nonlinear relaxation ("slow time"). Elastic wave propagation in such media has many peculiarities; for example, the third-harmonic amplitude is a quadratic (not cubic, as in classical solids) function of the main-harmonic amplitude, and the average wave velocity depends linearly (not quadratically, as usual) on amplitude. The mechanisms behind these peculiarities are related to the complex structure of a material typically consisting of two phases: a hard matrix and relatively soft inclusions such as microcracks and grain contacts. Although the most informative experimental results have been obtained in rock in the form of resonant bars, few theoretical models are yet available to describe and calculate waves interacting in such samples. In this presentation, a brief overview of structural vibroacoustic nonlinearities in rock is given first. Then a simple but rather general approach to the description of wave interaction in solid resonators is developed, based on accounting for resonant nonlinear perturbations which accumulate from period to period. In particular, the similarities and differences between traveling waves and counter-propagating waves are analyzed for materials with different stress-strain dependences. These data can be used for solving an inverse problem, i.e., characterizing the nonlinear properties of a geomaterial by its measured vibroacoustic parameters. References: 1. L. Ostrovsky and P. Johnson, Riv. Nuovo Cimento, v. 24, 1-46, 2007 (a review); 2. L. Ostrovsky, J. Acoust. Soc. Amer., v. 116, 3348-3353, 2004.

NG33A-08

Wavelet Statistical Analysis of Low-Latitude Geomagnetic Measurements

Following previous work by our group (Papa et al., JASTP, 2006), in which we analyzed a series of records acquired at the Vassouras National Geomagnetic Observatory in Brazil for the month of October 2000, we introduce a wavelet analysis for the same type of data and for other periods. It is well known that wavelets allow a more detailed study in several senses: the time window for analysis can be drastically reduced compared to traditional methods (Fourier, for example), while allowing nearly continuous tracking of both the amplitude and the frequency of signals as time goes by. This advantage opens possibilities for potentially useful forecasting methods of the type also advanced by our group in previous work (see, for example, Papa and Sosman, JASTP, 2008). However, the simultaneous statistical analysis of both time series (in our case amplitude and frequency) is a challenging matter, and it is in this sense that we have found what we consider our main goal. Some possible directions for future work are advanced.
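
A minimal sketch of the kind of time-frequency localization described, assuming a standard Morlet mother wavelet (the abstract does not specify one), on a synthetic record: an oscillation that switches on halfway through is localized in both time and period by the wavelet power.

```python
import numpy as np

def morlet_cwt(signal, scales, dt=1.0, w0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet, computed
    in the Fourier domain. Returns a (len(scales), len(signal)) complex
    array; |W| tracks amplitude and arg(W) phase, localized in time."""
    n = len(signal)
    omega = 2 * np.pi * np.fft.fftfreq(n, dt)
    sig_hat = np.fft.fft(signal)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Fourier-domain Morlet (analytic: positive frequencies only)
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0)
        out[i] = np.fft.ifft(sig_hat * psi_hat) * np.sqrt(s)
    return out

# Toy geomagnetic-like record: a 50-sample-period oscillation that turns on
# halfway through the series.
t = np.arange(1024)
sig = np.where(t > 512, np.sin(2 * np.pi * t / 50.0), 0.0)
scales = np.arange(4, 120)
W = morlet_cwt(sig, scales)
power = np.abs(W) ** 2
# The scale of peak time-averaged power (over the active interval) maps
# back to the 50-sample period, roughly s ~ w0 / omega_signal.
s_peak = scales[power[:, 700:900].mean(axis=1).argmax()]
print("peak scale:", s_peak)
```

Reading amplitude and instantaneous frequency off |W| and arg(W) along the scale of peak power is one simple way to obtain the two simultaneous time series the abstract discusses.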

NG33A-09

Volcano-seismic Crisis

In this article we report on the implementation of an automatic system for discriminating landslide seismic signals on Stromboli island (southern Italy). This is a critical point for monitoring the evolution of this volcanic island, where at the end of 2002 a violent tsunami occurred, triggered by a large landslide. We have devised a supervised neural system to discriminate among landslide, explosion-quake, and volcanic microtremor signals. We first preprocess the data to obtain a compact representation of the seismic records. Both spectral features and amplitude-versus-time information have been extracted from the data to characterize the different types of events. As a second step, we have set up a supervised classification system, trained using a subset of data (the training set) and tested on another data set (the test set) not used during the training stage. The automatic system that we have realized is able to correctly classify 99% of the events in the test set for both the explosion-quake/landslide and explosion-quake/microtremor pairs of classes, 96% for landslide/microtremor discrimination, and 97% for three-class discrimination (landslide/explosion-quake/microtremor). Finally, to determine the intrinsic structure of the data and to test the efficiency of our parametrization strategy, we have analyzed the preprocessed data using an unsupervised neural method, applied to the entire dataset of landslide, microtremor, and explosion-quake signals. The unsupervised method is able to distinguish three clusters corresponding to the three classes of signals identified by the analysts, demonstrating that the parametrization technique characterizes the different classes of data appropriately.
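
The train/test protocol described above can be sketched as follows. This is not the authors' neural system: a nearest-centroid classifier on invented three-class "spectral" features stands in for it, purely to illustrate the supervised train/test separation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented 3-feature vectors (think band-averaged spectral amplitudes) for
# three event classes standing in for landslide / explosion-quake / tremor.
centres = {"landslide": (2, 0, 0), "explosion-quake": (0, 2, 0), "microtremor": (0, 0, 2)}
X, y = [], []
for label, c in centres.items():
    X.append(rng.normal(c, 0.5, size=(100, 3)))
    y += [label] * 100
X, y = np.vstack(X), np.array(y)

# Hold out a test set never seen during training, as in the abstract.
idx = rng.permutation(len(y))
train, test = idx[:240], idx[240:]

# Minimal supervised classifier: assign each event to the nearest class
# centroid learned from the training set only.
cents = {lbl: X[train][y[train] == lbl].mean(axis=0) for lbl in centres}
pred = np.array([min(cents, key=lambda lbl: np.linalg.norm(v - cents[lbl]))
                 for v in X[test]])
acc = (pred == y[test]).mean()
print(f"test accuracy: {acc:.2f}")
```

The point of the split is that the accuracy is measured only on events the classifier has never seen, which is what makes the 96-99% figures in the abstract meaningful.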

NG33A-10

A New Probabilistic Seismic Hazard Map for Mainland Spain: Main Differences From the Building Code Hazard Map

We present a probabilistic seismic hazard analysis (PSHA) for mainland Spain that takes into account recent results in seismicity, seismic zoning, and strong ground-motion attenuation not considered in previously published PSHA studies. These new input data were obtained in a three-step project carried out to produce a new hazard map for mainland Spain. We use a new earthquake catalogue for the area in which earthquake sizes are given in moment magnitude, through relationships deduced specifically for our territory from intensity data (Mezcua et al., 2004). In addition, we include a new seismogenic zoning based on the recent partial zoning studies performed by different authors. Finally, we have developed a new strong ground-motion relationship for the area (Mezcua et al., 2008). With these new data, a logic-tree process was defined to quantify the epistemic uncertainty associated with each part of the process. After a weighting scheme, a mean hazard map for PGA on rock-site conditions for 10% exceedance probability in 50 years is presented. To investigate the main differences from the official Building Code hazard map, we computed both a difference map and an impact map expressed as the percentage of the new values relative to those of the official map. The main differences run in both directions: an overestimation (0.04g) by the official hazard map in the areas with the greatest PGA values, in the south and southeast of the country, due to the use of local attenuation relations; and an underestimation over the rest of the country, with a maximum of the order of 0.06g close to the maximum of the map in southern Spain.

NG33A-11

The Characterization of Seismicity Patterns Around Significant Volcanic Events by Assessing Their Scaling Properties and the Implications That Window Size Plays in the Analysis

NG33A-12

Modification of the Pattern Informatics Approach to Intermediate-Term Earthquake Forecasting: Data Declustering

The Pattern Informatics (PI) approach, originally introduced by Tiampo et al., 2002 [Europhys. Lett., 60(3), 481-487] and Rundle et al., 2002 [PNAS 99, suppl. 1, 2514-2521], is one of the robust binary methods for forecasting moderate-magnitude earthquakes in the intermediate term. The PI allows systematic characterization of rapidly evolving spatio-temporal seismicity patterns (M3) as angular drifts of a unit state vector in a high-dimensional correlation space, and systematically identifies anomalous shifts in seismic activity with respect to the nearly stationary background. The fundamental premise of this approach to earthquake forecasting is that an anomalous seismicity-rate change, whether seismic activation or quiescence, is a good proxy for the stress change that leads to a seismic rupture of a given target magnitude. Natural seismicity in many parts of the world is often effectively ergodic and stationary; however, it is occasionally punctuated by ergodicity-breaking aftershock clusters [Tiampo et al., 2007, Phys. Rev. E, 75, 066107]. The effective ergodicity and stationarity of the background process is a necessary condition for the improvement of the PI method, which utilizes a linear operator. One approach to improvement is to select the optimal mapping parameters by avoiding such ergodicity-breaking processes in the data [Tiampo et al., 2008, submitted, Special Issue on Evison Symposium]. Alternatively, a remediating procedure is proposed in this study to minimize the effect of aftershocks by declustering the data without the use of additional and arbitrary windowing parameters. The PI's performance in recovering seismicity anomalies in the presence of pervasive aftershock clusters is evaluated using the Receiver Operating Characteristic (ROC) analysis technique.
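
A toy illustration of the PI idea, heavily simplified relative to the published method (which averages seismicity intensities over many sliding windows): the rate-change field over a map of cells is demeaned, normalized to a unit vector, and squared, so that both anomalous activation and quiescence show up as hotspots against the background.

```python
import numpy as np

def pattern_informatics(rates_t1, rates_t2):
    """Toy PI-style anomaly index from mean seismicity rates per spatial
    cell over two time windows. Demean, normalize to a unit vector, and
    square, giving a probability-like map that sums to one."""
    dI = rates_t2 - rates_t1
    dI = dI - dI.mean()            # remove the regional mean rate change
    norm = np.linalg.norm(dI)
    if norm == 0:
        return np.zeros_like(dI)
    return (dI / norm) ** 2        # hotspots = anomalous activation/quiescence

rng = np.random.default_rng(2)
background = rng.poisson(5.0, size=100).astype(float)   # 100 map cells
later = background.copy()
later[17] += 12.0                  # one cell activates anomalously
hot = pattern_informatics(background, later)
print("most anomalous cell:", hot.argmax())
```

Declustering, as proposed in the abstract, would be applied to the catalog before the per-cell rates are computed, so that aftershock bursts do not masquerade as anomalies.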

NG33A-13

Trench-Parallel Bouguer Anomaly (TPBA): A Robust Measure for Statically Detecting Asperities Along the Forearc of Subduction Zones

During the 1970s, some researchers noticed that large earthquakes occur repeatedly at the same locations. These observations led to the asperity hypothesis. At the same time, other researchers noticed a relationship between the location of great interplate earthquakes and submarine structures, basins in particular, over the rupture area in the forearc regions. Despite these observations, there was no comprehensive and reliable hypothesis explaining the relationship, and there were numerous pros and cons to the various hypotheses given in this regard. In their pioneering study, Song and Simons (2003) approached the problem using gravity data. This was a turning point in seismology. Although their approach was correct, an appropriate gravity anomaly had to be used in order to reveal the location and extent of the asperities. Following the method of Song and Simons (2003), but using the Bouguer gravity anomaly in what we call the "Trench Parallel Bouguer Anomaly" (TPBA), we found a strong and convincing relation between the TPBA-derived asperities and the slip distribution, as well as the earthquake distribution, foreshocks and aftershocks in particular. Various parameters with different levels of importance are known to affect the contact between the subducting and overriding plates; we found that the TPBA can show which are the important factors. Because the TPBA-derived asperities are based on static physical properties (gravity and elevation), they do not suffer from instabilities due to trade-offs, as asperities derived in dynamic studies such as waveform inversion do. Comparison of the TPBA-derived asperities with the rupture processes of well-studied great earthquakes reveals the high accuracy of the TPBA. This new measure opens a forensic viewpoint on the rupture process along subduction zones. The TPBA reveals the reason behind magnitude 9+ earthquakes and explains where and why they occur.
The TPBA reveals the areas that can generate tsunami earthquakes. It gives a logical dimension to foreshock and aftershock distributions. Using the TPBA, we can derive scenarios for the early 20th-century great earthquakes for which only limited data are available. We present cases from the Aleutian and South American subduction zones. The TPBA explains why there should be no great earthquake down-dip of Shumagin, but a major tsunami earthquake up-dip of it; our evidence suggests that this process has already started. We give numerous examples for the South America, Aleutian-Alaska, and Kurile-Kamchatka subduction zones, and we also look at Cascadia. Despite the various possible applications of the new measure, here we draw attention to its most important application - the detection of critical asperities. Equipped with this new measure, in addition to the available seismological data, seismologists should be able to detect the critical asperities and follow the evolving rupture process. This paves the way for systematically revealing the great interplate earthquakes.

NG33A-14

Characterization of Fault Networks and Diffusion of Aftershock Epicenters From Earthquake Catalogs: Fuzzy C-means Clustering and a Modified ETAS Model

Information on the three-dimensional geometry of faults, as well as the identification of active fault segments, is critical to the assessment of seismic risk. Numerical modeling of aftershock locations, times, and magnitudes is also crucial to characterize a fault zone. In this study, a pattern recognition technique based on the Fuzzy C-means clustering algorithm *(Bezdek, 1981)* is proposed to allow each earthquake to be associated with different fault segments. The spatial covariance tensor for each cluster and the associated earthquakes are used to find optimal anisotropic clusters and designate them as faults, similar to the OADC method *(Ouillon et al., 2008)*. The location, size, and orientation of the reconstructed fault segments are characterized using a fuzzy covariance matrix *(Gustafson and Kessel, 1978)*. The output consists of a set of distinct fault segments along with the associated earthquakes at different fuzzy membership grades *(Zadeh, 1965)*: a resultant matrix holds the fuzzy membership grade of each earthquake with respect to each fault segment, specifying its degree of association with values from zero to one. The spatial distribution of earthquakes of different magnitudes and membership grades for a fault segment is incorporated into an anisotropic spatial kernel which characterizes the aftershock density at a distance vector in the ETAS model *(Kagan and Knopoff, 1987; Ogata, 1988)*. An optimal spatio-temporal distribution of aftershocks is obtained for each fault segment without assuming a priori distributions such as Gaussian or power law *(Helmstetter et al., 2006; Helmstetter and Sornette, 2002)*. The model is tested on the aftershock sequence of the 2002 Denali earthquake in Alaska, and the fault reconstruction results are compared with the known faults in the area. A new method that incorporates the anisotropic nature of aftershock diffusion along with the reconstruction of fault networks from seismicity catalogs is thus formulated in this work.
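
The clustering step can be illustrated with plain fuzzy C-means (Bezdek, 1981); the fuzzy-covariance (Gustafson-Kessel) and OADC refinements described in the abstract are omitted from this sketch. The data are two invented fault-like point clouds of epicentres.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means (Bezdek, 1981). Returns (centers, U) where
    U[i, k] is the membership grade of point i in cluster k: values in
    (0, 1) that sum to 1 over the clusters."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        W = U ** m
        # Centers are membership-weighted means of the points.
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard membership update: u_ik = 1 / sum_j (d_ik / d_ij)^p.
        U = 1.0 / (d ** p * (d ** -p).sum(axis=1, keepdims=True))
    return centers, U

rng = np.random.default_rng(3)
# Two invented "faults": elongated, well-separated clouds of epicentres.
f1 = np.c_[rng.uniform(0, 6, 200), rng.normal(0.0, 0.3, 200)]
f2 = np.c_[rng.uniform(0, 6, 200), rng.normal(6.0, 0.3, 200)]
X = np.vstack([f1, f2])

centers, U = fuzzy_c_means(X, c=2)
hard = U.argmax(axis=1)            # crisp assignment from the soft grades
print("cluster sizes:", np.bincount(hard))
```

In the full method, the per-cluster covariance of the member points (weighted by U) is what characterizes each reconstructed fault segment's size and orientation.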

NG33A-15

Finding the Shadows: Local Variations in the Stress Field due to Large Magnitude Earthquakes

Stress shadows, regions of static stress decrease associated with large-magnitude earthquakes, have typically been described through several characteristics or parameters, such as location, duration, and size. These features can provide information about the physics of the earthquake itself, as static stress changes depend on the regional stress orientations, the coefficient of friction, and the depth of interest (King et al., 1994). Areas of stress decrease, associated with a decrease in the seismicity rate, while potentially stable in nature, have been difficult to identify in regions with high rates of background seismicity (Felzer and Brodsky, 2005; Hardebeck et al., 1998). In order to obtain information about these stress shadows, we determine their characteristics using the Pattern Informatics (PI) method (Tiampo et al., 2002; Tiampo et al., 2006). The PI method is an objective measure of seismicity-rate changes that can be used to locate areas of increase and/or decrease relative to the regional background rate. The latter define the stress shadows for the earthquake of interest, as seismicity-rate changes and stress changes are related (Dieterich et al., 1992; Tiampo et al., 2006). Using the PI results, we invert for the parameters of the modeled half-space using a genetic-algorithm inversion technique. Stress changes are calculated using Coulomb stress-change theory (King et al., 1994), with the Coulomb 3 program as the forward model (Lin and Stein, 2004; Toda et al., 2005). Changes in the regional stress orientation (using PI results from before and after the earthquake) are of the greatest interest, as this is the main factor controlling the pattern of the Coulomb stress changes resulting from any given earthquake. Changes in the orientation can lead to conclusions about the local stress field around the earthquake and fault.
The depth of interest and the coefficient of friction both have lesser effects on the stress field. It is also possible to track these changes over time, allowing a pseudo-temporal analysis of the dynamic stress field as well. To further constrain changes in the local stress field in the future, we can introduce a non-uniform stress field into the forward model of the inversion. By constraining the values of the changes in the local stress field, the modeling of faults and their associated earthquakes, and perhaps even some of their secondary effects such as triggering, can be studied, furthering our understanding of the physics of earthquakes.
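
The genetic-algorithm inversion loop can be sketched generically. The forward model below is a stand-in (the abstract uses the Coulomb 3 program as its forward model), and the parameter names are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def forward(params, x):
    """Stand-in forward model: a damped sinusoid whose three parameters
    play the role of, e.g., regional stress orientation, coefficient of
    friction, and depth of interest. Not the Coulomb 3 model."""
    a, f, d = params
    return a * np.sin(f * x) * np.exp(-d * x)

x = np.linspace(0, 10, 200)
target = forward([2.0, 1.5, 0.2], x)   # synthetic "observed" data

def misfit(p):
    return np.mean((forward(p, x) - target) ** 2)

# Minimal genetic algorithm: truncation selection, blend crossover,
# Gaussian mutation, with the elite carried over unchanged (elitism).
pop = rng.uniform([0, 0, 0], [5, 3, 1], size=(60, 3))
for _ in range(80):
    order = np.argsort([misfit(p) for p in pop])
    elite = pop[order[:20]]
    parents = elite[rng.integers(0, 20, size=(40, 2))]
    alpha = rng.random((40, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
    children += rng.normal(0.0, 0.05, children.shape)
    pop = np.vstack([elite, children])

best = pop[np.argmin([misfit(p) for p in pop])]
print("recovered parameters:", np.round(best, 2))
```

In the actual inversion the misfit would compare modeled Coulomb stress changes against the PI-derived seismicity-rate-change map rather than a synthetic curve.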

NG33A-16

Evaluation of earthquake clustering using the TM metric

The Thirumalai-Mountain (TM) metric was first developed to study ergodicity in fluids and glasses (Thirumalai and Mountain, 1989; 1993) in terms of an effective ergodicity, where a large but finite time interval is considered. Tiampo *et al*. (2007) applied the TM metric to earthquake systems to search for effectively ergodic periods, which are considered to be metastable equilibrium states that are disrupted by large events. The physical meaning of the TM metric for seismicity is addressed here in terms of the clustering of earthquakes, using seismic data from southern California and two mines in Ontario, Canada. It is shown that the TM metric depends strongly not only on the space-time clustering of seismicity, but also on the past seismic activity of the region and the time intervals considered. Highly clustered seismic activity in time and space is more disruptive to effectively ergodic periods in highly seismically active regions. This interpretation of the TM metric indicates that the disruption of the effectively ergodic periods reported by Tiampo *et al*. (2007) for southern California can be attributed to the clustering of events in time and space in highly active seismic regions, rather than to the occurrence of large events.
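
A sketch of the TM fluctuation metric applied to a synthetic (time, site) activity array, with all numbers invented: the running time-average of activity at each site is compared across sites, effective ergodicity corresponds to 1/Omega(t) growing roughly linearly with time, and a clustered burst on a few sites visibly disrupts it.

```python
import numpy as np

def tm_metric(activity):
    """Thirumalai-Mountain fluctuation metric for a (time, site) array.

    eps_i(t) is the running time-average of activity at site i; Omega(t)
    is the variance of eps_i(t) across sites. For an effectively ergodic
    system 1/Omega(t) grows roughly linearly with t."""
    t = np.arange(1, activity.shape[0] + 1)[:, None]
    eps = activity.cumsum(axis=0) / t      # running mean at each site
    return eps.var(axis=1)                 # Omega(t)

rng = np.random.default_rng(5)
# Ergodic case: every site samples the same Poisson activity rate.
ergodic = rng.poisson(2.0, size=(2000, 50)).astype(float)
# Clustered case: the same background plus a burst confined to 5 sites.
clustered = ergodic.copy()
clustered[1000:1050, :5] += 30.0

om_e = tm_metric(ergodic)
om_c = tm_metric(clustered)
print("final 1/Omega, ergodic vs clustered:",
      round(1 / om_e[-1]), round(1 / om_c[-1]))
```

The clustered burst leaves the site-to-site running means permanently spread out, so Omega stays elevated long after the burst: exactly the disruption of effectively ergodic periods discussed in the abstract.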