The Physics of Damage
We present results from an investigation of a generalized fiber bundle model of damage that can interpolate between fiber bundle models and hierarchical models. In particular, we carefully consider the effects of noise and stress-transfer range on the failure mode and on the correct theoretical description of these models. Our primary result is that the failure mode of the system depends strongly on the stress-transfer range and on the magnitude of the noise. Moreover, we find that equilibrium-based approaches are insufficient to describe the failure process.
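As a minimal illustration of the mean-field (equal-load-sharing) limit of such a model, the sketch below computes the critical load of a fiber bundle with uniformly distributed strength thresholds. It is an illustrative toy, not the generalized model studied here, which additionally varies the stress-transfer range and noise:

```python
import numpy as np

def critical_load(n=10_000, seed=0):
    """Equal-load-sharing fiber bundle: n fibers with random strength
    thresholds; when a fiber fails, its load is shared equally among the
    survivors. Returns the peak load per fiber the bundle can sustain."""
    rng = np.random.default_rng(seed)
    thresholds = np.sort(rng.uniform(0.0, 1.0, n))  # uniform strength disorder
    # After the k weakest fibers have failed, the bundle carries
    # (n - k) * thresholds[k]; normalize by n for the load per original fiber.
    loads = (n - np.arange(n)) * thresholds / n
    return loads.max()

print(critical_load())  # close to the exact mean-field value 1/4
```

Replacing the global redistribution with a finite stress-transfer range, and adding noise, is precisely the regime in which the failure mode changes character.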
The Role of Off-Fault Damage in Earthquake Mechanics
The interaction between a dynamic mode II fracture on a fault plane and off-fault damage has been studied using high-speed photography. Fracture damage was created in photoelastic Homalite plates by thermal shock in liquid nitrogen, and rupture velocities were measured by imaging fringes at the rupture tips. Three experimental configurations were investigated: an interface between two damaged Homalite plates, an interface between damaged and undamaged Homalite plates, and an interface between damaged Homalite and undamaged polycarbonate plates. In each case, the velocity was compared with that on a fault between the equivalent undamaged plates at the same load. Ruptures on the interface between two damaged Homalite plates travel at sub-Rayleigh velocities, indicating that sliding on off-fault fractures dissipates energy even though no new damage is created. Propagation on the interface between damaged and undamaged Homalite is asymmetric. Ruptures propagating in the direction for which the compressional lobe of their crack-tip stress field is in the damage (which we term the 'C' direction) are unaffected by the damage. In the opposite 'T' direction, the rupture velocity is significantly slower than in undamaged plates at the same load. Specifically, transitions to supershear observed with undamaged plates are not observed in the 'T' direction. Propagation on the interface between damaged Homalite and undamaged polycarbonate exhibits the same asymmetry, even though the elastically "favored" '+' direction coincides with the 'T' direction in this case. The scaling properties of the interaction between the crack-tip field and off-fault damage are explored using an analytic model for a non-singular slip-weakening slip pulse formulated by Rice et al. (B.S.S.A., 2005), verified using the velocity history of a slip pulse measured in the laboratory by Lu et al. (Proc. Nat. Acad. Sci., 2007), and a direct laboratory measurement of the interaction range using damage zones of various widths adjacent to the fault by Biegel et al. (J. Geophys. Res., 2008).
Scaling in a Two-Dimensional Model for Damage and Friction
We consider the physics of damage and failure in materials, specifically rock masses. We investigate the failure of rock masses arising from the complex physics of microscopic dynamical processes in rocks, as manifested in the nucleation and growth of defects, microcracks, damage, and macroscopic fracture. These processes result from the complex emergent dynamics of self-organizing geological materials, which we analyze using the methods of statistical physics and large-scale simulations employing both molecular dynamics and Monte Carlo methods. A particular example is the nucleation and growth of damage on sliding surfaces associated with frictional failure. Fully interacting fields of defects and damage are not included in most current models for material deformation; instead, defect-density and damage fields are assumed to be non-interacting or dilute, implying a strictly mean-field approach. We use statistical-physics methods, in particular the construction of statistical field theories, to understand the dynamics of interacting defect and damage fields and thereby improve our predictive capability for the macroscopic failure of materials. As a particular example, we consider a model for damage with partial healing. In this model, we allow repeated failure of a frictional surface, but during the failure process only partial healing of slipped points is permitted; following termination of the avalanche process, full healing occurs. We find the existence of a new critical point, implying that the amount of healing is a relevant scaling parameter for the dynamics. Scaling exponents are obtained that are consistent with mean-field dynamics, as implied by the long-range nature of the interactions. This model can also be extended to describe strain hardening, alternately described as temporary strengthening followed by relaxation back to the long-term static failure threshold.
For such problems the important quantities to compute are the nucleation rate or its inverse, the lifetime to failure. We also discuss applications of this research to rock deformation across a range of spatial and temporal scales.
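A mean-field toy version of the partial-healing dynamics described above can be sketched as follows. The parameter values, the 10% stress dissipation (added so every avalanche terminates), and the update rules are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def avalanche_sizes(n=1000, n_avalanches=200, heal=0.5, seed=1):
    """Mean-field sketch of frictional failure with partial healing.
    Sites fail when stress reaches a threshold; during an avalanche a
    failed site heals only partially (threshold drops to `heal`), so it
    may slip again; full healing is restored once the avalanche ends."""
    rng = np.random.default_rng(seed)
    stress = rng.uniform(0.0, 0.9, n)
    threshold = np.ones(n)
    eps = 1e-12
    sizes = []
    for _ in range(n_avalanches):
        stress += (threshold - stress).min()   # drive weakest site to failure
        size = 0
        failing = stress >= threshold - eps
        while failing.any():
            size += int(failing.sum())
            released = stress[failing].sum()
            stress[failing] = 0.0
            threshold[failing] = heal          # partial healing mid-avalanche
            stress += 0.9 * released / n       # lossy mean-field redistribution
            failing = stress >= threshold - eps
        threshold[:] = 1.0                     # full healing after the avalanche
        sizes.append(size)
    return sizes
```

The avalanche-size distribution of such a model is what the scaling exponents refer to; varying `heal` is the knob that probes the new critical point.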
Statistics of Earthquakes
The statistics of earthquake occurrence play essential roles in seismic hazard assessments. An example is Gutenberg-Richter frequency-magnitude statistics. This relation allows the use of smaller earthquakes to quantify the risk of larger earthquakes. An important distinction is the difference between interoccurrence and recurrence statistics. Interoccurrence statistics concerns all earthquakes in a region. Recurrence statistics concerns earthquakes occurring at a specified point on a fault. An essential question is the role of "characteristic" earthquakes. Do big earthquakes occur on big faults and little earthquakes occur on little faults? It will be argued that Weibull statistics with a coefficient of variation near 0.5 can be used for recurrence statistics, both recurrence times and recurrence magnitudes. Earthquake statistics can be used to provide earthquake simulations through the ETAS or BASS models. The relationship between these models will be presented along with simulations using the BASS model to establish the statistical variability of foreshock and aftershock occurrence.
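For concreteness, the Weibull shape parameter corresponding to a coefficient of variation of 0.5 can be found numerically; a short sketch (the bisection bracket is an assumption):

```python
import math

def weibull_cov(k):
    """Coefficient of variation of a Weibull distribution with shape k
    (independent of the scale parameter)."""
    g1 = math.gamma(1 + 1 / k)
    g2 = math.gamma(1 + 2 / k)
    return math.sqrt(g2 / g1 ** 2 - 1)

# Bisection: find the shape parameter giving COV = 0.5.
# COV decreases monotonically with k (k=1 gives COV=1, large k gives COV -> 0).
lo, hi = 1.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if weibull_cov(mid) > 0.5:
        lo = mid
    else:
        hi = mid
print(round(lo, 3))  # shape parameter near 2.1
```

A coefficient of variation of 0.5 thus corresponds to a shape parameter of about 2.1, i.e. a recurrence distribution considerably narrower than the exponential (k = 1) case.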
Numerical Earthquake Simulators and Earthquake Forecasting: Do Similar Pasts in Simulation Data Imply Similar Futures?
Topologically realistic earthquake simulations are now possible using numerical codes such as Virtual California (VC). Currently, VC is written in modern object-oriented C++ and runs under MPI-2 protocols on parallel HPC machines such as the NASA Columbia supercomputer. In VC, an earthquake fault system is modeled by a large number of boundary elements interacting by means of linear elasticity. A friction law is prescribed for each boundary element, and the faults are driven at a stressing rate consistent with their observed long-term average offset rate. We have carried out simulations of earthquakes on models of California's fault system over time intervals ranging from tens of thousands to millions of years. We then use a data "scoring" technique to determine which times in the simulations are most similar to today (2009), as judged by having similar (simulated) paleoseismic histories. Using the top-scoring 1% of "optimal" times, we compute, for example, the probabilities for occurrence of M > 6.7 earthquakes. We then determine the probabilities for participation of each boundary element in more than one event of M > 6.7 over the next 30 years (note that the threshold M > 6.7 is arbitrary, and the method can be adapted to events of any reasonable magnitude). A major question that we address in this study is: given that two or more optimal times have similar event histories, how similar are their event futures over the next 30 years? This question, which is of primary concern in all forecasting methods, has not been addressed previously in earthquake forecasting studies. In this talk, we address this question using VC simulations. We present a method to compute 30-year forecast probabilities and their uncertainties, and we outline a method for comparing the statistics of all optimal 30-year forecast windows to determine statistically the extent of their similarity.
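The "scoring" step can be illustrated schematically. The code below is a hypothetical stand-in for the VC technique: the segment histories, candidate times, and the distance measure are all invented for illustration:

```python
import numpy as np

def similarity_scores(event_times, candidates, template_lapse):
    """Schematic paleoseismic scoring: for each candidate simulation time,
    compute the elapsed time since the last event on each fault segment and
    compare it with the elapsed times observed 'today'.
    Lower score = more similar simulated past."""
    scores = []
    for t in candidates:
        lapse = []
        for times in event_times:               # one sorted array per segment
            past = times[times < t]
            lapse.append(t - past[-1] if past.size else 1e9)  # no prior event
        scores.append(float(np.abs(np.array(lapse) - template_lapse).sum()))
    return scores

# Two fault segments with simulated event times (years); 'today' saw its last
# events 10 and 60 years ago respectively.
segments = [np.array([0.0, 100.0, 200.0]), np.array([50.0, 150.0])]
print(similarity_scores(segments, candidates=[210.0, 260.0],
                        template_lapse=np.array([10.0, 60.0])))
# -> [0.0, 100.0]: the first candidate time matches today's record exactly
```

The top-scoring candidate times play the role of the "optimal" times whose 30-year futures are then compared.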
A Geometry-Driven Mode of Supershear Rupture Transition
A number of recent seismological studies have investigated mechanisms by which rupture may accelerate from subshear to supershear speeds, also partly addressing the implications for near-field ground motion. Such mechanisms range from the Burridge-Andrews transition [Burridge, 1973; Andrews, 1976] to a barrier-induced transition [Dunham et al., 2003]. All previously explored supershear transition mechanisms have involved the level of stress and/or friction on the fault, rather than the geometry of the fault. Indeed, observations of supershear rupture in nature tend to be on long planar fault segments [Bouchon and Vallée, 2003; Dunham and Archuleta, 2004], suggesting that geometrical fault complexity impedes supershear rupture speeds. However, recent dynamic models of potential earthquakes on the North Anatolian Fault under the Sea of Marmara, Turkey [Oglesby et al., 2008] have displayed a geometry-induced mechanism by which the rupture front of an earthquake may temporarily jump to supershear speed. We find that this mechanism is due to the dynamic unclamping of one fault segment by an adjacent, differently oriented segment that ruptures earlier. Dynamic unclamping can reduce the failure stress of a fault segment over a limited area, facilitating supershear rupture propagation in that area. Outside this area of significant unclamping, the rupture speed drops back to its typical (subshear) value. We also find that while the rupture front (i.e., the locus of points that are beginning to slip at a given time) can have a very heterogeneous speed on faults with complex geometry, the velocity of the slip-rate maximum (the locus of points that are experiencing their maximum slip rate) is typically much more homogeneous.
As far as we have examined, these effects are present only for a limited range of pre-stress levels, but the results may have implications for understanding the supershear rupture transition, as well as the source of seismic radiation in geometrically complex earthquake ruptures.
Spatiotemporal clustering of aftershock sequences
Describing and modeling the spatiotemporal organization of seismicity and understanding the underlying physical mechanisms of earthquake triggering have proven challenging. This is especially true for aftershock sequences. To shed new light on this issue, we follow a recently proposed method to characterize the spatiotemporal clustering of seismicity by networks of recurrent events [Geophys. Res. Lett. 33, L1304, 2006] and compare the properties of aftershock sequences, such as that of the Parkfield earthquake, with synthetic catalogs generated by the epidemic-type aftershock sequence (ETAS) model.
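One generation of an ETAS-style branching cascade, of the kind used to generate such synthetic catalogs, can be sketched as follows; the parameter values are illustrative defaults, not values fitted to Parkfield:

```python
import numpy as np

def etas_offspring(parent_mag, rng, k=0.02, alpha=1.0, m_min=2.0,
                   b=1.0, c=0.01, p=1.2):
    """One generation of an ETAS-style cascade: the offspring count follows
    the productivity law k * 10**(alpha*(m - m_min)), offspring magnitudes
    follow the Gutenberg-Richter law above m_min, and delay times follow an
    Omori-type power law with exponent p. Returns (magnitudes, delays)."""
    n = rng.poisson(k * 10 ** (alpha * (parent_mag - m_min)))
    mags = m_min - np.log10(rng.random(n)) / b              # G-R magnitudes
    delays = c * (rng.random(n) ** (1.0 / (1.0 - p)) - 1)   # Omori waiting times
    return mags, delays

rng = np.random.default_rng(0)
mags, delays = etas_offspring(6.0, rng)   # direct aftershocks of an M6 parent
print(len(mags))                          # roughly 200 with these parameters
```

Iterating this step over every generated event (each aftershock spawning its own offspring) produces the full epidemic-type catalog against which the recurrent-event networks are compared.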
Dynamic fracture energy of rocks
The fracture energy is an important parameter governing earthquake rupture processes. Values of fracture energy estimated from field data vary over several orders of magnitude, and are several orders of magnitude higher than the surface energy of rocks. I developed a method to measure the fracture energy of rocks in the laboratory based solely on an energy-balance principle. Applying the method to granites and sandstones shows that the fracture energy is higher than the surface energy inferred from static fracture measurements, and that its value lies in the range of the field estimates. I measured the roughness of the fracture surface and found that the true fracture surface area is roughly constant. I therefore conclude that off-fault damage (microcracks) provides an effective way to dissipate fracture energy. This observation supports the postulate originally proposed by and co-workers in 2002.
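The energy-balance principle behind such a measurement reduces to simple bookkeeping; the numerical values below are purely hypothetical, not the measurements reported here:

```python
def fracture_energy(work_in, elastic_out, kinetic_out, crack_area):
    """Energy balance: input work not recovered elastically or carried away
    as kinetic energy is attributed to fracturing (including off-fault
    damage), divided by the nominal new fracture area."""
    return (work_in - elastic_out - kinetic_out) / crack_area

# hypothetical values in joules and square meters
G = fracture_energy(work_in=500.0, elastic_out=300.0, kinetic_out=50.0,
                    crack_area=0.015)
print(f"G = {G:.0f} J/m^2")  # G = 10000 J/m^2
```

Because the measured true surface area stays roughly constant, a value of G far above the surface energy points to off-fault dissipation rather than extra surface creation.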