Model Selection for Probabilistic Seismic Hazard Assessments (PSHA)
Uncertainties in the selection of source models, and in particular of ground-motion models, are often the dominant contributors to the overall uncertainty in seismic hazard estimates for nuclear facilities. Information theory provides a powerful theoretical framework that can guide this selection process in a consistent way and thus help reduce overall epistemic uncertainty. From an information-theoretic perspective, the appropriateness of models can be expressed in terms of their relative entropy (Kullback-Leibler distance) and hence in physically meaningful units (bits). In contrast to hypothesis testing, information-theoretic model selection requires neither ad hoc decisions regarding significance levels nor models that are mutually exclusive and collectively exhaustive. The key ingredient, the Kullback-Leibler distance, can be estimated from the statistical expectation of the log-likelihoods of the observations under the models in question. Here, a data-driven ground-motion model selection based on Kullback-Leibler distance differences is illustrated for a combination of response-spectra and macroseismic-intensity observations from California; information theory allows a unified treatment of both quantities. Kullback-Leibler distance based model selection can also be extended to seismicity model selection based on earthquake catalogs. In this case, the probability density functions of the source-site distances are used to calculate the sample log-likelihood values.
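The estimator described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the two candidate "ground-motion models" are hypothetical Gaussian predictive distributions, and the observations are synthetic. The sample mean log-likelihood under each model estimates the expectation referred to in the abstract, and the difference of these means estimates the difference in Kullback-Leibler distances to the unknown data-generating distribution; dividing by ln 2 expresses it in bits.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical observations, e.g. log spectral accelerations (illustrative only).
observations = rng.normal(loc=-1.0, scale=0.6, size=200)

# Two hypothetical candidate models, each giving a predictive distribution
# (median in log units and a sigma) for the observed quantity.
models = {
    "model_A": norm(loc=-1.0, scale=0.7),
    "model_B": norm(loc=-0.6, scale=0.5),
}

# Sample mean log-likelihood estimates E[log f(x)] under each model.
mean_ll = {name: m.logpdf(observations).mean() for name, m in models.items()}

# Difference of mean log-likelihoods estimates the KL-distance difference;
# division by ln 2 converts from nats to bits.
delta_bits = (mean_ll["model_A"] - mean_ll["model_B"]) / np.log(2)

print(f"mean log-likelihoods: {mean_ll}")
print(f"KL-distance difference (B relative to A): {delta_bits:.2f} bits per observation")
```

A positive difference means model_A is, on average, closer to the data-generating distribution; no significance level or exhaustive model set is needed, as the abstract notes.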
Ground Motion Issues for Nuclear Power Plants in Eastern North America
The resurgence of interest in nuclear power in eastern North America (ENA) has raised the profile of earthquake ground-motion issues that are of particular relevance to the assessment of the seismic safety of nuclear power plants in ENA. These issues include: (i) the assessment of hazard and its uncertainty, at low probabilities, for sites in stable continental regions, where large events may have repeat times greatly exceeding the period of historic record and where the relationship between seismicity and geological host structures is ambiguous; and (ii) the use of Uniform Hazard Spectra in the assessment and design of nuclear power plants. Both issues will be overviewed in this presentation, with a focus on the second. The second issue arises chiefly because the design of nuclear facilities (including all current facilities in ENA) has traditionally been based on testing of engineering structures and systems against a "standard scaled response spectrum" (sometimes called a Newmark-Hall spectrum after its original authors); this standard shape is contained in both Canadian and U.S. standards for nuclear power plant design. It is well known that, for hard-rock sites in ENA, this standard shape differs markedly from the shape determined from probabilistic analysis and provided as a Uniform Hazard Spectrum (UHS) for a specified probability level. The UHS for ENA rock sites is greatly enriched in high-frequency energy (>8 Hz) and depleted in low-frequency energy (<3 Hz), relative to the standard spectra that have been used for ENA nuclear power plant design. I discuss the problems that have arisen due to this mismatch in spectral shape, and their possible solutions.
Geological and seismological survey for new design-basis earthquake ground motion of Kashiwazaki-Kariwa NPS
At about 10:13 on July 16, 2007, a strong earthquake, the Niigata-ken Chuetsu-oki Earthquake of Mj 6.8 on the Japan Meteorological Agency's scale, occurred offshore of Niigata prefecture in Japan. Nevertheless, all of the nuclear reactors at the Kashiwazaki-Kariwa Nuclear Power Station (KKNPS) in Niigata prefecture, operated by the Tokyo Electric Power Company, shut down safely. In other words, the automatic safety functions of shutdown, cooling, and containment worked as designed immediately after the earthquake. During the earthquake, the peak acceleration of the ground motion exceeded the design-basis ground motion (DBGM), but the earthquake forces applied to safety-significant facilities were about the same as or less than the static seismic forces taken into account in the design basis. In order to reassess the safety of the plant, we evaluated a new DBGM after conducting geomorphological, geological, geophysical, and seismological surveys and analyses. [Geomorphological, geological and geophysical surveys] On land, aerial-photograph interpretation was performed within at least a 30 km radius to extract landforms that could possibly be tectonic reliefs. Geological reconnaissance was then conducted to confirm whether the extracted landforms are in fact tectonic reliefs. We investigated with particular care the Nagaoka Plain Western Boundary Fault Zone (NPWBFZ), which consists of the Kakuda-Yahiko, Kihinomiya, and Katakai faults, because the NPWBFZ is one of the active fault zones in Japan with the potential for an Mj 8-class earthquake. In addition to the geological survey, seismic reflection profiling totaling approximately 120 km in length was completed to evaluate the geological structure of the faults and to assess the continuity of the component faults of the NPWBFZ.
As a result of the geomorphological, geological, and geophysical surveys, we concluded that the three component faults of the NPWBFZ are structurally independent of one another; nevertheless, as a case of uncertainty, the seismic design takes into consideration the simultaneous rupture of the three faults over a total length of 91 km. In the sea area, we conducted seismic reflection profiling with sonic waves over an area stretching about 140 km along the coastline and 50 km perpendicular to it. In analyzing the seismic profiles, we evaluated the activity of faults and folds carefully on the basis of the concept of fault-related folding, because the sedimentary layers offshore of Niigata prefecture are very thick and the geological structure is characterized by folding. As a result of the seismic reflection survey and analyses, we identified five active faults (folds) to be taken into consideration in the seismic design of the sea area, and we evaluated that the 36 km long F-B fault would have the largest impact on the KKNPS. [Seismological survey] Analyses of the geological survey data, together with records from the Niigata-ken Chuetsu-oki Earthquake and the 2004 Chuetsu Earthquake, made clear that there are factors that intensify seismic motions in this area. For each of the two selected earthquake sources, the NPWBFZ and the F-B fault, we calculated seismic ground motions on the free surface of the base stratum as the design-basis ground motion (DBGM) Ss, using both empirical and numerical ground-motion evaluation methods. The PGA value of the DBGM is 2,300 Gal for units 1 to 4, located in the southern part of the KKNPS, and 1,050 Gal for units 5 to 7 in the northern part of the site.
Use of Geologic and Paleoflood Information for INL Probabilistic Flood Hazard Decisions
The Big Lost River is a closed-basin stream in the western U.S. that flows through and terminates on the Idaho National Laboratory (INL). Historic flows are highly regulated, and peak flows decline downstream through natural and anthropogenic influences. Glaciated headwater regions were the source of Pleistocene outburst floods that traversed the site. A wide range of DOE facilities (including a nuclear research reactor) require flood-stage estimates for flow exceedance probabilities ranging from 1/100/yr to 1/100,000/yr under DOE risk-based standards. These risk-management objectives required the integration of geologic and geomorphic paleoflood data into Bayesian nonparametric flood-frequency analyses that incorporated measurement uncertainties in gaged, historical, and paleoflood discharges, as well as non-exceedance bounds, to produce fully probabilistic flood-frequency estimates for the annual exceedance probabilities of specific discharges of interest. Two-dimensional hydraulic flow modeling, with scenarios for varied hydraulic parameters, infiltration, and culvert blockages on the site, was conducted for discharges from 13 to 700 m3/s. High-resolution topographic grids and two-dimensional flow modeling allowed detailed evaluation of the potential impacts of numerous secondary channels and flow paths resulting from flooding in extreme events. These results were used to construct stage-probability curves for 15 key locations on the site, consistent with DOE standards. The probability curves reflect the systematic inclusion of uncertainty contributions from flood sources, hydraulic modeling, and flood-frequency analyses. They also provided a basis for developing weights for logic-tree branches associated with infiltration and culvert-performance scenarios to produce probabilistic inundation maps.
The flood evaluation process was structured using concepts from the Senior Seismic Hazard Analysis Committee process (NRC NUREG/CR-6372), evaluating and integrating the inputs of multiple investigators with extensive use of external peer review. The paleoflood-based results were adopted as the official basis for flood-hazard decisions at INL.
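The logic-tree weighting described above can be illustrated with a small sketch. Everything here is hypothetical: the branch names, weights, and exponential stage-exceedance curves are invented for illustration and are not the INL results. The sketch shows the mechanics only: weight each branch's stage-exceedance curve, sum them into a combined hazard curve, and read off the stage at a target annual exceedance probability (AEP).

```python
import numpy as np

# Flood stage above an arbitrary datum, in meters (illustrative grid).
stages = np.linspace(0.0, 4.0, 41)

def branch_curve(scale):
    """Hypothetical exponential stage-exceedance curve (AEP vs. stage)."""
    return 1e-2 * np.exp(-stages / scale)

# Hypothetical logic-tree branches for culvert performance; weights sum to 1.
branches = {
    "culvert_open":     (0.6, branch_curve(0.5)),
    "partial_blockage": (0.3, branch_curve(0.7)),
    "full_blockage":    (0.1, branch_curve(1.0)),
}

# Weighted mean hazard curve across the logic-tree branches.
combined = sum(w * curve for w, curve in branches.values())

# Stage at an AEP of 1e-4, interpolating on log(AEP) (np.interp needs
# increasing abscissae, hence the reversal of the monotone-decreasing curve).
stage_1e4 = np.interp(np.log(1e-4), np.log(combined[::-1]), stages[::-1])
print(f"stage at AEP 1e-4: {stage_1e4:.2f} m")
```

In a real study each branch curve would itself come from the two-dimensional hydraulic modeling and the flood-frequency analysis, with the weights assigned through the structured expert-evaluation process the abstract describes.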
Site Response Spectral Accelerations Based on Site Specific Geologic Control of Randomization Parameters can be Higher Than Spectral Accelerations Derived Using Randomization Models Based on Larger Coefficients of Variation Derived Over Larger Areas
The randomization of soil properties for use in seismic site-response models is an accepted practice in seismic design. Applying this methodology in the absence of robust geologic control, however, can result in inappropriately low design spectra. Site-response spectra computed using data from a broad area, but intended for site-specific design, were derived using randomization procedures that incorporate variability in soil-unit thicknesses, depth to basalt, and shear-wave velocity (coefficient of variation, COV = 0.39 to 0.50). The resulting spectral accelerations are 40% lower than those derived using site-specific geologic models based on geologic data, crosshole and downhole shear-wave velocities, and borehole geophysical data over a smaller area (COV = 0.08 to 0.12). In this case, a relatively constant soil thickness of 44 feet (Vs approximately 1,300 feet per second) on basalt (Vs approximately 3,500 feet per second) across the site contrasted sharply with the previous randomization assumptions, which assumed greater variability in basalt depth. Although the lower site-specific soil-property COVs contributed to a site-specific increase in ground-motion amplification, the primary factor was likely the relatively flat basalt surface, which produced site-amplification effects exceeding the previous randomization estimates by 40%. The bias toward higher COVs appears to be driven by a need to account for aleatory uncertainty (especially in soil-structure interaction modeling) and by the engineering preference for smoother response spectra. Discussion and research may be needed to address what appears to be an arbitrary increase in aleatory uncertainty compensating for the decrease in epistemic uncertainty that results from better site-characterization methods and models.
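The effect of the COV choice on randomized profiles can be sketched as follows. This is an illustrative toy, not the analysis from the abstract: soil thickness and shear-wave velocity are drawn from median-preserving lognormal distributions (using sigma_ln = sqrt(ln(1 + COV^2))), with the 44 ft / 1,300 ft/s site values from the abstract as medians, and the spread of the fundamental site frequency f0 = Vs / 4H is compared for a site-specific COV versus a broad-area COV.

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_profiles(vs_median, thickness_median, cov_vs, cov_h, n=10_000):
    """Draw median-preserving lognormal realizations of shear-wave velocity
    and soil thickness. For a lognormal variable with coefficient of
    variation COV, the log standard deviation is sqrt(ln(1 + COV**2))."""
    sln_vs = np.sqrt(np.log(1.0 + cov_vs**2))
    sln_h = np.sqrt(np.log(1.0 + cov_h**2))
    vs = vs_median * rng.lognormal(0.0, sln_vs, n)
    h = thickness_median * rng.lognormal(0.0, sln_h, n)
    return vs, h

# Site values from the abstract: ~44 ft of soil (Vs ~ 1,300 ft/s) over basalt.
# The two COVs below stand in for the site-specific and broad-area cases.
for cov in (0.10, 0.45):
    vs, h = randomized_profiles(1300.0, 44.0, cov, cov)
    f0 = vs / (4.0 * h)  # fundamental site frequency of a uniform layer, Hz
    print(f"COV={cov:.2f}: f0 median {np.median(f0):5.1f} Hz, "
          f"16-84% range {np.percentile(f0, 16):4.1f}-{np.percentile(f0, 84):4.1f} Hz")
```

With the low COV the resonant frequency stays tightly clustered, concentrating amplification in a narrow band; the high COV smears it across a wide band, which is one mechanism by which broad-area randomization can lower (and smooth) the computed design spectrum relative to a well-constrained site.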
Regulatory Issues and Challenges in Developing Seismic Source Characterizations for New Nuclear Power Plant Applications in the US
An integral component of the safety analysis for proposed nuclear power plants in the US is a probabilistic seismic hazard assessment (PSHA). Most applications currently under NRC review followed the guidance of NRC Regulatory Guide 1.208 (RG 1.208) in developing the seismic source characterizations (SSCs) for their PSHA. Three key components of the RG 1.208 guidance are that applicants should: (1) use existing PSHA models and SSCs accepted by the NRC as a starting point for their SSCs; (2) evaluate new information and data developed since acceptance of the starting model to determine whether the model should be updated; and (3) follow the guidelines set forth by the Senior Seismic Hazard Analysis Committee (SSHAC) (NUREG/CR-6372) in developing significant updates (i.e., updates should capture SSC uncertainty by representing the "center, body, and range of technical interpretations" of the informed technical community). Major motivations for following this guidance are to ensure accurate representations of hazard and regulatory stability in hazard estimates for nuclear power plants. All current applications before the NRC have used the EPRI-SOG source characterizations developed in the 1980s as their starting-point model, and all applicants have followed RG 1.208 guidance in updating the EPRI-SOG model. However, there has been considerable variability in how applicants have interpreted the guidance, and thus in the methodology used in updating the SSCs. Much of this variability can be attributed to how different applicants have interpreted the implications of new data, new interpretations of new and/or old data, and new "opinions" of members of the informed technical community. For example, many applicants and the NRC have wrestled with the question of whether to update SSCs in light of new opinions or interpretations of older data put forth by a single member of the technical community.
This challenge has been further complicated by: (1) a given applicant's uncertainty about how to revise the EPRI-SOG model, which was developed using a process similar to that prescribed by SSHAC for a Level 3 or 4 study, without conducting a resource-intensive SSHAC Level 3 or higher study for its own application; and (2) a lack of guidance from the NRC on acceptable methods of demonstrating that new data, interpretations, and opinions are adequately represented within the EPRI-SOG model. Partly because of these issues, the nuclear industry, NRC, and DOE have taken the initiative to develop a new base PSHA model for the central and eastern US. However, this new SSC model will not be completed for several years and does not resolve many of the fundamental regulatory and philosophical issues that have been raised during the current round of applications. To ensure regulatory stability and to provide accurate estimates of hazard for nuclear power plants, a dialog must be started between regulators and industry to resolve these issues. Two key issues that must be discussed are: (1) should new data and new interpretations or opinions of old data be treated differently in updated SSCs, and if so, how? and (2) how can new data or interpretations developed by a small subset of the technical community be weighed against, and potentially combined with, an SSC model that was originally developed to capture the "center, body and range" of the technical community?
Uncertainties in Physically-based Probabilistic Seismic Hazard and Risk Analysis
We examine uncertainties in physically based probabilistic seismic hazard analysis (Pb-PSHA). By "physically based," we mean replacing parts of traditional probabilistic seismic hazard analysis (PSHA) with computations derived from physics and an understanding of the earthquake process. Over the past forty years, the state of the practice for PSHA has been to estimate the annual frequency of exceedance of a single ground-motion parameter or spectrum by regression on recorded data (the empirical approach). The relation of a single parameter, or even a spectrum, to the risk to structures is not well known. Computational capability and understanding of the physics of earthquakes have progressed to the point where we can begin to compute the actual seismograms that will affect structures. This in turn allows true probabilistic risk analysis (PRA), in which the risk is determined by the effect of the full time history on structures. However, a statistically defensible Pb-PSHA requires full inclusion of the aleatory and epistemic uncertainty in such a calculation. Aleatory uncertainty in rupture models arises from randomness in the source that is not included in the models; aleatory uncertainty in wave-propagation models arises from randomness in the geologic structure that is not included in those models. Epistemic uncertainty is the lack of knowledge of which actual earthquake will occur or which actual geologic model exists. We examine the epistemic uncertainty associated with using different quasi-dynamic rupture models and with different rupture parameters for individual models, and we analyze the proper statistical sampling of rupture parameters. We also examine the statistics of randomizing geologic models and of using finite-difference codes to calculate high-frequency wave propagation. We estimate the aleatory uncertainty in geologic models by examining the misfit between observed and synthesized microearthquake recordings, and the aleatory uncertainty of rupture models by analyzing the misfit between synthesized and observed strong-motion records.
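A common way to quantify the misfit mentioned above is the standard deviation of natural-log residuals between observed and synthesized amplitudes. The sketch below is illustrative only: the "observed" spectrum is a hypothetical model spectrum perturbed by lognormal scatter standing in for unmodeled source and path effects, and the function names are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_residual_stats(observed, synthetic):
    """Mean and standard deviation of natural-log residuals between observed
    and synthesized amplitudes (e.g. Fourier or response-spectral ordinates
    at matching frequencies). The std. dev. is a common aleatory-variability
    estimate; the mean indicates systematic model bias."""
    residuals = np.log(observed) - np.log(synthetic)
    return residuals.mean(), residuals.std(ddof=1)

# Illustrative data: a hypothetical model spectrum plus lognormal scatter
# representing randomness not captured by the model.
freqs = np.logspace(-1, 1, 50)            # 0.1-10 Hz
synthetic = 100.0 / (1.0 + freqs**2)      # hypothetical synthesized spectrum
observed = synthetic * rng.lognormal(0.0, 0.5, freqs.size)

bias, sigma_ln = log_residual_stats(observed, synthetic)
print(f"model bias (ln units): {bias:+.3f}, aleatory sigma_ln: {sigma_ln:.3f}")
```

In the approach the abstract describes, residuals of this kind, computed between synthesized and observed microearthquake or strong-motion records, provide data-driven estimates of the aleatory terms that the Pb-PSHA calculation must carry.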
A Dramatic Increase in Seismic Observations in the Central and Eastern US
The USArray Transportable Array (TA) is a network of 400 seismograph stations that is systematically moving
west-to-east across the contiguous United States. The TA is part of the National Science Foundation's multi-
disciplinary EarthScope program. The TA has already occupied over 700 stations in the western US, and is
continuing its multi-year migration towards the Atlantic coast before heading for Alaska. The stations use a
grid-like deployment with 70 km separation between stations. At any given time there are approximately 400
stations operational, occupying a nominal 800 km by 2000 km "footprint." Each station is operated for about
two years. TA stations consist of three-component broadband seismometers, with a few sites in the westernmost
United States also including three-component strong-motion instruments. The instruments are installed about
two meters below the surface, in thermally stable vaults. All stations transmit continuous data in near-real-time,
and the data are freely distributed through the IRIS Data Management Center. TA stations can be upgraded to
incorporate high-frequency or strong-motion instruments. Organizations can also "adopt" stations after
installation by reimbursing the cost of the hardware, so that the stations become permanent.
The TA is presently operating in the swath of the country extending from Texas to Montana. From 2010 to 2013
the TA will occupy ~800 sites in the central and eastern US. The array will be centered on the New Madrid, MO
region during the bicentennial of the 1811-1812 earthquakes. During the TA deployment, every existing or
planned nuclear plant in the eastern US will be within 70 km of at least four new seismic stations. Thus, this
station deployment in the eastern half of the US presents an unprecedented opportunity for improving source
characterization, modeling the regional velocity and attenuation structure, and mapping seismic zones down to
low magnitude thresholds.
We will provide an overview of TA installation plans, instrumentation, and data so that scientists and decision
makers are better prepared to capitalize on the unique opportunity presented by the TA moving through the
central and eastern US. We will provide examples of TA station performance, as well as examples of data
quality and seismic detection thresholds observed in the western US.