11 results for Wenchuan Earthquake
in Digital Commons - Michigan Tech
Abstract:
Large earthquakes may strongly influence the activity of volcanoes through static and dynamic processes. In this study, we quantify the static and dynamic stress changes on 27 volcanoes in Central America after the Mw 7.6 Costa Rica earthquake of 5 September 2012. Following this event, 8 volcanoes showed signs of activity. We calculated the static stress change due to the earthquake on hypothetical faults under these volcanoes with Coulomb 3.3. For the dynamic stress change, we computed synthetic seismograms to simulate the waveforms at these volcanoes. We then calculated the Peak Dynamic Stress (PDS) from the modeled peak ground velocities. The resulting values range from moderate to minor changes in stress (10⁻¹ to 10⁻² MPa), with the PDS values generally an order of magnitude larger than the static stress changes. Although these values are small, they may be enough to trigger a response from the volcanoes, and they are on the order of the stress changes implicated in many other studies of volcano and earthquake triggering by large earthquakes. This study provides insight into the poorly constrained mechanism of remote triggering.
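The abstract does not spell out how PDS is obtained from peak ground velocity, but a common plane-wave approximation in dynamic-triggering studies estimates it as shear modulus times PGV divided by shear-wave velocity. A minimal sketch under that assumption, with illustrative (assumed) crustal values, reproduces the order of magnitude quoted above:

```python
# Hypothetical sketch: peak dynamic stress (PDS) from peak ground velocity.
# A common plane-wave approximation is sigma_d ~ G * PGV / Vs, where G is
# the crustal shear modulus and Vs the shear-wave velocity. The default
# values below are illustrative assumptions, not taken from the study.

def peak_dynamic_stress(pgv_m_s, shear_modulus_pa=3.0e10, vs_m_s=3500.0):
    """Estimate peak dynamic stress (Pa) from peak ground velocity (m/s)."""
    return shear_modulus_pa * pgv_m_s / vs_m_s

# A PGV of 1 cm/s gives a stress change on the order of 0.1 MPa,
# consistent with the 10^-1 to 10^-2 MPa range cited in the abstract.
stress_mpa = peak_dynamic_stress(0.01) / 1e6
```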
Abstract:
Within Yellowstone National Park, Wyoming, the silicic Yellowstone volcanic field is one of the most active volcanic systems in the world. Although the last rhyolite eruption occurred around 70,000 years ago, Yellowstone is still believed to be volcanically active due to its high hydrothermal and seismic activity. The earthquake data used in this study cover the period between 1988 and 2010. Earthquake relocations and a set of 369 well-constrained, double-couple focal mechanism solutions were computed. Events were grouped according to location and time to investigate trends in faulting. The majority of the events have oblique, normal-faulting solutions. The overall direction of extension throughout the 0.64 Ma Yellowstone caldera is nearly ENE, consistent with the alignment of volcanic vents within the caldera, but detailed study revealed spatial and temporal variations. Stress-field solutions for different areas and time periods were calculated by inverting earthquake focal mechanisms. A well-resolved rotation of σ3 was found, from NNE-SSW near the Hebgen Lake fault zone to ENE-WSW near Norris Junction. In particular, the σ3 direction in the Norris Junction area changed over the years from ENE-WSW, as calculated by Waite and Smith (2004), to NNE-SSW, while the other σ3 directions are mostly unchanged over time. The Yellowstone caldera was subject to periods of net uplift and subsidence over the past century, explained in previous studies as the result of expanding or contracting sills at different depths. Based on the models used to explain these deformation periods, we investigated the relationship between variability in aseismic deformation, seismic activity, and faulting styles. Focal mechanisms and P and T axes were divided into temporal and depth intervals in order to identify spatial or temporal trends in deformation.
The presence of “chocolate tablet” structures, with composite dilational faults, was identified at many stages of the deformation history, both in the Norris Geyser Basin area and inside the caldera. Movement with a strike-slip component was found in a depth interval below a contracting sill, indicating the movement of magma towards the caldera.
Abstract:
The objective of this thesis is to outline a Performance-Based Engineering (PBE) framework to address the multiple hazards of Earthquake (EQ) and subsequent Fire Following Earthquake (FFE). Currently, fire codes in the United States are largely empirical and prescriptive in nature. The reliance on prescriptive requirements makes quantifying sustained fire damage difficult. Additionally, the empirical standards have resulted from furnace testing of individual members or individual assemblies, which has been shown to differ greatly from full structural system behavior. The very nature of fire behavior (ignition, growth, suppression, and spread) is fundamentally difficult to quantify due to the inherent randomness present in each stage of fire development. The study of interactions between earthquake damage and fire behavior is also in its infancy, with essentially no empirical testing results available. This thesis presents a literature review, a discussion and critique of the state of the art, and a summary of software currently being used to estimate losses due to EQ and FFE. A generalized PBE framework for EQ and subsequent FFE is presented, along with a matrix relating combined hazard probabilities to performance objectives and a table of the variables necessary to fully implement the proposed framework. Future research requirements and a summary are also provided, with discussions of the difficulties inherent in adequately describing the multiple hazards of EQ and FFE.
Abstract:
Light-frame wood buildings are widely built in the United States (U.S.). Natural hazards cause huge losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that the influence of uncertainties on the collapse risk of light-frame wood construction can be evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causing huge economic losses and threatening life safety. Limited study has been performed on the snow hazard in combination with seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing simulation results to weather records obtained from the National Climatic Data Center. The FPP model is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. Snow accumulation has a significant influence on the seismic losses of the building, and the Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction.
For homeowners and stakeholders, risk expressed in terms of economic losses is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. One is to assess the loss of a building subjected to mainshock-aftershock sequences; aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The framework is also applied to a wood building in the state of Washington to assess its loss under combined earthquake and snow loads. The proposed framework proves to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
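The abstract does not describe the internals of the FPP snow model. A filtered Poisson process for ground snow can, however, be sketched as storms arriving at Poisson-distributed times, each depositing a random load that then decays as the snow melts. All parameter values and names below are illustrative assumptions, not the study's calibration:

```python
import random

def simulate_snow_season(rate_per_day=0.15, mean_depth=0.05,
                         melt_per_day=0.01, days=180, seed=0):
    """Simulate daily ground snow depth (m) as a filtered Poisson process:
    storms arrive as Poisson events with exponentially distributed deposit
    depths, and the accumulated pack melts at a constant daily rate.
    All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    depth, history = 0.0, []
    for _ in range(days):
        # Approximate Poisson arrivals by thinning the day into 10 slices.
        storms = sum(1 for _ in range(10) if rng.random() < rate_per_day / 10)
        for _ in range(storms):
            depth += rng.expovariate(1.0 / mean_depth)  # random deposit
        depth = max(0.0, depth - melt_per_day)  # linear melt, floored at zero
        history.append(depth)
    return history
```

Unlike a Bernoulli (snow present / absent) model, the simulated depth carries storm-to-storm accumulation, which is what allows coincident snow and earthquake loads to be sampled realistically.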
Abstract:
Statistical analyses of temporal relationships between large earthquakes and volcanic eruptions suggest that seismic waves may trigger eruptions even over great (>1000 km) distances, although the causative mechanism is not well constrained. In this study, the relationship between large earthquakes and subtle changes in volcanic activity was investigated in order to gain greater insight into the relationship between dynamic stresses propagated by surface waves and volcanic response. Daily measurements from the Ozone Monitoring Instrument (OMI), onboard the Aura satellite, provide constraints on volcanic sulfur-dioxide (SO2) emission rates as a measure of subtle changes in activity. Time series of SO2 emission rates were produced from OMI data for thirteen persistently active volcanoes from 1 October 2004 to 30 September 2010. In order to quantify the effect of earthquakes at teleseismic distances, we modeled surface-wave amplitudes from the source mechanisms of moment magnitude (Mw) ≥7 earthquakes and calculated the Peak Dynamic Stress (PDS). We assessed the influence of earthquakes on volcanic activity in two ways: 1) by identifying increases in the SO2 time series data and looking for causative earthquakes, and 2) by examining the average emission rate before and after each earthquake. In the first, the SO2 time series for each volcano was used to calculate a baseline threshold for comparison with post-earthquake emission. Next, we generated a catalog of responses based on sustained SO2 emission increases above this baseline. Delay times between each SO2 response and each prior earthquake were analyzed using both the actual earthquake catalog and a randomly generated catalog of earthquakes. This process was repeated for each volcano. Despite varying multiple parameters, this analysis did not demonstrate a clear relationship between earthquake-generated PDS and SO2 emission.
However, the second analysis, which was based on the occurrence of large earthquakes, indicated a response at most volcanoes. Using the PDS calculations as a filtering criterion for the earthquake catalog, the SO2 mass for each volcano was analyzed in 28-day windows centered on the earthquake origin time. If the average SO2 mass after the earthquake was greater than an arbitrary percentage of the pre-earthquake mass, we identified the volcano as having responded to the event. This window analysis provided insight into what type of volcanic activity is more susceptible to triggering by dynamic stress. The volcanoes with very open systems included in this study (Ambrym, Gaua, Villarrica, Erta Ale, and Turrialba) showed a clear response to dynamic stress, while the volcanoes with more closed systems (Merapi, Semeru, Fuego, Pacaya, and Bagana) showed no response.
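The window test described above can be sketched directly: compare the mean SO2 mass in fixed-length windows before and after an earthquake's origin time in a daily series. The 25% threshold below is an illustrative stand-in for the abstract's "arbitrary percentage", and the function name is an assumption:

```python
def window_response(so2_series, eq_index, window=28, threshold=1.25):
    """Flag a volcanic 'response' if the mean SO2 mass in the window after
    an earthquake (index into a daily series) exceeds the pre-event mean
    by a chosen factor. threshold=1.25 (a 25% increase) is illustrative."""
    pre = so2_series[max(0, eq_index - window):eq_index]
    post = so2_series[eq_index + 1:eq_index + 1 + window]
    if not pre or not post:
        return False  # not enough data on one side of the event
    return (sum(post) / len(post)) > threshold * (sum(pre) / len(pre))
```

For example, a series that doubles at the event index is flagged as a response, while a flat series is not.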
Abstract:
Maderas volcano is a small andesitic stratovolcano located on the island of Ometepe, in Lake Nicaragua, Nicaragua, with no record of historic activity. Twenty-one samples were collected from lava flows at Maderas in 2010. Selected samples were analyzed for whole-rock geochemical data using ICP-AES and/or were dated using the 40Ar/39Ar method. The results of these analyses were combined with previously collected data from Maderas, as well as field observations, to determine the eruptive history of the volcano and create a geologic map. The results of the geochemical analyses indicate that Maderas is a typical Central American andesitic volcano, similar to other volcanoes in Nicaragua and Costa Rica and to its nearest neighbor, Concepción volcano. It differs from Concepción in one important way: higher incompatible element concentrations. Determined ages range from 176.8 ± 6.1 ka to 70.5 ± 6.1 ka. Based on these ages and the geomorphology of the volcano, which is characterized by a bisecting graben, it is proposed that Maderas experienced two clear generations of development with three separate phases of volcanism: initial build-up of the older cone, pre-graben lava flows, and post-graben lava flows. The ages also indicate that Maderas is markedly older than Concepción, which is historically active. The results were also analyzed with regard to geologic hazards. The 40Ar/39Ar ages indicate that Maderas has likely been inactive for tens of thousands of years and that the risk of future volcanic eruptions is low. However, earthquake, lahar, and landslide hazards exist for the communities around the volcano. The steep slopes of the eroded older cone are the most likely source of landslide and lahar hazards.
Abstract:
Water distribution systems are vital lifeline facilities, especially during recovery after earthquakes. This paper discusses a framework for the seismic serviceability of water systems that includes fragility evaluation of the water sources of distribution networks, and presents a case study of the performance of a water system under different levels of seismic hazard. The seismic serviceability of a water supply system modeled in EPANET is evaluated under various levels of seismic hazard. The assessment process is based on hydraulic analysis and Monte Carlo simulation, implemented with empirical fragility data provided by the American Lifeline Alliance (ALA, 2001) for both pipelines and water facilities. Represented by the Seismic Serviceability Index (Cornell University, 2008), the serviceability of the water distribution system is evaluated under earthquakes with return periods of 72 years, 475 years, and 2475 years. The system serviceability under each level of earthquake hazard is compared with and without considering the seismic fragility of the water source. The results show that the seismic serviceability of the water system decreases as the return period of the seismic hazard grows, and that it decreases further once the seismic fragility of the water source is considered. The results reveal the importance of considering the seismic fragility of water sources, and the growing dependence of system performance on the seismic resilience of the water source under severe earthquakes.
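A real analysis would rerun EPANET hydraulic simulations with randomly damaged pipes and facilities in each realization. The Monte Carlo logic behind a serviceability index can nonetheless be sketched with a deliberately simplified toy model, in which each demand node's supply fails independently with a fragility-derived probability (all names and values here are illustrative assumptions, not the paper's method):

```python
import random

def serviceability_index(failure_prob, n_nodes=50, n_sims=1000, seed=1):
    """Monte Carlo sketch of a serviceability index: the average fraction
    of demand nodes still served, where each node's supply path fails
    independently with probability `failure_prob` (a stand-in for the
    fragility-and-hydraulics computation a real study would perform)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        served = sum(1 for _ in range(n_nodes)
                     if rng.random() > failure_prob)
        total += served / n_nodes
    return total / n_sims
```

Raising `failure_prob` for longer return periods, or adding a source-failure term, reproduces the qualitative trend reported above: serviceability drops as hazard grows and drops further when source fragility is included.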
Abstract:
How can we calculate earthquake magnitudes when the signal is clipped and over-run? When a volcano is very active, the seismic record may saturate (i.e., the full amplitude of the signal is not recorded) or be over-run (i.e., the end of one event is covered by the start of a new event). The duration, and sometimes the amplitude, of an earthquake signal are necessary for determining event magnitudes; thus, it may be impossible to calculate earthquake magnitudes when a volcano is very active. This problem is most likely to occur at volcanoes with limited networks of short-period seismometers. This study outlines two methods for calculating earthquake magnitudes when events are clipped and over-run. The first method entails modeling the shape of earthquake codas as a power-law function and extrapolating the duration from the decay of the function. The second method draws relations between the clipped duration (i.e., the length of time a signal is clipped) and the full duration. These methods allow magnitudes to be determined to within 0.2 to 0.4 magnitude units. This error is within the range of analyst hand-picks and within the acceptable limits of uncertainty for quickly quantifying volcanic energy release during volcanic crises. Most importantly, these estimates can be made when data are clipped or over-run. These methods were developed with data from the initial stages of the 2004-2008 eruption at Mount St. Helens, a well-studied volcano with many instruments placed at varying distances from the vent. This makes the 2004-2008 eruption a good case for calibrating and refining methodologies that can be applied to volcanoes with limited networks.
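The first method, fitting a power-law coda decay A(t) = A0 * t^(-p) and extrapolating the time at which the coda falls to the noise floor, can be sketched as a least-squares fit in log-log space. The implementation details below are assumptions for illustration, not the study's code:

```python
import math

def extrapolate_duration(t_obs, a_obs, noise_level):
    """Fit a power-law coda decay A(t) = A0 * t**(-p) to observed
    (time, amplitude) pairs by least squares in log-log space, then
    extrapolate the time at which the coda reaches `noise_level`.
    That time serves as the full event duration for magnitude
    estimation when the record is clipped or over-run."""
    xs = [math.log(t) for t in t_obs]
    ys = [math.log(a) for a in a_obs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    log_a0 = my - slope * mx  # intercept of the log-log fit; slope = -p
    p = -slope
    # Solve A0 * t**(-p) = noise_level for t:
    return math.exp((log_a0 - math.log(noise_level)) / p)
```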
Abstract:
Several deterministic and probabilistic methods are used to evaluate the probability of seismically induced liquefaction of a soil. Probabilistic models usually carry uncertainty in the model itself and in the parameters used to develop it, and these model uncertainties vary from one statistical model to another. Most of the model uncertainties are epistemic and can be addressed through appropriate knowledge of the statistical model. One such epistemic model uncertainty in evaluating liquefaction potential with a probabilistic model such as logistic regression is sampling bias: the difference between the class distribution in the sample used to develop the statistical model and the true population distribution of liquefaction and non-liquefaction instances. Recent studies have shown that sampling bias can significantly affect the probability predicted by a statistical model. To address this epistemic uncertainty, a new approach was developed for evaluating the probability of seismically induced soil liquefaction, in which a logistic regression model was used in combination with the Hosmer-Lemeshow statistic. This approach was used to estimate the population (true) distribution of liquefaction to non-liquefaction instances in the most up-to-date case histories based on the standard penetration test (SPT) and cone penetration test (CPT). In addition, other model uncertainties, such as the distribution and significance of the explanatory variables, were addressed using the Kolmogorov-Smirnov (KS) test and the Wald statistic, respectively. Based on the estimated population distribution, logistic regression equations were proposed to calculate the probability of liquefaction for both the SPT- and CPT-based case histories, and the proposed probability curves were compared with existing probability curves based on SPT and CPT case histories.
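A logistic-regression liquefaction probability with a sampling-bias correction can be sketched as follows. Shifting the intercept by the log odds ratio of the sample rate to the population rate is a standard prior-correction for case-control sampling; the abstract does not specify its exact adjustment, and all coefficients here are illustrative assumptions:

```python
import math

def liquefaction_probability(x, beta0, betas,
                             sample_rate=None, population_rate=None):
    """Logistic-regression probability of liquefaction for explanatory
    variables x, with an optional sampling-bias correction: when the
    share of liquefaction cases in the training sample differs from the
    (estimated) population share, the intercept is shifted by the log
    odds ratio of the two rates. Coefficients are illustrative."""
    b0 = beta0
    if sample_rate is not None and population_rate is not None:
        # Standard prior-correction of the intercept (an assumption here,
        # not necessarily the thesis's exact adjustment).
        b0 -= math.log((sample_rate / (1 - sample_rate)) /
                       (population_rate / (1 - population_rate)))
    z = b0 + sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-z))
```

With a balanced 50/50 training sample but a population in which only 25% of instances liquefy, the correction pulls an otherwise even-odds prediction down to the population rate.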
Abstract:
Sustainable development has only recently begun to examine existing infrastructure, and a key aspect of this is hazard mitigation. Examining buildings from a sustainability perspective requires an understanding of a building's life-cycle environmental costs, including the environmental impacts induced by earthquake damage. Damage repair leads to additional material and energy consumption and thus to harmful environmental impacts. Merging the results of a seismic evaluation and a life-cycle analysis for buildings gives a novel outlook on sustainable design decisions. To evaluate the environmental impacts caused by buildings, the long-term impacts accrued throughout a building's lifetime and the impacts associated with damage repair need to be quantified. A method and literature review for completing this examination has been developed and is discussed. Using the software packages Athena and HAZUS-MH, this study evaluated the performance of steel and concrete buildings, considering their life-cycle assessments and earthquake resistance. It was determined that the code design level greatly affects building repair and damage estimates. This study presented two case-study buildings and found specific results obtained under several pre-made assumptions. Future research recommendations are provided to make this methodology more useful in real-world applications. Examining the costs and environmental impacts of a building through a cradle-to-grave analysis and seismic damage assessment will help reduce material consumption and construction activity both before and after an earthquake.