966 results for quantifying


Relevance: 20.00%

Abstract:

Earthquake early warning (EEW) systems have been developing rapidly over the past decade. The Japan Meteorological Agency (JMA) operated an EEW system during the 2011 M9 Tohoku earthquake in Japan, which raised awareness of EEW systems around the world. While long-term earthquake prediction still faces many challenges before becoming practical, the availability of short-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time, and expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit the scope for human intervention to activate mitigation actions, and they must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.

Existing EEW systems are often based on a deterministic approach and typically assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. Building on an existing deterministic model, this study develops a probability-based EEW algorithm that extends the system to concurrent events, which are often observed during the aftershock sequence following a large earthquake.
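
A toy sketch of the probabilistic idea behind handling concurrent events: a new station pick is assigned a posterior probability of belonging to each candidate event, rather than being forced into a single-event hypothesis. The predicted arrival times and pick-time uncertainty below are illustrative assumptions, not the algorithm developed in the study:

```python
import numpy as np
from scipy.stats import norm

# Two concurrent event hypotheses predict different P-wave arrival times at a
# station; the observed pick is associated probabilistically with each event.
pick_time = 12.4                                  # observed trigger time (s), illustrative
predicted = {"event A": 12.1, "event B": 14.8}    # predicted arrivals per hypothesis (s)
prior = {"event A": 0.5, "event B": 0.5}          # prior association probabilities
sigma_pick = 0.5                                  # assumed pick-time uncertainty (s)

likelihood = {e: norm.pdf(pick_time, loc=t, scale=sigma_pick) for e, t in predicted.items()}
evidence = sum(prior[e] * likelihood[e] for e in prior)
posterior = {e: prior[e] * likelihood[e] / evidence for e in prior}
print(posterior)   # the pick is far more consistent with event A here
```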

To overcome the challenges of uncertain information and short lead time in EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions in EEW mitigation applications. A cost-benefit model that captures the uncertainties in both the EEW information and the decision process is used. This approach, called Performance-Based Earthquake Early Warning, builds on the PEER Performance-Based Earthquake Engineering method. Surrogate models are suggested to improve computational efficiency, and new models are proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value-of-information model quantifies the potential value of delaying the activation of a mitigation action in anticipation of reduced uncertainty in the next EEW update. Two practical examples, evacuation alerts and elevator control, illustrate the ePAD framework. Potential advanced EEW applications, such as multiple-action decisions and the synergy of EEW with structural health monitoring systems, are also discussed.
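
A minimal sketch of the expected-cost comparison at the heart of such a decision framework. The cost figures, threshold, and Gaussian intensity model below are illustrative assumptions, not the ePAD implementation:

```python
from scipy.stats import norm

# Hypothetical costs: acting (e.g., stopping an elevator) has a fixed cost;
# failing to act before damaging shaking incurs a much larger cost.
C_ACTION = 1.0        # cost of activating the mitigation action
C_DAMAGE = 50.0       # cost if shaking exceeds the threshold with no action taken
IM_THRESHOLD = 0.3    # shaking intensity (e.g., PGA in g) above which damage occurs

def expected_cost_act():
    return C_ACTION

def expected_cost_no_action(im_mean, im_sigma):
    # Probability that shaking exceeds the damage threshold, given the
    # uncertain EEW intensity prediction (normal assumption for illustration).
    p_exceed = 1.0 - norm.cdf(IM_THRESHOLD, loc=im_mean, scale=im_sigma)
    return p_exceed * C_DAMAGE

# Decision rule: act when the expected cost of inaction exceeds that of acting.
# A value-of-information extension would also weigh waiting for the next,
# less uncertain EEW update against the lead time lost by delaying.
im_mean, im_sigma = 0.25, 0.10   # current EEW intensity prediction and uncertainty
if expected_cost_no_action(im_mean, im_sigma) > expected_cost_act():
    print("Activate mitigation action")
else:
    print("Do not act; await the next EEW update")
```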

Relevance: 20.00%

Abstract:

This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations of a suite of San Andreas fault earthquakes. Two key advancements in this work are (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an approach to broadband ground motion simulation that is free of artificial corrections. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis.

As a first step, kinematic source inversions of past earthquakes in the magnitude range 6–8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from UCERF (the Uniform California Earthquake Rupture Forecast) to the simulated scenario earthquakes. We illustrate the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California.
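
One plausible reading of such a redistribution, sketched below: each forecast rupture's 30-year probability is split among the scenario earthquakes with weights based on spatial proximity and similarity in moment release (the weighting functions, length scale, and input values are illustrative assumptions, not the thesis method):

```python
import numpy as np

# Hypothetical inputs: forecast ruptures with 30-year probabilities, and
# simulated scenarios, each with an along-fault location (km) and moment (N*m).
forecast = [
    {"loc": 50.0,  "moment": 1.3e19, "prob_30yr": 0.02},
    {"loc": 120.0, "moment": 8.9e19, "prob_30yr": 0.01},
]
scenarios = [
    {"loc": 60.0,  "moment": 1.1e19},
    {"loc": 110.0, "moment": 7.0e19},
]

def redistribute(forecast, scenarios, length_scale=50.0):
    """Split each forecast rupture's probability among scenario earthquakes,
    weighting by proximity and moment similarity."""
    probs = np.zeros(len(scenarios))
    for f in forecast:
        w = np.array([
            np.exp(-abs(f["loc"] - s["loc"]) / length_scale)      # proximity
            * np.exp(-abs(np.log10(f["moment"] / s["moment"])))   # moment similarity
            for s in scenarios
        ])
        probs += f["prob_30yr"] * w / w.sum()   # each rupture's probability is conserved
    return probs

print(redistribute(forecast, scenarios))
```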

Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superimposing short-period (0.2 s–2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) seismograms computed from the kinematic source models using the spectral element method.
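
A minimal sketch of this kind of hybrid broadband combination: high-pass-filtered empirical Green's function (EGF) synthetics are added to low-pass-filtered spectral element (SEM) synthetics at a 0.5 Hz (2.0 s) crossover. The signals, filter type, and filter order are placeholders, not the thesis processing chain:

```python
import numpy as np
from scipy.signal import butter, filtfilt

dt = 0.01                     # sample interval (s)
fc = 0.5                      # crossover frequency (Hz), i.e., 2.0 s period
t = np.arange(0, 60, dt)
egf_synthetic = np.random.randn(t.size)      # stand-in for short-period EGF synthetics
sem_synthetic = np.sin(2 * np.pi * 0.1 * t)  # stand-in for long-period SEM synthetics

nyq = 0.5 / dt
b_lo, a_lo = butter(4, fc / nyq, btype="low")
b_hi, a_hi = butter(4, fc / nyq, btype="high")

# Zero-phase filtering avoids introducing relative time shifts between the bands.
broadband = filtfilt(b_lo, a_lo, sem_synthetic) + filtfilt(b_hi, a_hi, egf_synthetic)
```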

Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear time-history analyses are conducted for several variants of an 18-story steel braced frame building designed for three soil types under the 1994 and 1997 Uniform Building Code provisions. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with the 30-year occurrence probabilities of the San Andreas scenario earthquakes using the PEER performance-based earthquake engineering framework to determine the probability of exceeding these limit states over the next 30 years.
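
The final combination step can be sketched as follows, assuming the scenario earthquakes occur independently (the probabilities and response flags below are placeholders; the thesis uses 60 scenarios and 636 sites):

```python
import numpy as np

# Combine per-scenario structural results with 30-year scenario occurrence
# probabilities to estimate the probability of exceeding a limit state
# (here, Collapse Prevention) at one site over the next 30 years.
prob_30yr = np.array([0.010, 0.004, 0.020])   # 30-yr occurrence probability per scenario
exceeds_cp = np.array([True, False, True])    # did the model exceed CP in each scenario?

# The limit state is exceeded unless none of the exceedance-causing scenarios occurs.
p_exceed_30yr = 1.0 - np.prod(1.0 - prob_30yr[exceeds_cp])
print(f"P(exceed CP in 30 years) = {p_exceed_30yr:.4f}")
```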

Relevance: 20.00%

Abstract:

Few credible source models are available from past large-magnitude earthquakes, so a stochastic source model generation algorithm becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures, as imaged in laboratory earthquakes, with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0–8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.9 earthquake and a kinematic finite-source inversion of an equivalent-magnitude past earthquake on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit of the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake, and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, the rupture speed, and the rise time) is studied.
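
For context, a generic spectral-filtering recipe for generating one ingredient of a stochastic source, the slip distribution, is sketched below. This is a common approach in the literature, not the physics-based algorithm developed here; the grid, correlation lengths, spectral decay, and target moment are all illustrative assumptions:

```python
import numpy as np

# Generate a random slip field on a fault grid by filtering white noise with a
# von Karman-like wavenumber spectrum, then scale it to a target seismic moment.
nx, nz = 256, 64            # along-strike and down-dip grid points
dx = 0.5                    # grid spacing (km)
ax, az = 20.0, 10.0         # correlation lengths (km), illustrative
mu = 3.0e10                 # shear modulus (Pa)
target_m0 = 8.0e20          # target seismic moment (N*m), roughly Mw 7.9

kx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
kz = np.fft.fftfreq(nz, d=dx) * 2 * np.pi
KX, KZ = np.meshgrid(kx, kz, indexing="ij")
spectrum = (1.0 + (KX * ax) ** 2 + (KZ * az) ** 2) ** -1.0   # von Karman-like decay

noise = np.fft.fft2(np.random.randn(nx, nz))
slip = np.real(np.fft.ifft2(noise * np.sqrt(spectrum)))
slip -= slip.min()                                   # enforce non-negative slip
cell_area = (dx * 1e3) ** 2                          # cell area in m^2
slip *= target_m0 / (mu * cell_area * slip.sum())    # scale so M0 = mu * sum(slip * area)
```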

Under plausible San Andreas fault earthquakes over the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance-based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories are generated at 636 sites in southern California for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range 6.0–8.0, are assumed to occur at five locations on the southern section of the fault, with two unilateral rupture propagation directions considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings, hypothetically located at each of the 636 sites, under three-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. From these results, the probability of the structural response exceeding the Immediate Occupancy (IO), Life Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next 30 years is evaluated.

Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and peak ground displacement (PGD) in Los Angeles and surrounding basins, due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault, are determined using Bayesian model class identification. Simulated ground motions at sites 55–75 km from the source, from a suite of 60 earthquakes (Mw 6.0–8.0) rupturing primarily the mid-section of the San Andreas fault, provide the PGV and PGD data.
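
The flavor of Bayesian model class comparison can be illustrated by ranking candidate probability models for PGV data with an evidence approximation. The candidate distributions, synthetic data, and use of BIC below are illustrative assumptions; the thesis's model classes and evidence calculation may differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pgv = rng.lognormal(mean=3.0, sigma=0.6, size=500)   # stand-in for simulated PGV (cm/s)

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(pgv, floc=0)                   # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(pgv, *params))
    k = len(params)
    bic = k * np.log(pgv.size) - 2.0 * loglik        # lower BIC ~ higher evidence
    print(f"{name}: BIC = {bic:.1f}")
```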

Relevance: 20.00%

Abstract:

Quantifying scientific uncertainty when setting total allowable catch limits for fish stocks is a major challenge, but it has been a requirement in the United States since changes to national fisheries legislation. Multiple sources of error are readily identifiable, including estimation error, model specification error, forecast error, and errors associated with the definition and estimation of reference points. Our focus here, however, is to quantify the influence of estimation error and model specification error on assessment outcomes. These are fundamental sources of uncertainty in developing scientific advice on appropriate catch levels, and although a study of these two factors may not be all-inclusive, it is feasible with the available information. For data-rich stock assessments conducted on the U.S. west coast, we report approximate coefficients of variation in terminal biomass estimates obtained by inverting the assessment model's Hessian matrix (i.e., the asymptotic standard error). To summarize variation “among” stock assessments, as a proxy for model specification error, we characterize variation among multiple historical assessments of the same stock. Results indicate that for 17 groundfish and coastal pelagic species, the mean coefficient of variation of terminal biomass is 18%. In contrast, the coefficient of variation ascribable to model specification error (i.e., pooled among-assessment variation) is 37%. We show that if managers adopt a precautionary probability of overfishing equal to 0.40 and only model specification error is considered, a 9% reduction in the overfishing catch level is indicated.
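
The reported 9% reduction is consistent with the standard P* buffer calculation under a lognormal assumption for the overfishing level, sketched below (a common formulation; the paper's exact procedure may differ):

```python
import numpy as np
from scipy.stats import norm

# P* buffer: the allowable catch is the P*-quantile of a lognormal distribution
# centered on the overfishing limit (OFL), with spread set by the assessment CV.
p_star = 0.40          # managers' accepted probability of overfishing
cv = 0.37              # CV ascribed to model specification error
sigma = np.sqrt(np.log(1.0 + cv**2))          # lognormal sigma implied by the CV
buffer = np.exp(norm.ppf(p_star) * sigma)     # catch multiplier relative to the OFL
print(f"buffer = {buffer:.3f} -> {100 * (1 - buffer):.0f}% reduction")  # ~9%
```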

Relevance: 20.00%

Abstract:

The diet and daily ration of the shortfin mako (Isurus oxyrinchus) in the northwest Atlantic were re-examined to determine whether fluctuations in prey abundance and availability are reflected in these two biological variables. During the summers of 2001 and 2002, stomach content data were collected from fishing tournaments along the northeast coast of the United States. These data were quantified using four diet indices and compared with index calculations from historical diet data collected from 1972 through 1983. Bluefish (Pomatomus saltatrix) were the predominant prey in both the 1972–83 and 2001–02 diets, accounting for 92.6% of the current diet by weight and 86.9% of the historical diet by volume. From the 2001–02 diet data, daily ration was estimated, indicating that shortfin makos must consume roughly 4.6% of their body weight per day to meet energetic demands. The daily energetic requirement was derived using a calculated energy content of 4909 kJ/kg for the current diet. Based on the proportional energy of bluefish in the diet by weight, an average shortfin mako consumes roughly 500 kg of bluefish per year off the northeast coast of the United States. The results are discussed in relation to the potential effect of intense shortfin mako predation on bluefish abundance in the region.
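
The annual consumption figure follows from straightforward arithmetic, sketched below. The body mass is an illustrative assumption back-calculated to be consistent with the ~500 kg/year result, not a value reported in the abstract:

```python
# Back-of-the-envelope check of the annual bluefish consumption estimate.
body_mass_kg = 32.0          # assumed average shortfin mako body mass (illustrative)
daily_ration = 0.046         # 4.6% of body weight consumed per day
bluefish_fraction = 0.926    # bluefish share of the diet by weight

annual_bluefish_kg = body_mass_kg * daily_ration * 365 * bluefish_fraction
print(f"~{annual_bluefish_kg:.0f} kg of bluefish per year")   # ~500 kg with these inputs
```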

Relevance: 20.00%

Abstract:

The gregarine (Nematopsis spp.) infestation of Penaeus vannamei in a commercial shrimp pond is discussed, focusing on quantifying the parasites and on attempts to control the infestation.

Relevance: 20.00%

Abstract:

Anthropogenic climate and land-use change are leading to irreversible losses of global biodiversity, upon which ecosystem functioning depends. Because the well-being of all species depends on ecosystem goods and services, humanity must determine how much net primary productivity (NPP) may be appropriated, and how much carbon emitted, so as not to adversely affect this and future generations. In 2005, humanity ought to have appropriated only 9.72 Pg C of NPP, representing a factor 2.50, or 59.93%, reduction in human-appropriated NPP (HANPP) for that year. Concurrently, the carbon cycle would have been balanced with a factor 1.26, or 20.84%, reduction from 7.60 Gt C/year to 5.70 Gt C/year, a return to 1986 levels. This limit is in keeping with the category III stabilization scenario of the Intergovernmental Panel on Climate Change. Projecting population growth to 2030 and its associated basic food requirements, the maximum HANPP remains at 9.74 ± 0.02 Pg C/year. This time-invariant HANPP can provide equitably for the current global population of 6.51 billion only at the current average consumption of 1.49 t C per capita, calling into question the sustainability of developing countries striving for the high-consuming country level of 5.85 t C per capita, and the implications for equitable resource distribution.
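
The per-capita figure follows directly from the stated HANPP ceiling and population, as this arithmetic check shows:

```python
# Consistency check: per-capita carbon allowance implied by the HANPP ceiling.
hanpp_pg_c = 9.72          # maximum sustainable HANPP (Pg C/year); 1 Pg = 1e9 t
population = 6.51e9        # global population in 2005

per_capita_t_c = hanpp_pg_c * 1e9 / population
print(f"{per_capita_t_c:.2f} t C per capita")   # ~1.49, matching the stated figure
```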

Relevance: 20.00%

Abstract:

Stable isotope (SI) values of carbon (δ13C) and nitrogen (δ15N) are useful for determining trophic connectivity between species within an ecosystem, but interpretation of these data involves important assumptions about the sources of intrapopulation variability. We compared intrapopulation variability in δ13C and δ15N for an estuarine omnivore, the Spotted Seatrout (Cynoscion nebulosus), to test these assumptions and to assess the utility of SI analysis for delineating the connectivity of this species with others in estuarine food webs. Both δ13C and δ15N values showed patterns of enrichment in fish caught from coastal to offshore sites and as a function of fish size. Results for δ13C were consistent between liver and muscle tissue, but liver δ15N showed a negative bias relative to muscle that increased with the absolute δ15N value. Natural variability in both isotopes was 5–10 times higher than that observed in laboratory populations, indicating that environmentally driven intrapopulation variability is detectable, particularly after individual bias is removed through sample pooling. These results corroborate the utility of SI analysis for examining the position of Spotted Seatrout in an estuarine food web. On the basis of these results, we conclude that interpretation of SI data in fishes should account for measurable and ecologically relevant intrapopulation variability for each species and system on a case-by-case basis.