977 results for loss, PBEE, PEER method, earthquake engineering
Abstract:
The performance-based approach in earthquake engineering is a design methodology that explicitly includes the performance of the building among the design criteria. Among second-generation PBEE (Performance-Based Earthquake Engineering) methods, the one proposed by PEER (Pacific Earthquake Engineering Research Center) is the most widely used. In it, the performance of the building under study is assessed quantitatively in terms of the "3 D's" (dollars, deaths, downtime), quantities of direct interest to the end user. The method consists of four steps, independent of one another until the final synthesis: hazard analysis, structural analysis, damage analysis, and loss analysis. The final result is the loss curve, which assigns to every possible economic loss resulting from the seismic event a probability of exceedance over the reference time period. After presenting the PEER method, it is applied to a case study, namely a multi-story, four-bay, reinforced concrete plane frame designed according to the 1992 code provisions. The hazard analysis relies on the hazard maps available on the INGV website, while the structural analysis is carried out with the open-source software OpenSees. The fragility and loss functions are developed with reference to the scientific literature, in particular fib Bulletin 68, "Probabilistic performance-based seismic design". This work focuses solely on the estimation of economic losses, leaving aside the other two decision variables. At the end of the procedure, a sensitivity analysis is carried out to investigate which parameters most influence the loss curve. Given the hazard curve, the EDP(IM) relationship and the ultimate deformation at collapse turn out to be the most influential on the outcome of the analysis.
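For context, the loss curve described above is the output of the PEER framing integral, which chains the four analysis stages (hazard, structural, damage, and loss analysis). In the conventional PEER notation (IM = intensity measure, EDP = engineering demand parameter, DM = damage measure, DV = decision variable), and not quoting the thesis itself, the mean annual rate of exceeding a decision-variable value dv reads

\lambda(dv) \;=\; \iiint G(dv \mid dm)\,\bigl|\mathrm{d}G(dm \mid edp)\bigr|\,\bigl|\mathrm{d}G(edp \mid im)\bigr|\,\bigl|\mathrm{d}\lambda(im)\bigr| ,

where G(\cdot \mid \cdot) denotes a complementary cumulative distribution conditioned on the previous stage and \lambda(im) is the seismic hazard curve.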
Abstract:
This thesis presents a simplified state-variable method for solving the nonstationary response of linear MDOF systems subjected to a modulated stationary excitation in both the time and frequency domains. The resulting covariance matrix and evolutionary spectral density matrix of the response can be expressed as the product of a constant system matrix and a time-dependent matrix; the latter can be evaluated explicitly for most envelope functions currently used in engineering. The stationary correlation matrix of the response is recovered by taking the limit of the covariance response when a unit-step envelope is used. Reliability analysis can then be performed based on the first two moments of the response.
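As a sketch of the state-variable formulation described above (generic symbols, not necessarily the notation of the thesis): writing the MDOF equations of motion in first-order form \dot{\mathbf{z}}(t) = \mathbf{A}\,\mathbf{z}(t) + \mathbf{b}\,e(t)\,w(t), where w(t) is white noise with E[w(t)w(s)] = D\,\delta(t-s) and e(t) is the modulating envelope, the response covariance \mathbf{P}(t) = E[\mathbf{z}\mathbf{z}^{\mathsf{T}}] satisfies

\dot{\mathbf{P}}(t) \;=\; \mathbf{A}\,\mathbf{P}(t) + \mathbf{P}(t)\,\mathbf{A}^{\mathsf{T}} + D\,e^{2}(t)\,\mathbf{b}\,\mathbf{b}^{\mathsf{T}} ,

whose solution separates into a constant system matrix acting on a time-dependent matrix determined by the envelope; setting e(t) to a unit step and letting t tend to infinity recovers the stationary Lyapunov equation \mathbf{A}\mathbf{P}_{\infty} + \mathbf{P}_{\infty}\mathbf{A}^{\mathsf{T}} + D\,\mathbf{b}\,\mathbf{b}^{\mathsf{T}} = \mathbf{0}.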
The method presented facilitates obtaining explicit solutions for general linear MDOF systems and is flexible enough to be applied to different stochastic models of excitation, such as stationary, modulated stationary, filtered stationary, and filtered modulated stationary models, as well as their stochastic equivalents, including the random pulse train model, filtered shot noise, and some ARMA models used in earthquake engineering. The approach may also be readily incorporated into finite element codes for the random vibration analysis of linear structures.
A set of explicit solutions for the response of simple linear structures subjected to modulated white-noise earthquake models with four different envelopes is presented as an illustration. In addition, the method has been applied to three selected topics of interest in earthquake engineering: nonstationary analysis of primary-secondary systems with classical or nonclassical damping, soil layer response and the related structural reliability analysis, and the effect of vertical components on the seismic performance of structures. For all three cases, explicit solutions are obtained, the dynamic characteristics of the structures are investigated, and suggestions are given for the aseismic design of structures.
Abstract:
At head of cover title: Generalized computer program.
Abstract:
This paper builds on previous work (Clark, 2009; Clark & Andrews, 2011, 2014) to continue the debate around a seemingly universal question: "How can educational theory be applied to engineering education in such a way as to make the subject more accessible and attractive to students?" It argues that there are three key elements to student success: Relationships, Variety & Synergy (RVS). By further examining the purposefully developed bespoke learning and teaching approach constructed around these three elements, the discourse in this paper links educational theory to engineering education and, in doing so, further develops the argument for introducing a purposefully designed pedagogic approach in engineering education.
Abstract:
Based on the emergent findings of a pilot study which examined the issues around introducing Peer Mentoring into an Engineering School, this paper, which is very much a 'work in progress', describes and discusses results from the first year of what will be a three-year exploratory study. Focusing on three distinctive concepts integral to the student experience, Relationships, Variety and Synergy, the study follows an Action Research design in that it aims to find a realistic and workable solution to issues of attrition within the Engineering School in which the Project and Study are set. Starting with the research question "Does Peer Mentoring improve engineering students' transition into university?", the Pilot Project and Study will run for three years, each year building on the lessons of the previous year.
Abstract:
Earthquake early warning (EEW) systems have been developing rapidly over the past decade. The Japan Meteorological Agency (JMA) had an EEW system operating during the 2011 M9 Tohoku earthquake in Japan, which increased awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges before it becomes practical, the availability of short-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time, and the expected shaking intensity around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit the scope for human intervention in activating mitigation actions, and they must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.
Existing EEW systems are often based on a deterministic approach and typically assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends the EEW system to the case of concurrent events, which are often observed during the aftershock sequence following a large earthquake.
To overcome the challenges of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that can capture the uncertainties in the EEW information and in the decision process is used. This approach, called Performance-Based Earthquake Early Warning, is based on the PEER Performance-Based Earthquake Engineering method. The use of surrogate models is suggested to improve computational efficiency, and new models are proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value-of-information model is used to quantify the potential value of delaying the activation of a mitigation action in exchange for a possible reduction in the uncertainty of the EEW information at the next update. Two practical examples, evacuation alerts and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as multiple-action decisions and the synergy of EEW with structural health monitoring systems, are also discussed.
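As an illustration of the cost-benefit decision rule underlying this kind of framework (a generic sketch with assumed parameter values and a lognormal intensity forecast, not the actual ePAD implementation), a mitigation action can be triggered when its expected benefit, integrated over the uncertain predicted intensity, exceeds its cost:

import numpy as np
from scipy import stats

def expected_loss(median_im, beta_im, frag_median, frag_beta, loss_given_damage):
    # Integrate P(damage | IM) against the forecast IM density on a discrete grid.
    ims = np.linspace(0.01, 5.0, 500)               # intensity grid (e.g., PGA in g)
    d_im = ims[1] - ims[0]
    f_im = stats.lognorm.pdf(ims, s=beta_im, scale=median_im)        # forecast IM density
    p_dmg = stats.lognorm.cdf(ims, s=frag_beta, scale=frag_median)   # fragility curve
    return loss_given_damage * np.sum(p_dmg * f_im) * d_im

def trigger_action(median_im, beta_im, action_cost, loss_reduction_factor,
                   frag_median=0.4, frag_beta=0.6, loss_given_damage=1.0e6):
    # Activate the mitigation action only if its expected benefit exceeds its cost.
    loss_no_action = expected_loss(median_im, beta_im, frag_median, frag_beta, loss_given_damage)
    expected_benefit = (1.0 - loss_reduction_factor) * loss_no_action
    return expected_benefit > action_cost

# Example (illustrative numbers): an EEW update predicts a median PGA of 0.3 g with large
# uncertainty (beta = 0.8); stopping an elevator costs the equivalent of $500 and is assumed
# to halve the expected loss.
print(trigger_action(median_im=0.3, beta_im=0.8, action_cost=500.0, loss_reduction_factor=0.5))

A lead-time or value-of-information term of the kind described above would enter this comparison as an additional cost, or as an option to postpone the decision until the next EEW update.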
Abstract:
Few credible source models are available from large-magnitude past earthquakes. A stochastic source model generation algorithm thus becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures as imaged in laboratory earthquakes with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.9 earthquake with a kinematic finite-source inversion of a past earthquake of equivalent magnitude on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as against the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake, and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and rise time) is studied.
Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance-based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range 6.0-8.0, are assumed to occur at five locations on the southern section of the fault, and two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings hypothetically located at each of the 636 sites under three-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding the Immediate Occupancy (IO), Life Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.
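The final combination step can be sketched as follows (illustrative numbers only, not the thesis code): treating the scenario earthquakes as independent events with their assigned 30-year occurrence probabilities, the 30-year probability that the building at a given site exceeds a performance level is one minus the product of the per-scenario non-exceedance probabilities.

import numpy as np

# p_occur[i] : 30-year occurrence probability assigned to scenario i (placeholder values)
# exceeds[i] : 1 if the computed structural response for scenario i exceeds the performance
#              level (e.g., Collapse Prevention) at this site, else 0 (placeholder values)
p_occur = np.array([0.012, 0.004, 0.020, 0.001])
exceeds = np.array([1, 0, 1, 0])

# Assuming independent scenarios, the 30-year exceedance probability is
# 1 - product over scenarios of (1 - P(scenario occurs and response exceeds the level)).
p_exceed_30yr = 1.0 - np.prod(1.0 - p_occur * exceeds)
print(p_exceed_30yr)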
Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and peak ground displacement (PGD) in Los Angeles and the surrounding basins, due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault, are determined using Bayesian model class identification. Simulated ground motions at sites located 55-75 km from the source, from a suite of 60 earthquakes (Mw 6.0-8.0) primarily rupturing the mid-section of the San Andreas fault, provide the PGV and PGD data.
Abstract:
This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations of a suite of San Andreas earthquakes. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an approach to broadband ground motion simulation that is free of artificial corrections. The work can be divided into the following parts: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis.
As a first step, kinematic source inversions of past earthquakes in the magnitude range 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from the Uniform California Earthquake Rupture Forecast (UCERF) to the simulated scenario earthquakes. We illustrate the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California.
Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s-2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) seismograms computed from the kinematic source models using the spectral element method, thereby producing broadband seismograms.
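The hybrid construction described above amounts to high-pass filtering the short-period empirical Green's function synthetics, low-pass filtering the long-period spectral element synthetics at a common crossover period (2 s), and summing the two. A generic sketch of such a combination (not the actual processing chain of the thesis; function and variable names are illustrative) is:

import numpy as np
from scipy.signal import butter, filtfilt

def combine_broadband(sem_trace, egf_trace, dt, crossover_period=2.0, order=4):
    # Combine a long-period spectral-element trace with a short-period EGF trace by
    # low/high-pass filtering both at a common crossover frequency and summing.
    # Traces are assumed to share the same length, time step, and origin time.
    fc = 1.0 / crossover_period             # crossover frequency in Hz
    fnyq = 0.5 / dt                         # Nyquist frequency
    b_lo, a_lo = butter(order, fc / fnyq, btype="low")
    b_hi, a_hi = butter(order, fc / fnyq, btype="high")
    low = filtfilt(b_lo, a_lo, sem_trace)   # keep periods > 2 s from the SEM synthetic
    high = filtfilt(b_hi, a_hi, egf_trace)  # keep periods < 2 s from the EGF synthetic
    return low + high

# Example with synthetic placeholder traces sampled at 100 Hz
dt = 0.01
t = np.arange(0, 60, dt)
sem = np.sin(2 * np.pi * 0.2 * t)           # stand-in long-period motion
egf = 0.1 * np.random.randn(t.size)         # stand-in short-period motion
broadband = combine_broadband(sem, egf, dt)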
Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses are conducted of several variants of an 18-story steel braced frame building designed for three soil types using the 1994 and 1997 Uniform Building Code provisions. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with the 30-year occurrence probabilities of the San Andreas scenario earthquakes, using the PEER performance-based earthquake engineering framework, to determine the probability of exceedance of these limit states over the next 30 years.
Abstract:
The city of Lorca (Spain) was hit on May 11, 2011, by two consecutive earthquakes of moment magnitudes 4.6 and 5.2, causing casualties and significant damage to buildings. Many of the damaged structures were reinforced concrete frames with wide beams. This study quantifies the expected level of damage to this structural type in the Lorca earthquake by means of a seismic index Iv that compares the energy input by the earthquake with the energy absorption/dissipation capacity of the structure. The prototype frames investigated represent structures designed in two time periods (1994–2002 and 2003–2008), in which the applicable codes were different. The influence of the masonry infill walls and the proneness of the frames to concentrate damage in a given story were further investigated through nonlinear dynamic response analyses. It is found that (1) the seismic index method predicts levels of damage ranging from moderate/severe to complete collapse, a prediction consistent with the observed damage; (2) the presence of masonry infill walls makes the structure very prone to damage concentration and reduces the overall seismic capacity of the building; and (3) a proper hierarchy of strength between beams and columns that guarantees the formation of a strong column-weak beam mechanism (as prescribed by seismic codes), together with counter-measures to avoid the negative interaction between non-structural infill walls and the main frame, would have reduced the level of damage from Iv = 1 (collapse) to about Iv = 0.5 (moderate/severe damage).
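The abstract does not reproduce the definition of the index; in generic energy-balance terms (symbols assumed here, not taken from the paper), an index of this type expresses a demand-to-capacity comparison of the form

E_{I} \;\geq\; E_{\mathrm{cap}} \quad\Longrightarrow\quad \text{collapse},

where E_{I} is the energy input by the ground motion and E_{\mathrm{cap}} is the energy the structure can absorb and dissipate; the reported values Iv = 1 (collapse) and Iv of about 0.5 (moderate/severe damage) place this comparison on a normalized scale with the balance point at one.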
Abstract:
The problem of determining optimal power spectral density models for earthquake excitation which satisfy constraints on total average power and zero-crossing rate, and which produce the highest response variance in a given linear system, is considered. The solution to this problem is obtained using linear programming methods. The resulting solutions are shown to display a highly deterministic structure and therefore fail to capture the stochastic nature of the input. A modification to the definition of critical excitation is proposed which takes into account the entropy rate as a measure of uncertainty in the earthquake loads. The resulting problem is solved using the calculus of variations and also within a linear programming framework. Illustrative examples on specifying seismic inputs for a nuclear power plant and a tall earth dam are considered, and the resulting solutions are shown to be realistic.
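As a sketch of the first, linear-programming formulation described above (a discretized version with assumed symbols, omitting the zero-crossing-rate constraint for brevity, and not the paper's exact formulation): after discretizing the one-sided PSD S(omega_k) over frequency bins, the response variance of a linear oscillator with transfer function H(omega) is a linear function of the PSD ordinates, so maximizing it under a total-average-power constraint is a standard LP whose optimum concentrates all power near the natural frequency, illustrating the "highly deterministic structure" noted above.

import numpy as np
from scipy.optimize import linprog

# Frequency grid and squared transfer function |H(w)|^2 of an illustrative SDOF oscillator
# (natural frequency 2 Hz, 5% damping).
w = np.linspace(0.1, 100.0, 400)            # rad/s
dw = w[1] - w[0]
wn, zeta = 2 * np.pi * 2.0, 0.05
H2 = 1.0 / ((wn**2 - w**2) ** 2 + (2 * zeta * wn * w) ** 2)

# Maximize the response variance  sum_k |H(w_k)|^2 S_k dw  subject to
#   total average power:  sum_k S_k dw = P_total,  S_k >= 0.
# linprog minimizes, so the objective is negated.
P_total = 1.0
res = linprog(
    c=-H2 * dw,
    A_eq=np.ones((1, w.size)) * dw,
    b_eq=[P_total],
    bounds=[(0, None)] * w.size,
    method="highs",
)
S_critical = res.x   # the optimal PSD concentrates power near the natural frequency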