16 results for 091507 Risk Engineering (excl. Earthquake Engineering)
in CaltechTHESIS
Abstract:
This thesis presents a simplified state-variable method for solving the nonstationary response of linear MDOF systems subjected to a modulated stationary excitation, in both the time and frequency domains. The resulting covariance matrix and evolutionary spectral density matrix of the response can be expressed as the product of a constant system matrix and a time-dependent matrix; the latter can be evaluated explicitly for most envelopes in current engineering use. The stationary correlation matrix of the response follows by taking the limit of the covariance response when a unit-step envelope is used. Reliability analysis can then be performed from the first two moments of the response.
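In standard random-vibration notation, the structure described above can be sketched as follows (a hedged outline of the generic state-space formulation, not the thesis's exact equations): for a modulated white-noise excitation $e(t)\,w(t)$ with envelope $e(t)$,

```latex
% Linear MDOF system in state-variable form under modulated excitation
\dot{\mathbf{z}}(t) = \mathbf{A}\,\mathbf{z}(t) + \mathbf{b}\, e(t)\, w(t),
\qquad
\mathbf{z}(t) = \begin{bmatrix}\mathbf{x}(t)\\ \dot{\mathbf{x}}(t)\end{bmatrix}.

% For white noise of intensity S_0, the response covariance
% Q(t) = E[z z^T] obeys a Lyapunov-type differential equation,
% where the forcing term carries the time dependence through e(t):
\dot{\mathbf{Q}}(t) = \mathbf{A}\mathbf{Q}(t) + \mathbf{Q}(t)\mathbf{A}^{\mathsf{T}}
  + 2\pi S_0\, e^{2}(t)\, \mathbf{b}\mathbf{b}^{\mathsf{T}}.

% With a unit-step envelope, e(t) = 1, the stationary covariance is the
% limiting solution of the algebraic Lyapunov equation
\mathbf{A}\mathbf{Q}_{\infty} + \mathbf{Q}_{\infty}\mathbf{A}^{\mathsf{T}}
  + 2\pi S_0\,\mathbf{b}\mathbf{b}^{\mathsf{T}} = \mathbf{0}.
```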
The method facilitates explicit solutions for general linear MDOF systems and is flexible enough to accommodate different stochastic models of excitation, such as stationary, modulated stationary, filtered stationary, and filtered modulated stationary models, together with their stochastic equivalents, including the random pulse train model, filtered shot noise, and some ARMA models used in earthquake engineering. The approach may also be readily incorporated into finite element codes for random vibration analysis of linear structures.
A set of explicit solutions for the response of simple linear structures subjected to modulated white noise earthquake models with four different envelopes is presented as illustration. In addition, the method is applied to three selected topics in earthquake engineering: nonstationary analysis of primary-secondary systems with classical or nonclassical damping, soil layer response and the associated structural reliability analysis, and the effect of vertical components on the seismic performance of structures. In all three cases, explicit solutions are obtained, the dynamic characteristics of the structures are investigated, and suggestions are given for aseismic design.
Abstract:
This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations of a suite of San Andreas earthquakes. Two key advancements in this work are (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an approach to broadband ground motion simulation that is free of artificial corrections. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis.
As a first step, kinematic source inversions of past earthquakes in the magnitude range 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. A 30-year occurrence probability is calculated for each scenario earthquake, and a rational method is presented for redistributing the forecast earthquake probabilities from UCERF (the Uniform California Earthquake Rupture Forecast) to the simulated scenario earthquakes. The inner workings of the method are illustrated through an example involving earthquakes on the San Andreas fault in southern California.
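The core arithmetic of redistributing a forecast probability among scenarios can be sketched as follows. The weighting is a hypothetical stand-in (the thesis weights by proximity and moment release; the exact scheme is not reproduced here):

```python
# Hypothetical sketch: split one UCERF forecast probability across
# simulated scenario earthquakes, proportional to an assumed weight.
def redistribute(forecast_prob, weights):
    """Return per-scenario probabilities that sum to forecast_prob."""
    total = sum(weights)
    return [forecast_prob * w / total for w in weights]

# e.g. three scenarios, the middle one weighted twice as heavily
probs = redistribute(0.12, [1.0, 2.0, 1.0])
```

By construction the redistributed probabilities conserve the original forecast total.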
Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s to 2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) synthetics computed from kinematic source models using the spectral element method.
Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses are conducted of several variants of an 18-story steel braced frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with the 30-year occurrence probabilities of the San Andreas scenario earthquakes, using the PEER performance-based earthquake engineering framework, to determine the probability of exceedance of these limit states over the next 30 years.
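The final combination step can be sketched in miniature. This is a hedged illustration of the idea of folding scenario probabilities into a limit-state exceedance probability; the independence assumption and the numbers are mine, not the thesis's:

```python
# Illustrative sketch: 30-year probability that at least one scenario
# earthquake drives the building model past a given performance level,
# assuming the scenarios occur independently of one another.
def prob_limit_state_exceeded(scenario_probs, exceeds):
    p_none = 1.0
    for p, hit in zip(scenario_probs, exceeds):
        if hit:                  # this scenario's shaking exceeds the level
            p_none *= 1.0 - p    # ...so it must not occur
    return 1.0 - p_none

# three scenarios; nonlinear analysis says the first two exceed the level
p_cp = prob_limit_state_exceeded([0.10, 0.05, 0.02], [True, True, False])
```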
Abstract:
Few credible source models are available from past large-magnitude earthquakes, so a stochastic source model generation algorithm becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures, as imaged in laboratory earthquakes, with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.9 earthquake against a kinematic finite-source inversion of an equivalent-magnitude past earthquake on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake, and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and risetime) is studied.
Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault. Two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings hypothetically located at each of the 636 sites under 3-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding Immediate Occupancy (IO), Life-Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.
Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and peak ground displacement (PGD) in Los Angeles and the surrounding basins, due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault, are determined using Bayesian model class identification. Simulated ground motions at sites 55-75 km from the source, from a suite of 60 earthquakes (Mw 6.0-8.0) primarily rupturing the mid-section of the San Andreas fault, provide the PGV and PGD data.
Abstract:
Meeting the world's growing energy demands while protecting our fragile environment is a challenging issue. Second-generation biofuels are liquid fuels, such as long-chain alcohols, produced from lignocellulosic biomass. To reduce the cost of biofuel production, we engineered fungal family 6 cellobiohydrolases (Cel6A) for enhanced thermostability using random mutagenesis and recombination of beneficial mutations. Over extended hydrolysis, the engineered thermostable cellulases hydrolyze more sugars than wild-type Cel6A, both as single enzymes and in binary mixtures, at their respective optimum temperatures. The engineered thermostable cellulases exhibit synergy in binary mixtures similar to wild-type cellulases, demonstrating the utility of engineering individual cellulases to produce novel thermostable mixtures. Crystal structures of the engineered thermostable cellulases indicate that the stabilization comes from improved hydrophobic interactions and loop conformations restricted by proline substitutions. At high temperature, free cysteines contribute to irreversible thermal inactivation in both engineered thermostable Cel6A and wild-type Cel6A. The mechanism of thermal inactivation in this cellulase family is consistent with disulfide bond degradation and thiol-disulfide exchange. Enhancing the thermostability of Cel6A also increases tolerance to pretreatment chemicals, demonstrated by the strong correlation between thermostability and tolerance to 1-ethyl-3-methylimidazolium acetate. Several semi-rational protein engineering approaches, based on consensus sequence analysis, proline stabilization, FoldX energy calculations, and high B-factors, were evaluated to further enhance the thermostability of Cel6A.
Abstract:
Nucleic acids are most commonly associated with the genetic code, transcription and gene expression. Recently, interest has grown in engineering nucleic acids for biological applications such as controlling or detecting gene expression. The natural presence and functionality of nucleic acids within living organisms coupled with their thermodynamic properties of base-pairing make them ideal for interfacing (and possibly altering) biological systems. We use engineered small conditional RNA or DNA (scRNA, scDNA, respectively) molecules to control and detect gene expression. Three novel systems are presented: two for conditional down-regulation of gene expression via RNA interference (RNAi) and a third system for simultaneous sensitive detection of multiple RNAs using labeled scRNAs.
RNAi is a powerful tool for studying genetic circuits by knocking down a gene of interest. RNAi executes the logic: if gene Y is detected, silence gene Y. Because detection and silencing are restricted to the same gene, RNAi is constitutively on, a significant limitation when spatiotemporal control is needed. In this work, we engineered small nucleic acid molecules that execute the logic: if mRNA X is detected, form a Dicer substrate that targets an independent mRNA Y for silencing. This is a step towards implementing the logic of conditional RNAi: if gene X is detected, silence gene Y. We use scRNAs and scDNAs to engineer signal transduction cascades that produce an RNAi effector molecule upon hybridization to a nucleic acid target X. The first mechanism is based solely on hybridization cascades and uses scRNAs to produce a double-stranded RNA (dsRNA) Dicer substrate against target gene Y. The second mechanism is based on hybridization of scDNAs to detect a nucleic acid target and produce a template for transcription of a short hairpin RNA (shRNA) Dicer substrate against target gene Y. Test-tube studies of both mechanisms demonstrate that the output Dicer substrate is produced predominantly in the presence of the correct input target and is cleaved by Dicer to produce a small interfering RNA (siRNA). Both output products can lead to gene knockdown in tissue culture. To date, signal transduction has not been observed in cells; possible reasons are explored.
Signal transduction cascades are composed of multiple scRNAs (or scDNAs). The need to study multiple molecules simultaneously has motivated the development of a highly sensitive method for multiplexed northern blots. The core of our system is the use of a hybridization chain reaction (HCR) of scRNAs as the detection signal for a northern blot. To achieve multiplexing (simultaneous detection of multiple genes), we use fluorescently tagged scRNAs. Moreover, with radioactive labeling of the scRNAs, the system achieves a five-fold increase in detection sensitivity over previously reported methods. Sensitive multiplexed northern blot detection provides an avenue for exploring the fate of scRNAs and scDNAs in tissue culture.
Abstract:
Biological machines are active devices composed of cells and other biological components. These functional devices are best suited to physiological environments that support cellular function and survival. Biological machines have the potential to revolutionize the engineering of biomedical devices intended for implantation, where the human body can provide the required physiological environment. For engineering such cell-based machines, bio-inspired design can serve as a guiding platform, as it provides functionally proven designs that are attainable by living cells. In the present work, a systematic approach was used to tissue-engineer one such machine, exclusively from biological building blocks and following a bio-inspired design. Valveless impedance pumps were constructed based on the working principles of the embryonic vertebrate heart, using cells and tissue derived from rats. The function of these tissue-engineered muscular pumps was characterized by exploring their spatiotemporal and flow behavior, in order to better understand the capabilities and limitations of cells used as the engines of biological machines.
Abstract:
Multi-step electron tunneling, or "hopping," has become a fast-developing research field, with studies ranging from theoretical models and inorganic complexes to biological systems. In particular, the field is exploring hopping mechanisms in new proteins and protein complexes, as well as further understanding classical biological hopping systems such as ribonucleotide reductase, DNA photolyases, and photosystem II. Despite the plethora of natural systems, only a few biologically engineered systems exist. Engineered hopping systems can provide valuable information on key structural and electronic features, just like other kinds of biological model systems, and they can harness common biological processes and redirect them toward alternative reactions. In this thesis, two new hopping systems are engineered and characterized.
The protein Pseudomonas aeruginosa azurin is used as a building block to create the two new hopping systems. Besides being well studied and amenable to mutation, azurin has already been used to successfully engineer a hopping system. The two hopping systems presented in this thesis carry a histidine-attached high-potential rhenium 4,7-dimethyl-1,10-phenanthroline tricarbonyl [Re(dmp)(CO)3]+ label which, when excited, acts as the initial electron acceptor. The electron donor is the type I copper center of the azurin protein. The hopping intermediates are all tryptophans, mutated into the azurin at selected sites between the photoactive metal label and the protein metal site. One system exhibits inter-molecular hopping across a protein dimer interface; the other undergoes intra-molecular multi-step hopping along a tryptophan "wire." The electron transfer reactions are triggered by excitation of the rhenium label and monitored by UV-visible transient absorption, luminescence decay measurements, and time-resolved infrared spectroscopy (TRIR). Both systems were structurally characterized by protein X-ray crystallography.
Abstract:
Homologous recombination is a source of diversity in both natural and directed evolution. Standing genetic variation that has passed the test of natural selection is combined in new ways, generating functional and sometimes unexpected changes. In this work we evaluate the utility of homologous recombination as a protein engineering tool, both in comparison with and combined with other protein engineering techniques, and apply it to an industrially important enzyme: Hypocrea jecorina Cel5a.
Chapter 1 reviews work over the last five years on protein engineering by recombination. Chapter 2 describes the recombination of Hypocrea jecorina Cel5a endoglucanase with homologous enzymes in order to improve its activity at high temperatures. A chimeric Cel5a that is 10.1 °C more stable than wild-type and hydrolyzes 25% more cellulose at elevated temperatures is reported. Chapter 3 describes an investigation into the synergy of thermostable cellulases engineered by recombination and other methods. An engineered endoglucanase and two engineered cellobiohydrolases synergistically hydrolyzed cellulose at high temperatures, releasing over 200% more reducing sugars over 60 h in their optimal mixture relative to the best mixture of wild-type enzymes. These results provide a framework for engineering cellulolytic enzyme mixtures for the industrial conditions of high temperatures and long incubation times.
In addition to this work on recombination, we explored three other problems in protein engineering. Chapter 4 describes an investigation into replacing enzymes that use complex cofactors with ones that use simple cofactors, using an E. coli enolase as a model system. Chapter 5 describes engineering broad-spectrum aldehyde resistance in Saccharomyces cerevisiae by evolving an alcohol dehydrogenase simultaneously for activity and promiscuity. Chapter 6 describes an attempt to engineer gene-targeted hypermutagenesis into E. coli to facilitate continuous in vivo selection systems.
Abstract:
Earthquake early warning (EEW) systems have developed rapidly over the past decade. The Japan Meteorological Agency (JMA) operates an EEW system that was running during the 2011 M9 Tohoku earthquake in Japan, and its performance during that event raised awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges to becoming practical, the availability of short-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time, and expected shaking intensity around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention to activate mitigation actions, and this must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.
Existing EEW systems are often based on a deterministic approach and typically assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends the EEW system to concurrent events, which are often observed during the aftershock sequence following a large earthquake.
To overcome the challenges of uncertain information and short lead times, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model is used that captures the uncertainties in both the EEW information and the decision process. This approach, called Performance-Based Earthquake Early Warning, is based on the PEER Performance-Based Earthquake Engineering method. Surrogate models are suggested to improve computational efficiency, and new models are proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value-of-information model is used to quantify the potential value of delaying the activation of a mitigation action in exchange for a possible reduction in the uncertainty of the EEW information at the next update. Two practical examples, evacuation alerts and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as multiple-action decisions and the synergy of EEW with structural health monitoring systems, are also discussed.
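The cost-benefit logic of automated mitigation decisions can be sketched as follows. This is an illustrative toy rule, not the ePAD framework itself; the function name, the residual-loss parameter, and the numbers are all hypothetical:

```python
# Illustrative sketch of an expected-cost decision rule for an EEW
# mitigation action: act when the expected loss of acting is below the
# expected loss of doing nothing. Mitigation is assumed to reduce, not
# eliminate, the loss (residual_loss_fraction is a made-up parameter).
def should_activate(p_strong_shaking, loss_if_unmitigated, action_cost,
                    residual_loss_fraction=0.2):
    expected_no_action = p_strong_shaking * loss_if_unmitigated
    expected_action = action_cost + residual_loss_fraction * expected_no_action
    return expected_action < expected_no_action

decide = should_activate(0.4, 100.0, 5.0)  # expected 40 vs 5 + 8 = 13: act
```

A value-of-information extension, as described above, would compare this expected cost against that of deferring the decision until the next, less uncertain, EEW update.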
Abstract:
The creation of novel enzyme activity is a great challenge for protein engineers, but nature has done so repeatedly throughout the process of natural selection. I begin by outlining the multitude of distinct reactions catalyzed by a single enzyme class, the cytochrome P450 monooxygenases. I discuss the ability of cytochrome P450 to generate reactive intermediates capable of diverse reactivity, suggesting this enzyme can also be used to generate novel reactive intermediates in the form of metal-carbenoid and metal-nitrenoid species. I then show that cytochrome P450 from Bacillus megaterium (P450BM3) and its isolated cofactor can catalyze metal-nitrenoid transfer in the form of intramolecular C–H bond amination. Mutations to the protein sequence can enhance the reactivity and selectivity of this transformation significantly beyond that of the free cofactor. Next, I demonstrate an intermolecular nitrene transfer reaction catalyzed by P450BM3 in the form of sulfide imidation. Because sulfur heteroatoms are strong nucleophiles, I show that increasing the sulfide nucleophilicity through substituents on the aryl sulfide ring can dramatically increase reaction productivity. To explore engineering nitrenoid transfer in P450BM3, active-site mutagenesis is employed to tune the regioselectivity of intramolecular C–H amination catalysts. The crystal structure of a highly selective variant demonstrates that hydrophobic residues in the active site strongly modulate reactivity and regioselectivity. Finally, I use a similar strategy to develop P450-based catalysts for intermolecular olefin aziridination, demonstrating that active-site mutagenesis can greatly enhance this nitrene transfer reaction. The resulting variant catalyzes intermolecular aziridination with more than 1000 total turnovers and enantioselectivity of up to 99% ee.
Abstract:
Thermoelectric materials have attracted significant attention for their ability to convert waste heat directly to electricity with no moving parts. A resurgence in thermoelectrics research has led to significant enhancements in the thermoelectric figure of merit, zT, even for materials that were already well studied. This thesis approaches zT optimization by developing a detailed understanding of the electronic structure, using a combination of electronic/thermoelectric properties, optical properties, and ab initio computed electronic band structures. These techniques are applied to three important classes of thermoelectric materials: IV-VI materials (the lead chalcogenides), half-Heuslers (XNiSn, where X = Zr, Ti, Hf), and CoSb3 skutterudites.
In the IV-VI materials (PbTe, PbSe, PbS), I present a shifting temperature-dependent optical absorption edge that correlates well with ab initio molecular dynamics results. Contrary to prior literature suggesting convergence of the primary and secondary bands at 400 K, I suggest higher convergence temperatures of 700, 900, and 1000 K for PbTe, PbSe, and PbS, respectively. This finding can help guide electronic property modeling by providing concrete values for the band gap and valence band offset as a function of temperature.
Another important thermoelectric material, the half-Heusler ZrNiSn, is analyzed for both its optical and electronic properties; transport measurements indicate a band gap that differs greatly depending on whether the material is doped n-type or p-type. By measuring and reporting an optical band gap of 0.13 eV, I resolve the discrepancy in the gap estimated from electronic properties (maximum Seebeck coefficient and resistivity) by relating these estimates to the electron-to-hole weighted mobility ratio, A, in narrow-gap materials (A is found to be approximately 5.0 in ZrNiSn).
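The gap estimate from the maximum Seebeck coefficient alluded to above is commonly made with the Goldsmid-Sharp relation, E_g ~ 2*e*|S_max|*T_max (the relation whose breakdown at large weighted mobility ratios motivates the correction via A). A sketch with illustrative numbers, not the ZrNiSn data from the thesis:

```python
# Goldsmid-Sharp band gap estimate from the peak Seebeck coefficient:
# E_g ~ 2 * e * |S_max| * T_max. With S in V/K this gives joules;
# dividing by e yields eV directly, so e cancels below.
def goldsmid_sharp_gap_eV(s_max_uV_per_K, t_max_K):
    return 2.0 * (s_max_uV_per_K * 1e-6) * t_max_K

# illustrative: |S_max| = 200 uV/K peaking at 400 K
gap = goldsmid_sharp_gap_eV(200.0, 400.0)  # 0.16 eV
```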
I also show that CoSb3 contains multiple conduction bands that contribute to the thermoelectric properties. These bands are observed to shift towards each other with temperature, eventually reaching effective convergence for T > 500 K. This implies that the electronic structure of CoSb3 is critically important, and possibly engineerable, with regard to its high thermoelectric figure of merit.
Abstract:
Over the last century, the silicon revolution has enabled us to build faster, smaller and more sophisticated computers. Today, these computers control phones, cars, satellites, assembly lines, and other electromechanical devices. Just as electrical wiring controls electromechanical devices, living organisms employ "chemical wiring" to make decisions about their environment and control physical processes. Currently, the big difference between these two substrates is that while we have the abstractions, design principles, verification and fabrication techniques in place for programming with silicon, we have no comparable understanding or expertise for programming chemistry.
In this thesis we take a small step towards the goal of learning how to systematically engineer prescribed non-equilibrium dynamical behaviors in chemical systems. We use the formalism of chemical reaction networks (CRNs), combined with mass-action kinetics, as our programming language for specifying dynamical behaviors. Leveraging the tools of nucleic acid nanotechnology (introduced in Chapter 1), we employ synthetic DNA molecules as our molecular architecture and toehold-mediated DNA strand displacement as our reaction primitive.
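A minimal sketch of what "CRNs with mass-action kinetics as a programming language" means: a formal reaction is compiled to an ODE whose rates are proportional to reactant concentrations. The autocatalytic reaction A + B -> 2B below is illustrative, not one of the thesis's DNA implementations:

```python
# Simulate the CRN  A + B -> 2B  (rate constant k) under mass-action
# kinetics by forward Euler integration.
def simulate_autocatalysis(a0, b0, k, dt, steps):
    a, b = a0, b0
    for _ in range(steps):
        flux = k * a * b      # mass-action: rate proportional to [A][B]
        a -= flux * dt        # A is consumed...
        b += flux * dt        # ...and converted to B
    return a, b

# a trace of B sparks near-complete conversion of A (logistic growth)
a_final, b_final = simulate_autocatalysis(1.0, 0.01, 5.0, 0.001, 5000)
```

In the thesis's setting, each such formal reaction is implemented by a set of DNA strand displacement steps rather than integrated numerically, but the target dynamics are specified exactly this way.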
Abstraction, modular design and systematic fabrication can work only with well-understood and quantitatively characterized tools. Therefore, we embark on a detailed study of the "device physics" of DNA strand displacement (Chapter 2). We present a unified view of strand displacement biophysics and kinetics by studying the process at multiple levels of detail, using an intuitive model of a random walk on a 1-dimensional energy landscape, a secondary structure kinetics model with single base-pair steps, and a coarse-grained molecular model that incorporates three-dimensional geometric and steric effects. Further, we experimentally investigate the thermodynamics of three-way branch migration. Our findings are consistent with previously measured or inferred rates for hybridization, fraying, and branch migration, and provide a biophysical explanation of strand displacement kinetics. Our work paves the way for accurate modeling of strand displacement cascades, which would facilitate the simulation and construction of more complex molecular systems.
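The 1-D random-walk picture mentioned above can be made concrete with a toy gambler's-ruin simulation: unbiased single base-pair steps with absorbing ends. Starting one step into an n-step branch-migration domain, the completion probability is 1/n. The parameters are illustrative, not fitted to the thesis's measurements:

```python
import random

# Branch migration as an unbiased 1-D random walk with absorbing
# boundaries: 0 = invader dissociates, n = displacement completes.
def displacement_success_rate(n, trials, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        pos = 1                     # one step into the domain
        while 0 < pos < n:
            pos += 1 if rng.random() < 0.5 else -1
        wins += pos == n
    return wins / trials

rate = displacement_success_rate(10, 20000)   # close to 1/10
```

This is why toeholds matter: without the thermodynamic bias a toehold provides, completion from a single foothold is rare.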
In Chapters 3 and 4, we identify and overcome the crucial experimental challenges involved in using our general DNA-based technology for engineering dynamical behaviors in the test tube. In this process, we identify important design rules that inform our choice of molecular motifs and our algorithms for designing and verifying DNA sequences for our molecular implementation. We also develop flexible molecular strategies for "tuning" our reaction rates and stoichiometries in order to compensate for unavoidable non-idealities in the molecular implementation, such as imperfectly synthesized molecules and spurious "leak" pathways that compete with desired pathways.
We successfully implement three distinct autocatalytic reactions, which we then combine into a de novo chemical oscillator. Unlike biological networks, which use sophisticated evolved molecules (like proteins) to realize such behavior, our test tube realization is the first to demonstrate that Watson-Crick base pairing interactions alone suffice for oscillatory dynamics. Since our design pipeline is general and applicable to any CRN, our experimental demonstration of a de novo chemical oscillator could enable the systematic construction of CRNs with other dynamic behaviors.
Abstract:
A study is made of the accuracy of electronic digital computer calculations of ground displacement and response spectra from strong-motion earthquake accelerograms. This involves an investigation of methods of the preparatory reduction of accelerograms into a form useful for the digital computation and of the accuracy of subsequent digital calculations. Various checks are made for both the ground displacement and response spectra results, and it is concluded that the main errors are those involved in digitizing the original record. Differences resulting from various investigators digitizing the same experimental record may become as large as 100% of the maximum computed ground displacements. The spread of the results of ground displacement calculations is greater than that of the response spectra calculations. Standardized methods of adjustment and calculation are recommended, to minimize such errors.
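The digital response-spectrum calculation discussed above amounts to integrating a damped single-degree-of-freedom oscillator driven by the digitized accelerogram and recording the peak response. A hedged sketch using a central-difference scheme; the triangular input pulse is synthetic, purely for illustration, and is not one of the strong-motion records studied:

```python
import math

# Peak relative displacement of a damped SDOF oscillator:
#   u'' + 2*zeta*w*u' + w^2*u = -ag(t)
def peak_relative_displacement(accel, dt, period, damping=0.05):
    w = 2.0 * math.pi / period
    u_prev, u, peak = 0.0, 0.0, 0.0
    for ag in accel:
        u_next = (2.0 * u - u_prev
                  + dt * dt * (-ag - w * w * u
                               - 2.0 * damping * w * (u - u_prev) / dt))
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return peak

# synthetic triangular acceleration pulse, then free vibration
pulse = [min(i, 100 - i) / 50.0 for i in range(100)] + [0.0] * 400
sd = peak_relative_displacement(pulse, 0.01, 1.0)
```

Repeating this over a range of oscillator periods traces out the displacement response spectrum; the thesis's point is that digitization error in `accel` dominates the error of such calculations.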
Studies are made of the spread of response spectral values about their mean. The distribution is investigated experimentally by Monte Carlo techniques using an electric analog system with white noise excitation, and histograms are presented indicating the dependence of the distribution on the damping and period of the structure. Approximate distributions are obtained analytically by confirming and extending existing results with accurate digital computer calculations. A comparison of the experimental and analytical approaches indicates good agreement for low damping values where the approximations are valid. A family of distribution curves to be used in conjunction with existing average spectra is presented. The combination of analog and digital computations used with Monte Carlo techniques is a promising approach to the statistical problems of earthquake engineering.
Methods of analysis of very small earthquake ground motion records obtained simultaneously at different sites are discussed. The advantages of Fourier spectrum analysis for certain types of studies and methods of calculation of Fourier spectra are presented. The digitizing and analysis of several earthquake records is described and checks are made of the dependence of results on digitizing procedure, earthquake duration and integration step length. Possible dangers of a direct ratio comparison of Fourier spectra curves are pointed out and the necessity for some type of smoothing procedure before comparison is established. A standard method of analysis for the study of comparative ground motion at different sites is recommended.
Abstract:
No abstract.
Abstract:
The pattern of energy release during the Imperial Valley, California, earthquake of 1940 is studied by analysing the El Centro strong motion seismograph record and records from the Tinemaha seismograph station, 546 km from the epicenter. The earthquake was a multiple event sequence with at least 4 events recorded at El Centro in the first 25 seconds, followed by 9 events recorded in the next 5 minutes. Clear P, S and surface waves were observed on the strong motion record. Although the main part of the earthquake energy was released during the first 15 seconds, some of the later events were as large as M = 5.8 and thus are important for earthquake engineering studies. The moment calculated using Fourier analysis of surface waves agrees with the moment estimated from field measurements of fault offset after the earthquake. The earthquake engineering significance of the complex pattern of energy release is discussed. It is concluded that a cumulative increase in amplitudes of building vibration resulting from the present sequence of shocks would be significant only for structures with relatively long natural period of vibration. However, progressive weakening effects may also lead to greater damage for multiple event earthquakes.
The model with surface Love waves propagating through a single layer as a surface wave guide is studied. It is expected that the derived properties for this simple model illustrate well several phenomena associated with strong earthquake ground motion. First, it is shown that a surface layer, or several layers, will cause the main part of the high frequency energy, radiated from the nearby earthquake, to be confined to the layer as a wave guide. The existence of the surface layer will thus increase the rate of the energy transfer into the man-made structures on or near the surface of the layer. Secondly, the surface amplitude of the guided SH waves will decrease if the energy of the wave is essentially confined to the layer and if the wave propagates towards an increasing layer thickness. It is also shown that the constructive interference of SH waves will cause the zeroes and the peaks in the Fourier amplitude spectrum of the surface ground motion to be continuously displaced towards the longer periods as the distance from the source of the energy release increases.