47 results for Earthquake Rupture
Abstract:
This thesis describes a compositional framework for developing situation awareness applications: applications that provide ongoing information about a user's changing environment. The thesis describes how the framework is used to develop a situation awareness application for earthquakes. The applications are implemented as Cloud computing services connected to sensors and actuators. The architecture and design of the Cloud services are described and measurements of performance metrics are provided. The thesis includes results of experiments on earthquake monitoring conducted over a year. The applications developed by the framework are (1) the CSN --- the Community Seismic Network --- which uses relatively low-cost sensors deployed by members of the community, and (2) SAF --- the Situation Awareness Framework --- which integrates data from multiple sources, including the CSN, CISN --- the California Integrated Seismic Network, a network consisting of high-quality seismometers deployed carefully by professionals in the CISN organization and spread across Southern California --- and prototypes of multi-sensor platforms that include carbon monoxide, methane, dust and radiation sensors.
Abstract:
Faults can slip either aseismically or through episodic seismic ruptures, but we still do not understand the factors that determine the partitioning between these two modes of slip. This challenge can now be addressed thanks to the dense geodetic and seismological networks that have been deployed in various tectonically active areas. Data from such networks, together with modern remote sensing techniques, allow the spatial and temporal variability of the slip mode to be documented. This is the approach taken in this study, which focuses on the Longitudinal Valley Fault (LVF) in Eastern Taiwan. This fault is particularly appropriate since its very fast slip rate (about 5 cm/yr) is accommodated by both seismic and aseismic slip. Deformation of anthropogenic features shows that aseismic creep accounts for a significant fraction of fault slip near the surface, but the fault has also released energy seismically, producing five M_w>6.8 earthquakes in 1951 and 2003. Moreover, owing to the thrust component of slip, the fault zone is exhumed, which allows investigation of deformation mechanisms. In order to place constraints on the factors that control the mode of slip, we apply a multidisciplinary approach that combines modeling of geodetic observations, structural analysis, and numerical simulation of the "seismic cycle". Analyzing a dense set of geodetic and seismological data across the Longitudinal Valley, including campaign-mode GPS, continuous GPS (cGPS), leveling, accelerometric, and InSAR data, we document the partitioning between seismic and aseismic slip on the fault. For the period 1992 to 2011, we find that about 80-90% of slip on the LVF in the 0-26 km seismogenic depth range is actually aseismic. The clay-rich Lichi Mélange is identified as the key factor promoting creep at shallow depth.
Microstructural investigations show that deformation within the fault zone must have resulted from a combination of frictional sliding at grain boundaries, cataclasis, and pressure solution creep. Numerical modeling of earthquake sequences has been performed to investigate the possibility of reproducing the results of the kinematic inversion of geodetic and seismological data on the LVF. We first examine the different modeling strategies developed to explore the role and relative importance of various factors in the manner in which slip accumulates on faults. Comparing quasi-dynamic simulations with fully dynamic ones, we conclude that ignoring transient wave-mediated stress transfers would be inappropriate. We therefore carry out fully dynamic simulations and succeed in qualitatively reproducing the wide range of observations for the southern segment of the LVF. We conclude that the spatio-temporal evolution of fault slip on the Longitudinal Valley Fault over 1997-2011 is consistent, to first order, with predictions from a simple model in which a velocity-weakening patch is embedded in a velocity-strengthening area.
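The velocity-weakening/velocity-strengthening dichotomy invoked in this conclusion comes from standard rate-and-state friction, in which the sign of a - b controls the steady-state frictional response to a change in slip rate. A minimal sketch (the parameter values here are illustrative, not taken from the study):

```python
import math

def steady_state_friction(v, mu0=0.6, v0=1e-6, a=0.010, b=0.015):
    """Steady-state rate-and-state friction: mu_ss = mu0 + (a - b) * ln(v / v0)."""
    return mu0 + (a - b) * math.log(v / v0)

# a < b: velocity weakening -- friction drops as slip accelerates,
# allowing seismic rupture to nucleate on such a patch.
mu_slow = steady_state_friction(1e-9)
mu_fast = steady_state_friction(1e-3)
weakening = mu_fast < mu_slow

# a > b: velocity strengthening -- friction rises with slip rate,
# favoring stable creep (e.g., in a clay-rich gouge analogue).
mu_slow_s = steady_state_friction(1e-9, a=0.015, b=0.010)
mu_fast_s = steady_state_friction(1e-3, a=0.015, b=0.010)
strengthening = mu_fast_s > mu_slow_s
```

A patch with a < b embedded in a medium with a > b is the simplest configuration that reproduces episodic ruptures surrounded by creep.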
Abstract:
The nature of the subducted lithospheric slab is investigated seismologically by tomographic inversions of ISC residual travel times. The slab, in which nearly all deep earthquakes occur, appears fast in the seismic images because it is much cooler than the ambient mantle. High-resolution three-dimensional P and S wave models of the NW Pacific are obtained using regional data, while the inversion for the SW Pacific slabs includes teleseismic arrivals. Resolution and noise estimates show that the models are generally well resolved.
The slab anomalies in these models, as inferred from the seismicity, are generally coherent in the upper mantle and become contorted and decrease in amplitude with depth. Fast slabs are surrounded by slow regions shallower than 350 km depth. Slab fingering, including segmentation and spreading, is indicated near the bottom of the upper mantle. The fast anomalies associated with the Japan, Izu-Bonin, Mariana and Kermadec subduction zones tend to flatten to sub-horizontal at depth, while downward spreading may occur under parts of the Mariana and Kuril arcs. The Tonga slab appears to end around 550 km depth, but is underlain by a fast band at 750-1000 km depths.
The NW Pacific model combined with the Clayton-Comer mantle model predicts many observed residual sphere patterns. The predictions indicate that near-source anomalies affect the residual spheres less than the teleseismic contributions do. The teleseismic contributions may be removed either by using a mantle model or by using teleseismic station averages of residuals from only regional events. The slab-like fast bands in the corrected residual spheres are consistent with seismicity trends under the Mariana, Izu-Bonin, and Japan trenches, but are inconsistent for the Kuril events.
The comparison of the tomographic models with earthquake focal mechanisms shows that deep compression axes and fast velocity slab anomalies are in consistent alignment, even when the slab is contorted or flattened. Abnormal stress patterns are seen at major junctions of the arcs. The depth boundary between tension and compression in the central parts of these arcs appears to depend on the dip and topology of the slab.
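Tomographic inversions of the kind used in this work recover slowness perturbations from residual travel times by solving a damped least-squares system. A toy two-cell sketch (the ray geometry, damping, and values are invented for illustration):

```python
import numpy as np

# Toy 2-cell medium: G[i, j] = path length (km) of ray i in cell j.
G = np.array([[10.0, 0.0],
              [0.0, 10.0],
              [7.0, 7.0]])
ds_true = np.array([-0.002, 0.001])  # slowness perturbations (s/km); negative = fast
t_res = G @ ds_true                  # synthetic residual travel times (s)

# Damped least squares: ds = (G^T G + eps^2 I)^(-1) G^T t
eps = 1e-3
ds_est = np.linalg.solve(G.T @ G + eps**2 * np.eye(2), G.T @ t_res)
```

A cool, fast slab cell appears as a negative slowness perturbation; in real inversions the damping trades off resolution against noise amplification, which is what the resolution and noise estimates quoted above assess.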
Abstract:
In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This is known as the first excursion problem, and it has long been a challenge in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, no procedure is available for its general solution, especially for engineering problems of interest, where the system is complex and the failure probability is small.
The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provide insight into the probable scenarios that will occur given that a structure fails.
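For context, the standard Monte Carlo baseline against which such methods are compared estimates the first excursion probability by direct simulation; its cost grows rapidly as the failure probability shrinks, which is the inefficiency the thesis's methods address. A minimal sketch for a linear oscillator (all parameters are illustrative; this is not the thesis's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def max_response(excitation, dt=0.01, wn=2 * np.pi, zeta=0.05):
    """Peak |displacement| of a linear SDOF oscillator driven by a sampled
    excitation, integrated with semi-implicit Euler (crude but stable here)."""
    x, v, peak = 0.0, 0.0, 0.0
    for a in excitation:
        v += dt * (a - 2 * zeta * wn * v - wn**2 * x)
        x += dt * v
        peak = max(peak, abs(x))
    return peak

n_samples, n_steps, threshold = 1000, 1000, 0.05
excursions = 0
for _ in range(n_samples):
    accel = rng.standard_normal(n_steps)  # discretized stochastic excitation
    if max_response(accel) > threshold:
        excursions += 1

# Direct estimator; its coefficient of variation blows up as p_fail -> 0,
# which is why more efficient simulation methods are needed for rare failures.
p_fail = excursions / n_samples
```

Note that each sample here carries 1000 uncertain parameters (one per time step), illustrating the high-dimensional setting the dissertation emphasizes.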
Abstract:
This thesis presents a technique for obtaining the response of linear structural systems with parameter uncertainties subjected to either deterministic or random excitation. The parameter uncertainties are modeled as random variables or random fields, and are assumed to be time-independent. The new method is an extension of the deterministic finite element method to the space of random functions.
First, the general formulation of the method is developed, in the case where the excitation is deterministic in time. Next, the application of this formulation to systems satisfying the one-dimensional wave equation with uncertainty in their physical properties is described. A particular physical conceptualization of this equation is chosen for study, and some engineering applications are discussed in both an earthquake ground motion and a structural context.
Finally, the formulation of the new method is extended to include cases where the excitation is random in time. Application of this formulation to the random response of a primary-secondary system is described. It is found that parameter uncertainties can have a strong effect on the system response characteristics.
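The effect of time-independent parameter uncertainty on response can be illustrated by brute-force Monte Carlo over a random stiffness, which is the kind of variability the stochastic finite element formulation treats analytically. A sketch with illustrative values (not the thesis's method):

```python
import numpy as np

rng = np.random.default_rng(1)

m, c, omega = 1.0, 0.2, 6.0   # mass, damping, forcing frequency (illustrative)
k_mean, k_cov = 40.0, 0.15    # random stiffness: mean and coefficient of variation

def amplitude(k):
    """Steady-state displacement amplitude of m*x'' + c*x' + k*x = cos(omega*t)."""
    return 1.0 / np.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)

k_samples = k_mean * (1.0 + k_cov * rng.standard_normal(20000))
k_samples = k_samples[k_samples > 0]  # discard unphysical draws

amps = amplitude(k_samples)
det_amp = amplitude(k_mean)           # response of the nominal (deterministic) system
```

Because the nominal system sits near resonance (m·omega² = 36 versus mean k = 40), draws of k close to 36 produce amplitudes far above the nominal value, so the response statistics differ strongly from the deterministic prediction.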
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. 
The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
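The underlying difficulty, noise-induced drift when an accelerogram is doubly integrated, can be reproduced with a closed-form test acceleration in a few lines, in the spirit of the synthetic testing procedure described above (signal and noise levels are illustrative):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 20.0, dt)
w = 2.0 * np.pi                            # 1 Hz test signal
accel_true = np.sin(w * t)                 # closed-form acceleration
disp_true = t / w - np.sin(w * t) / w**2   # exact double integral (zero ICs)

rng = np.random.default_rng(2)
accel_noisy = accel_true + 0.01 * rng.standard_normal(t.size)  # digitization noise

def integrate(y, dt):
    """Cumulative trapezoidal integration with zero initial condition."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * dt * (y[1:] + y[:-1]))
    return out

disp_clean = integrate(integrate(accel_true, dt), dt)
disp_noisy = integrate(integrate(accel_noisy, dt), dt)

clean_err = abs(disp_clean[-1] - disp_true[-1])   # tiny: numerical error only
drift = abs(disp_noisy[-1] - disp_true[-1])       # random-walk drift from noise
```

Because double integration turns white noise into a random walk in displacement, the endpoint error of the noisy record is generally far larger than the purely numerical error, which is why noise filtering or probabilistic bounding is needed before velocities and displacements can be trusted.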
Abstract:
This thesis presents a civil engineering approach to active control for civil structures. The proposed control technique, termed Active Interaction Control (AIC), utilizes dynamic interactions between different structures, or components of the same structure, to reduce the resonance response of the controlled or primary structure under earthquake excitations. The primary control objective of AIC is to minimize the maximum story drift of the primary structure. This is accomplished by timing the controlled interactions so as to withdraw the maximum possible vibrational energy from the primary structure to an auxiliary structure, where the energy is stored and eventually dissipated as the external excitation decreases. One of the important advantages of AIC over most conventional active control approaches is the very low external power required.
In this thesis, the AIC concept is introduced and a new AIC algorithm, termed the Optimal Connection Strategy (OCS) algorithm, is proposed. The efficiency of the OCS algorithm is demonstrated and compared with that of two existing AIC algorithms, the Active Interface Damping (AID) and Active Variable Stiffness (AVS) algorithms, through idealized examples and numerical simulations of Single- and Multi-Degree-of-Freedom systems under earthquake excitations. It is found that the OCS algorithm is capable of significantly reducing the story drift response of the primary structure. The effects of the mass, damping, and stiffness of the auxiliary structure on system performance are investigated in parametric studies. Practical issues such as the sampling interval and time delay are also examined, and a simple but effective predictive time-delay compensation scheme is developed.
Abstract:
This thesis describes engineering applications that come from extending seismic networks into building structures. The proposed applications will benefit from data produced by the newly developed crowd-sourced seismic networks, which are composed of low-cost accelerometers. An overview of the Community Seismic Network and its earthquake detection method is given. In the structural-array components of crowd-sourced seismic networks, there may be instances in which a single seismometer is the only data source available from a building. A simple prismatic Timoshenko beam model with soil-structure interaction (SSI) is developed to approximate the mode shapes of buildings using natural frequency ratios, and a closed-form solution with complete vibration modes is derived. In addition, a new method is presented to rapidly estimate the total displacement response of a building from limited observational data, in some cases from a single seismometer. The total response of a building is modeled as the combination of the initial vibrating motion due to an upward traveling wave and the subsequent motion as the low-frequency resonant mode response. Furthermore, the expected shaking intensities in tall buildings will differ significantly from those on the ground during earthquakes; examples are included to estimate the characteristics of shaking that can be expected in mid-rise to high-rise buildings. Finally, the development of engineering applications (e.g., human comfort prediction and automated elevator control) for earthquake early warning systems, using a probabilistic framework and statistical learning techniques, is addressed.
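When a single instrument is the only available data source, the building's fundamental frequency, the starting point for the mode-shape and response estimates described above, can be picked from the record's amplitude spectrum. A sketch with a synthetic roof record (the frequency, damping, and noise level are hypothetical):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 60.0, dt)
f1 = 0.8                    # hypothetical fundamental frequency of the building (Hz)
rng = np.random.default_rng(3)

# Synthetic roof record: decaying resonant response plus instrument noise.
record = np.exp(-0.05 * t) * np.sin(2 * np.pi * f1 * t) \
         + 0.1 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(record))
freqs = np.fft.rfftfreq(t.size, dt)

band = (freqs > 0.2) & (freqs < 10.0)        # plausible band for building modes
f_est = freqs[band][np.argmax(spec[band])]   # spectral peak ~ fundamental frequency
```

With the fundamental frequency in hand, fixed frequency ratios from a beam model can locate the higher modes without any additional instrumentation.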
Abstract:
The 1994 Mw 6.7 Northridge and 1995 Mw 6.9 Kobe earthquakes exposed an unexpected flaw in steel moment-frame buildings: the commonly used welded unreinforced flange, bolted web connections were observed to experience brittle fractures in a number of buildings, even at low levels of seismic demand. A majority of these buildings have not been retrofitted and may be susceptible to structural collapse in a major earthquake.
This dissertation presents a case study of retrofitting a 20-story pre-Northridge steel moment-frame building. Twelve retrofit schemes are developed that span a range of degrees of intervention. Three retrofitting techniques are considered: upgrading the brittle beam-to-column moment-resisting connections, and implementing either conventional or buckling-restrained brace elements within the existing moment-frame bays. The retrofit schemes include some that are designed to the basic safety objective of ASCE-41, Seismic Rehabilitation of Existing Buildings.
Detailed finite element models of the baseline building and the retrofit schemes are constructed. The models account for brittle beam-to-column moment-resisting connection fractures, column splice fractures, column baseplate fractures, accidental contributions to the lateral force-resisting system from "simple" non-moment-resisting beam-to-column connections, and composite action of beams with the overlying floor system. In addition, foundation interaction is included through nonlinear translational springs underneath basement columns.
To investigate the effectiveness of the retrofit schemes, the building models are analyzed under ground motions from three large-magnitude simulated earthquakes that cause intense shaking in the greater Los Angeles metropolitan area, and under recorded ground motions from actual earthquakes. It is found that retrofit schemes that convert the existing moment-frames into braced-frames by implementing either conventional or buckling-restrained braces are effective in limiting structural damage and mitigating structural collapse. In the three simulated earthquakes, a 20% chance of simulated collapse is reached at a peak ground velocity (PGV) of around 0.6 m/s for the baseline model, but at a PGV of around 1.8 m/s for some of the retrofit schemes. However, conventional braces are observed to deteriorate rapidly; hence, if a braced-frame that employs conventional braces survives a large earthquake, it is questionable how much service the braces can provide in potential aftershocks.
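The reported collapse statistics can be summarized with a lognormal fragility function, P(collapse | PGV) = Φ(ln(PGV/θ)/β). The sketch below calibrates the median θ from the quoted 20%-at-PGV points; the dispersion β is an assumed value, not taken from the dissertation:

```python
import math

def fragility(pgv, theta, beta=0.5):
    """Lognormal collapse fragility: P(collapse | PGV) = Phi(ln(pgv/theta)/beta)."""
    z = math.log(pgv / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def median_from_point(pgv, p, beta=0.5):
    """Median theta such that fragility(pgv, theta, beta) == p (bisection on z)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return pgv * math.exp(-beta * 0.5 * (lo + hi))

theta_base = median_from_point(0.6, 0.20)   # baseline: 20% collapse at 0.6 m/s
theta_retro = median_from_point(1.8, 0.20)  # retrofit: 20% collapse at 1.8 m/s
```

With a common β, the quoted numbers imply a threefold increase in the median collapse capacity of the retrofitted schemes over the baseline.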
Abstract:
Motivated by recent Mars Science Laboratory (MSL) results in which the ablation rate of the PICA heatshield was over-predicted, and staying true to the objectives outlined in the NASA Space Technology Roadmaps and Priorities report, this work focuses on advancing entry, descent, and landing (EDL) technologies for future space missions.
Due to the difficulties of performing flight tests in the hypervelocity regime, a new ground testing facility called the vertical expansion tunnel (VET) is proposed. The adverse effects of secondary diaphragm rupture in an expansion tunnel may be reduced or eliminated by orienting the tunnel vertically, matching the test gas pressure and the accelerator gas pressure, and initially separating the test gas from the accelerator gas by density stratification. If some sacrifice of the reservoir conditions can be made, the VET can be used for hypervelocity ground testing without the problems associated with secondary diaphragm rupture.
The performance of different constraints for the Rate-Controlled Constrained-Equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of ground testing facilities and re-entry conditions. The effectiveness of individual constraints is isolated, and new constraints previously unmentioned in the literature are introduced. Three main benefits of the RCCE method are identified: 1) a reduction in the number of equations that must be solved to model a reacting flow; 2) a reduction in the stiffness of the system of equations; and 3) the ability to tabulate chemical properties as a function of a constraint once, prior to running a simulation, and to reuse the same table for multiple simulations.
Finally, published physical properties of PICA are compiled, and the composition of the pyrolysis gases that form at high temperatures inside a heatshield is investigated. A necessary link is provided between the composition of the solid resin and the composition of the pyrolysis gases created. This link, combined with a detailed investigation of a reacting pyrolysis gas mixture, allows a much-needed, consistent, and thorough description of many of the physical phenomena occurring in a PICA heatshield, and their implications, to be presented.
Through the use of computational fluid mechanics and computational chemistry methods, significant contributions have been made to advancing ground testing facilities, computational methods for reacting flows, and ablation modeling.
Abstract:
This thesis consists of two separate parts. Part I (Chapter 1) is concerned with the seismotectonics of the Middle America subduction zone. In this chapter, stress distribution and Benioff zone geometry are investigated along almost 2000 km of this subduction zone, from the Rivera Fracture Zone in the north to Guatemala in the south. Particular emphasis is placed on the effects on stress distribution of two aseismic ridges, the Tehuantepec Ridge and the Orozco Fracture Zone, which subduct at seismic gaps. Stress distribution is determined by studying seismicity distribution and by analysis of 190 focal mechanisms, both new and previously published, which are collected here. In addition, two recent large earthquakes that occurred near the Tehuantepec Ridge and the Orozco Fracture Zone are discussed in more detail. A consistent stress release pattern is found along most of the Middle America subduction zone: thrust events at shallow depths, followed down-dip by an area of low seismic activity, followed by a zone of normal events at over 175 km from the trench and 60 km depth. The zone of low activity is interpreted as showing decoupling of the plates, and the zone of normal activity as showing the breakup of the descending plate. The portion of subducted lithosphere containing the Orozco Fracture Zone does not differ significantly, in Benioff zone geometry or in stress distribution, from adjoining segments. The Playa Azul earthquake of October 25, 1981, M_s=7.3, occurred in this area. Body and surface wave analysis of this event shows a simple source with a shallow thrust mechanism and gives M_o=1.3x10^(27) dyne-cm. A stress drop of about 45 bars is calculated; this is slightly higher than that of other thrust events in this subduction zone. In the Tehuantepec Ridge area, only minor differences in stress distribution are seen relative to adjoining segments.
For both ridges, the only major difference from adjoining areas is the infrequency or lack of occurrence of large interplate thrust events.
Part II involves upper mantle P wave structure studies, for the Canadian shield and eastern North America. In Chapter 2, the P wave structure of the Canadian shield is determined through forward waveform modeling of the phases Pnl, P, and PP. Effects of lateral heterogeneity are kept to a minimum by using earthquakes just outside the shield as sources, with propagation paths largely within the shield. Previous mantle structure studies have used recordings of P waves in the upper mantle triplication range of 15-30°; however, the lack of large earthquakes in the shield region makes compilation of a complete P wave dataset difficult. By using the phase PP, which undergoes triplications at 30-60°, much more information becomes available. The WKBJ technique is used to calculate synthetic seismograms for PP, and these records are modeled almost as well as the P. A new velocity model, designated S25, is proposed for the Canadian shield. This model contains a thick, high-Q, high-velocity lid to 165 km and a deep low-velocity zone. These features combine to produce seismograms that are markedly different from those generated by other shield structure models. The upper mantle discontinuities in S25 are placed at 405 and 660 km, with a simple linear gradient in velocity between them. Details of the shape of the discontinuities are not well constrained. Below 405 km, this model is not very different from many proposed P wave models for both shield and tectonic regions.
Chapter 3 looks in more detail at recordings of Pnl in eastern North America. First, seismograms from four eastern North American earthquakes are analyzed, and seismic moments for the events are calculated. These earthquakes are important in that they are among the largest to have occurred in eastern North America in the last thirty years, yet in some cases were not large enough to produce many good long-period teleseismic records. A simple layer-over-a-halfspace model is used for the initial modeling, and is found to provide an excellent fit for many features of the observed waveforms. The effects on Pnl of varying lid structure are then investigated. A thick lid with a positive gradient in velocity, such as that proposed for the Canadian shield in Chapter 2, will have a pronounced effect on the waveforms, beginning at distances of 800 or 900 km. Pnl records from the same eastern North American events are recalculated for several lid structure models, to survey what kinds of variations might be seen. For several records it is possible to see likely effects of lid structure in the data. However, the dataset is too sparse to make any general observations about variations in lid structure. This type of modeling is expected to be important in the future, as the analysis is extended to more recent eastern North American events, and as broadband instruments make more high-quality regional recordings available.
Abstract:
This thesis aims at a simple one-parameter macroscopic model of distributed damage and fracture of polymers that is amenable to a straightforward and efficient numerical implementation. The failure model is motivated by post-mortem fractographic observations of void nucleation, growth and coalescence in polyurea stretched to failure, and accounts for the specific fracture energy per unit area attendant to rupture of the material.
Furthermore, it is shown that the macroscopic model can be rigorously derived, in the sense of optimal scaling, from a micromechanical model of chain elasticity and failure regularized by means of fractional strain-gradient elasticity. Optimal scaling laws are derived that link the single parameter of the macroscopic model, namely the critical energy-release rate of the material, to micromechanical parameters pertaining to the elasticity and strength of the polymer chains and to the strain-gradient elasticity regularization. Based on these scaling laws, it is shown how the critical energy-release rate of specific materials can be determined from test data. In addition, the scope and fidelity of the model are demonstrated by means of an example of application, namely Taylor-impact experiments on polyurea rods. In these simulations, optimal transportation meshfree approximation schemes using maximum-entropy interpolation functions are employed.
Finally, a different crazing model using full derivatives of the deformation gradient and a core cut-off is presented, along with a numerical non-local regularization model. The numerical model takes into account higher-order deformation gradients in a finite element framework. It is shown how the introduction of non-locality into the model stabilizes the effect of strain localization to small volumes in materials undergoing softening. From an investigation of craze formation in the limit of large deformations, convergence studies verifying scaling properties of both local- and non-local energy contributions are presented.
Abstract:
Shockwave lithotripsy (SWL) is a noninvasive medical procedure in which shockwaves are repeatedly focused at the location of kidney stones in order to pulverize them. Stone comminution is thought to be the product of two mechanisms: the propagation of stress waves within the stone and cavitation erosion. However, the latter mechanism has also been implicated in vascular injury. In the present work, shock-induced bubble collapse is studied in order to understand the role it might play in inducing vascular injury. A high-order accurate, shock- and interface-capturing numerical scheme is developed to simulate the three-dimensional collapse of a bubble both in the free field and inside a vessel phantom. The primary contributions of the numerical study are the characterization of the shock-bubble and shock-bubble-vessel interactions across a large parameter space that includes clinical SWL pressure amplitudes, problem geometry, and tissue viscoelasticity, and the subsequent correlation of these interactions to vascular injury. Specifically, measurements of the vessel wall pressures and displacements, as well as the finite strains in the fluid surrounding the bubble, are used together with available experiments in tissue to evaluate damage potential. Estimates are made of the smallest injurious bubbles in the microvasculature during both the collapse and jetting phases of the bubble's life cycle. The present results suggest that bubbles larger than 1 μm in diameter could rupture blood vessels under clinical SWL conditions.
Abstract:
The long- and short-period body waves of a number of moderate earthquakes occurring in central and southern California, recorded at regional (200-1400 km) and teleseismic (> 30°) distances, are modeled to obtain the source parameters: focal mechanism, depth, seismic moment, and source time history. The modeling is done in the time domain using a forward modeling technique based on ray summation. A simple layer-over-a-half-space velocity model is used, with additional layers added if necessary, for example, in a basin with a low-velocity lid.
The earthquakes studied fall into two geographic regions: 1) the western Transverse Ranges, and 2) the western Imperial Valley. Earthquakes in the western Transverse Ranges include the 1987 Whittier Narrows earthquake, several offshore earthquakes that occurred between 1969 and 1981, and aftershocks to the 1983 Coalinga earthquake (these actually occurred north of the Transverse Ranges but share many characteristics with those that occurred there). These earthquakes are predominantly thrust faulting events with the average strike being east-west, but with many variations. Of the six earthquakes which had sufficient short-period data to accurately determine the source time history, five were complex events. That is, they could not be modeled as a simple point source, but consisted of two or more subevents. The subevents of the Whittier Narrows earthquake had different focal mechanisms. In the other cases, the subevents appear to be the same, but small variations could not be ruled out.
The recent Imperial Valley earthquakes modeled include the two 1987 Superstition Hills earthquakes and the 1969 Coyote Mountain earthquake. All are strike-slip events, and the second 1987 earthquake is a complex event with non-identical subevents.
In all the earthquakes studied, and particularly the thrust events, constraining the source parameters required modeling several phases and distance ranges. Teleseismic P waves could provide only approximate solutions. Pnl waves were probably the most useful phase in determining the focal mechanism, with additional constraints supplied by the SH waves when available. Contamination of the SH waves by shear-coupled PL waves was a frequent problem. Short-period data were needed to obtain the source time function.
In addition to the earthquakes mentioned above, several historic earthquakes were also studied. Earthquakes that occurred before the existence of dense local and worldwide networks are difficult to model due to the sparse data set. It has been noticed that earthquakes occurring near each other often produce similar waveforms, implying similar source parameters. By comparing recent, well-studied earthquakes to historic earthquakes in the same region, better constraints can be placed on the source parameters of the historic events.
The Lompoc earthquake (M=7) of 1927 is the largest offshore earthquake to have occurred in California this century. By direct comparison of waveforms and amplitudes with the Coalinga and Santa Lucia Banks earthquakes, the focal mechanism (thrust faulting on a northwest-striking fault) and long-period seismic moment (10^(26) dyne cm) can be obtained. The S-P travel times are consistent with an offshore location rather than one in the Hosgri fault zone.
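The S-P discrimination above rests on a standard back-of-the-envelope relation: for uniform P and S velocities, epicentral distance is approximately t_(S-P) · VpVs/(Vp − Vs). A minimal sketch of this relation, with velocity values chosen purely for illustration (not the values used in the thesis):

```python
def sp_distance_km(t_sp_s, vp_km_s=6.0, vs_km_s=3.5):
    """Epicentral distance implied by an S-P time, assuming uniform
    crustal velocities (illustrative values, not from the thesis)."""
    return t_sp_s * (vp_km_s * vs_km_s) / (vp_km_s - vs_km_s)

# With these velocities each second of S-P time maps to ~8.4 km:
print(sp_distance_km(10.0))  # -> 84.0
```

A systematic offset between observed S-P times and those predicted for an onshore epicenter is what argues for the offshore location.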
Historic earthquakes in the western Imperial Valley were also studied, including the 1937, 1942, and 1954 earthquakes. The events were relocated by comparing S-P and R-S times to those of recent earthquakes. Only minor changes in the epicenters were required, although the Coyote Mountain earthquake may have been more severely mislocated. The waveforms, as expected, indicated that all the events were strike-slip. Moment estimates were obtained by comparing the amplitudes of recent and historic events at stations that recorded both. The 1942 event was smaller than the 1968 Borrego Mountain earthquake, although some previous studies suggested the reverse. The 1954 and 1937 earthquakes had moments close to the expected values. An aftershock of the 1942 earthquake appears to have been larger than previously thought.
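The amplitude-comparison moment estimate described above can be sketched as follows: if a station recorded both a reference event of known moment and the historic event, and the waveforms are similar, the moment ratio is approximately the amplitude ratio. A minimal illustration, assuming linearity of amplitude in moment; the amplitude numbers are hypothetical, not from the thesis, and the median over stations damps single-station outliers:

```python
from statistics import median

def historic_moment(m0_ref_dyne_cm, amps_historic, amps_ref):
    """Scale a reference moment by the median per-station amplitude ratio.

    Assumes similar waveforms (hence similar mechanism and path) so that
    recorded amplitude scales linearly with seismic moment.
    """
    ratios = [h / r for h, r in zip(amps_historic, amps_ref)]
    return m0_ref_dyne_cm * median(ratios)

# Hypothetical record amplitudes at three stations common to both events:
m0 = historic_moment(1.0e26,
                     amps_historic=[4.2, 3.9, 4.6],
                     amps_ref=[2.1, 2.0, 2.2])
print(f"{m0:.1e}")  # roughly twice the reference moment
```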
Resumo:
In this thesis, I develop velocity and structure models for the Los Angeles Basin and Southern Peru. The ultimate goal is to better understand the geological processes involved in basin and subduction-zone dynamics. The results are obtained from seismic interferometry using ambient noise and from receiver functions using earthquake-generated waves. Some unusual signals specific to the local structures are also studied. The main findings are summarized as follows:
(1) Los Angeles Basin
The shear wave velocities range from 0.5 to 3.0 km/s in the sediments, with lateral gradients at the Newport-Inglewood, Compton-Los Alamitos, and Whittier Faults. The basin is a maximum of 8 km deep along the profile, and the Moho rises to a depth of 17 km under the basin. The basin has a stretch factor of 2.6 in the center, decreasing to 1.3 at the edges, and is in approximate isostatic equilibrium. This "high-density" (~1 km spacing), "short-duration" (~1.5 month) experiment may serve as a prototype for low-cost surveys of other sedimentary basins.
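The stretch factor quoted above is the standard McKenzie beta: unstretched crystalline crustal thickness divided by present crystalline thickness (Moho depth minus sediment fill). A minimal sketch; the ~23.4 km unstretched thickness is simply the value implied by the quoted numbers (Moho at 17 km, 8 km of sediment, beta = 2.6), not a figure taken from the thesis:

```python
def stretch_factor(t0_km, moho_depth_km, sediment_km):
    """McKenzie stretch factor: beta = initial / present crystalline thickness.

    Present crystalline thickness excludes the sedimentary fill.
    """
    return t0_km / (moho_depth_km - sediment_km)

# Basin center, using the abstract's Moho and sediment depths; t0 is the
# implied (assumed) unstretched thickness:
print(round(stretch_factor(23.4, moho_depth_km=17.0, sediment_km=8.0), 1))  # -> 2.6
```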
(2) Peruvian subduction zone
Two prominent mid-crustal structures are revealed in the 70-km-thick crust under the Central Andes: a low-velocity zone, interpreted as partially molten rock, beneath the Western Cordillera – Altiplano Plateau, and the underthrusting Brazilian Shield beneath the Eastern Cordillera. The low-velocity zone is oblique to the present trench and possibly marks the location of volcanic arcs formed during the steepening of the Oligocene flat slab beneath the Altiplano Plateau.
The Nazca slab changes from normally dipping (~25 degrees) subduction in the southeast to flat subduction in the northwest of the study area. In the flat-subduction regime, the slab descends to ~100 km depth and then remains flat for ~300 km before resuming a normal dipping geometry. The flat part closely follows the topography of the continental Moho above, indicating a strong suction force between the slab and the overriding plate. A high-velocity mantle wedge exists above the western half of the flat slab, indicating a lack of melting and thus explaining the cessation of volcanism above. The velocity returns to normal values before the slab steepens again, indicating a possible resumption of dehydration and eclogitization.
(3) Some unusual signals
Strong higher-mode Rayleigh waves due to the basin structure are observed at periods of less than 5 s. The particle motions provide a good test for distinguishing between the fundamental and higher modes. Precursor and coda waves relative to the interstation Rayleigh waves are observed and modeled with a strong scatterer located in the active volcanic area of Southern Peru. In contrast to conventional receiver function analysis, multiples are used extensively in this thesis: in the LA Basin a good image is obtained only from PpPs multiples, while in Peru PpPp multiples contribute significantly to the final results.