10 results for Long Valley Region (Mono County)

in CaltechTHESIS


Relevance: 100.00%

Abstract:

The concept of seismogenic asperities and aseismic barriers has become a useful paradigm within which to understand the seismogenic behavior of major faults. Since asperities and barriers can be thought of as defining the potential rupture area of large megathrust earthquakes, it is important to identify their respective spatial extents, constrain their temporal longevity, and develop a physical understanding of their behavior. Space geodesy is making critical contributions to the identification of slip asperities and barriers, but progress in many geographical regions depends on improving the accuracy and precision of the basic measurements. This thesis begins with technical developments aimed at improving satellite radar interferometric measurements of ground deformation: we introduce an empirical correction algorithm for unwanted interferometric path delays caused by spatially and temporally variable radar wave propagation speeds in the atmosphere. In Chapter 2, I combine geodetic datasets with complementary spatio-temporal resolutions to improve our understanding of the spatial distribution of crustal deformation sources and their associated temporal evolution, using observations from Long Valley Caldera (California) as a test bed. In the third chapter I apply the tools developed in the first two chapters to analyze postseismic deformation associated with the 2010 Mw=8.8 Maule (Chile) earthquake. The result delimits patches where afterslip occurs, explores their relationship to coseismic rupture, quantifies frictional properties associated with inferred patches of afterslip, and discusses the relationship of asperities and barriers to long-term topography.
The final chapter investigates interseismic deformation of the eastern Makran subduction zone using satellite radar interferometry alone, and demonstrates that with state-of-the-art techniques it is possible to quantify tectonic signals with small amplitude and long wavelength. Portions of the eastern Makran for which we estimate low fault coupling correspond to areas where bathymetric features on the downgoing plate are presently subducting, whereas the region of the 1945 M=8.1 earthquake appears to be more highly coupled.
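
One widely used empirical correction of the kind introduced in the first chapter regresses interferometric phase against topographic elevation and removes the fitted, stratified component. The sketch below illustrates that idea only; the elevations, delay slope, and deformation signal are invented, and the thesis algorithm itself may differ.

```python
import numpy as np

# Minimal sketch of an empirical tropospheric correction (assumed approach,
# not necessarily the thesis algorithm): remove the component of phase that
# correlates linearly with elevation.
rng = np.random.default_rng(0)

elevation = rng.uniform(0.0, 3000.0, size=10_000)       # m, hypothetical DEM samples
true_deform = 0.002 * np.sin(elevation / 500.0)         # rad, hypothetical signal
tropo = 1.5e-4 * elevation                              # rad, elevation-correlated delay
phase = true_deform + tropo + rng.normal(0.0, 0.01, elevation.size)

# Least-squares fit of phase against elevation (slope and intercept) ...
A = np.column_stack([elevation, np.ones_like(elevation)])
slope, intercept = np.linalg.lstsq(A, phase, rcond=None)[0]

# ... then subtraction of the fitted stratified component.
corrected = phase - (slope * elevation + intercept)

print(np.std(phase), np.std(corrected))  # scatter shrinks after correction
```

The same regression can be done per interferogram, or jointly with a network of interferograms, which is where the practical complications arise.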

Relevance: 40.00%

Abstract:

Abstract to Part I

The inverse problem of seismic wave attenuation is solved by an iterative back-projection method. The seismic wave quality factor, Q, can be estimated approximately by inverting S-to-P amplitude ratios. Effects of various uncertainties in the method are tested, and attenuation tomography is shown to be useful in solving for the spatial variations in attenuation structure and in estimating the effective seismic quality factor of attenuating anomalies.
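
The iterative back-projection idea can be sketched with a SIRT/SART-style update on a toy grid. Everything here is hypothetical: the ray-length matrix `L`, the background Q of 200, and the single anomalous cell with Q = 30 are invented to show the scheme recovering an attenuating anomaly from path-integrated attenuation data.

```python
import numpy as np

# Toy back-projection attenuation inversion (assumed setup, not the thesis
# code): each ray i accumulates t*_i = sum_j l_ij * q_j, with q_j = 1/Q_j.
rng = np.random.default_rng(1)

n_rays, n_cells = 200, 25
L = rng.uniform(0.0, 2.0, size=(n_rays, n_cells))  # km of path per cell (hypothetical)
q_true = np.full(n_cells, 1.0 / 200.0)             # background Q = 200
q_true[12] = 1.0 / 30.0                            # one strongly attenuating cell
t_star = L @ q_true                                # noiseless path-integrated data

q = np.full(n_cells, 1.0 / 200.0)                  # start from the background model
row_len = L.sum(axis=1)                            # total path length per ray
col_len = L.sum(axis=0)                            # total path length per cell

for _ in range(500):                               # iterative back-projection
    residual = t_star - L @ q
    q += (L.T @ (residual / row_len)) / col_len    # distribute misfit along rays

print(1.0 / q[12])  # recovered Q of the anomalous cell (approaches 30)
```

The row- and column-sum normalization is the standard SART weighting; with noisy amplitude ratios one would stop the iteration early rather than run it to convergence.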

Back-projection attenuation tomography is applied to two cases in southern California: Imperial Valley and the Coso-Indian Wells region. In the Coso-Indian Wells region, a highly attenuating body (S-wave quality factor Q_β ≈ 30) coincides with a slow P-wave anomaly mapped by Walck and Clayton (1987). This coincidence suggests the presence of a magmatic or hydrothermal body 3 to 5 km deep in the Indian Wells region. In the Imperial Valley, slow P-wave travel-time anomalies and highly attenuating S-wave anomalies were found in the Brawley seismic zone at a depth of 8 to 12 km. The effective S-wave quality factor is very low (Q_β ≈ 20) and the P-wave velocity is 10% slower than the surrounding areas. These results suggest either magmatic or hydrothermal intrusions, or fractures at depth, possibly related to active shear in the Brawley seismic zone.

No-block inversion is a generalized tomographic method utilizing the continuous form of an inverse problem. The inverse problem of attenuation can be posed in a continuous form, and the no-block inversion technique is applied to the same data set used in the back-projection tomography. A relatively small data set with little redundancy enables us to apply both techniques to a similar degree of resolution. The results obtained by the two methods are very similar. By applying the two methods to the same data set, formal errors and resolution can be directly computed for the final model, and the objectivity of the final result can be enhanced.

Both methods of attenuation tomography are applied to a data set of local earthquakes in Kilauea, Hawaii, to solve for the attenuation structure under Kilauea and the East Rift Zone. The shallow Kilauea magma chamber, East Rift Zone and the Mauna Loa magma chamber are delineated as attenuating anomalies. Detailed inversion reveals shallow secondary magma reservoirs at Mauna Ulu and Puu Oo, the present sites of volcanic eruptions. The Hilina Fault zone is highly attenuating, dominating the attenuating anomalies at shallow depths. The magma conduit system along the summit and the East Rift Zone of Kilauea shows up as a continuous supply channel extending down to a depth of approximately 6 km. The Southwest Rift Zone, on the other hand, is not delineated by attenuating anomalies, except at a depth of 8-12 km, where an attenuating anomaly is imaged west of Puu Kou. The Mauna Loa chamber is seated at a deeper level (about 6-10 km) than the Kilauea magma chamber. Resolution in the Mauna Loa area is not as good as in the Kilauea area, and there is a trade-off between the depth extent of the magma chamber imaged under Mauna Loa and the error that is due to poor ray coverage. The Kilauea magma chamber, on the other hand, is well resolved, according to a resolution test done at the location of the magma chamber.

Abstract to Part II

Long period seismograms recorded at Pasadena of earthquakes occurring along a profile to Imperial Valley are studied in terms of source phenomena (e.g., source mechanisms and depths) versus path effects. Some of the events have known source parameters, determined by teleseismic or near-field studies, and are used as master events in a forward modeling exercise to derive the Green's functions (SH displacements at Pasadena that are due to a pure strike-slip or dip-slip mechanism) that describe the propagation effects along the profile. Both timing and waveforms of records are matched by synthetics calculated from 2-dimensional velocity models. The best 2-dimensional section begins at Imperial Valley with a thin crust containing the basin structure and thickens towards Pasadena. The detailed nature of the transition zone at the base of the crust controls the early arriving shorter periods (strong motions), while the edge of the basin controls the scattered longer period surface waves. From the waveform characteristics alone, shallow events in the basin are easily distinguished from deep events, and the amount of strike-slip versus dip-slip motion is also easily determined. Those events rupturing the sediments, such as the 1979 Imperial Valley earthquake, can be recognized easily by a late-arriving scattered Love wave that has been delayed by the very slow path across the shallow valley structure.

Relevance: 40.00%

Abstract:

The Pacoima area is located on an isolated hill in the northeast section of the San Fernando Valley, in the northeast portion of the Pacoima Quadrangle, Los Angeles County, California. Within it are exposed more than 2300 feet of Tertiary rocks, which comprise three units of Middle Miocene (?) age, and approximately 950 feet of Jurassic (?) granite basement. The formations are characterized by their mode of occurrence, marine and terrestrial origin, diverse lithology, and structural features.

The basement complex is composed of intrusive granite, small masses of granodiorite and a granodiorite gneiss with the development of schistosity in sections. During the long period of erosion of the metamorphics, the granitic rocks were exposed and may have provided clastic constituents for the overlying formations.

As a result of rapid sedimentation in a transitional environment, the Middle Miocene Twin Peaks formation was laid down unconformably on the granite. This formation is essentially a large thinning bed of gray to buff pebble and cobble conglomerate grading to coarse yellow sandstone. The contact of conglomerate and granite is characterized by its faulted and depositional nature.

Beds of extrusive andesite, basalt porphyry, compact vesicular amygdaloidal basalts, andesite breccia, interbedded feldspathic sands and clays of terrestrial origin, and mudflow breccia comprise the Pacoima formation, which overlies the Twin Peaks formation unconformably. A transgressing shallow sea accompanied settling of the region and initiated deposition of fine clastic sediments.

The marine Topanga (?) formation is composed of brown to gray coarse sandstone grading into interbedded buff sandstones and gray shales. Intrusions of rhyolite-dacite and ash beds mark continued but sporadic volcanism during this period.

The area mapped represents an arch in the Tertiary sediments. Forces that produced the uplift of the granite structural high created stresses that were relieved by jointing and faulting. Vertical and horizontal movement along these faults has displaced beds, offset contacts and complicated their structure. Uplift and erosion have exposed the present sequence of beds which dip gently to the northeast. The isolated hill is believed to be in an early stage of maturity.

Relevance: 30.00%

Abstract:

The condensation of phenanthroline-5,6-dione (phendione) with polyamines is a versatile synthetic route to a wide variety of chelating ligands. Condensation with 2,3-naphthalenediamine gives benzo[i]dipyrido[3,2-a:2',3'-c]phenazine (bdppz), a ligand containing weakly coupled orbitals of benzophenazine (bpz) and 2,2'-bipyridine (bpy) character. The bpy character gives Re and Ru complexes excited-state redox properties; intramolecular electron transfer (ET) takes place to the bpz portion of the ligand. The charge-separated state so produced has an extraordinarily long 50 µs lifetime. The slow rate of charge recombination arises from a combination of extremely weak coupling between the metal center and the bpz acceptor orbital and Marcus "inverted region" behavior. Molecular orbital calculations show that only 3% of the electron density in the lowest unoccupied molecular orbital lies on the bpy atoms of bdppz, effectively trapping the transferred electron on the bpz portion. The rate of charge recombination decreases with increasing driving force, showing that these rates lie in the inverted region. Comparison of forward and back ET rates shows that donor-acceptor coupling is four orders of magnitude greater for photoinduced electron transfer than it is for thermal charge recombination.
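
The inverted-region argument rests on the standard nonadiabatic Marcus rate expression, k ∝ |H_AB|² exp[-(ΔG° + λ)²/(4λk_BT)]. The snippet below evaluates that textbook formula with hypothetical values of the reorganization energy and electronic coupling (not the thesis's fitted parameters) to show the rate falling once the driving force exceeds λ.

```python
import math

# Standard nonadiabatic Marcus expression; lam and Hab below are hypothetical.
def marcus_rate(dG, lam, Hab, T=298.0):
    """Marcus ET rate in s^-1; dG, lam, Hab given in eV."""
    kB = 8.617e-5            # Boltzmann constant, eV/K
    hbar = 6.582e-16         # reduced Planck constant, eV*s
    prefac = (2 * math.pi / hbar) * Hab**2 / math.sqrt(4 * math.pi * lam * kB * T)
    return prefac * math.exp(-(dG + lam) ** 2 / (4 * lam * kB * T))

lam, Hab = 0.8, 1e-4          # assumed reorganization energy and coupling, eV
k_optimal = marcus_rate(-0.8, lam, Hab)   # -dG = lambda: rate is maximal
k_inverted = marcus_rate(-1.6, lam, Hab)  # more driving force, *slower* rate

print(k_optimal > k_inverted)  # True: inverted-region behavior
```

With a much smaller H_AB for the back reaction, as the abstract reports, the recombination rate drops by the square of the coupling ratio on top of the inverted-region penalty.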

Condensation of phendione with itself or tetramines gives a series of binucleating tetrapyridophenazine ligands of incrementally-varying coordination-site separation. When a photoredox-active metal center is attached, excited-state energy and electron transfer to an acceptor metal center at the other coordination site can be studied as a function of distance. A variety of monometallic and homo- and heterodimetallic tetrapyridophenazine complexes has been synthesized. Electro- and magnetochemistry show that no ground-state interaction exists between the metals in bimetallic complexes. Excited-state energy and electron transfer, however, takes place at rates which are invariant with increasing donor-acceptor separation, indicating that a very efficient coupling mechanism is at work. Theory and experiment have suggested that such behavior might exist in extended π-systems like those presented by these ligands.

Condensation of three equivalents of 4,5-dimethyl-1,2-phenylenediamine with hexaketocyclohexane gives the trinucleating ligand hexaazahexamethyltrinaphthalene (hhtn). Attaching two photoredox-active metal centers and a third catalytic center to hhtn provides a means by which multielectron photocatalyzed reactions might be carried out. The coordination properties of hhtn have been examined; X-ray crystallographic structure determination shows that the ligand's constricted coordination pocket leads to distorted geometries in its mono- and dimetallic derivatives.

Relevance: 30.00%

Abstract:

Blazars are active galaxies with a jet closely oriented to our line of sight. They are powerful, variable emitters from radio to gamma-ray wavelengths. Although the general picture of synchrotron emission at low energies and inverse Compton at high energies is well established, important aspects of blazars are not well understood. In particular, the location of the gamma-ray emission region is not clearly established, with some theories favoring a location close to the central engine, while others place it at parsec scales in the radio jet.

We developed a program to locate the gamma-ray emission site in blazars, through the study of correlated variations between their gamma-ray and radio-wave emission. Correlated variations are expected when there is a relation between emission processes at both bands, while delays tell us about the relative location of their energy generation zones. Monitoring at 15 GHz using the Owens Valley Radio Observatory 40 meter telescope started in mid-2007. The program monitors 1593 blazars twice per week, including all blazars detected by the Fermi Gamma-ray Space Telescope (Fermi) north of -20 degrees declination. This program complements the continuous monitoring of gamma-rays by Fermi.

Three-year-long gamma-ray light curves for bright Fermi blazars are cross-correlated with four years of radio monitoring. The significance of cross-correlation peaks is investigated using simulations that account for the uneven sampling and noise properties of the light curves, which are modeled as red-noise processes with a simple power-law power spectral density. We found that out of 86 sources with high quality data, only three show significant correlations (AO 0235+164, B2 2308+34 and PKS 1502+106). Additionally, we find a significant correlation for Mrk 421 when including the strong gamma-ray/radio flare of late 2012. In all four cases radio variations lag gamma-ray variations, suggesting that the gamma-ray emission originates upstream of the radio emission. For PKS 1502+106 we locate the gamma-ray emission site parsecs away from the central engine, thus disfavoring the model of Blandford and Levinson (1995), while the other cases are inconclusive. These findings show that continuous monitoring over long time periods is required to understand the cross-correlation between gamma-ray and radio-wave variability in most blazars.
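
The significance test described above can be sketched as follows: simulate pairs of independent red-noise light curves with a power-law power spectral density (in the spirit of the Timmer & König method), and use the distribution of their chance cross-correlation peaks to set a significance threshold. The series length, PSD slope, and number of trials below are arbitrary choices for illustration, and real light curves would also need the uneven sampling reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def rednoise(n, beta=2.0):
    """Simulate an evenly sampled 1/f^beta noise series of length n."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)          # power-law PSD amplitude
    spec = amp * (rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size))
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()               # standardize for correlation

def peak_xcorr(a, b):
    """Peak of the normalized cross-correlation over all lags."""
    cc = np.correlate(a, b, mode="full") / a.size
    return np.abs(cc).max()

# Null distribution: peak correlations of *independent* red-noise pairs.
null = [peak_xcorr(rednoise(300), rednoise(300)) for _ in range(500)]
threshold = np.quantile(null, 0.95)   # 95% significance level for a peak

print(threshold)  # chance peaks are sizable for red noise; a high bar is needed
```

This is why only a handful of the 86 sources clear the significance cut: red-noise light curves produce large spurious cross-correlation peaks by chance.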

Relevance: 30.00%

Abstract:

The epidemic of HIV/AIDS in the United States is constantly changing and evolving, growing from patient zero to an estimated 650,000 to 900,000 Americans infected today. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment, to the present era of highly active antiretroviral therapy (HAART). By utilizing statistical analysis of clinical data, this paper examines where we were, where we are, and projections as to where treatment of HIV/AIDS is headed.

Chapter Two describes the datasets that were used for the analyses. The primary database, which I collected from an outpatient HIV clinic, includes data from 1984 until the present. The second database is the public dataset of the Multicenter AIDS Cohort Study (MACS), covering the period between 1984 and October 1992. Comparisons are made between the two datasets.

Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian. Thus distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection, there exist high levels of immunosuppression.
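
The non-Gaussianity point can be illustrated with a simple moment check on synthetic, right-skewed counts; the lognormal parameters below are invented for illustration and are not fitted to the clinic or MACS data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical CD4 counts (cells/uL): a lognormal is a common skewed stand-in.
cd4 = rng.lognormal(mean=6.0, sigma=0.5, size=5000)

def skewness(x):
    """Sample skewness: approximately zero for Gaussian data."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 3)

print(skewness(cd4))               # clearly positive: right-skewed, non-Gaussian
print(np.median(cd4), cd4.mean())  # median < mean, another symptom of skew
```

When a marker behaves like this, analyses that assume normality (t-tests on raw counts, Gaussian confidence intervals) can mislead, which is the abstract's methodological point.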

Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era characterized by a new class of drugs and new technology changed the way that we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test antiretroviral regimen efficacy. Protease inhibitors, which attack a different region of HIV than reverse transcriptase inhibitors, were found, when used in combination with other antiretroviral agents, to dramatically and significantly reduce HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients, there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.

In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was presence of an AIDS defining illness. A high level of clinical failure, or progression to an endpoint, was found.
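
The abstract does not say which estimator was used, but a standard choice for this kind of time-to-endpoint analysis is the Kaplan-Meier product-limit estimator, sketched here by hand on invented follow-up times (event = 1 marks progression to an AIDS-defining illness, event = 0 a censored patient).

```python
import numpy as np

# Hand-rolled Kaplan-Meier estimator; an assumed method, with invented data.
def kaplan_meier(times, events):
    """Return (event times, survival probabilities) for right-censored data."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk = len(times)
    surv, out_t, out_s = 1.0, [], []
    for t in np.unique(times):
        mask = times == t
        deaths = int(events[mask].sum())
        if deaths:
            surv *= 1.0 - deaths / at_risk   # product-limit update
            out_t.append(t)
            out_s.append(surv)
        at_risk -= int(mask.sum())           # remove events and censorings
    return out_t, out_s

# Hypothetical follow-up times in months for eight patients.
t, s = kaplan_meier([6, 8, 8, 12, 15, 20, 24, 30], [1, 1, 0, 1, 0, 1, 0, 0])
print(list(zip(t, s)))
```

A high rate of progression to the endpoint would show up as a survival curve dropping well below one over the follow-up window, which is what the chapter reports for long-term HAART.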

Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which looks at the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, on where the state of HIV is going. This section first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, failed to control viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in the morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.

The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs for HAART are estimated. It is estimated that the direct lifetime cost of treating each HIV-infected patient with HAART is between $353,000 and $598,000, depending on how long HAART prolongs life. The incremental cost per year of life saved is only $101,000. This is comparable with the incremental cost per year of life saved from coronary artery bypass surgery.
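
The quoted figures fit together arithmetically: dividing the lifetime-cost range by the $101,000 incremental cost per year of life saved implies roughly 3.5 to 5.9 added years of life. This back-of-envelope check is an inference from the numbers in the abstract, not a result from the thesis data.

```python
# Consistency check on the abstract's quoted figures (inference, not thesis data).
cost_per_life_year = 101_000                      # USD per year of life saved
lifetime_low, lifetime_high = 353_000, 598_000    # USD lifetime HAART cost range

years_low = lifetime_low / cost_per_life_year     # implied years of life gained
years_high = lifetime_high / cost_per_life_year

print(round(years_low, 1), round(years_high, 1))  # 3.5 5.9
```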

Policy makers need to be aware that although HAART can delay disease progression, it is not a cure and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have been from the dramatic decreases in the incidence of AIDS defining opportunistic infections. As patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.

Relevance: 30.00%

Abstract:

Bulk n-InSb is investigated as a heterodyne detector for the submillimeter wavelength region. Two modes of operation are investigated: (1) the Rollin or hot electron bolometer mode (zero magnetic field), and (2) the Putley mode (quantizing magnetic field). The highlight of the thesis work is the pioneering demonstration of the Putley mode mixer at several frequencies. For example, a double-sideband system noise temperature of about 510 K was obtained using an 812 GHz methanol laser for the local oscillator. This performance is at least a factor of 10 more sensitive than any other performance reported to date at the same frequency. In addition, the Putley mode mixer achieved system noise temperatures of 250 K at 492 GHz and 350 K at 625 GHz. The 492 GHz performance is about 50% better and the 625 GHz performance about 100% better than the previous best performances established by the Rollin-mode mixer. To achieve these results, it was necessary to design a totally new ultra-low noise, room-temperature preamp to handle the higher source impedance imposed by Putley mode operation. This preamp has considerably less input capacitance than comparably noisy ambient designs.

In addition to advancing receiver technology, this thesis also presents several novel results regarding the physics of n-InSb at low temperatures. A Fourier transform spectrometer was constructed and used to measure the submillimeter wave absorption coefficient of relatively pure material at liquid helium temperatures and in zero magnetic field. Below 4.2 K, the absorption coefficient was found to decrease with frequency much faster than predicted by Drudian theory. Much better agreement with experiment was obtained using a quantum theory based on inverse bremsstrahlung in a solid. Also, the noise of the Rollin-mode detector at 4.2 K was accurately measured and compared with theory. The power spectrum is found to be well fit by a recent theory of non-equilibrium noise due to Mather. Surprisingly, when biased for optimum detector performance, high purity InSb cooled to liquid helium temperatures generates less noise than that predicted by simple non-equilibrium Johnson noise theory alone. This explains in part the excellent performance of the Rollin-mode detector in the millimeter wavelength region.

Again using the Fourier transform spectrometer, spectra are obtained of the responsivity and direct detection NEP as a function of magnetic field in the range 20-110 cm^-1. The results show a discernible peak in the detector response at the conduction electron cyclotron resonance frequency for magnetic fields as low as 3 kG at bath temperatures of 2.0 K. The spectra also display the well-known peak due to the cyclotron resonance of electrons bound to impurity states. The magnitude of responsivity at both peaks is roughly constant with magnetic field and is comparable to the low frequency Rollin-mode response. The NEP at the peaks is found to be much better than previous values at the same frequency and comparable to the best long wavelength results previously reported. For example, a value NEP = 4.5x10^-13 W/Hz^1/2 is measured at 4.2 K, 6 kG and 40 cm^-1. Study of the responsivity under conditions of impact ionization showed a dramatic disappearance of the impurity electron resonance while the conduction electron resonance remained constant. This observation offers the first concrete evidence that the mobility of an electron in the N=0 and N=1 Landau levels is different. Finally, these direct detection experiments indicate that the excellent heterodyne performance achieved at 812 GHz should be attainable up to frequencies of at least 1200 GHz.

Relevance: 30.00%

Abstract:

Fluvial systems form landscapes and sedimentary deposits with a rich hierarchy of structures that extend from grain- to valley scale. Large-scale pattern formation in fluvial systems is commonly attributed to forcing by external factors, including climate change, tectonic uplift, and sea-level change. Yet over geologic timescales, rivers may also develop large-scale erosional and depositional patterns that do not bear on environmental history. This dissertation uses a combination of numerical modeling and topographic analysis to identify and quantify patterns in river valleys that form as a consequence of river meandering alone, under constant external forcing. Chapter 2 identifies a numerical artifact in existing, grid-based models that represent the co-evolution of river channel migration and bank strength over geologic timescales. A new, vector-based technique for bank-material tracking is shown to improve predictions for the evolution of meander belts, floodplains, sedimentary deposits formed by aggrading channels, and bedrock river valleys, particularly when spatial contrasts in bank strength are strong. Chapters 3 and 4 apply this numerical technique to establishing valley topography formed by a vertically incising, meandering river subject to constant external forcing—which should serve as the null hypothesis for valley evolution. In Chapter 3, this scenario is shown to explain a variety of common bedrock river valley types and smaller-scale features within them—including entrenched channels, long-wavelength, arcuate scars in valley walls, and bedrock-cored river terraces. Chapter 4 describes the age and geometric statistics of river terraces formed by meandering with constant external forcing, and compares them to terraces in natural river valleys. 
The frequency of intrinsic terrace formation by meandering is shown to reflect a characteristic relief-generation timescale, and terrace length is identified as a key criterion for distinguishing these terraces from terraces formed by externally forced pulses of vertical incision. In a separate study, Chapter 5 utilizes image and topographic data from the Mars Reconnaissance Orbiter to quantitatively identify spatial structures in the polar layered deposits of Mars, and identifies sequences of beds, consistently 1-2 meters thick, that have accumulated hundreds of kilometers apart in the north polar layered deposits.

Relevance: 30.00%

Abstract:

Part 1 of this thesis is about the 24 November, 1987, Superstition Hills earthquakes. The Superstition Hills earthquakes occurred in the western Imperial Valley in southern California. The earthquakes took place on a conjugate fault system consisting of the northwest-striking right-lateral Superstition Hills fault and the previously unknown Elmore Ranch fault, a northeast-striking left-lateral structure defined by surface rupture and a lineation of hypocenters. The earthquake sequence consisted of foreshocks, the M_s 6.2 first main shock, and aftershocks on the Elmore Ranch fault, followed by the M_s 6.6 second main shock and aftershocks on the Superstition Hills fault. There was dramatic surface rupture along the Superstition Hills fault in three segments: the northern segment, the southern segment, and the Wienert fault.

In Chapter 2, M_L≥4.0 earthquakes from 1945 to 1971 that have Caltech catalog locations near the 1987 sequence are relocated. It is found that none of the relocated earthquakes occur on the southern segment of the Superstition Hills fault and many occur at the intersection of the Superstition Hills and Elmore Ranch faults. Also, some other northeast-striking faults may have been active during that time.

Chapter 3 discusses the Superstition Hills earthquake sequence using data from the Caltech-U.S.G.S. southern California seismic array. The earthquakes are relocated and their distribution correlated to the type and arrangement of the basement rocks. The larger earthquakes occur only where continental crystalline basement rocks are present. The northern segment of the Superstition Hills fault has more aftershocks than the southern segment.

An inversion of long period teleseismic data of the second mainshock of the 1987 sequence, along the Superstition Hills fault, is done in Chapter 4. Most of the long period seismic energy seen teleseismically is radiated from the southern segment of the Superstition Hills fault. The fault dip is near vertical along the northern segment of the fault and steeply southwest dipping along the southern segment of the fault.

Chapter 5 is a field study of slip and afterslip measurements made along the Superstition Hills fault following the second mainshock. Slip and afterslip measurements were started only two hours after the earthquake. In some locations, afterslip more than doubled the coseismic slip. The northern and southern segments of the Superstition Hills fault differ in the proportion of coseismic and postseismic slip to the total slip.

The northern segment of the Superstition Hills fault had more aftershocks, more historic earthquakes, released less teleseismic energy, and had a smaller proportion of afterslip to total slip than the southern segment. The boundary between the two segments lies at a step in the basement that separates a deeper metasedimentary basement to the south from a shallower crystalline basement to the north.

Part 2 of the thesis deals with the three-dimensional velocity structure of southern California. In Chapter 7, an a priori three-dimensional crustal velocity model is constructed by partitioning southern California into geologic provinces, with each province having a consistent one-dimensional velocity structure. The one-dimensional velocity structures of each region were then assembled into a three-dimensional model. The three-dimensional model was calibrated by forward modeling of explosion travel times.

In Chapter 8, the three-dimensional velocity model is used to locate earthquakes. For about 1000 earthquakes relocated in the Los Angeles basin, the three-dimensional model yields a travel time residual variance 47 per cent lower than that of the catalog locations found using a standard one-dimensional velocity model. Other than the 1987 Whittier earthquake sequence, little correspondence is seen between these earthquake locations and elements of a recent structural cross section of the Los Angeles basin. The Whittier sequence involved rupture of a north-dipping thrust fault bounded on at least one side by a strike-slip fault. The 1988 Pasadena earthquake was a deep left-lateral event on the Raymond fault. The 1989 Montebello earthquake was a thrust event on a structure similar to that on which the Whittier earthquake occurred. The 1989 Malibu earthquake was a thrust or oblique slip event adjacent to the 1979 Malibu earthquake.

At least two of the largest recent thrust earthquakes (San Fernando and Whittier) in the Los Angeles basin have had the extent of their thrust plane ruptures limited by strike-slip faults. This suggests that the buried thrust faults underlying the Los Angeles basin are segmented by strike-slip faults.

Earthquake and explosion travel times are inverted for the three-dimensional velocity structure of southern California in Chapter 9. The inversion reduced the variance of the travel time residuals by 47 per cent compared to the starting model, a reparameterized version of the forward model of Chapter 7. The Los Angeles basin is well resolved, with seismically slow sediments atop a crust of granitic velocities. Moho depth is between 26 and 32 km.
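The 47 per cent figure is a standard variance-reduction statistic on travel-time residuals; a minimal sketch of the computation, with made-up residual values rather than the thesis data, is:

```python
import numpy as np

def variance_reduction_pct(res_start, res_final):
    """Percent reduction in residual variance relative to the starting model."""
    return 100.0 * (1.0 - np.var(res_final) / np.var(res_start))

# Hypothetical travel-time residuals (seconds) before and after inversion
res_start = np.array([0.42, -0.35, 0.28, -0.50, 0.31, -0.22])
res_final = np.array([0.20, -0.18, 0.15, -0.25, 0.17, -0.12])
print(f"{variance_reduction_pct(res_start, res_final):.0f}% variance reduction")
```

Halving every residual, for instance, quarters the variance, i.e. a 75 per cent reduction.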

Resumo:

The long- and short-period body waves of a number of moderate earthquakes in central and southern California, recorded at regional (200-1400 km) and teleseismic (> 30°) distances, are modeled to obtain the source parameters: focal mechanism, depth, seismic moment, and source time history. The modeling is done in the time domain using a forward modeling technique based on ray summation. A simple layer-over-a-half-space velocity model is used, with additional layers added where necessary, for example in a basin with a low-velocity lid.

The earthquakes studied fall into two geographic regions: 1) the western Transverse Ranges, and 2) the western Imperial Valley. Earthquakes in the western Transverse Ranges include the 1987 Whittier Narrows earthquake, several offshore earthquakes that occurred between 1969 and 1981, and aftershocks of the 1983 Coalinga earthquake (these actually occurred north of the Transverse Ranges but share many characteristics with those that occurred there). These earthquakes are predominantly thrust-faulting events with an average east-west strike, but with many variations. Of the six earthquakes that had sufficient short-period data to accurately determine the source time history, five were complex events; that is, they could not be modeled as a simple point source but consisted of two or more subevents. The subevents of the Whittier Narrows earthquake had different focal mechanisms. In the other cases the subevents appear to be the same, but small variations could not be ruled out.

The recent Imperial Valley earthquakes modeled include the two 1987 Superstition Hills earthquakes and the 1969 Coyote Mountain earthquake. All are strike-slip events, and the second 1987 earthquake is a complex event with non-identical subevents.

In all the earthquakes studied, and particularly the thrust events, constraining the source parameters required modeling several phases and distance ranges. Teleseismic P waves could provide only approximate solutions. P_(nl) waves were probably the most useful phase in determining the focal mechanism, with additional constraints supplied by the SH waves when available. Contamination of the SH waves by shear-coupled PL waves was a frequent problem. Short-period data were needed to obtain the source time function.

In addition to the earthquakes mentioned above, several historic earthquakes were also studied. Earthquakes that occurred before the existence of dense local and worldwide networks are difficult to model because of the sparse data set. It has been observed that earthquakes occurring near each other often produce similar waveforms, implying similar source parameters. By comparing recent, well-studied earthquakes to historic earthquakes in the same region, better constraints can be placed on the source parameters of the historic events.

The Lompoc earthquake (M=7) of 1927 is the largest offshore earthquake to occur in California this century. By direct comparison of waveforms and amplitudes with the Coalinga and Santa Lucia Banks earthquakes, the focal mechanism (thrust faulting on a northwest striking fault) and long-period seismic moment (10^(26) dyne cm) can be obtained. The S-P travel times are consistent with an offshore location, rather than one in the Hosgri fault zone.
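The S-P location argument rests on the near-linear relation between S-P time and epicentral distance in a uniform crust. A rough sketch follows; the velocities (Vp = 6.0 km/s, Vp/Vs = 1.73) are generic crustal assumptions, not values taken from the thesis.

```python
def sp_distance_km(t_sp_s, vp=6.0, vp_over_vs=1.73):
    """Epicentral distance implied by an S-P time, uniform-crust assumption."""
    vs = vp / vp_over_vs
    # The distance d satisfies t_sp = d/vs - d/vp,
    # so d = t_sp * vp * vs / (vp - vs)  (roughly 8 km per second of S-P time)
    return t_sp_s * (vp * vs) / (vp - vs)

# A 10 s S-P time implies an epicentral distance of roughly 80 km
distance_km = sp_distance_km(10.0)
```

Comparing such distances across several stations is what discriminates an offshore epicenter from one on the Hosgri fault zone.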

Historic earthquakes in the western Imperial Valley were also studied. These events include the 1937, 1942, and 1954 earthquakes. The earthquakes were relocated by comparing S-P and R-S times to those of recent earthquakes. Only minor changes in the epicenters were required, although the Coyote Mountain earthquake may have been more severely mislocated. The waveforms, as expected, indicated that all the events were strike-slip. Moment estimates were obtained by comparing the amplitudes of recent and historic events at stations that recorded both. The 1942 event was smaller than the 1968 Borrego Mountain earthquake, although some previous studies suggested the reverse. The 1954 and 1937 earthquakes had moments close to the expected values. An aftershock of the 1942 earthquake appears to have been larger than previously thought.
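The amplitude-comparison moment estimates above amount to scaling a well-determined reference moment by the ratio of long-period amplitudes at a common station. A minimal sketch, with invented values:

```python
def moment_from_amplitudes(m0_ref, amp_hist, amp_ref):
    """Scale a reference seismic moment by a long-period amplitude ratio.

    Assumes the two events share location and mechanism, so that the
    amplitude recorded at a common station scales linearly with moment.
    """
    return m0_ref * (amp_hist / amp_ref)

# Hypothetical: a historic record half the amplitude of a well-studied
# reference event with M0 = 1e26 dyne-cm at the same station
m0_historic = moment_from_amplitudes(1e26, amp_hist=5.0, amp_ref=10.0)
```

The method sidesteps instrument-response and path uncertainties, since both events are seen through the same instrument along nearly the same path.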